# Disorder Effects on the Quasiparticle and Transport Properties of Two-Dimensional Dirac Fermionic Systems

Bo Fu, Yanru Chen, Weiwei Chen, Wei Zhu, Ping Cui, Qunxiang Li, Zhenyu Zhang, Qinwei Shi

Published 2023-08-03 | http://arxiv.org/abs/2308.01680v1
###### Abstract
Despite extensive existing studies, a complete understanding of the role of disorder in affecting the physical properties of two-dimensional Dirac fermionic systems remains a standing challenge, largely due to obstacles encountered in treating multiple scattering events for such inherently strong scattering systems. Using graphene as an example and a nonperturbative numerical technique, here we reveal that the low energy quasiparticle properties are considerably modified by multiple scattering processes even in the presence of weak scalar potentials. We extract unified power-law energy dependences of the self-energy with fractional exponents from the weak scattering limit to the strong scattering limit from our numerical analysis, leading to sharp reductions of the quasiparticle residues near the Dirac point, eventually vanishing at the Dirac point. The central findings stay valid when the Anderson-type impurities are replaced by correlated Gaussian- or Yukawa-type disorder with varying correlation lengths. The improved understanding gained here also enables us to provide better interpretations of the experimental observations surrounding the temperature and carrier density dependences of the conductivity in ultra-high mobility graphene samples. The approach demonstrated here is expected to find broad applicability in understanding the role of various other types of impurities in two-dimensional Dirac systems.
pacs: 71.23.-k, 72.15.Lh, 72.10.-d, 72.80.Vp

Footnote †: Corresponding author. E-mail: [email protected]
## I Introduction
The role of disorder in two-dimensional Dirac fermionic systems [1] was intensively explored in the early 1990s, in part motivated by the observations of localized states in the \(d\)-wave superconducting phase of cuprate superconductors [2] and of plateau transitions in the integer quantum Hall effect [3]. In those pioneering studies, it was shown that even impurities with weak scattering strengths have non-perturbative effects on the quasiparticle properties near the Dirac point [4; 5; 3], resulting in intriguing new physical consequences. As an example, an exact conformal field theory was developed that successfully describes the contributions from multiple impurity scattering processes when the weak disorder preserves continuous chiral symmetry [4; 5; 6; 3]. In this scenario, the electron density of states was shown to possess a power-law dependence on energy with fractional exponents, instead of the logarithmic behaviors obtained within perturbative treatments [7; 8]. Those findings not only enriched our physical understanding of such disordered systems, but also highlighted the importance of multiple impurity scattering processes. Nevertheless, the conformal field theory is not applicable to scalar-type impurities, which break continuous chiral symmetry. In such cases, the elastic scattering time \(\tau\) is short, and the Dirac fermionic systems easily enter the strong scattering limit (the dimensionless parameter \(E_{f}\tau/\hbar\leq 1\)) as the Fermi energy \(E_{f}\) approaches the Dirac point (here \(E_{f}\) is measured relative to the Dirac point), calling for new theoretical treatments.
Separately, since the experimental discovery of graphene [9], the past decade has seen a substantial rejuvenation of interest in the role of disorder in two-dimensional Dirac fermionic systems. In particular, graphene serves as an ideal platform for studying disorder effects at or close to the Dirac point, because the system displays linear dispersion over a large energy range. Indeed, a wealth of unusual transport properties has been reported in ultrahigh-mobility samples [10; 11; 12; 13; 14; 15; 16; 17; 18; 19], including a minimum conductivity at the Dirac point that depends strongly on temperature, a conductivity that is sublinear in carrier density very close to the Dirac point, and a critical carrier density separating a nonmetallic from a metallic regime, as characterized by the temperature dependence of the resistivity. Such novel transport behaviors not only reflect intriguing physics around the Dirac point in weakly disordered graphene, but potentially also highlight the importance of exotic disorder
effects. To date, those unconventional transport properties of graphene remain to be fully understood, in part because prevailing theoretical treatments have various limitations. For example, the standard Boltzmann transport theory treatments [20; 21; 22] only capture subsets of scattering events. Several analytical approaches have also been developed to study the role of disorder in graphene, as exemplified by the functional renormalization-group (fRG) approach [26; 27], but these new developments again only considered the contributions of some subsets of multiple impurity scattering processes and still fail to properly describe the quasiparticle behavior around the Dirac point. Therefore, it remains a standing challenge to reliably treat multiple scattering events in the presence of physically realistic disorder without continuous chiral symmetry. An enabling theoretical approach is needed that includes all the multiple scattering events in order to capture the underlying disorder physics, especially near the Dirac point.
In this work, we present numerically exact results of the quasiparticle and transport properties of disordered two-dimensional Dirac fermionic systems as obtained using an accurate momentum-space Lanczos method [28; 29], with disordered graphene around the Dirac point as a concrete example. As shown recently, this method is able to rigorously treat all multiple scattering events from random scalar disorder potentials or other types of impurities. Strikingly, we extract from numerical data a universal power-law functional form of the self-energy in describing the multiple scattering effect of disorder on the quasiparticle behavior, which is valid from the weak to strong scattering limit. The newly established universal power law enables us to further reveal the novel quasiparticle behaviors near the Dirac point, such as the unusual energy dependence of the quasiparticle residue. We are also able to reproduce the experimentally observed conductivity versus the carrier density at different temperatures [10; 12; 14; 16], thereby attesting that a proper account of multiple impurity scattering processes is essential in understanding the transport properties of disordered graphene. The approach demonstrated here is expected to find broad applicability in other disordered systems where multiple impurity scattering events play a decisive role.
This paper is organized as follows. The model and methodologies are introduced in Sec. II, followed by the numerical results for the self-energy and quasiparticle properties in Sec. III. The transport properties are given in Sec. IV. We discuss two kinds of correlated impurities, Gaussian- and Yukawa-type disorder, in Sec. V. Finally, in Sec. VI, we draw conclusions from our main results.
## II Model and method
In the absence of disorder, graphene can be modeled by a \(\pi\)-band tight-binding Hamiltonian. In our calculations, short-range Anderson-type disorder is introduced through on-site energies distributed uniformly and independently within \([-W/2,W/2]\). We thus consider the following Hamiltonian on a honeycomb lattice:
\[H=t\sum_{<ij>}|i\rangle\langle j|+\sum_{i}V_{i}|i\rangle\langle i|, \tag{1}\]
where \(t\) is the hopping energy between nearest-neighboring carbon atoms. A dimensionless parameter \(\alpha=\frac{A_{c}W^{2}}{12(\hbar v_{f})^{2}\pi}\) is defined to characterize the strength of the uncorrelated Anderson disorder, where \(A_{c}=\frac{3\sqrt{3}}{2}a^{2}\) is the area of the unit cell, \(a\) is the C-C distance, and \(v_{f}=3at/2\hbar\) is the bare group velocity of clean graphene. To explore the quasiparticle properties of disordered graphene, we choose a large graphene sample containing millions of atoms (\(L^{2}=10000^{2}\)) and calculate its retarded self-energy (\(\Sigma\)) by the momentum-space Lanczos recursive method [28; 29]. The large sample in our calculations allows us to choose a small artificial cutoff \(\eta=0.001\) to simulate the infinitesimal imaginary energy, so that we can extract
Figure 1: (a) Imaginary and (b) real parts of the self-energy for disordered graphene with different disorder strengths (\(0.02\leqslant\alpha\leqslant 0.12\)). The open symbols are the numerical results, while the solid lines are the fitting curves by Eqs. (2) and (3). The inset in (b) shows the comparison of the density of states (DoS) obtained by the real-space Lanczos method (RS, open symbols) with the calculated results based on the fitted self-energy in momentum space (KS, solid lines) for \(\alpha=0.05\), \(0.09\), and \(0.12\).
the self-energy function with high energy resolution.
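To make the lattice setup concrete, the following minimal sketch builds the disordered Hamiltonian of Eq. (1) as a sparse matrix and converts \(W\) into the dimensionless strength \(\alpha\). It is only an illustration with a small placeholder lattice and \(t=a=\hbar=1\); it is not the authors' momentum-space Lanczos implementation.

```python
# A minimal sketch of Eq. (1): Anderson disorder on a honeycomb lattice.
# Assumptions: t = a = hbar = 1, small lattice, periodic boundaries.
import numpy as np
import scipy.sparse as sp

def honeycomb_anderson(L, W, t=1.0, seed=None):
    """H = t sum_<ij> |i><j| + sum_i V_i |i><i|, V_i uniform in [-W/2, W/2]."""
    rng = np.random.default_rng(seed)
    n = 2 * L * L                            # two sublattices (A, B) per cell
    idx = lambda x, y, s: 2 * ((x % L) * L + (y % L)) + s
    rows, cols = [], []
    for x in range(L):
        for y in range(L):
            a = idx(x, y, 0)
            # each A site couples to three neighboring B sites
            for b in (idx(x, y, 1), idx(x - 1, y, 1), idx(x, y - 1, 1)):
                rows += [a, b]; cols += [b, a]
    H = sp.csr_matrix((t * np.ones(len(rows)), (rows, cols)), shape=(n, n))
    V = rng.uniform(-W / 2, W / 2, size=n)   # on-site Anderson disorder
    return H + sp.diags(V)

def alpha_from_W(W, t=1.0, a=1.0):
    """alpha = A_c W^2 / [12 pi (hbar v_f)^2], with hbar v_f = 3at/2."""
    A_c = 1.5 * np.sqrt(3) * a**2
    return A_c * W**2 / (12 * np.pi * (1.5 * a * t) ** 2)

H = honeycomb_anderson(L=100, W=1.0, seed=0)
print(H.shape, "alpha =", round(alpha_from_W(1.0), 4))
```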
## III Self-energy and quasiparticle properties
The imaginary part of the self-energy (\(\mathrm{Im}\Sigma(E)\)) for disordered graphene with different disorder strengths (\(\alpha\)) is shown in Fig. 1(a). One can see that, as the disorder strength increases, \(\mathrm{Im}\Sigma(E)\) gradually deviates from a linear behavior, and the absolute value of \(\mathrm{Im}\Sigma(0)\) increases accordingly. These characteristic features inspire us to use a power-law formula to fit our numerical results, given as
\[\mathrm{Im}\Sigma(E)=-\Sigma_{0}-\Delta|E|^{1-\beta}\qquad 0<\beta<1, \tag{2}\]
where \(\beta\), \(\Sigma_{0}\) and \(\Delta\) are fitting parameters determined solely by the disorder strength \(\alpha\). As shown in Fig. 1(a), the agreement between the self-energy functional form in Eq. (2) and the numerical results is excellent within the low-energy window of \([-0.2t,0.2t]\). Eq. (2) is further confirmed by the log-log plot of the imaginary part of the self-energy as a function of energy shown in Fig. 2(d). More remarkably, via the Kramers-Kronig relation alone, we identify the functional form of the real part of the self-energy (\(\mathrm{Re}\Sigma(E)\)), without introducing any adjustable parameter other than the high-energy cutoff, as
\[\mathrm{Re}\Sigma(E)=D\mathrm{sgn}(E)|E|^{1-\beta}+CE, \tag{3}\]
where \(\mathrm{sgn}(E)\) is the signum function, \(C=\frac{2E_{c}^{-\beta}}{\pi\beta}\Delta\), and \(D=-\mathrm{cot}(\frac{\pi}{2}\beta)\Delta\). The high-energy cutoff is chosen as \(E_{c}\approx 2.7t\), which is of the same order of magnitude as the bandwidth. The functional form in Eq. (3) fits the numerical results of \(\mathrm{Re}\Sigma(E)\) well, as shown in Fig. 1(b). This further confirms the correctness of our proposed power-law formula for the imaginary part of the self-energy. We have also calculated the spectral function, as shown in Appendix A, demonstrating that this power-law relation significantly renormalizes the quasiparticle properties around the Dirac point. Moreover, the inset in Fig. 1(b) compares the density of states (DoS) obtained by the widely used Lanczos method in real space [30] with that calculated using our fitted self-energy, again showing perfect agreement. More discussions of the DoS are given in Appendix B.
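As an illustration of how such a fit can be carried out, the sketch below fits Eq. (2) to \(\mathrm{Im}\Sigma(E)\) data and then constructs \(\mathrm{Re}\Sigma(E)\) from Eq. (3), with \(C\) and \(D\) fixed by the Kramers-Kronig coefficients quoted above. The input data are synthetic placeholders (energies in units of \(t\)), not the paper's Lanczos output.

```python
# A sketch of the power-law fit of Eqs. (2)-(3), with synthetic placeholder data.
import numpy as np
from scipy.optimize import curve_fit

def im_sigma(E, Sigma0, Delta, beta):
    """Eq. (2): Im Sigma(E) = -Sigma0 - Delta * |E|^(1 - beta)."""
    return -Sigma0 - Delta * np.abs(E) ** (1.0 - beta)

def re_sigma(E, Delta, beta, Ec=2.7):
    """Eq. (3), with C and D fixed by the Kramers-Kronig relation."""
    C = 2.0 * Ec ** (-beta) / (np.pi * beta) * Delta
    D = -Delta / np.tan(np.pi * beta / 2.0)
    return D * np.sign(E) * np.abs(E) ** (1.0 - beta) + C * E

# synthetic "data" standing in for the numerical self-energy
E = np.linspace(-0.2, 0.2, 401)
rng = np.random.default_rng(0)
data = im_sigma(E, 0.003, 0.08, 0.18) + 1e-5 * rng.normal(size=E.size)

popt, _ = curve_fit(im_sigma, E, data, p0=(1e-3, 0.05, 0.1))
Sigma0, Delta, beta = popt
print(f"Sigma0 = {Sigma0:.4f}, Delta = {Delta:.4f}, beta = {beta:.4f}")
print("Re Sigma(0.1t) =", re_sigma(0.1, Delta, beta))
```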
Equations (2) and (3) are the main discoveries of this work [31], reflecting that a proper treatment of multiple scattering events to all orders uncovers novel quasiparticle properties around the Dirac point. More interestingly, the existence of a nonzero \(\Sigma_{0}\) in the obtained self-energy functional form is reminiscent of what is reported in the multichannel Kondo problem [32], suggesting that novel quasiparticle behaviors and unconventional transport properties should be observable even in weakly disordered graphene.
In the following, we examine the central findings in several important physical aspects. First, we discuss the relationship between the fitting parameters and the disorder strength. In Fig. 2(a), a linear fitting of \(\beta\) versus \(\alpha\in[0.04,0.12]\) gives a slope of \(2.00\pm 0.05\). \(\Delta\) also has a linear relation with \(\alpha\), with a fitted slope of \(1.70\pm 0.05\), as shown in Fig. 2(b). On the other hand, \(\Sigma_{0}\) can be fitted by an exponential function \(Ae^{-B/\alpha}\), with fitting parameters \(A=1.0\pm 0.1\) in units of \(t\) and \(B=0.57\pm 0.05\), as shown in Fig. 2(c). Note that this value of \(B\) is roughly a factor of 2 off the prediction (\(B=1\)) of the self-consistent Born approximation (SCBA) [25; 33].
Since all information about the quasiparticle properties is encoded in the self-energy function, we next discuss how multiple scattering events considerably affect the quasiparticle properties. The real part of the self-energy (\(\mathrm{Re}\Sigma\))
Figure 2: Numerical fittings of the parameters (a) \(\beta\), (b) \(\Delta\) and (c) \(\Sigma_{0}\) as functions of the disorder strength \(\alpha\). \(\beta\), \(\Delta\) and \(\Sigma_{0}\) are obtained from the power-law self-energy fitting in Fig. 1. (d) Log-log plot of the imaginary part of the self-energy and the power-law fittings by subtracting the values at the zero energy for several disorder strengths.
Figure 3: (a) Quasiparticle residue \(Z_{E}=v_{q}/v_{f}\); (b) dimensionless parameter \(E\tau/\hbar\) as a function of \(E\) for different \(\alpha\).
in Eq. (3) contains two terms, a linear term (\(CE\)) and a singular one (\(D\text{sgn}(E)|E|^{1-\beta}\)). We find that the singular term dominates the quasiparticle behavior around the Dirac point, leading to a super-linear dispersion \(E_{k}\propto k^{1/(1-\beta)}\), where \(E_{k}\) is the root of \(E-\hbar v_{f}k-\text{Re}\Sigma(E)=0\). This result clearly indicates that the linear dispersion of ideal graphene is unstable against disorder due to multiple scattering events. Moreover, the power-law correction to the real part of the self-energy leads to a quasiparticle residue \(Z_{E}=1/[1-\partial_{E}\text{Re}\Sigma(E)]\propto E^{\beta}\) that vanishes as \(E\to 0\), as does the effective group velocity \(v_{g}=\partial E_{k}/\hbar\partial k=Z_{E}v_{f}\), as shown in Fig. 3(a). In the weak scattering limit (\(E_{f}\tau/\hbar\gg 1\)), \(Z_{E}\) is close to 1.0 and decreases slowly as the Fermi energy decreases. In the strong scattering limit, however, \(Z_{E}\) (or \(v_{g}\)) drops rapidly to zero at the Dirac point. This unusual feature directly demonstrates that multiple scattering events significantly modify the quasiparticle properties near the Dirac point. Therefore, it is naturally expected that unconventional low-energy transport behaviors may arise in disordered graphene.
Indeed, the elastic mean free path \(\ell_{e}\) is given in terms of the self-energy as \(\ell_{e}=v_{g}\tau=3at/[-4\text{Im}\Sigma(E)]\), where the elastic mean free time \(\tau\) can be expressed as \(\tau=\hbar/[-2Z_{E}\text{Im}\Sigma(E)]\). Using our finding for \(\Sigma_{0}\), the mean free path remains finite, \(\ell_{e}(0)\sim a\exp(0.57/\alpha)\), at the Dirac point, which is consistent with the results of one-loop RG calculations [25; 34]. But the lifetime \(\tau\) diverges as \(\propto E^{-\beta}\) in the limit \(E\to 0\), in stark contrast with the Fermi golden-rule prediction (\(\tau\propto E^{-1}\)), again clarifying the significance of multiple scattering events. Fig. 3(b) plots the dimensionless parameter \(E\tau/\hbar\) as a function of energy \(E\) in order to identify the low-energy window \(|E|\approx E_{c}\exp(-1/2\alpha)\), which corresponds to the strong scattering regime (\(E\tau/\hbar\leq 1\)). Fig. 4(a) plots the energy dependence of \(\ell_{e}\) for \(\alpha=0.08\) and \(0.11\). With decreasing disorder strength, \(\ell_{e}\) at the Dirac point becomes longer. According to the scaling theory of Anderson localization, the 2D localization length (\(\xi\)) can be evaluated exclusively from the diffusive transport properties as \(\xi=2\ell_{e}\exp(\pi\sigma_{d}/G_{0})\) (orthogonal symmetry) [35; 36; 37], with \(\sigma_{d}\) the conductivity of the system and \(G_{0}=2e^{2}/h\). Fig. 4(b) shows that \(\xi\) depends sensitively on the disorder strength and is strongly suppressed as \(\alpha\) increases. The energy dependence of \(\xi\) is mainly dominated by \(\sigma_{d}\). As a result, \(\xi\) shows a minimum at the Dirac point, exhibiting a trend opposite to that of \(\ell_{e}\). Moreover, the localization length estimated from our numerical results agrees well with that obtained by the transfer matrix method [36].
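The chain from the fitted self-energy to \(Z_{E}\), \(\tau\), and \(\ell_{e}\) is short enough to sketch directly. The snippet below evaluates the relations quoted above with placeholder fit parameters and units \(\hbar=t=a=1\), purely as a numerical illustration.

```python
# A sketch of the quasiparticle quantities derived from Eqs. (2)-(3).
# Assumptions: hbar = t = a = 1; the fit parameters are placeholders.
import numpy as np

Sigma0, Delta, beta, Ec = 0.003, 0.08, 0.18, 2.7

def im_sigma(E):
    return -Sigma0 - Delta * np.abs(E) ** (1.0 - beta)          # Eq. (2)

def d_re_sigma(E):
    """d Re Sigma / dE from Eq. (3): D (1 - beta) |E|^(-beta) + C."""
    C = 2.0 * Ec ** (-beta) / (np.pi * beta) * Delta
    D = -Delta / np.tan(np.pi * beta / 2.0)
    return D * (1.0 - beta) * np.abs(E) ** (-beta) + C

def quasiparticle(E):
    Z = 1.0 / (1.0 - d_re_sigma(E))       # residue Z_E, vanishing ~ E^beta
    tau = 1.0 / (-2.0 * Z * im_sigma(E))  # lifetime, diverging ~ E^(-beta)
    ell = 3.0 / (-4.0 * im_sigma(E))      # mean free path in units of a
    return Z, tau, ell

for E in (0.01, 0.05, 0.2):
    Z, tau, ell = quasiparticle(E)
    print(f"E = {E}t: Z = {Z:.3f}, E*tau = {E * tau:.2f}, ell = {ell:.1f}a")
```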
## IV Transport behavior
Based on the above self-energy results, we further investigate the transport properties of disordered graphene. First, we study the conductivity with impurity scattering, including the multiple scattering events, and then take into account the effect of electron-phonon scattering. Finally, we consider the higher-order corrections beyond the bare current bubble.
### Conductivity With Impurity Scattering
At the level of the Drude formula, the conductivity is given by \(\sigma=\frac{e^{2}}{h}g\) with the dimensionless conductance \(g\sim k_{F}\ell_{e}\), where \(k_{F}\) is the Fermi wave vector and \(\ell_{e}\) is the mean free path. Using the Fermi energy \(E\sim\hbar v_{f}k_{F}\) and mean free path \(\ell_{e}\sim v_{f}\tau\), the dimensionless conductance can be rewritten as \(g\sim E\tau/\hbar\). The conductance \(g\) is a good measure of disorder and can be used as a parameter to interpolate between the weak scattering regime \(g\gg 1\) and the strong scattering regime \(g\leq 1\). Previous theoretical studies are mainly restricted to extrinsic or doped graphene, wherein the Fermi level is away from the charge-neutral Dirac point (the weak scattering regime). The numerically exact results for the self-energies allow us to explore the transport behaviors around the charge neutrality point, where the dimensionless conductance is not much larger than 1. To include the nontrivial contribution of the quasiparticle residue, we adopt a more rigorous quantum-mechanical treatment based on the Kubo formalism and calculate the Drude conductivity as
Figure 5: (a) Conductivity \(\sigma_{xx}\) and (b) resistivity \(\rho\) as functions of the carrier density \(n\) at different temperatures \(T=4\,\text{K}\), \(60\,\text{K}\), \(200\,\text{K}\), and \(300\,\text{K}\). The disorder strength is \(\alpha=0.09\).
Figure 4: (a) The mean free path \(\ell_{e}\) and (b) the localization length \(\xi\) as a function of the energy for disorder strengths \(\alpha=0.08\) and \(0.11\).
\[\sigma_{xx}(T,E_{f})=\int dE\left(-\frac{\partial f(E,E_{f})}{\partial E}\right) \sigma_{d}(E), \tag{4}\]
where \(f(E,E_{f})=1/[e^{(E-E_{f})/T}+1]\) is the Fermi-Dirac distribution with \(T\) the temperature (we set \(k_{B}=1\)), and \(\sigma_{d}(E)\) is the zero-temperature conductivity given by
\[\sigma_{d}(E)=\frac{G_{0}}{\pi}\left[1+\chi(E)\tan^{-1}\chi(E)+\frac{\tan^{-1 }\chi(E)}{\chi(E)}\right], \tag{5}\]
with \(G_{0}=2e^{2}/h\) and \(\chi(E)=[E-\text{Re}\Sigma(E)]/\text{Im}\Sigma(E)\). After introducing the dimensionless function \(\mathcal{G}(E)=\int_{0}^{E}\frac{Z(E)}{Z(E^{\prime})}\frac{dE^{\prime}}{E}=\frac{1-\text{Re}\Sigma(E)/E}{1-\partial_{E}\text{Re}\Sigma(E)}\), \(\chi(E)\) can be rewritten as \(\chi(E)=\mathcal{G}(E)E\tau/\hbar\). For a small disorder strength (\(\alpha\) or \(\beta\sim 0\)), our numerical calculation shows that \(\chi(E)\) can be approximated by \(E\tau/\hbar\). Thus, the Drude conductivity is determined solely by the dimensionless parameter \(E\tau/\hbar\). The conductivity (Eq. (5)) contains two types of contributions: the first term (unity) in the bracket is the contribution of two Green's functions of the same kind (retarded-retarded or advanced-advanced), whereas the second and third terms come from the retarded-advanced sector. In the weak scattering regime (\(E\tau/\hbar\gg 1\)), the conductivity is dominated by the retarded-advanced term and takes the form \(\sigma_{d}(E)\simeq\frac{G_{0}}{2}|\chi(E)|\), suggesting that weak disorder leads to a weak dependence of the conductivity on the Fermi energy. Around the Dirac point, however, the sublinear behavior of \(E\tau/\hbar\) plotted in Fig. 3(b) yields a sublinear power-law energy dependence of the zero-temperature conductivity, in agreement with numerical calculations using the finite-size Kubo formalism [38], but in sharp contrast with the prediction of Fermi's golden rule [39]. More remarkably, it naturally produces the sharp peak in the resistivity at low temperature and the strong temperature dependence of the maximum resistivity, owing to the sharp dip of \(E\tau/\hbar\) around the Dirac point shown in Fig. 3(b). These novel behaviors have been widely reported in ultrahigh-mobility samples at and near the Dirac point [10; 11; 12; 13; 14; 15; 16; 17; 18; 19].
To compare with the experimental transport results for high-quality graphene in more detail, in the following quantitative evaluations a typical weak disorder strength is chosen as \(\alpha=0.09\), with no other adjustable parameters. Figs. 5(a) and 5(b) plot the corresponding conductivity \(\sigma_{xx}\) and resistivity \(\rho=1/\sigma_{xx}\) as functions of the carrier density \(n\) at temperatures from \(4\,\text{K}\) to \(300\,\text{K}\), where \(n=\int_{0}^{\infty}D(E)f(E,E_{f})dE+\int_{-\infty}^{0}D(E)[1-f(E,E_{f})]dE\), and \(D(E)\) denotes the density of states. Sharp dips (peaks) in the conductivity (resistivity) are observed precisely at the Dirac point at low temperatures. With increasing temperature, the conductivity very close to the Dirac point increases markedly, showing a strong temperature dependence. Most remarkably, there exists a \(T\)-independent carrier density (roughly \(n^{\star}\sim 1.5\times 10^{11}\,\text{cm}^{-2}\)) that divides the system into two different density regimes. In the low density regime (\(|n|<n^{\star}\)), the resistivity exhibits nonmetallic behavior, that is, \(\rho\) increases with decreasing \(T\). For \(|n|>n^{\star}\), the resistivity displays a weak \(T\)-dependence and decreases with decreasing \(T\).
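A compact sketch of this pipeline, the zero-temperature conductivity of Eq. (5) thermally averaged through Eq. (4), is given below with placeholder self-energy parameters and \(k_{B}=\hbar=t=1\). It shows the structure of the calculation; it is not intended to reproduce Fig. 5.

```python
# A sketch of Eqs. (4)-(5) with placeholder fit values; k_B = hbar = t = 1.
import numpy as np

Sigma0, Delta, beta, Ec = 0.003, 0.08, 0.18, 2.7

def sigma_im(E):
    return -Sigma0 - Delta * np.abs(E) ** (1 - beta)          # Eq. (2)

def sigma_re(E):
    C = 2 * Ec ** (-beta) / (np.pi * beta) * Delta
    D = -Delta / np.tan(np.pi * beta / 2)
    return D * np.sign(E) * np.abs(E) ** (1 - beta) + C * E   # Eq. (3)

def sigma_d(E):
    """Zero-temperature conductivity of Eq. (5), in units of G0 = 2e^2/h."""
    chi = (E - sigma_re(E)) / sigma_im(E)
    chi = np.where(np.abs(chi) < 1e-12, 1e-12, chi)  # chi -> 0 limit is 2/pi
    return (1 + chi * np.arctan(chi) + np.arctan(chi) / chi) / np.pi

def sigma_xx(T, Ef, n=2001, cut=20.0):
    """Finite-temperature conductivity, Eq. (4): average over -df/dE."""
    E = Ef + np.linspace(-cut * T, cut * T, n)
    w = 0.25 / (T * np.cosh((E - Ef) / (2 * T)) ** 2)         # -df/dE
    return np.trapz(w * sigma_d(E), E)

print("sigma_d(0.1t) =", sigma_d(np.array(0.1)))
print("sigma_xx(T=0.01t, Ef=0) =", sigma_xx(0.01, 0.0))
```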
We consider the two density regimes \(|n|<n^{\star}\) and \(|n|>n^{\star}\) separately and compare our theory with experimental results. We first consider the low density regime, \(|n|<n^{\star}\), and address the \(T\) dependence of the minimum conductivity. Fig. 6(a) compares the minimum conductivity \(\sigma_{\text{min}}\) as a function of temperature between our theory and the experimental data for three monolayer devices from Ref. [16]. According to our theory, \(\sigma_{\text{min}}\) increases monotonically with \(T\): roughly linearly for \(T<100\,\text{K}\), becoming sublinear for \(T>100\,\text{K}\). Experiment and theory show good agreement for devices \(\square\) and \(\bigcirc\). For device \(\triangle\), the theory fits the experimental data well only at low temperature. At finite temperature, electrons in both the conduction band and the valence band can contribute to the electrical conductivity. From Eq. (4), the broadening width of the electron and hole contributions is proportional to the temperature through the Fermi-Dirac distribution \(f(E,E_{f})\). As the temperature increases, the broadening width also increases, allowing more electron-hole pairs to contribute to the electrical conductivity. The temperature dependence of \(\sigma_{\text{min}}\) thus depends critically on the transport properties near the Dirac point. We then turn to the high density regime, \(|n|>n^{\star}\). We depict \(\Delta\rho(T)=\rho(T)-\rho(50\,\text{K})\) as a function of temperature for different carrier densities in Fig. 6(b). The solid dots and dashed lines are experimental data from Ref. [10] and our theory, respectively. In the high temperature range, \(\Delta\rho(T)\) increases nearly linearly with \(T\). In Ref. [10], this linear temperature dependence is attributed to the electron-phonon interaction.
Figure 6: Comparisons between our theory and the experimental data for (a) the minimum conductivity \(\sigma_{\text{min}}\) and (b) the resistivity \(\Delta\rho\) as functions of temperature. In (a), the open symbols indicate the temperature dependence of \(\sigma_{\text{min}}\) at the Dirac point for three devices extracted from Ref. [16]. The dashed lines indicate the results of our theory for three different disorder strengths \(\alpha\). In (b), the solid symbols represent the temperature dependence of \(\Delta\rho\) for different gate voltages extracted from Ref. [10]. The dashed lines are the results of our theory for different carrier densities.
However, the slope of \(\Delta\rho\) versus \(T\) cannot be explained solely by the electron-phonon interaction, as it also depends on the carrier density. Our theory consistently explains both the carrier density and the temperature dependence of \(\rho(T)\). The overall trends of our numerical results are in good agreement with the experimental observations [10]. The discrepancy at high temperatures could be due to the neglect of electron-electron and electron-phonon scattering, which become significant at high temperatures. These findings attest that the strong \(T\)-dependence of the conductivity (resistivity) in the low density regime stems from multiple scattering effects, which constitutes another important aspect of the present work.
### Conductivity With Phonon Scattering
We now take into account the effect of electron-phonon scattering. In graphene, there exists a characteristic wave vector \(q_{c}\) below which anharmonic effects become important [40; 41]. It has been estimated that \(q_{c}=\sqrt{\Delta_{c}T}/(\hbar v_{f})\approx\sqrt{T(K)}\times 0.7\times 10^{8}\,\mathrm{m}^{-1}\), where \(\Delta_{c}\approx 18.7\,\mathrm{eV}\) [40]. Our interest is in the low carrier density regime, \(n\leq 0.5\times 10^{12}\,\mathrm{cm}^{-2}\), where the transport properties are strongly influenced by multiple scattering processes. The Fermi wave vector can be estimated as \(k_{F}\approx\sqrt{\pi n}\leq 1.25\times 10^{8}\,\mathrm{m}^{-1}\), which is smaller than \(q_{c}\) for temperatures above about \(3.2\,\mathrm{K}\). Therefore, the anharmonic electron-phonon interaction should be taken into account. In this situation, the scattering rate caused by phonon scattering can be expressed as [40]
\[\frac{\hbar}{2\tau(\epsilon)}=\frac{g^{2}T^{2}}{\pi|\epsilon|}CZ^{2}(\frac{| \epsilon|}{\sqrt{T\Delta_{c}}})^{2\eta}, \tag{6}\]
where \(g\approx 5.3\) is the dimensionless electron-phonon coupling constant, \(C\approx 2.26\) is an integral coefficient, \(\eta\approx 0.85\) is a critical index [42], and the numerical prefactor \(Z\approx 1\).
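For orientation, the snippet below evaluates the rate of Eq. (6) against the impurity contribution \(-\mathrm{Im}\Sigma\) of Eq. (2), in the spirit of Fig. 7(a). The impurity fit values and the hopping energy \(t=2.7\,\mathrm{eV}\) used for unit conversion are assumed placeholders.

```python
# A sketch comparing the phonon rate of Eq. (6) with the impurity -Im Sigma.
# Assumptions: eps in eV, T in kelvin; impurity fit values are placeholders.
import numpy as np

kB = 8.617e-5                                       # eV/K
g, Cph, eta_c, Zph, Dc = 5.3, 2.26, 0.85, 1.0, 18.7  # parameters of Eq. (6)
t_hop = 2.7                                          # eV, assumed hopping energy

def phonon_rate(eps, T_kelvin):
    """hbar/(2 tau) of Eq. (6)."""
    T = kB * T_kelvin
    return (g**2 * T**2 / (np.pi * np.abs(eps))) * Cph * Zph**2 \
        * (np.abs(eps) / np.sqrt(T * Dc)) ** (2 * eta_c)

def impurity_rate(eps, Sigma0=0.003, Delta=0.08, beta=0.18):
    """-Im Sigma of Eq. (2), fit values quoted in units of t_hop."""
    return (Sigma0 + Delta * np.abs(eps / t_hop) ** (1 - beta)) * t_hop

eps = 0.05                                           # eV, representative energy
for T_kelvin in (4, 100, 300):
    print(f"T = {T_kelvin:3d} K: phonon {phonon_rate(eps, T_kelvin):.2e} eV"
          f" vs impurity {impurity_rate(eps):.2e} eV")
```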
As shown in Fig. 7(a), we compare the magnitude of our calculated imaginary part of the self-energy \(\mathrm{Im}\Sigma\) due to impurity scattering with the contribution arising from electron-phonon scattering (Eq. (6)) at the temperatures \(T=4,100,300\,\mathrm{K}\). The disorder strength is chosen as \(\alpha=0.09\). At low temperatures (\(T<100\,\mathrm{K}\)), \(-\mathrm{Im}\Sigma\gg\frac{\hbar}{2\tau(\epsilon)}\), and the resistivity of graphene is dominated by impurity scattering. We also plot the resistivity and conductivity after taking phonon scattering into consideration in Figs. 7(b) and 7(c), and the temperature dependences of the minimum conductivity \(\sigma_{\mathrm{min}}\) with and without phonon scattering are contrasted in Fig. 7(d). By comparing with the results in Sec. IV.1, we find that our main conclusions do not change even when electron-phonon scattering is considered. The crossover carrier density \(n^{*}\) still exists, separating the regions with "metallic" (high density) and "insulating" (low density) behaviors.
### Higher Order Conductivity Correction
In addition to the bare (zeroth order) current bubble which yields the main contribution to the classical conductivity, the disorder averaging will generate other current bubbles which are expanded in terms of scattering vertices. Two classes of diagrams are usually calculated, the ladder diagram and the maximally-crossed diagrams, which account for the vertex correction and quantum interference correction, respectively.
#### iv.3.1 Vertex Correction
The Bethe-Salpeter equation for the vertex correction can be solved by using the single-particle propagators with the full self-energy. With the vertex correction, the Kubo formula for the conductivity is given by
\[\begin{split}\sigma_{xx}(E)=&-\frac{\hbar e^{2}v_{f} ^{2}}{4\pi}\sum_{ss^{{}^{\prime}}=\pm}ss^{{}^{\prime}}\int\frac{d^{2}\mathbf{k}}{( 2\pi)^{2}}\mathrm{Tr}[j_{x}G(\mathbf{k},a+is\eta)\\ &\times J_{x}(\mathbf{k},a+is\eta,a+is^{{}^{\prime}}\eta)G(\mathbf{k},a+ is^{{}^{\prime}}\eta)].\end{split} \tag{7}\]
Figure 7: (a) Imaginary part of the self-energy induced by disorder scattering and phonon scattering at 4 K, 100 K, and 300 K. (b) Conductivity \(\sigma_{xx}\) and (c) resistivity \(\rho\) as functions of the carrier density \(n\) at different temperatures \(T=4\,\mathrm{K}\), 60 K, 100 K, 200 K and 300 K, including resistive scattering by graphene phonons as described by Eq. (6). The disorder strength is \(\alpha=0.09\). (d) Comparison of the temperature dependence of the minimum conductivity \(\sigma_{\mathrm{min}}\) with and without phonon scattering.
Here the current vertex \(J_{x}\) satisfies the following Bethe-Salpeter equation [43]:
\[\begin{split}J_{x}(\mathbf{k},a+is\eta,a+is^{{}^{\prime}}\eta)=j_{x}+\sum_{\mathbf{k}^{{}^{\prime}}}\langle V_{\mathbf{k}-\mathbf{k}^{{}^{\prime}}}G(\mathbf{k}^{{}^{\prime}},a+is\eta)&\\ \times J_{x}(\mathbf{k}^{{}^{\prime}},a+is\eta,a+is^{{}^{\prime}}\eta)G(\mathbf{k}^{{}^{\prime}},a+is^{{}^{\prime}}\eta)V_{\mathbf{k}^{{}^{\prime}}-\mathbf{k}}\rangle_{\rm dis},&\end{split} \tag{8}\]
where we have defined \(E-{\rm Re}\Sigma\equiv a\) and \(-{\rm Im}\Sigma\equiv\eta\) for simplicity, and \(G(\mathbf{k},a+is\eta)=1/(a+is\eta-\hbar v_{f}\mathbf{k}\cdot\mathbf{\sigma})\) is the disorder-averaged retarded (\(s=+\)) or advanced (\(s=-\)) Green's function with our calculated self-energy. By further assuming that the renormalized current \(J_{x}=\Lambda\sigma_{x}\) differs from the bare current \(j_{x}=\sigma_{x}\) only by an energy-dependent dimensionless coefficient \(\Lambda\), we can insert this ansatz into the iterative equation, Eq. (8).
For short-range disorder, after taking inter-valley scattering into account, the vertex correction can be shown to vanish identically due to the symmetry of the first Brillouin zone. Therefore, the vertex correction only contributes in the long-range disorder case. As shown in Appendix C, with the vertex correction the minimum conductivity depends on the disorder strength. For Fermi energies far from the Dirac point, the vertex correction \(\Lambda=2\) recovers the result of the weak scattering regime [44; 33]. As one approaches the Dirac point, the vertex correction becomes negligible, owing to the sharp reduction of the quasiparticle residue near the Dirac point and its eventual vanishing at the Dirac point, as plotted in Fig. 3(a).
#### iv.2.2 Quantum Interference Corrections
Another mystery in graphene transport is the absence of a localization-induced insulating phase in the vicinity of the Dirac point, in violation of the Ioffe-Regel criterion, which states that electron states will be localized in the region \(E\tau/\hbar\ll 1\) [45; 35]. In undoped samples of graphene, the minimum conductivity is observed to remain almost constant over a wide range of temperatures, from room temperature down to sub-Kelvin temperatures [16; 11]. This behavior is in stark contrast to the well-established results on the conductivity of 2D systems, where localization effects typically drive the system into an insulating state at low temperatures. The absence of localization in graphene is still not fully understood. Our calculations show that multiple scattering events may provide a plausible mechanism for understanding this absence of localization. In realistic graphene samples, inter-valley scattering is inevitable, leading to backscattering between the two valleys. As a result, the inter-valley Cooperon channel dominates at small magnetic fields or large sample sizes, leading to weak localization effects and even localization when the quantum interference correction becomes comparable to the classical conductivity. This is why the magnetoresistance measured at small magnetic fields is commonly negative, exhibiting weak localization behavior [46; 47]. Here, we extend the standard calculation from the weak scattering limit [48; 44] to the strong scattering limit to discuss the contribution of the maximally crossed diagrams using the accurate single-particle propagator, as shown in Appendix D. In the weak scattering regime, we recover the weak localization correction arising from the inter-valley-scattering-induced Cooperon channel [44]. In the strong scattering regime, however, we verify that multiple scattering events introduce finite Cooperon gaps, so that the small-momentum singularities in the Cooperon momentum integrals are avoided. Thus, the weak localization correction is strongly suppressed in the vicinity of the Dirac point. This may explain why Anderson localization is absent in transport measurements on graphene [16; 11].
## V Correlated Impurities
To simulate various defects under real experimental conditions, we extend our discussion to cases where each impurity has a finite range. In such cases, the potentials of two impurities become correlated, and the system can be characterized as containing correlated potential disorder. Since correlated potential disorder is smooth on the atomic scale, inter-valley scattering (backscattering) is suppressed. For a correlated potential, the self-energy depends on the energy \(E\) and the wave vector \(\mathbf{k}\), and can be obtained directly by changing the initial state \(|\mathbf{k}\rangle\) in
Figure 8: (a) Imaginary and (b) real parts of the self-energy with a correlated Gaussian potential as a function of energy for different disorder strengths (\(0.04\leqslant\gamma\leqslant 0.17\)). The circle symbols denote the numerical results obtained from our Lanczos method and the solid lines are the fitting curves. 60 samples are collected for each curve. (c) Imaginary and (d) real parts of the self-energy with Yukawa-type charge impurities as a function of energy for different screening lengths \(1/q_{s}=3,4,5,6,7\) (\(\gamma_{c}=0.04,0.11,0.22,0.39,0.61\)). The circle symbols denote the numerical results obtained from the Lanczos method and the solid lines are the fitting curves within the energy window [-0.08t,0.08t].
our numerical method [28]. Here we focus only on the self-energy \(\Sigma(E)\) for \(k=0\), which is symmetric about \(E=0\). Note that the wavelength of the low-energy quasiparticle approaches infinity in the vicinity of the Dirac point. Therefore, a random potential with a short spatial correlation length cannot be resolved by the Dirac electronic wave, and it does not qualitatively influence the quasiparticle (self-energy) behavior. Here we again use the power-law formulas [see Eqs. (2) and (3)] to fit the numerical results for the correlated disorder potentials.
### Gaussian Potential
First, we consider the most common type, the Gaussian correlated disorder potential \(V_{i}=\sum_{n=1}^{N_{\rm imp}}\pm u_{0}\exp[-|\mathbf{r}_{n}-\mathbf{r}_{i}|^{2}/(2\xi^{2})]\), where \(\xi\) is the Gaussian correlation length. Scatterers of strength \(\pm u_{0}\) are randomly distributed with equal probability, and the \(N_{\rm imp}\) impurities are randomly located among the \(N=4000^{2}\) lattice sites. We fix the impurity density \(n_{\rm imp}=N_{\rm imp}/N=1\%\) and take \(\xi=2a\) as an example in the following calculations. After disorder averaging, the disorder potential has a vanishing mean and a smooth correlator:
\[\langle V_{i}\rangle_{\rm dis} = 0, \tag{9}\] \[\langle V_{i}V_{j}\rangle_{\rm dis} = \gamma\frac{(\hbar v_{f})^{2}}{4\pi\xi^{2}}e^{-|\mathbf{r}_{i}-\mathbf{r} _{j}|^{2}/4\xi^{2}},\] (10) \[\langle V_{\mathbf{k}-\mathbf{k}^{\prime}}V_{\mathbf{k}^{\prime}-\mathbf{k}}\rangle _{\rm dis} = \gamma(\hbar v_{f})^{2}e^{-\xi^{2}|\mathbf{k}-\mathbf{k}^{\prime}|^{2}}, \tag{11}\]
where \(\gamma=\frac{n_{\rm imp}u_{0}^{2}}{A_{c}}\frac{(2\pi\xi^{2})^{2}}{(\hbar v_{f})^{2}}\) is the dimensionless disorder strength. As shown in Figs. 8(a) and 8(b), the agreement between the power-law fitting and the numerical results obtained from the Lanczos method is very good within the energy window of \([-0.15t,0.15t]\). The linear fittings of \(\beta\) and \(\Delta\) versus the disorder strength \(\gamma\) are given in Appendix E, yielding \(\beta=2.04\gamma\) and \(\Delta=0.42\gamma\).
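A hedged sketch of such a correlated potential is given below: it scatters \(\pm u_{0}\) Gaussians over a periodic grid and evaluates \(\gamma\) from the expression above. For simplicity it uses a square grid of spacing \(a=1\) and a placeholder \(u_{0}\), so geometric factors differ slightly from the honeycomb case.

```python
# A sketch of the correlated Gaussian potential and its strength gamma.
# Assumptions: square grid with a = 1, t = 1 (so hbar*v_f = 1.5), placeholder u0.
import numpy as np

rng = np.random.default_rng(1)
L, xi, u0, n_imp = 200, 2.0, 0.2, 0.01

N_imp = int(n_imp * L * L)
xs = rng.integers(0, L, N_imp)
ys = rng.integers(0, L, N_imp)
signs = rng.choice([-1.0, 1.0], N_imp)      # +/- u0 with equal probability

x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
V = np.zeros((L, L))
for xn, yn, s in zip(xs, ys, signs):
    dx = np.minimum(np.abs(x - xn), L - np.abs(x - xn))  # periodic images
    dy = np.minimum(np.abs(y - yn), L - np.abs(y - yn))
    V += s * u0 * np.exp(-(dx**2 + dy**2) / (2 * xi**2))

A_c, hv_f = 1.5 * np.sqrt(3), 1.5           # honeycomb values with t = a = 1
gamma = n_imp * u0**2 * (2 * np.pi * xi**2) ** 2 / (A_c * hv_f**2)
print(f"<V> = {V.mean():+.4f}, std(V) = {V.std():.4f}, gamma = {gamma:.3f}")
```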
We compare the results of this manuscript with those in the existing literature based on real-space methods, referring to Ref. [37], where the momentum relaxation time \(\tau_{p}(E)\) was investigated. Inverting \(\tau_{p}(E)\) yields the imaginary part of the self-energy, \(-{\rm Im}\Sigma^{R}(E)=\hbar/[2\tau_{p}(E)]\). We then fit our proposed power-law formula to the data; as shown in Fig. 9, the formula fits well in the low-energy regime \(E\in[-0.15t,0.15t]\) (equivalently \([-0.405\,{\rm eV},0.405\,{\rm eV}]\)).
### Yukawa Potential
Considering the observation of electron-hole puddles, we also consider the case of a Yukawa-type potential \(V_{i}=\sum_{n=1}^{N_{\rm imp}}\pm\frac{e^{2}}{\kappa|\mathbf{r}_{n}-\mathbf{r}_{i}|}\exp [-q_{s}|\mathbf{r}_{n}-\mathbf{r}_{i}|]\), with positive and negative charged impurities possessing equal probabilities, where \(e\) is the electron charge, \(r_{s}=e^{2}/\hbar v_{f}=2.2\) is a constant [49], \(\kappa\) is the background dielectric constant, and \(q_{s}\) is the inverse screening length. The charge impurities are randomly distributed in the substrate, and we fix the impurity concentration to be \(n_{\rm imp}=0.25\%\) (\(\sim 10^{12}\,{\rm cm}^{-2}\)) and the distance between the charge and graphene plane to be \(d=3a\) in the following calculations. By the Fourier transformation, in momentum space, the potential is given by
\[V_{q}=\frac{V_{0}}{\sqrt{q_{s}^{2}+q^{2}}}e^{-d\sqrt{q_{s}^{2}+q^{2}}}. \tag{12}\]
After averaging the disorder, we obtain
\[\langle V_{i}\rangle_{\rm dis} = 0, \tag{13}\] \[\langle V_{\mathbf{k}-\mathbf{k}^{\prime}}V_{\mathbf{k}^{\prime}-\mathbf{k}} \rangle_{\rm dis} = \frac{n_{\rm imp}}{A_{c}}\frac{V_{0}^{2}}{q_{s}^{2}+|\mathbf{k}-\mathbf{k} ^{\prime}|^{2}}e^{-2d\sqrt{q_{s}^{2}+|\mathbf{k}-\mathbf{k}^{\prime}|^{2}}}, \tag{14}\]
where \(V_{0}=2\pi e^{2}/\kappa\). Here we define the dimensionless disorder strength as \(\gamma_{c}=\frac{n_{\rm imp}}{A_{c}}\frac{(2\pi r_{s})^{2}}{\kappa^{2}q_{s}^{2}}e^{-2q_{s}d}\). As shown in Figs. 8(c) and 8(d), the self-energies are given for different screening lengths. In a small energy window, such as \([-0.08t,0.08t]\), the power-law formula can still be used to fit the behavior of the self-energy. At higher energies, this formula no longer works, because the corresponding Fermi wave vector is larger and the electron wavelength becomes comparable to the correlation length in this region.
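To connect the screening length to \(\gamma_{c}\), the following sketch evaluates \(\gamma_{c}\) and the momentum-space correlator of Eq. (14). The dielectric constant \(\kappa=2.5\) is an assumed placeholder (it is not specified above); units are \(\hbar v_{f}=1\) with lengths in \(a\).

```python
# A sketch of gamma_c and the Yukawa correlator, Eqs. (12)-(14).
# Assumptions: hbar*v_f = 1, lengths in a; kappa = 2.5 is a placeholder.
import numpy as np

a = 1.0
A_c = 1.5 * np.sqrt(3) * a**2                    # honeycomb unit-cell area
r_s, kappa, d, n_imp = 2.2, 2.5, 3.0 * a, 0.0025

def correlator(q, q_s):
    """<V_{k-k'} V_{k'-k}> of Eq. (14), with q = |k - k'|."""
    V0 = 2.0 * np.pi * r_s / kappa               # V0 = 2 pi e^2/kappa, e^2 -> r_s
    s = np.sqrt(q_s**2 + q**2)
    return (n_imp / A_c) * V0**2 / s**2 * np.exp(-2.0 * d * s)

for inv_qs in (3, 4, 5, 6, 7):                   # screening lengths 1/q_s in a
    q_s = 1.0 / inv_qs
    gamma_c = (n_imp / A_c) * (2 * np.pi * r_s) ** 2 / (kappa**2 * q_s**2) \
        * np.exp(-2.0 * q_s * d)
    print(f"1/q_s = {inv_qs}a: gamma_c = {gamma_c:.2f},"
          f" correlator(q=0) = {correlator(0.0, q_s):.2f}")
```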
## VI Conclusion
In summary, using the numerically exact momentum-space Lanczos method, we have systematically
Figure 9: The fitting of the inverse of the energy-dependent momentum relaxation time \(\tau_{p}(E)\) by using the power-law formula. The open symbols denote the numerical results extracted from Fig. 16 in Ref.[37] and the dashed lines are the fitting curves.
investigated the multiple impurity scattering effects on the quasiparticle and transport properties of two-dimensional Dirac fermionic systems in the presence of isolated or correlated weak scalar potentials. We uncover that the multiple impurity scattering processes arising from weak disorder can induce nontrivial non-Fermi liquid behavior, which is insensitive to the detailed type of disorder. Our theory accounts for a set of unconventional findings in transport measurements: (i) The temperature-dependent resistivity can be divided into two different density regimes, a metallic regime and an insulating regime, separated by \(n^{\star}\). (ii) For \(|n|<n^{\star}\), we examined the temperature dependence of the minimum conductivity at the Dirac point. As the temperature increases, the minimum conductivity first increases linearly, then becomes sublinear, and tends to saturate at higher temperatures. (iii) In the high-density regime \(|n|>n^{\star}\), the resistivity increases linearly with temperature in the high-temperature range when \(n\) is not too close to \(n^{\star}\). The slope of the resistivity versus temperature increases as \(n\) approaches \(n^{\star}\). Our theory consistently explains the temperature and carrier density dependence of the conductivity. Our work attests to the vital importance of multiple impurity scattering events in understanding the exotic low-energy physics of ultrahigh-mobility graphene.
###### Acknowledgements.
We thank Profs. Xin-Cheng Xie, Qing-Feng Sun, and Xiang-Rong Wang for valuable discussions. This work was supported by National Key Research & Development Program of China (No. 2016YFA0200600 and 2017YFA0204904), National Natural Science Foundation of China (No. 21473168, 11634011, 11774325, 12047544, 21603210 and 11974323), Fundamental Research Funds for the Central Universities and the Innovation Program for Quantum Science and Technology (Grant No. 2021ZD0302800). Computational resources are provided by CAS, Shanghai and USTC Supercomputer Centers.
## Appendix A Spectral Function
To demonstrate how this power-law correction significantly renormalizes the quasiparticle properties around the Dirac point more intuitively, we calculate the spectral function. The single-particle spectral function relates to the Green's function through
\[\begin{split} A(s\mathbf{k},E)&=-\frac{1}{\pi}\text{ Im}G(s\mathbf{k},E)\\ &=\frac{1}{\pi}\frac{-\text{Im}\Sigma}{(E-s\hbar v_{f}k-\text{Re} \Sigma)^{2}+(\text{Im}\Sigma)^{2}}\\ &=\frac{1}{\pi}\frac{\eta}{(a-s\hbar v_{f}k)^{2}+\eta^{2}},\end{split} \tag{10}\]
where \(s=\pm\) represents the conduction and valence bands, respectively, and we have defined \(E-\text{Re}\Sigma\equiv a\) and \(-\text{Im}\Sigma\equiv\eta\) for simplicity. In the absence of disorder, the spectral function \(A(\mathbf{k},E)\) is a \(\delta\) function, reflecting that the wave vector \(k\) is a good quantum number and all of the spectral weight sits precisely at \(E=s\hbar v_{f}k\). In the presence of disorder, Eq. (10) is plotted in Fig. 10. For \(\alpha=0.07\), \(A(\mathbf{k}=0,E)\) exhibits a sharp Lorentzian peak at \(E=0\), shown as the black line in Fig. 10(a). When \(k\) moves away from the Dirac point, \(A(\mathbf{k},E)\) maintains the Lorentzian line shape but becomes much broader due to the increased scattering (red and blue lines in Fig. 10(a)). For \(\alpha=0.12\), \(A(\mathbf{k},E)\) clearly deviates from the Lorentzian form and carries substantially more weight in its wings, as shown in Fig. 10(b). One can also extract the dispersion relation from the peak of the spectral function \(A(\mathbf{k},E)\) for a given \(\mathbf{k}\). The peaks of \(A(\mathbf{k},E)\) at \(k=0.333/a\) and \(0.666/a\) move toward \(E=0\) as the disorder strength \(\alpha\) increases, indicating that the Dirac electron group velocity \(v_{g}\) and the dispersion relation are strongly renormalized by the multiple scattering events. This is quite different from the usual picture in a conventional metal with a finite density of states (DoS), where lifetime effects dominate.
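The Lorentzian form of Eq. (10) is simple to evaluate once the self-energy is known. The sketch below does so with placeholder fit parameters and \(t=a=\hbar=1\), locating the peak positions for the same wave vectors as in Fig. 10.

```python
# A sketch of the spectral function of Eq. (10) with placeholder fit values.
# Assumptions: t = a = hbar = 1, so hbar*v_f = 1.5.
import numpy as np

Sigma0, Delta, beta, Ec = 0.003, 0.08, 0.18, 2.7
hv_f = 1.5

def im_s(E):
    return -Sigma0 - Delta * np.abs(E) ** (1 - beta)

def re_s(E):
    C = 2 * Ec ** (-beta) / (np.pi * beta) * Delta
    D = -Delta / np.tan(np.pi * beta / 2)
    return D * np.sign(E) * np.abs(E) ** (1 - beta) + C * E

def spectral(E, k, s=+1):
    """A(sk, E): a Lorentzian in a = E - Re Sigma of half-width eta."""
    a, eta = E - re_s(E), -im_s(E)
    return eta / np.pi / ((a - s * hv_f * k) ** 2 + eta**2)

E = np.linspace(-1.5, 1.5, 3001)
for k in (0.0, 0.333, 0.666):                 # wave vectors in units of 1/a
    A = spectral(E, k)
    print(f"k = {k}/a: peak at E = {E[np.argmax(A)]:+.3f}t, height = {A.max():.1f}")
```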
## Appendix B Density of states
To make our results even more convincing, we revisit the calculation of the DoS based on our spectral function. The single-particle DoS can be obtained directly from our simulated self-energy as
\[\begin{split}\rho(E)=&-\frac{A_{c}}{2\pi}\int \frac{d^{2}\mathbf{k}}{(2\pi)^{2}}[\text{Im}G^{R}(\mathbf{k}+,E)+\text{Im}G^{R}(\mathbf{k }-,E)]\\ =&\frac{A_{c}\eta}{2(\hbar v_{f})^{2}\pi^{2}}\left[ \text{ln}\frac{E_{c}^{2}}{a^{2}+\eta^{2}}+\frac{2a}{\eta}\text{tan}^{-1} \left(\frac{a}{\eta}\right)\right].\end{split} \tag{11}\]
Figure 10: Single-particle spectral function \(A(\mathbf{k}+,E)\) (conduction band) plotted as a function of energy \(E\) at several k points of \(k=0.00\) (black lines), \(0.333/a\) (red lines), \(0.666/a\) (blue lines) along the \(k_{x}\) direction (from left to right). The disorder strength is chosen to be (a) \(\alpha=0.07\) and (b) \(\alpha=0.12\), respectively.
Here we directly compare this result with the average DoS obtained by the widely used real-space Lanczos method, which was well studied in our former work [50]. As shown in the inset of Fig. 1(b), over a substantial energy range \([-0.1t,0.1t]\) the DoS calculated from our simulated self-energy agrees well with the results obtained by the real-space method. The line shape of the DoS deviates from linearity to sub-linearity as the disorder strength increases, quite similar to the behavior of the imaginary part of the self-energy \(\mathrm{Im}\Sigma(E)\). Furthermore, according to the DoS expression above and our simulated results for \(\Sigma_{0}\) in the main text, the DoS at the Dirac point \(\rho(0)=A_{c}/(2\hbar^{2}v_{f}^{2}\pi^{2})\cdot\Sigma_{0}/\alpha\sim\exp(-1/2\alpha)/\alpha\), which is also consistent with the results obtained by the functional renormalization-group technique [26; 27].
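As a numerical illustration, the sketch below evaluates this closed-form DoS from the fitted self-energy (placeholder parameters, \(t=a=\hbar=1\)); it is a sanity check of the formula, not the real-space Lanczos benchmark.

```python
# A sketch evaluating the closed-form DoS above from the fitted self-energy.
# Assumptions: t = a = hbar = 1; fit parameters are placeholders.
import numpy as np

A_c, hv_f, Ec = 1.5 * np.sqrt(3), 1.5, 2.7
Sigma0, Delta, beta = 0.003, 0.08, 0.18

def self_energy(E):
    C = 2 * Ec ** (-beta) / (np.pi * beta) * Delta
    D = -Delta / np.tan(np.pi * beta / 2)
    re = D * np.sign(E) * np.abs(E) ** (1 - beta) + C * E
    im = -Sigma0 - Delta * np.abs(E) ** (1 - beta)
    return re, im

def dos(E):
    re, im = self_energy(E)
    a, eta = E - re, -im          # same shorthand as in Eq. (10)
    return A_c * eta / (2 * (hv_f * np.pi) ** 2) * (
        np.log(Ec**2 / (a**2 + eta**2)) + 2 * a / eta * np.arctan(a / eta))

for E in (0.0, 0.05, 0.1):
    print(f"rho({E}t) = {dos(E):.5f}")
```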
## Appendix C Vertex Correction For Conductivity
As mentioned in the main text, the vertex correction only contributes in the long-range disorder case. Considering only intra-valley scattering, the current vertex \(J_{x}\) satisfies the Bethe-Salpeter equation, Eq. (8). In the vicinity of the single-valley Dirac point, we can also neglect the momentum dependence of the disorder potential correlator, and obtain
\[\langle V_{\mathbf{k}-\mathbf{k}^{{}^{\prime}}}V_{\mathbf{k}^{{}^{\prime}}-\mathbf{k}}\rangle_ {\mathrm{dis}}\approx\gamma(\hbar v_{f})^{2}. \tag{15}\]
Here, we adopt the Gaussian correlated disorder potential described in the main text. The summation over the discrete momenta \(\mathbf{k}\) in Eq. (8) is replaced by an integral over the first Brillouin zone, i.e., \(1/N\sum_{\mathbf{k}}\to A_{c}/(2\pi)^{2}\int d^{2}\mathbf{k}\), and then
\[\begin{split}&\Lambda^{ss^{{}^{\prime}}}(E)\sigma_{x}=\sigma_{x}+\Lambda^{ss^{{}^{\prime}}}(E)\gamma(\hbar v_{f})^{2}\int\frac{d^{2}\mathbf{k}^{{}^{\prime}}}{(2\pi)^{2}}\\ &\times\frac{1}{a+is\eta-\hbar v_{f}\mathbf{k}^{{}^{\prime}}\cdot\mathbf{\sigma}}\sigma_{x}\frac{1}{a+is^{{}^{\prime}}\eta-\hbar v_{f}\mathbf{k}^{{}^{\prime}}\cdot\mathbf{\sigma}}.\end{split} \tag{16}\]
With some algebraic manipulation, one directly obtains
\[\Lambda^{ss^{{}^{\prime}}}(E)=[1-\frac{\gamma}{4\pi}\mathcal{I}(E,s,s^{{}^{ \prime}})]^{-1}, \tag{17}\]
with
\[\begin{split}\mathcal{I}(E,s,s^{{}^{\prime}})=&\int _{0}^{k_{c}}dk\ k\frac{2(\hbar v_{f})^{2}(a+is\eta)}{(a+is\eta)^{2}-(\hbar v_{ f}k)^{2}}\\ &\times\frac{(a+is^{{}^{\prime}}\eta)}{(a+is^{{}^{\prime}}\eta)^ {2}-(\hbar v_{f}k)^{2}}.\end{split} \tag{18}\]
After performing the integration, we have
\[\begin{split}\mathcal{I}(E,+,+)=&\ \mathcal{I}(E,-,-)=-1,\\ \mathcal{I}(E,+,-)=&\ \mathcal{I}(E,-,+)=(\frac{a}{ \eta}+\frac{\eta}{a})\mathrm{arctan}\frac{a}{\eta}.\end{split} \tag{19}\]
Finally, we find that with the vertex correction, the Kubo formula for conductivity is given by
\[\begin{split}\sigma_{xx}(E)=&-\frac{\hbar e^{2}v_{ f}^{2}}{4\pi}\sum_{ss^{{}^{\prime}}=\pm}ss^{{}^{\prime}}\int\frac{d^{2}\mathbf{k}}{(2 \pi)^{2}}\mathrm{Tr}[\sigma_{x}G(\mathbf{k},a+is\eta)\\ &\times J_{x}(\mathbf{k},a+is\eta,a+is^{{}^{\prime}}\eta)G(\mathbf{k},a+ is^{{}^{\prime}}\eta)]\\ =&\frac{e^{2}}{2\pi h}\frac{1+\mathcal{I}(E)}{(1+ \gamma/4\pi)(1-\gamma/4\pi\mathcal{I}(E))},\end{split} \tag{20}\]
where \(\mathcal{I}(E)=(a/\eta+\eta/a)\mathrm{arctan}(a/\eta)\). At \(E=0\), we have \(\mathcal{I}(E=0)=1\), and then get
\[\sigma_{xx}(E=0)=\frac{e^{2}}{\pi h}\frac{1}{1-(\frac{\gamma}{4\pi})^{2}}. \tag{21}\]
According to Eq. (21), with the vertex correction the minimum conductivity depends on the disorder strength. However, this dependence is extremely weak. As one moves away from the Dirac point, the vertex correction grows and gradually approaches the conventional-metal result \(\Lambda=2\).
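A small sketch of Eqs. (20) and (21) follows, evaluating the vertex-corrected conductivity for placeholder values of \(a\), \(\eta\), and \(\gamma\); it simply checks that the Dirac-point limit reproduces Eq. (21).

```python
# A sketch of the vertex-corrected conductivity, Eqs. (20)-(21).
# Assumptions: a, eta, gamma are placeholder values.
import numpy as np

def sigma_vc(a, eta, gamma):
    """sigma_xx of Eq. (20) in units of e^2/h."""
    # I(E) = (a/eta + eta/a) arctan(a/eta); I -> 1 at the Dirac point (a -> 0)
    I = (a / eta + eta / a) * np.arctan(a / eta) if a != 0 else 1.0
    return (1 + I) / (2 * np.pi * (1 + gamma / (4 * np.pi))
                      * (1 - gamma / (4 * np.pi) * I))

gamma = 0.1                      # placeholder long-range disorder strength
print("sigma_xx(E=0) =", sigma_vc(0.0, 0.003, gamma), "e^2/h")
print("Eq. (21)      =", 1.0 / (np.pi * (1.0 - (gamma / (4 * np.pi)) ** 2)))
```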
## Appendix D Quantum Interference Corrections For Conductivity
In this section, we calculate the quantum correction to the classical conductivity. The low-energy electron excitations of graphene are well described by the two-valley massless Dirac model in two dimensions, given by
\[H_{s}=\hbar v_{f}(sk_{x}\sigma_{x}-k_{y}\sigma_{y}), \tag{22}\]
where \(s=\pm\) stands for \(K\) and \(K^{\prime}\) valleys, respectively. We suppose that the Fermi level \(E_{f}\) intersects the conduction band with the dispersion as
\[\epsilon_{\mathbf{k}}=\hbar v_{f}|\mathbf{k}|, \tag{23}\]
and the corresponding eigenfunctions are
\[\langle\mathbf{r}|s\mathbf{k}\rangle=\psi_{s\mathbf{k}}(\mathbf{r})=\frac{1}{\sqrt{2S}}\left[ \begin{array}{c}1\\ se^{-is\theta_{\mathbf{k}}}\end{array}\right]e^{i\mathbf{k}\cdot\mathbf{r}}. \tag{24}\]
The disorder-induced self-energy is obtained numerically through the momentum-space Lanczos methods introduced in the main text, and then the retarded (R) and advanced (A) Green's functions have the form
\[G_{K,K^{\prime}}^{R/A}(\mathbf{k},\omega)=\frac{1}{\omega-\epsilon_{\mathbf{k}}- \mathrm{Re}\Sigma^{R}(\omega)\mp i\mathrm{Im}\Sigma^{R}(\omega)}. \tag{25}\]
In order to evaluate the quantum correction to the classical conductivity, we need to calculate a summation of maximally crossed diagrams, which is denoted by
\[\sigma_{qi}=\sigma_{KK}^{KK}+\sigma_{K^{\prime}K^{\prime}}^{K^{\prime}K^{ \prime}}+\sigma_{K^{\prime}K}^{KK^{\prime}}+\sigma_{KK^{\prime}}^{K^{\prime}K}, \tag{26}\]
with
\[\sigma^{KK}_{KK} =\frac{e^{2}\hbar}{2\pi S}\sum_{\mathbf{k}}\sum_{\mathbf{q}}\Gamma^{KK}_{KK}( \theta_{\mathbf{k}},\theta_{-\mathbf{k}},\mathbf{q})G^{R}_{K}(\mathbf{k})v^{x}_{K}(\mathbf{k})G^{A}_ {K}(\mathbf{k})G^{R}_{K}(\mathbf{q}-\mathbf{k})v^{x}_{K}(\mathbf{q}-\mathbf{k})G^{A}_{K}(\mathbf{q}-\bm {k}), \tag{100}\] \[\sigma^{K\bar{K}}_{KK} =\frac{e^{2}\hbar}{2\pi S}\sum_{\mathbf{k}}\sum_{\mathbf{q}}\Gamma^{K\bar {K}}_{KK}(\mathbf{q})G^{R}_{K}(\mathbf{k})v^{x}_{K}(\mathbf{k})G^{A}_{K}(\mathbf{k})G^{R}_{ \bar{K}}(\mathbf{q}-\mathbf{k})v^{x}_{\bar{K}}(\mathbf{q}-\mathbf{k})G^{A}_{\bar{K}}(\mathbf{q}-\bm {k}). \tag{101}\]
There exist three types of Cooperon (particle-particle) channels, and the full vertex function \(\Gamma\) is related to \(\gamma\) by the Bethe-Salpeter equation:
\[\Gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}};\mathbf{q}) =\gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}})+ \frac{1}{S}\sum_{\mathbf{k}}\gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{k}})G^{R }_{K}(\mathbf{k})G^{A}_{K}(\mathbf{q}-\mathbf{k})\Gamma^{KK}_{KK}(\theta_{\mathbf{k}},\theta_{ \mathbf{p}^{\prime}};\mathbf{q}), \tag{102}\] \[\Gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}};\mathbf{q}) =\gamma^{KK}_{K\bar{K}}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}})+ \frac{1}{S}\sum_{\mathbf{k}}\Big{[}\gamma^{KK}_{K\bar{K}}(\theta_{\mathbf{p}},\theta_{ \mathbf{k}})G^{R}_{K}(\mathbf{k})G^{A}_{\bar{K}}(\mathbf{q}-\mathbf{k})\Gamma^{KK}_{KK}(\theta_ {\mathbf{k}},\theta_{\mathbf{p}^{\prime}};\mathbf{q})\] \[\quad+\gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{k}})G^{R}_{K} (\mathbf{k})G^{A}_{K}(\mathbf{q}-\mathbf{k})\Gamma^{K\bar{K}}_{KK}(\theta_{\mathbf{k}},\theta_ {\mathbf{p}^{\prime}};\mathbf{q})\Big{]},\] (103) \[\Gamma^{K\bar{K}}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}}; \mathbf{q}) =\gamma^{K\bar{K}}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}})+ \frac{1}{S}\sum_{\mathbf{k}}\Big{[}\gamma^{KK}_{K\bar{K}}(\theta_{\mathbf{p}},\theta_{ \mathbf{k}})G^{R}_{K}(\mathbf{k})G^{A}_{\bar{K}}(\mathbf{q}-\mathbf{k})\Gamma^{K\bar{K}}_{KK}( \theta_{\mathbf{k}},\theta_{\mathbf{p}^{\prime}};\mathbf{q})\] \[\quad+\gamma^{K\bar{K}}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{k}})G^{R }_{K}(\mathbf{k})G^{A}_{K}(\mathbf{q}-\mathbf{k})\Gamma^{\bar{K}\bar{K}}_{KK}(\theta_{\mathbf{ k}},\theta_{\mathbf{p}^{\prime}};\mathbf{q})\Big{]}. \tag{104}\]
where \(\theta_{\mathbf{p}}\) and \(\theta_{\mathbf{p}^{\prime}}\) label the incoming and outgoing momenta, respectively, and we have neglected the \(\mathbf{q}\) dependence of the bare scattering vertex. The bare scattering vertex that causes only small momentum transfer within a single valley can be expressed as
\[\gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}}) \tag{105}\] \[=\frac{(\hbar v_{f})^{2}}{E\tau_{0}/\hbar}\langle K,\mathbf{p}|K,\bm {p}^{\prime}\rangle\langle K,-\mathbf{p}|K,-\mathbf{p}^{\prime}\rangle\] \[=\frac{(\hbar v_{f})^{2}}{2E\tau_{0}/\hbar}[\frac{1}{2}e^{-2i( \theta-\theta^{\prime})}+e^{-i(\theta-\theta^{\prime})}+\frac{1}{2}],\]
and
\[\gamma^{KK}_{KK}=\gamma^{\bar{K}\bar{K}}_{KK} \tag{106}\] \[=\frac{(\hbar v_{f})^{2}}{E\tau_{0}/\hbar}\langle K,\mathbf{p}|K,\bm {p}^{\prime}\rangle\langle K^{\prime},-\mathbf{p}|K^{\prime},-\mathbf{p}^{\prime}\rangle\] \[=\frac{(\hbar v_{f})^{2}}{2E\tau_{0}/\hbar}[1+\frac{1}{2}e^{i( \theta-\theta^{\prime})}+\frac{1}{2}e^{-i(\theta-\theta^{\prime})}].\]
The bare scattering vertex which causes the scattering of electrons between two valleys can be expressed as
\[\gamma^{KK^{\prime}}_{K^{\prime}K}=\gamma^{K^{\prime}K}_{KK^{ \prime}} \tag{107}\] \[=\frac{(\hbar v_{f})^{2}}{E\tau_{i}/\hbar}\langle K,\mathbf{p}|K^{ \prime},\mathbf{p}^{\prime}\rangle\langle K^{\prime},-\mathbf{p}|K-\mathbf{p}^{\prime}\rangle\] \[=\frac{(\hbar v_{f})^{2}}{2E\tau_{i}/\hbar}[1-\frac{1}{2}e^{i( \theta+\theta^{\prime})}-\frac{1}{2}e^{-i(\theta+\theta^{\prime})}].\]
We have introduced intra- and inter-valley disorder strengths \(\frac{(\hbar v_{f})^{2}}{E\tau_{0}/\hbar}\) and \(\frac{(\hbar v_{f})^{2}}{E\tau_{i}/\hbar}\), respectively. The total disorder strength is given in terms of \(\tau_{0}\) and \(\tau_{i}\) by
\[\frac{(\hbar v_{f})^{2}}{E\tau_{t}/\hbar}=\frac{(\hbar v_{f})^{2}}{E\tau_{0}/\hbar}+\frac{(\hbar v_{f})^{2}}{E\tau_{i}/\hbar}. \tag{108}\]
As shown in Eqs. (102)-(104), the radial part of the momentum integration enters only through the kernel \(\frac{1}{S}\sum_{\mathbf{k}}G^{R}_{K}(\mathbf{k})G^{A}_{\bar{K}}(\mathbf{q}-\mathbf{k})\), which can be evaluated as
\[\int_{0}^{\infty}\frac{dkk}{2\pi}G^{R}(\mathbf{k}+\frac{\mathbf{q}}{2})G^ {A}(\frac{\mathbf{q}}{2}-\mathbf{k}) \tag{109}\] \[\approx \frac{\Pi(E)}{(\hbar v_{f})^{2}}\left\{1-i(v_{g}\tau q)\cos\theta- (v_{g}\tau q)^{2}\cos^{2}\theta\right\},\]
with \(\Pi(E)=\frac{\chi(E)}{2\pi}\left(\frac{\pi}{2}+\arctan\chi(E)\right)\).
For later convenience, we introduce the renormalized relaxation time \(\tau^{*}\) through \(\Pi(E)\equiv E\tau^{*}/\hbar\). The angular integration can be performed by expanding the full vertex function \(\varGamma\) and the bare vertex \(\gamma\) in angular harmonics:
\[\varGamma(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}};\mathbf{q})=\frac{( \hbar v_{f})^{2}}{E\tau_{t}/\hbar}\sum_{n,m}\varGamma_{nm}(\mathbf{q})e^{i(n\theta_{ \mathbf{p}}-m\theta_{\mathbf{p}^{\prime}})}, \tag{101}\] \[\gamma(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}})=\frac{(\hbar v_{ f})^{2}}{E\tau_{t}/\hbar}\sum_{n,m}\gamma_{nm}e^{i(n\theta_{\mathbf{p}}-m \theta_{\mathbf{p}^{\prime}})}. \tag{102}\]
If we further define
\[\begin{split}\Phi_{nm}&=\frac{1}{2\pi}\int_{0}^{2 \pi}d\theta\ e^{i(m-n)\theta}\\ &\quad\times\left\{1-i(v_{g}\tau q)\cos\theta-(v_{g}\tau q)^{2} \cos^{2}\theta\right\},\end{split} \tag{103}\]
the expansion coefficients in Eqs. (100)-(101) can be expressed in the matrix form
\[\begin{split}\mathbf{\Gamma}^{KK}_{KK}&=\mathbf{\gamma}^{ KK}_{KK}+\mathbf{\gamma}^{KK}_{KK}\mathbf{\Phi}\mathbf{\Gamma}^{KK}_{KK},\\ \mathbf{\Gamma}^{KK}_{KK}&=\mathbf{\gamma}^{KK}_{KK}+\mathbf{ \gamma}^{KK}_{KK}\mathbf{\Phi}\mathbf{\Gamma}^{KK}_{KK}+\mathbf{\gamma}^{KK}_{KK}\mathbf{ \Phi}\mathbf{\Gamma}^{KK}_{KK},\\ \mathbf{\Gamma}^{KK}_{KK}&=\mathbf{\gamma}^{K\bar{K}}_{KK}+ \mathbf{\gamma}^{KK}_{KK}\mathbf{\Phi}\mathbf{\Gamma}^{KK}_{KK}+\mathbf{\gamma}^{KK}_{KK}\bm {\Phi}\mathbf{\Gamma}^{KK}_{KK},\end{split} \tag{104}\]
where the bare scattering vertices are
\[\begin{split}\mathbf{\gamma}^{KK}_{KK}&=\frac{\tau}{ \tau_{0}}\left[\begin{array}{ccc}\frac{1}{2}&0&0\\ 0&1&0\\ 0&0&\frac{1}{2}\end{array}\right],\\ \mathbf{\gamma}^{K\bar{K}}_{KK}&=\frac{\tau}{\tau_{i}}\left[ \begin{array}{ccc}0&0&-\frac{1}{2}\\ 0&1&0\\ -\frac{1}{2}&0&0\end{array}\right],\\ \mathbf{\gamma}^{KK}_{K\bar{K}}&=\frac{\tau}{\tau_{0}}\left[ \begin{array}{ccc}\frac{1}{2}&0&0\\ 0&1&0\\ 0&0&\frac{1}{2}\end{array}\right].\end{split} \tag{105}\]
By truncating at order \(q^{2}\) in the small-\(q\) limit, \(\mathbf{\Phi}\) takes the form
\[\begin{split}\mathbf{\Phi}&=\frac{\tau^{*}}{\tau_{t}}\left[ \begin{array}{ccc}1-\frac{1}{2}(v_{g}\tau)^{2}q^{2}&-\frac{1}{2}iv_{g}\tau q_ {+}&-\frac{1}{4}\ell_{e}^{2}q_{+}^{2}\\ -\frac{1}{2}iv_{g}\tau q_{-}&1-\frac{1}{2}(v_{g}\tau)^{2}q^{2}&-\frac{1}{2}iv_{ g}\tau q_{+}\\ -\frac{1}{4}\ell_{e}^{2}q_{-}^{2}&-\frac{1}{2}iv_{g}\tau q_{-}&1-\frac{1}{2}(v_{g} \tau)^{2}q^{2}\end{array}\right],\end{split} \tag{106}\]
with \(q^{2}=q_{x}^{2}+q_{y}^{2}\) and \(q_{\pm}=q_{x}\pm iq_{y}\). The two Cooperon channels \(\mathbf{\Gamma}^{KK}_{KK}\) and \(\mathbf{\Gamma}^{K\bar{K}}_{KK}\) in Eq. (104) are coupled together. By introducing the new variables,
\[\begin{split}\mathbf{x}&=\mathbf{\gamma}^{KK}_{KK}+\mathbf{\gamma}^{K\bar {K}}_{KK},\\ \mathbf{y}&=\mathbf{\gamma}^{KK}_{KK}-\mathbf{\gamma}^{K\bar{K}}_{KK},\\ \mathbf{z}&=\mathbf{\gamma}^{KK}_{KK},\\ \mathbf{X}&=\mathbf{\Gamma}^{KK}_{KK}+\mathbf{\Gamma}^{K\bar{K}}_{KK},\\ \mathbf{Y}&=\mathbf{\Gamma}^{KK}_{KK}-\mathbf{\Gamma}^{KK}_{KK},\\ \mathbf{Z}&=\mathbf{\Gamma}^{KK}_{KK},\end{split} \tag{107}\]
the coupled Bethe-Salpeter equations (Eq. (104)) are reduced to uncoupled ones and the expansion coefficients can be easily solved through
\[\begin{split}\mathbf{X}&=\left[\mathbf{1}_{3\times 3}-\mathbf{x} \mathbf{\Phi}\right]^{-1}\mathbf{x},\\ \mathbf{Y}&=\left[\mathbf{1}_{3\times 3}-\mathbf{y}\mathbf{\Phi} \right]^{-1}\mathbf{y},\\ \mathbf{Z}&=\left[\mathbf{1}_{3\times 3}-\mathbf{z}\mathbf{\Phi} \right]^{-1}\mathbf{z}.\end{split} \tag{108}\]
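As a concrete check, these matrix inversions can be carried out numerically. The following minimal Python/numpy sketch builds the bare vertices and the small-\(q\) kernel defined above for \(\mathbf{q}\) along the \(x\) axis; the scattering-time ratios are illustrative placeholders rather than values derived in this work:

```python
import numpy as np

# Illustrative (assumed) scattering-time ratios, not fitted values.
tau_over_tau0 = 0.5       # tau / tau_0
tau_over_taui = 0.5       # tau / tau_i
tau_star_over_taut = 0.9  # tau* / tau_t, from Pi(E) = E tau* / hbar

# Bare vertices in the three-harmonic basis (cf. the gamma matrices above).
gamma_KK = tau_over_tau0 * np.diag([0.5, 1.0, 0.5])
gamma_KKbar = tau_over_taui * np.array([[0.0, 0.0, -0.5],
                                        [0.0, 1.0,  0.0],
                                        [-0.5, 0.0, 0.0]])
x = gamma_KK + gamma_KKbar
y = gamma_KK - gamma_KKbar
z = gamma_KK

def phi_matrix(vq):
    """Kernel Phi truncated at O(q^2), for q along x (so q_+ = q_- = q);
    vq stands for the dimensionless combination v_g * tau * q."""
    d = 1.0 - 0.5 * vq**2
    off = -0.5j * vq
    corner = -0.25 * vq**2  # -(1/4) l_e^2 q^2 with l_e = v_g * tau
    return tau_star_over_taut * np.array([[d, off, corner],
                                          [off, d, off],
                                          [corner, off, d]])

Phi = phi_matrix(vq=0.1)
X = np.linalg.solve(np.eye(3) - x @ Phi, x)  # X = (1 - x Phi)^{-1} x
Y = np.linalg.solve(np.eye(3) - y @ Phi, y)
Z = np.linalg.solve(np.eye(3) - z @ Phi, z)
print(X[1, 1], Y[1, 1], Z[1, 1])  # central (most singular) modes
```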
By retaining the most singular terms, we can solve the above three matrix equations:
\[\begin{split}\mathbf{X}&\approx\left[\begin{array}{ccc}0&0&0\\ 0&\frac{1}{g_{x}+D_{ter}\tau q^{2}}&0\\ 0&0&0\end{array}\right],\\ \mathbf{Y}&\approx\left[\begin{array}{ccc}0&0&0\\ 0&\frac{1}{g_{y}+D_{ter}\tau q^{2}}&0\\ 0&0&0\end{array}\right],\\ \mathbf{Z}&\approx\left[\begin{array}{ccc}0&0&0\\ 0&\frac{1}{2}\frac{1}{g_{z}+D_{tra}\tau q^{2}}&0\\ 0&0&0\end{array}\right],\end{split} \tag{109}\]
with the Cooperon gaps
\[\begin{split} g_{x}&=1-\frac{\tau^{*}}{\tau_{t}},\\ g_{y}&=(\frac{\tau_{t}}{\tau_{0}}-\frac{\tau_{t}}{\tau_{i}})^{-1}-\frac{ \tau^{*}}{\tau_{t}},\\ g_{z}&=\frac{\tau_{0}-\tau^{*}}{2\tau_{t}},\end{split} \tag{110}\]
and the diffusive constants for the inter- and intra-Cooperon channels
\[\begin{split} D_{ter}&=\left[\left(2\frac{\tau}{\tau^{*}}-\frac{ \tau_{t}}{\tau_{0}}+\frac{\tau_{t}}{\tau_{i}}\right)^{-1}+\left(2\frac{\tau_{ t}}{\tau^{*}}-1\right)^{-1}\right]\frac{v_{g}^{2}\tau}{2},\\ D_{tra}&=\frac{\tau^{*}}{\tau_{t}}(1+\frac{\tau_{t}}{\tau_{i}})^{-1} \frac{v_{g}^{2}\tau}{2}.\end{split} \tag{111}\]
Thus, according to Eq. (107), these Cooperons are evaluated as
\[\begin{split}\Gamma^{K\bar{K}}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}};\mathbf{q})&=\frac{1}{2}\frac{(\hbar v_{f})^{2}}{E\tau_{t}/\hbar}\left(\frac{1}{g_{x}+D_{ter}\tau q^{2}}-\frac{1}{g_{y}+D_{ter}\tau q^{2}}\right),\\ \Gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}};\mathbf{q})&=\frac{1}{2}\frac{(\hbar v_{f})^{2}}{E\tau_{t}/\hbar}\left(\frac{1}{g_{x}+D_{ter}\tau q^{2}}+\frac{1}{g_{y}+D_{ter}\tau q^{2}}\right),\\ \Gamma^{KK}_{KK}(\theta_{\mathbf{p}},\theta_{\mathbf{p}^{\prime}};\mathbf{q})&=\frac{1}{2}\frac{(\hbar v_{f})^{2}}{E\tau_{t}/\hbar}\frac{e^{i(\theta_{\mathbf{p}}-\theta_{\mathbf{p}^{\prime}})}}{g_{z}+D_{tra}\tau q^{2}}.\end{split} \tag{112}\]
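Before specializing to limiting regimes, it is instructive to evaluate the Cooperon gaps of Eq. (110) in the two disorder limits considered below. A minimal sketch, with illustrative (assumed) scattering-time ratios:

```python
import numpy as np

def cooperon_gaps(r0, ri, pstar):
    """Cooperon gaps of Eq. (110): r0 = tau_t/tau_0, ri = tau_t/tau_i,
    pstar = tau*/tau_t (the renormalized ratio from Pi(E) = E tau*/hbar)."""
    g_x = 1.0 - pstar
    g_y = np.inf if np.isclose(r0, ri) else 1.0 / (r0 - ri) - pstar
    g_z = 0.5 * (1.0 / r0 - pstar)
    return g_x, g_y, g_z

# Short-range limit (2 tau_t = tau_0 = tau_i) far from the Dirac point,
# where tau* ~ tau_t; numbers are illustrative assumptions.
print(cooperon_gaps(r0=0.5, ri=0.5, pstar=0.99))   # g_z ~ 1/2: intra gapped
# Long-range limit (tau_t ~ tau_0 << tau_i):
print(cooperon_gaps(r0=1.0, ri=0.01, pstar=0.99))  # g_x ~ g_y: inter cancels
```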
Generally speaking, the total quantum correction is determined by all four of these Cooperon channels (the intra-valley Cooperon channels are doubly degenerate). However, we are interested here in how multiple-scattering effects qualitatively renormalize the quantum interference correction to the conductivity. In the following, we therefore discuss only two limiting regimes, corresponding to two different types of scattering.
### Short range disorder
For short range impurities, \(2\tau_{t}=\tau_{0}=\tau_{i}\), and the intra-valley Cooperon channel \(\mathbf{\Gamma}^{KK}_{KK}\) is always fully gapped, so its contribution is suppressed. In this situation, we only need to consider the contribution from the inter-valley Cooperon channel \(\mathbf{\Gamma}^{K\bar{K}}_{KK}\). From Eq. (107), we first evaluate the bare Hikami box. Since here we consider zero external momentum, the bare Hikami box for the Cooperon channel \(\mathbf{\Gamma}^{KK}_{KK}\) vanishes, and the bare Hikami box contribution for the inter-valley Cooperon channel is
\[\begin{split}\sigma^{qi(0)}_{ter}=&\frac{e^{2}\hbar} {2\pi}\int\frac{d^{2}\mathbf{q}}{(2\pi)^{2}}\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}G^{R} _{\mathbf{K}}(\mathbf{k})v^{x}_{\mathbf{K}}(\mathbf{k})G^{A}_{\mathbf{K}}(\mathbf{k})G^{R}_{\mathbf{K}}(- \mathbf{k})v^{x}_{\mathbf{K}}(-\mathbf{k})G^{A}_{\mathbf{K}}(-\mathbf{k})\Gamma^{K\bar{K}}_{KK}( \theta_{\mathbf{k}},\theta_{-\mathbf{k}};\mathbf{q})\\ =&-\frac{e^{2}}{2\pi\hbar}\frac{\frac{1}{2\pi}+\Pi( E)}{E\tau_{t}/\hbar}\ln\frac{D_{ter}\tau/\ell_{e}^{2}+g_{x}}{D_{ter}\tau/\ell_{ \phi}^{2}+g_{x}}.\end{split} \tag{108}\]
The full correction to the conductivity should also take into account the dressed Hikami box contribution, which is reported to be of the same order of magnitude as the bare Hikami box but of opposite sign in two-dimensional systems with large spin-orbit coupling [44].
For the inter-valley Cooperon channels, we need to consider the following dressed Hikami box contribution,
\[\begin{split}\sigma^{qi(1)}_{ter}&=2\frac{e^{2} \hbar}{2\pi}\int\frac{d^{2}\mathbf{q}}{(2\pi)^{2}}\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2 }}\int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\Gamma^{KK}_{KK}(\theta_{\mathbf{k}},\theta_{ \mathbf{p}};\mathbf{q})\langle U^{\mathbf{K}\mathbf{K}}_{-\mathbf{k},\mathbf{p}}U^{\mathbf{K}\mathbf{K}}_{k,- \mathbf{p}}\rangle_{\rm imp}\\ &\quad\times G^{R}_{\mathbf{K}}(-\mathbf{k})G^{R}_{\mathbf{K}}(\mathbf{p})v^{x}_ {\mathbf{K}}(\mathbf{p})G^{A}_{\mathbf{K}}(\mathbf{p})G^{R}_{\mathbf{K}}(-\mathbf{p})G^{R}_{\mathbf{K}}( \mathbf{k})v^{x}_{\mathbf{K}}(\mathbf{k})G^{A}_{\mathbf{K}}(\mathbf{k})\\ &=-\frac{e^{2}}{2\pi\hbar}\frac{(\frac{1}{2\pi}+\Pi(E))^{2}}{4(E \tau_{t}/\hbar)(E\tau_{i}/\hbar)}\ln\frac{D_{ter}\tau/\ell_{e}^{2}+g_{x}}{D_{ ter}\tau/\ell_{\phi}^{2}+g_{x}},\end{split} \tag{109}\]
After collecting all these contributions, we finally obtain the quantum interference correction for the inter-valley Cooperon channel as
\[\begin{split}\sigma^{qi}_{ter}&=\sigma^{qi(0)}_{ ter}+\sigma^{qi(1)}_{ter}+\sigma^{qi(2)}_{ter}\\ &=-\frac{e^{2}}{2\pi\hbar}\frac{\frac{1}{2\pi}+\Pi(E)}{E\tau_{t}/ \hbar}\ln\frac{D_{ter}\tau/\ell_{e}^{2}+g_{x}}{D_{ter}\tau/\ell_{\phi}^{2}+g_{ x}}.\end{split} \tag{110}\]
If the chemical potential is located far from the Dirac node, we have \(\Pi(E)\sim E\tau/\hbar\gg 1\) and the Cooperon gap \(g_{x}=1-\frac{\tau^{*}}{\tau_{t}}\) vanishes since \(\tau^{*}\sim\tau_{t}\). The quantum interference conductivity correction is then \(\sigma^{qi}_{ter}=-\frac{e^{2}}{\pi\hbar}\ln\frac{\ell_{\phi}}{\ell_{e}}\), recovering the result of the conventional weak localization regime. When the chemical potential is near the Dirac point (strong scattering regime), the quantum interference correction is strongly suppressed due to the finite Cooperon gap (\(g_{x}\approx 1\)).
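The gap-induced suppression can be made quantitative with a one-line evaluation of the logarithm controlling the correction above. The scales in this sketch are assumed for illustration only:

```python
import numpy as np

def wl_log(g, l_e, l_phi, D_tau):
    """ln[(D tau / l_e^2 + g) / (D tau / l_phi^2 + g)], the factor that
    sets the size of the quantum interference correction."""
    return np.log((D_tau / l_e**2 + g) / (D_tau / l_phi**2 + g))

# Assumed illustrative scales: l_phi = 100 l_e and D * tau = l_e^2 / 2.
l_e, l_phi, D_tau = 1.0, 100.0, 0.5
print(wl_log(0.0, l_e, l_phi, D_tau))  # ~ 2 ln(l_phi/l_e) ~ 9.2, gapless case
print(wl_log(1.0, l_e, l_phi, D_tau))  # ~ 0.4, strongly suppressed (g ~ 1)
```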
### Long range disorder
For the long range potential disorder (\(\tau_{t}\approx\tau_{0}\ll\tau_{i}\)), the inter-valley Cooperon channel \(\Gamma^{K\bar{K}}_{KK}\) vanishes identically since \(g_{x}\approx g_{y}\), and the remaining inter-valley-type channel can also be neglected since it is proportional to the inter-valley scattering strength. Thus, only the intra-valley channel \(\Gamma^{KK}_{KK}\) contributes to the quantum interference correction. The bare Hikami box for the intra-valley Cooperon channel can be evaluated as
\[\begin{split}\sigma^{qi(0)}_{tra}=& 2\times\frac{e^{2}\hbar}{2\pi}\int \frac{d^{2}\mathbf{q}}{(2\pi)^{2}}\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}}G^{R}_{\mathbf{K}}( \mathbf{k})v^{x}_{\mathbf{K}}(\mathbf{k})G^{A}_{\mathbf{K}}(\mathbf{k})G^{R}_{\mathbf{K}}(-\mathbf{k})v^{x} _{\mathbf{K}}(-\mathbf{k})G^{A}_{\mathbf{K}}(-\mathbf{k})\varGamma^{KK}_{KK}(\theta_{\mathbf{k}}, \theta_{-\mathbf{k}};\mathbf{q})\\ \approx&\frac{e^{2}}{2\pi\hbar}\frac{\frac{1}{2\pi} +\Pi(\mu)}{E\tau_{t}/\hbar}\ln\frac{D_{tra}\tau/\ell_{e}^{2}+g_{z}}{D_{tra}\tau /\ell_{\phi}^{2}+g_{z}},\end{split} \tag{103}\]
where the prefactor \(2\) is due to the degeneracy of the intra-valley Cooperon channel. The phase factor \(e^{i(\theta_{\mathbf{k}}-\theta_{-\mathbf{k}})}=-1\) gives an additional minus sign compared with the inter-valley Cooperon channel due to the \(\pi\) Berry phase. The dressed Hikami box contributions for the intra-valley Cooperon channel are
\[\begin{split}\sigma^{qi(1)}_{tra}&=4\frac{e^{2}}{2 \pi\hbar}\int\frac{d^{2}\mathbf{q}}{(2\pi)^{2}}\int\frac{d^{2}\mathbf{k}}{(2\pi)^{2}} \int\frac{d^{2}\mathbf{p}}{(2\pi)^{2}}\varGamma^{KK}_{KK}(\theta_{\mathbf{k}},\theta_{ \mathbf{p}};\mathbf{q})\langle U^{\mathbf{KK}}_{-\mathbf{k},\mathbf{p}}U^{\mathbf{KK}}_{k,-\mathbf{p}} \rangle_{imp}\\ &\times G^{R}_{\mathbf{K}}(-\mathbf{k})G^{R}_{\mathbf{K}}(\mathbf{p})v^{x}_{\bm {K}}(\mathbf{p})G^{A}_{\mathbf{K}}(\mathbf{p})G^{R}_{\mathbf{K}}(-\mathbf{p})G^{R}_{\mathbf{K}}(\mathbf{k} )v^{x}_{\mathbf{K}}(\mathbf{k})G^{A}_{\mathbf{K}}(\mathbf{k})\\ &=-\frac{e^{2}}{2\pi\hbar}\frac{(\frac{1}{2\pi}+\Pi(E))^{2}}{2(E \tau_{t}/\hbar)^{2}}\ln\frac{D_{tra}\tau/\ell_{e}^{2}+g_{z}}{D_{tra}\tau/\ell_ {\phi}^{2}+g_{z}}.\end{split} \tag{104}\]
Then, we can obtain the quantum interference correction for the intra-valley Cooperon channel as
\[\begin{split}\sigma^{qi}_{tra}&=\sigma^{qi(0)}_{tra }+\sigma^{qi(1)}_{tra}\\ &=\frac{e^{2}}{2\pi\hbar}\frac{\frac{1}{2\pi}+\Pi(E)}{E\tau_{t}/ \hbar}\left[1-\frac{\frac{1}{2\pi}+\Pi(E)}{2E\tau_{t}/\hbar}\right]\\ &\times\ln\frac{D_{tra}\tau/\ell_{e}^{2}+g_{z}}{D_{tra}\tau/\ell_ {\phi}^{2}+g_{z}}.\end{split} \tag{105}\]
In the weak scattering regime (\(E\tau/\hbar\gg 1\)), the Cooperon gap \(g_{z}=\frac{1}{2}\left(1-\frac{\tau^{*}}{\tau_{t}}\right)\) vanishes since \(\tau^{*}\sim\tau_{t}\). In this situation, after including the vertex correction, the quantum interference conductivity correction is \(\sigma^{qi}_{tra}=\frac{2e^{2}}{\pi\hbar}\ln\frac{\ell_{\phi}}{\ell_{e}}\), recovering the result of weak anti-localization for the symplectic symmetry class. When the chemical potential is near the Dirac point (strong scattering regime), the Cooperon gap is finite (\(g_{z}\approx\frac{1}{2}\)), and the quantum interference correction is strongly suppressed.
|
2307.09459 | SMEFT analysis with LHeC, FCC-eh, and EIC DIS pseudodata | In this study, we examine the possibilities opened by upcoming high-energy
deep-inelastic scattering (DIS) experiments to investigate new physics within
the framework of the Standard Model Effective Field Theory (SMEFT).
Specifically, we investigate the beyond-the-Standard-Model (BSM) potential of
the Large Hadron-electron Collider (LHeC) and the Future Circular lepton-hadron
Collider (FCC-eh), and we improve previous simulations of the Electron-Ion
Collider (EIC) by incorporating $Z$-boson vertex corrections. Our fits,
performed using DIS pseudodata, reveal that the LHeC and the FCC-eh can play a
crucial role in resolving degeneracies observed in the parameter space of
Wilson coefficients in global fits using the Higgs, diboson, electroweak, and
top data. This emphasizes the significance of precision DIS measurements in
advancing our understanding of new physics. | Chiara Bissolotti, Radja Boughezal, Kaan Simsek | 2023-07-18T17:40:57Z | http://arxiv.org/abs/2307.09459v1 | # SMEFT analysis with LHeC, FCC-eh, and EIC DIS pseudodata
###### Abstract
In this study, we examine the possibilities opened by upcoming high-energy deep-inelastic scattering (DIS) experiments to investigate new physics within the framework of the Standard Model Effective Field Theory (SMEFT). Specifically, we investigate the beyond-the-Standard-Model (BSM) potential of the Large Hadron-electron Collider (LHeC) and the Future Circular lepton-hadron Collider (FCC-eh), and we improve previous simulations of the Electron-Ion Collider (EIC) by incorporating \(Z\)-boson vertex corrections. Our fits, performed using DIS pseudodata, reveal that the LHeC and the FCC-eh can play a crucial role in resolving degeneracies observed in the parameter space of Wilson coefficients in global fits using the Higgs, diboson, electroweak, and top data. This emphasizes the significance of precision DIS measurements in advancing our understanding of new physics.
+
Footnote †: preprint: _Presented at DIS2023: XXX International Workshop on Deep-Inelastic Scattering and Related Subjects, Michigan State University, USA, 27-31 March 2023_
## I Introduction
The Standard Model (SM) stands as an elegant and comprehensive theory, regarded as the most complete framework to date. However, this theory falls short of providing a complete understanding of the fundamental workings of the universe, and numerous compelling indications suggest the existence of physics beyond the Standard Model.
Given the fact that no particle outside the SM has been found yet, employing an effective field theory (EFT) seems particularly advantageous for the exploration of BSM physics. Specifically, the Standard Model Effective Field Theory (SMEFT) emerges as a highly suitable and versatile approach, offering a model-independent framework for conducting such investigations.
In the SMEFT, operators of higher mass dimension are constructed utilizing the current particle spectrum of the SM. The SMEFT framework assumes that any new physics lies at energy scales beyond the masses of the SM particles and beyond the energies accessible at colliders. A review of the SMEFT can be found in Ref. [1].
This contribution is a summary of the results discussed in Ref. [2]. Our goal in this work is to study the BSM potential of future colliders, such as the LHeC, the FCC-eh, and the EIC, with a detailed accounting of anticipated uncertainties. Following previous studies [3], we consider the neutral-current (NC) deep-inelastic scattering (DIS) cross section as our observable at the LHeC and at the FCC-eh, while, at the EIC, we focus on parity-violating (PV) asymmetries, as done in Ref. [4].
Prior studies have demonstrated that DIS measurements at the EIC and in low-energy fixed-target experiments have the potential to address blind spots in the semi-leptonic four-fermion Wilson coefficient space that persist after Drell-Yan measurements at the LHC [5; 6]. Additionally, EIC measurements of single-spin asymmetries offer a competitive tool for probing the Wilson coefficients associated with dipole operators [7].
We consider here the full set of Wilson coefficients that can alter the DIS process at leading order in the SMEFT loop expansion. These include both semi-leptonic four-fermion Wilson coefficients and \(Z\)-boson vertex corrections, for a total of 17 Wilson coefficients.
In this study, we present evidence that future DIS measurements can play a vital role in resolving degeneracies observed in the parameter space of Wilson coefficients in global fits using the Higgs, diboson, electroweak, and top data.
## II Formalism
### SMEFT formalism
The SMEFT serves as a gauge symmetry-preserving, model-independent extension of the SM Lagrangian. Within this framework, one constructs higher-dimensional operators, denoted as \(O_{k}^{(n)}\), utilizing the existing SM particle spectrum. The associated Wilson coefficients, represented as \(C_{k}^{(n)}\), quantify the strength of these operators. These effective couplings are defined at an ultraviolet (UV) cut-off scale, \(\Lambda\), which is assumed to
exceed the masses of all SM particles, as well as the energies accessible by collider experiments. The Lagrangian takes the form
\[\mathscr{L}_{\rm SMEFT}=\mathscr{L}_{\rm SM}+\sum_{n>4}\frac{1}{\Lambda^{n-4}}\sum _{k}C_{k}^{(n)}O_{k}^{(n)}\,. \tag{1}\]
In this study, we focus exclusively on dimension-6 operators, disregarding any dimension-5 operators that violate lepton-number conservation, as they are not relevant to our analysis. Our treatment of observables linearizes their dependence on the Wilson coefficients. There are a total of 17 operators that impact NC DIS matrix elements when considering leading-order coupling constants [8], and these operators are summarized in Table 1. In our study, we suppress flavor indices and assume flavor universality. We note that SMEFT one-loop corrections are anticipated to be of lesser significance compared to next-to-leading order (NLO) QCD corrections. In our analysis, we incorporate the NLO QCD corrections and observe that they have minimal impact on the obtained outcomes. Consequently, we make the assumption that the higher-order terms within the SMEFT loop expansion can be safely disregarded.
### DIS formalism
In NC DIS, a lepton scatters off a nucleon, namely \(\ell+H\to\ell^{\prime}+X\), where \(\ell\) is an electron or a positron, \(H\) can be a proton or a deuteron, and \(\ell^{\prime}\) and \(X\) are the final-state lepton and hadronic systems, respectively. The process is mediated by a photon or a \(Z\)-boson exchange. In this study, we deal with reduced cross sections, defined as
\[\frac{\mathrm{d}^{2}\sigma_{r,\rm NC}^{\ell}}{\mathrm{d}x\,\mathrm{d}Q^{2}} =\left\{\frac{2\pi\alpha^{2}}{xQ^{4}}[1+(1-y)^{2}]\right\}^{-1} \frac{\mathrm{d}^{2}\sigma_{\rm NC}^{\ell}}{\mathrm{d}x\,\mathrm{d}Q^{2}}\,, \tag{2}\] \[\frac{\mathrm{d}^{2}\Delta\sigma_{r,\rm NC}^{\ell}}{\mathrm{d}x\, \mathrm{d}Q^{2}} =\left\{\frac{4\pi\alpha^{2}}{xQ^{4}}[1+(1-y)^{2}]\right\}^{-1} \frac{\mathrm{d}^{2}\Delta\sigma_{\rm NC}^{\ell}}{\mathrm{d}x\,\mathrm{d}Q^{2 }}\,, \tag{3}\]
where \(Q\) is the usual DIS momentum transfer, \(x\) is the Bjorken variable, and \(y\) is the inelasticity parameter. The expressions for the NC DIS cross sections for collisions of a lepton \(\ell\) with an unpolarized or polarized hadron, \(\frac{\mathrm{d}^{2}\sigma_{\rm NC}^{\ell}}{\mathrm{d}x\,\mathrm{d}Q^{2}}\) and \(\frac{\mathrm{d}^{2}\Delta\sigma_{\rm NC}^{\ell}}{\mathrm{d}x\,\mathrm{d}Q^{2}}\), are given, for example, in Ref. [2]. From this point forward, whenever we refer to cross sections, we will refer to the reduced ones and we will indicate them with \((\Delta)\sigma_{\rm NC}\).
We include NLO QCD corrections to both the SM and the SMEFT contributions. The NLO QCD corrections to the SM process are well known [9; 10; 11; 12; 13]. These corrections modify only the quark lines and are therefore identical for the SM and SMEFT cross sections.
The observable of interest at the LHeC and FCC-eh is the NC DIS cross section, \(\sigma_{\rm NC}\), of unpolarized protons with electrons or positrons of various polarizations. For the EIC, we consider PV asymmetries in cross sections of polarized electrons with either polarized or unpolarized protons/deuterons. We define the unpolarized PV asymmetry, \(A_{\rm PV}\), and the polarized one, \(\Delta A_{\rm PV}\), by
\[A_{\rm PV}=\frac{\sigma_{\rm NC}^{+}-\sigma_{\rm NC}^{-}}{\sigma_{\rm NC}^{+} +\sigma_{\rm NC}^{-}}\hskip 56.905512pt\Delta A_{\rm PV}=\frac{\Delta \sigma_{\rm NC}^{0}}{\sigma_{\rm NC}^{0}}\,. \tag{4}\]
\begin{table}
\begin{tabular}{|c|l|c|l|}
\hline
\multicolumn{2}{|c|}{\(ffV\) vertex corrections} & \multicolumn{2}{c|}{semi-leptonic four-fermion} \\
\hline
\(C_{\varphi WB}\) & \(O_{\varphi WB}=(\varphi^{\dagger}\tau^{I}\varphi)W^{I}_{\mu\nu}B^{\mu\nu}\) & \(C_{\ell q}^{(1)}\) & \(O_{\ell q}^{(1)}=(\bar{\ell}\gamma_{\mu}\ell)(\bar{q}\gamma^{\mu}q)\) \\
\(C_{\varphi D}\) & \(O_{\varphi D}=(\varphi^{\dagger}D^{\mu}\varphi)^{*}(\varphi^{\dagger}D_{\mu}\varphi)\) & \(C_{\ell q}^{(3)}\) & \(O_{\ell q}^{(3)}=(\bar{\ell}\gamma_{\mu}\tau^{I}\ell)(\bar{q}\gamma^{\mu}\tau^{I}q)\) \\
\(C_{\varphi\ell}^{(1)}\) & \(O_{\varphi\ell}^{(1)}=(\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{\ell}\gamma^{\mu}\ell)\) & \(C_{eu}\) & \(O_{eu}=(\bar{e}\gamma_{\mu}e)(\bar{u}\gamma^{\mu}u)\) \\
\(C_{\varphi\ell}^{(3)}\) & \(O_{\varphi\ell}^{(3)}=(\varphi^{\dagger}i\overleftrightarrow{D}^{I}_{\mu}\varphi)(\bar{\ell}\tau^{I}\gamma^{\mu}\ell)\) & \(C_{ed}\) & \(O_{ed}=(\bar{e}\gamma_{\mu}e)(\bar{d}\gamma^{\mu}d)\) \\
\(C_{\varphi e}\) & \(O_{\varphi e}=(\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{e}\gamma^{\mu}e)\) & \(C_{\ell u}\) & \(O_{\ell u}=(\bar{\ell}\gamma_{\mu}\ell)(\bar{u}\gamma^{\mu}u)\) \\
\(C_{\varphi q}^{(1)}\) & \(O_{\varphi q}^{(1)}=(\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{q}\gamma^{\mu}q)\) & \(C_{\ell d}\) & \(O_{\ell d}=(\bar{\ell}\gamma_{\mu}\ell)(\bar{d}\gamma^{\mu}d)\) \\
\(C_{\varphi q}^{(3)}\) & \(O_{\varphi q}^{(3)}=(\varphi^{\dagger}i\overleftrightarrow{D}^{I}_{\mu}\varphi)(\bar{q}\tau^{I}\gamma^{\mu}q)\) & \(C_{qe}\) & \(O_{qe}=(\bar{q}\gamma_{\mu}q)(\bar{e}\gamma^{\mu}e)\) \\
\(C_{\varphi u}\) & \(O_{\varphi u}=(\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{u}\gamma^{\mu}u)\) & & \\
\(C_{\varphi d}\) & \(O_{\varphi d}=(\varphi^{\dagger}i\overleftrightarrow{D}_{\mu}\varphi)(\bar{d}\gamma^{\mu}d)\) & & \\
\(C_{\ell\ell}\) & \(O_{\ell\ell}=(\bar{\ell}\gamma_{\mu}\ell)(\bar{\ell}\gamma^{\mu}\ell)\) & & \\
\hline
\end{tabular}
\end{table}
Table 1: Dimension-6 operators in the Warsaw basis [8] affecting NC DIS matrix elements at leading order in the coupling constants. Operators in the left column shift the \(ffV\) vertices, while those on the right induce semi-leptonic four-fermion contact interactions. Both the operators and their associated Wilson coefficients are shown. Here, \(\varphi\) represents the Higgs doublet of the SU(2) gauge group; \(\ell\) and \(q\) refer to the left-handed lepton and quark doublets, while \(e\), \(u\), and \(d\) denote the right-handed electron, up-quark, and down-quark singlets, respectively. The notation \(\tau^{I}\) represents the Pauli matrices; the double-arrow covariant derivative is defined as in [2].
In Eq.(4), \(\sigma^{\pm}_{\rm NC}\) is the unpolarized NC DIS \(e^{-}H\) (\(H=p,D\)) cross section evaluated with \(\lambda_{\ell}=\pm P_{\ell}\), \(\sigma^{0}_{\rm NC}\) is the same as \(\sigma^{\pm}_{\rm NC}\) but with \(\lambda_{\ell}=0\), and \(\Delta\sigma^{0}_{\rm NC}\) is the same as \(\sigma^{0}_{\rm NC}\) but with a polarized hadron. \(P_{\ell}\) is the assumed value for the lepton beam polarization at the EIC.
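For bookkeeping, Eq. (4) translates directly into code; the hypothetical helper below assumes its inputs are the reduced cross sections evaluated per \((x,Q^{2})\) bin:

```python
def pv_asymmetries(sigma_plus, sigma_minus, dsigma_0, sigma_0):
    """PV asymmetries of Eq. (4).  sigma_plus / sigma_minus: unpolarized
    NC DIS cross sections at lepton polarization +/- P_l; dsigma_0 and
    sigma_0: polarized- and unpolarized-hadron cross sections at zero
    lepton polarization."""
    A_pv = (sigma_plus - sigma_minus) / (sigma_plus + sigma_minus)
    dA_pv = dsigma_0 / sigma_0
    return A_pv, dA_pv
```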
In this study, we linearize the SMEFT expressions. Thus, the SMEFT observables have the generic form
\[\mathcal{O}=\mathcal{O}^{\rm SM}+\sum_{k}C_{k}\ \delta\mathcal{O}_{k}+ \mathcal{O}(C_{k}^{2})\,, \tag{5}\]
where \(k\) runs over the active Wilson coefficients, \(\mathcal{O}\) is the observable, and \(\delta\mathcal{O}_{k}\) is the SMEFT correction to the observable proportional to the Wilson coefficient \(C_{k}\).
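To illustrate how bounds on the \(C_{k}\) follow from pseudodata in this linearized setting, the sketch below implements a Gaussian chi-square fit, from which both non-marginalized and marginalized 95% CL intervals follow via the Fisher matrix. All shapes and numbers are placeholders; the actual error matrices and statistical procedure are described in Ref. [2]:

```python
import numpy as np

# Pseudodata generated at the SM point make chi^2 quadratic in C_k:
# chi^2(C) = (sum_k C_k dO_k)^T cov^{-1} (sum_l C_l dO_l).
rng = np.random.default_rng(0)
n_bins, n_coeffs = 50, 7
dO = rng.normal(size=(n_bins, n_coeffs))      # SMEFT shifts per bin (placeholder)
cov = np.diag(rng.uniform(0.5, 1.5, n_bins))  # error matrix (placeholder)

fisher = dO.T @ np.linalg.solve(cov, dO)      # F_kl = dO_k . cov^{-1} . dO_l

bounds_single = 1.96 / np.sqrt(np.diag(fisher))               # one C_k at a time
bounds_marg = 1.96 * np.sqrt(np.diag(np.linalg.inv(fisher)))  # all C_k active
uv_scales = 1.0 / np.sqrt(bounds_marg)  # Lambda/sqrt(C) in TeV for Lambda = 1 TeV
print(bounds_single, bounds_marg, uv_scales)
```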
## III Pseudodata Sets
For our analysis, we utilize the most recent publicly available LHeC pseudodata sets [14; 15], as well as the EIC dataset that has been identified as the most sensitive to SMEFT Wilson coefficients in [4]. Regarding the FCC-eh, we generate pseudodata sets using the procedure outlined in [4], taking into account the FCC-eh run parameters as specified in [16]. From this point onward, we refer to these pseudodata sets as _data sets_. For a full list of the data sets included in this analysis, we point the reader to Table 2 of Ref. [2]. In order to minimize significant uncertainty from non-perturbative QCD and nuclear dynamics that occur at low \(Q\) and high \(x\), where we expect SMEFT effects to be diminished, we limit ourselves to the bins that fulfill \(x\leq 0.5\), \(Q\geq 10\) GeV, and \(0.1\leq y\leq 0.9\).
Regarding the uncertainties, we adopt the error estimates from prior assessments [3; 16] for the LHeC and the FCC-eh. We introduce the systematics in a completely correlated manner and consider the luminosity error to be 1% relative to the cross section.
As for the EIC asymmetries, we take into account both statistical and systematic uncertainties. The systematic errors due to particle background and other imperfections in measurements are treated as uncorrelated and amount to 1% relative to the asymmetry. We assume uncertainties in the lepton (hadron) beam polarization to be fully correlated and 1% (2%) relative to the asymmetry. More discussion of the anticipated experimental uncertainties at the EIC is given in Ref. [4].
Additionally, for all the data sets, we take into account PDF errors fully correlated between bins. In the Appendices of Ref. [2], we discuss how these systematic uncertainties are incorporated into the error matrix for our analysis; we also give details of our pseudodata generation and describe our statistical procedure for deriving the bounds on Wilson coefficients. For a full breakdown of the uncertainties of all the data sets included in this study, the reader may refer to Ref. [2].
## IV Results
### Bounds on semi-leptonic four-fermion operators
At first, we activate solely the seven semi-leptonic four-fermion operators. Previous investigations have indicated that the Drell-Yan process at the LHC, which is naturally suited to probe these operators given its energy coverage and exceptional measurement precision, encounters challenges in disentangling specific linear combinations of Wilson coefficients within this subspace [5; 17]. Future DIS experiments offer the potential to resolve these degeneracies and provide valuable insights in this regard [4; 5]. In Table 2, we show the marginalized and non-marginalized1 95% confidence-level (CL) bounds on the semi-leptonic four-fermion Wilson
\begin{table}
\begin{tabular}{|c|c|c c|c c|c c|c c|c c|c c|c c|}
\hline
 & & \multicolumn{2}{c|}{\(C_{eu}\)} & \multicolumn{2}{c|}{\(C_{ed}\)} & \multicolumn{2}{c|}{\(C_{\ell q}^{(1)}\)} & \multicolumn{2}{c|}{\(C_{\ell q}^{(3)}\)} & \multicolumn{2}{c|}{\(C_{\ell u}\)} & \multicolumn{2}{c|}{\(C_{\ell d}\)} & \multicolumn{2}{c|}{\(C_{qe}\)} \\
 & & 95\% & UV sc. & 95\% & UV sc. & 95\% & UV sc. & 95\% & UV sc. & 95\% & UV sc. & 95\% & UV sc. & 95\% & UV sc. \\
\hline
EIC & mar. & 2.1 & 0.69 & 7.2 & 0.37 & 2.8 & 0.59 & 4.2 & 0.49 & 9.1 & 0.33 & 9.8 & 0.32 & 8.9 & 0.33 \\
 & nonmar. & 0.12 & 2.9 & 0.34 & 1.7 & 0.17 & 2.4 & 0.10 & 3.2 & 0.28 & 1.9 & 0.57 & 1.3 & 0.39 & 1.6 \\
\hline
LHeC & mar. & 0.0053 & 14. & 0.026 & 6.2 & 0.020 & 7.1 & 0.011 & 9.5 & 0.032 & 5.6 & 0.16 & 2.5 & 0.018 & 7.4 \\
 & nonmar. & 0.0022 & 21. & 0.0097 & 10. & 0.0031 & 18. & 0.0017 & 24. & 0.0084 & 11. & 0.036 & 5.3 & 0.011 & 9.7 \\
\hline
FCC-eh & mar. & 0.0031 & 18. & 0.0070 & 12. & 0.035 & 5.4 & 0.014 & 8.4 & 0.068 & 3.8 & 0.26 & 2. & 0.0092 & 10. \\
 & nonmar. & 0.00056 & 42. & 0.0012 & 28. & 0.0014 & 27. & 0.00038 & 51. & 0.0028 & 19. & 0.0061 & 13. & 0.0016 & 25. \\
\hline
\end{tabular}
\end{table}
Table 2: Marginalized (mar.) and non-marginalized (nonmar.) 95% CL bounds on semi-leptonic four-fermion Wilson coefficients at \(\Lambda=1\) TeV for the combined EIC, LHeC, and FCC-eh datasets, as well as the corresponding effective UV scales (UV sc.), in units of TeV.
coefficients and the corresponding effective UV scales, expressed in TeV, obtained from the full seven-parameter (7\(d\)) joint fits for the EIC, LHeC, and FCC-eh data sets, respectively.
A comprehensive table, including the results for individual data sets from each collider, can be found in Ref. [2]. Examining Table 2, we observe that the effective scales probed in the fully marginalized joint fits vary depending on the Wilson coefficient. At the EIC, the probed scales range from 300 GeV to 700 GeV. For the LHeC, the range extends from 2.5 TeV to 14 TeV, while at the FCC-eh, the probed scales span from 2.0 TeV to 18 TeV.
Furthermore, we note that the joint LHeC data set imposes significantly stronger bounds on semi-leptonic four-fermion Wilson coefficients than the EIC. This difference arises from the LHeC's higher momentum transfers, at which SMEFT-induced deviations are more pronounced. For the majority of operators, the joint FCC-eh fit imposes stronger constraints than the joint LHeC fit.
The effective UV scales presented in Table 2 are defined as \(\Lambda/\sqrt{C_{k}}\) for each Wilson coefficient \(C_{k}\). We emphasize that the convergence of the EFT expansion is governed by the ratio \(C_{k}Q^{2}/\Lambda^{2}\), where \(Q\) denotes the DIS momentum transfer. The obtained constraints on the effective scales indicate that this ratio remains significantly below unity for all considered runs. This supports our decision to truncate the expansion at dimension-6 and to linearize the dimension-6 SMEFT effects.
In Fig. 1, we show representative confidence ellipses projected from the 7\(d\) fit of the four-fermion Wilson coefficients. We can see the emergence of flat directions for individual sets, namely LHeC3 and FCCeh3. These flat directions appear to be resolved in the joint fits. We note that the EIC confidence ellipse remains weaker than those of the LHeC and the FCC-eh in the joint fit.
### Bounds on \(ffV\) vertex corrections
We proceed by activating all 17 Wilson coefficients listed in Table 1, which encompass both four-fermion interactions and operators affecting the \(ffV\) vertices. Generally, corrections to the \(ffV\) vertices are expected to be tightly constrained by precision \(Z\)-pole observables. Notably, fits considering only a single activated Wilson coefficient yield remarkably stringent bounds, reaching up to 10 TeV in certain cases [18]. However, the limited number of available measurements gives rise to multiple degeneracies within this parameter space. This phenomenon is highlighted in Ref. [19], where the bounds on \(ffV\) vertex corrections are significantly relaxed by approximately one order of magnitude when transitioning from single-coefficient fits to results where the remaining Wilson coefficients are marginalized over. For instance, the reach of the effective UV scale associated with the coefficient \(C_{\phi WB}\) diminishes from approximately 15 TeV to 1 TeV when all coefficients are active (see Fig. 3 of Ref. [19]). We consider here the potential of future DIS experiments to probe this sector of the SMEFT.
A table presenting the marginalized 95% CL bounds on Wilson coefficients obtained from the full 17\(d\) fit can be found in Ref. [2]. In that reference, we provide analogous results for the joint EIC fit, as well as the joint LHeC and FCC-eh constraints.
Figure 1: Marginalized 95% CL ellipses for the parameter subspaces spanned by \(C_{\ell q}^{(1)}\) and \(C_{\ell q}\) (left) and \(C_{\ell q}^{(1)}\) and \(C_{\varphi q}\) (right) with \(\Lambda=1\) TeV. In the plots, we show the strongest individual EIC data set, the strongest LHeC and FCC-eh sets for these Wilson coefficients, as well as the joint EIC, LHeC, and FCC-eh fits. The insets show the zoomed-in plots of the joint LHeC and FCC-eh fits.
We present here a summary of the main findings of the \(17d\) fit. The bounds obtained from the LHeC surpass those obtained from the joint fit of electroweak precision data and the LHC results for the majority of Wilson coefficients, indicating that the inclusion of LHeC data enhances the constraining power of the global fit. Furthermore, the FCC-eh bounds are generally stronger than both the LHeC and EIC bounds. The bounds from the EIC are weaker than those obtained from the LHeC and those reported in [19]. It is important to note that a direct comparison between the fits conducted in [19] and our study may not be entirely straightforward due to differences in the number of fitted parameters.
To further investigate the implications of including future precision DIS data in the existing global fit, we examine representative \(2d\) projections of our results. In Fig. 2, we present non-marginalized 95% CL ellipses in the parameter subspace defined by \((C_{\varphi WB},C_{\varphi\ell}^{(1)})\) and \((C_{\varphi WB},C_{\varphi u})\). We analyze the joint fits from each DIS experiment, as well as the electroweak-pole-observable (EWPO) fits adapted from Ref. [19]. From the \(2d\) projections in Fig. 2, we can see that the potential LHeC probes are stronger than those of the joint electroweak and LHC fit and that the FCC-eh bounds are even stronger. In particular, the joint electroweak and LHC fit exhibits strong correlations between parameters that result in elongated ellipses in several of the \(2d\) projections that we consider, as illustrated by the pair \((C_{\varphi WB},C_{\varphi\ell}^{(1)})\). The combinations of future LHeC and FCC-eh runs do not show these correlations and can remove these (approximate) degeneracies in the joint electroweak and LHC fit. We note that the EIC probes are far weaker than those obtained from the other fits, and do not contribute significantly to probing the \(ffV\) parameter space.
## V Conclusions
This study investigates the potential of the LHeC, FCC-eh, and EIC in exploring BSM physics within the framework of the SMEFT. Building upon previous research, we focus on key observables: the NC DIS cross section at the LHeC and the FCC-eh, and PV asymmetries at the EIC. By considering SMEFT semi-leptonic four-fermion operators and \(ffV\) vertex corrections, we work first with a 7-dimensional and then with a 17-dimensional Wilson coefficient parameter space.
Our \(7d\) fits reveal that the EIC can probe UV scales up to 700 GeV. For the LHeC, the reach extends to 14 TeV, while at the FCC-eh, the probed scales span from 2 TeV to 18 TeV. We note that no single-run scenario at any of these experiments provides an ideal probe of the complete SMEFT parameter space. Our study demonstrates that future precision DIS measurements can effectively alleviate degeneracies observed in precision electroweak fits based on \(Z\)-pole observables. The constraints obtained from the LHeC and FCC-eh experiments are generally stronger compared to the combined fits using \(Z\)-pole and LHC data. Overall, our results underscore the considerable potential of future DIS studies in exploring BSM physics.
_Acknowledgments._ We thank D. Britzger for suggesting to include an analysis of the FCC-eh capabilities. C. B. and R. B. are supported by the DOE contract DE-AC02-06CH11357. K. S. is supported by the DOE grant DE- FG02-91ER40684. This research was supported in part through the computational resources and staff contributions provided for the Quest high performance computing facility at Northwestern University which is
Figure 2: Marginalized 95% CL ellipses in the two-parameter fits of \(C_{\varphi WB}\) and \(C_{\varphi\ell}^{(1)}\) (left) and \(C_{\varphi WB}\) and \(C_{\varphi u}\) (right) at \(\Lambda=1\) TeV. Shown are the joint EIC, LHeC, and FCC-eh fits, as well as the EWPO fit adapted from Ref. [19].
jointly supported by the Office of the Provost, the Office for Research, and Northwestern University Information Technology.
|
2307.11216 | Spectra of axions emitted from main sequence stars | We compute the detailed energy spectra of axions with two-photon coupling
produced in stellar cores over a wide range of stellar masses. We focus on main
sequence stars and base our calculations on the stellar interior profiles from
MESA, for which we provide simple fits in an appendix. The obtained stellar
axion spectra, combined with recent models of star formation history and
stellar initial mass function, enable us to estimate the properties of the
diffuse axion background sourced by all the stars in the universe. The fluxes
of this stellar axion background and its decay photons are subdominant to but
can in principle be disentangled from those expected from the Sun and the early
universe based on their different spectral and spatial profiles. | Ngan H. Nguyen, Erwin H. Tanin, Marc Kamionkowski | 2023-07-20T20:03:46Z | http://arxiv.org/abs/2307.11216v2 | # Spectra of axions emitted from main sequence stars
###### Abstract
We compute the detailed energy spectra of axions with two-photon coupling produced in stellar cores over a wide range of stellar masses. We focus on main sequence stars and base our calculations on the stellar interior profiles from MESA, for which we provide simple fits in an appendix. The obtained stellar axion spectra, combined with recent models of star formation history and stellar initial mass function, enable us to estimate the properties of the diffuse axion background sourced by all the stars in the universe. The fluxes of this stellar axion background and its decay photons are subdominant to but can in principle be disentangled from those expected from the Sun and the early universe based on their different spectral and spatial profiles.
###### Contents
* 1 Introduction
* 2 Axions from main sequence stars
* 2.1 Axion production in stars with different masses
* 2.2 Stellar Axion Background
* 3 X-ray from stellar axion decay
* 3.1 Limits from the extragalactic X-ray background
* 3.2 X-rays from gravitationally bound objects
* 4 Conclusion
* A Simple fits to stellar properties from MESA
## 1 Introduction
New light particles with feeble interactions arise ubiquitously in a wide range of beyond the Standard Model theories that address various issues of the Standard Model [1; 2]. To maximize the discovery potential of these particles, it is useful to have a quantitative understanding of their production from all possible sources across all energy regimes. Stars are among the most intense continuous sources of new light particles in the present epoch [3; 4]. While the impact of new particle emission on stellar evolution has been extensively explored, a characterization of the energy spectra of the particles emitted from stars other than the Sun has been lacking [5; 6; 7; 8; 9; 10; 11; 12; 13; 14]. Knowing the properties of such star-sourced light particles can be useful as it may reveal new opportunities for probing beyond the Standard Model.
The aim of this work is to provide benchmark stellar-emission spectra of light particles from main sequence stars over a wide range of stellar masses. We focus, as a start, on what we refer to as the _axion_, a pseudoscalar \(a\) whose Lagrangian includes [15; 16; 17; 18; 19]
\[\mathcal{L}\supset\frac{1}{2}\partial_{\mu}a\partial^{\mu}a-\frac{1}{2}m_{a} ^{2}a^{2}-\frac{g_{a\gamma\gamma}}{4}aF_{\mu\nu}\tilde{F}^{\mu\nu}, \tag{1}\]
where \(F_{\mu\nu}\) is the electromagnetic field strength tensor and the axion-photon coupling \(g_{a\gamma\gamma}\) is treated as independent of the axion mass \(m_{a}\).1 CAST [5; 23] and globular cluster observations [24; 25] have essentially ruled out \(g_{a\gamma\gamma}\gtrsim 6.6\times 10^{-11}\,\mathrm{GeV}^{-1}\) for \(m_{a}\lesssim 10\,\mathrm{keV}\); nevertheless, the remaining parameter space still allows axions to be produced at some level in stellar cores and leave interesting astrophysical signatures [3; 4].
Footnote 1: All the results we obtain below apply as well to a CP-even scalar \(s\) with purely electromagnetic coupling of the form \(g_{a\gamma\gamma}sF_{\mu\nu}F^{\mu\nu}\), since this coupling leads to processes analogous to the axion processes that we consider, with the same amplitudes [20; 21; 22]. This is, of course, assuming the scalar bare mass is greater than the radiative corrections it may receive.
Over the past decades, our quantitative understanding of stellar structure and evolution has improved dramatically, largely thanks to important calibrations of stellar evolution models against asteroseismic data enabled by the recent advent of space-based photometry [26; 27; 28; 29; 30; 31]. The majority of asteroseismic studies have been on main sequence stars. Post-main sequence, nuclear-burning stars are statistically rare in numbers due to their short lifetimes, and are generally less understood [32; 33; 34; 35; 36; 37; 38]. One major reason for the latter is that the underlying physical mechanism behind the strong mass loss that is known to occur for these stars is not yet understood.2 For these reasons, we restrict our analysis to main sequence stars. While mass loss also plays a substantial role for high-mass main sequence stars, these stars are better constrained due to the better availability of their asteroseismic data [40]. The key input parameters of stellar evolution models include the initial mass, metallicity, rotation, and magnetic field. To simplify our analysis, we consider only the dominant parameter of these, namely the initial stellar mass, neglect the rotation and magnetic field, and set the metallicity to that of the cosmic average.
Footnote 2: Most stellar evolution codes treat mass loss very crudely, controlled by free parameters that are yet to be empirically calibrated. Moreover, there are significant discrepancies among the mass loss rates predicted by different codes [39].
Stars, collectively, can also be regarded as a cosmic source of axions.3 The resulting _Stellar Axion Background_ (StAB) spectrum is a triple integral over the interior of a star, the stellar population at a particular epoch, and the star formation history. These are characterized by a stellar-evolution model, a stellar initial mass function, and a star formation
rate. The StAB will contribute to the diffuse extragalactic axion background together with other potential cosmic axion sources, which include supernovae [46; 47; 48; 49; 50; 51; 52], dark matter [53; 54; 55; 56; 57; 58], dark energy fluctuations [59; 60], primordial black holes [61; 62], and various processes in the early universe [63; 64; 65; 66; 67; 68; 69]. One can in principle distinguish the StAB from other axion backgrounds based on their spectra and spatial distributions. If the axion is sufficiently heavy, a considerable fraction of the StAB can spontaneously decay into X-ray within the age of the universe. These X-ray photons will contribute to the cosmic X-ray background (CXB) and potentially leave an imprint in the form of a local bump in the CXB spectrum.
The paper is organized as follows. We calculate the axion spectra from main sequence stars over a wide range of stellar masses, individually and collectively, in Section 2, evaluate the detectability of the X-rays from the decay of stellar axions in Section 3, and conclude in Section 4. Fits to the interior profiles of the ensemble of main sequence stars used in our analysis are collected in Appendix A.
## 2 Axions from main sequence stars
### Axion production in stars with different masses
Axions are produced in stellar cores primarily through the Primakoff process (thermal photons converting into axions in the static electric fields sourced by charged particles in the star) and photon coalescence \(\gamma\gamma\to a\). The axion production rate per unit volume via the Primakoff effect4[66; 74; 75; 76] and photon coalescence5[7; 77; 78; 10; 79] as functions of the axion energy \(E_{a}\) are well known and can be written as
Footnote 4: Our energy-integrated axion emission rate from the Primakoff process is in agreement with the total axion luminosity of [5] as well as the axion luminosity per unit stellar interior mass of [70] which, as pointed out in [71], is about an order of magnitude larger than the expression reported in [72; 73].
Footnote 5: Until recently [77; 78; 79; 10], axion production in dense astrophysical objects via photon coalescence has mostly been neglected, as this process is negligible by far compared to axion production via the Primakoff effect when the axion is light. However, for heavier axions with masses comparable to or higher than the core temperatures of stars, photon coalescence process can be more efficient than the Primakoff effect.
\[\begin{split}\frac{d\dot{n}_{a}^{\rm Prim.}}{dE_{a}}&=\frac{g_{a\gamma\gamma}^{2}\kappa^{2}TE_{a}\sqrt{E_{a}^{2}-\omega_{\rm p}^{2}}}{32\pi^{3}\left(e^{E_{a}/T}-1\right)}\left[\left(1-\frac{1}{2\gamma_{a}^{2}}+\frac{\kappa^{2}}{4\gamma_{a}^{2}m_{a}^{2}}\right)\ln\left(\frac{2\gamma_{a}^{2}(1+v_{a})-1+\kappa^{2}/m_{a}^{2}}{2\gamma_{a}^{2}(1-v_{a})-1+\kappa^{2}/m_{a}^{2}}\right)\right.\\ &\qquad\left.-\frac{m_{a}^{2}}{4\gamma_{a}^{2}\kappa^{2}}\ln\left(\frac{2\gamma_{a}^{2}(1+v_{a})-1+m_{a}^{2}/\kappa^{2}}{2\gamma_{a}^{2}(1-v_{a})-1+m_{a}^{2}/\kappa^{2}}\right)-v_{a}\right],\\ \frac{d\dot{n}_{a}^{\rm coal.}}{dE_{a}}&=\Theta\left(m_{a}-2\omega_{\rm p}\right)\frac{g_{a\gamma\gamma}^{2}Tm_{a}^{2}(m_{a}^{2}-4\omega_{\rm p}^{2})}{64\pi^{3}(e^{E_{a}/T}-1)}\ln\left[\frac{\sinh\left[\gamma_{a}(m_{a}+v_{a}\sqrt{m_{a}^{2}-4\omega_{\rm p}^{2}})/4T\right]}{\sinh\left[\gamma_{a}(m_{a}-v_{a}\sqrt{m_{a}^{2}-4\omega_{\rm p}^{2}})/4T\right]}\right],\end{split} \tag{2}\]
where \(v_{a}=\sqrt{1-1/\gamma_{a}^{2}}\) and \(\gamma_{a}=E_{a}/m_{a}\) are respectively the velocity and the corresponding Lorentz factor of the axion; \(\kappa\) and \(\omega_{\rm p}\) are respectively the inverse screening length and the plasma mass in the stellar interior, given by
\[\kappa^{2}=4\pi\alpha\frac{\sum_{i=e,\ \rm ions}Z_{i}^{2}n_{i}}{T},\qquad\omega_{ \rm p}^{2}=4\pi\alpha\frac{n_{e}}{m_{e}}, \tag{3}\]
where \(T\), \(Z_{i}\), and \(n_{i}\) are the temperature, charge of species \(i\) (in units of electron charge), and number density of species \(i\). The spectral axion emission rate from a whole star is then found by integrating over the volume of the star
\[\frac{d\dot{N}_{a}^{\star}}{dE_{a}}=\int_{\rm star}dV\,\left(\frac{d\dot{n}_{a}^{ \rm Prim.}}{dE_{a}}+\frac{d\dot{n}_{a}^{\rm coal.}}{dE_{a}}\right), \tag{4}\]
which requires knowing the internal profiles of the star.
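To make this integration concrete, the sketch below implements the (simpler) photon-coalescence emissivity of Eq. (2) together with the shell integral of Eq. (4); the Primakoff term can be added analogously. The radial profile arrays are assumed inputs (e.g. from MESA or the fits in Appendix A), natural units \(\hbar=c=k_{B}=1\) are used, and no care is taken to avoid numerical overflow at \(E_{a}\gg T\):

```python
import numpy as np

def coalescence_rate(E_a, T, omega_p, m_a, g_agg):
    """Local photon-coalescence emissivity d n_dot_a / dE_a of Eq. (2)."""
    if m_a <= 2.0 * omega_p or E_a < m_a:
        return 0.0
    gamma = E_a / m_a
    v = np.sqrt(1.0 - 1.0 / gamma**2)
    s = np.sqrt(m_a**2 - 4.0 * omega_p**2)
    log_term = np.log(np.sinh(gamma * (m_a + v * s) / (4.0 * T))
                      / np.sinh(gamma * (m_a - v * s) / (4.0 * T)))
    return (g_agg**2 * T * m_a**2 * (m_a**2 - 4.0 * omega_p**2)
            / (64.0 * np.pi**3 * (np.exp(E_a / T) - 1.0))) * log_term

def stellar_rate(E_a, r, T, omega_p, m_a, g_agg):
    """Eq. (4): integrate the local rate over spherical shells; r, T and
    omega_p are radial profile arrays for a given star."""
    local = np.array([coalescence_rate(E_a, Ti, wi, m_a, g_agg)
                      for Ti, wi in zip(T, omega_p)])
    return np.trapz(4.0 * np.pi * r**2 * local, r)
```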
We obtain the stellar interior profiles from the state-of-the-art stellar evolution code Modules for Experiments in Stellar Astrophysics (MESA) [80, 81, 82, 83, 84, 85].6 MESA is a one-dimensional, i.e. spherically symmetric, stellar evolution code which numerically solves the coupled equations for the structure, nuclear reaction network, and energy transfers (convection, radiative transfer, mass loss) of individual stars. We generate the profiles of 34 representative main-sequence stars with masses ranging from \(0.1-100M_{\odot}\) using MESA. The inputs of the simulation are chosen so as to produce the most typical main-sequence stars.7 We simulate the evolution of these stars starting from their slowly-contracting, pre-main-sequence stage. At some point during the evolution, hydrogen burning ignites and halts the contraction, marking the start of the main sequence phase. We let the stars evolve through the entire main sequence phase and define the end of the phase at the age \(t_{\rm life}\) when the central hydrogen fraction reaches \(X=10^{-4}\).
Footnote 6: We use MESA version \(r2.11.1\) in this paper.
Footnote 7: We adopt the following assumptions in running the MESA code. The initial metallicity of the stars are uniformly set to the cosmic average \(\langle Z\rangle=0.0175\) taken from [86]. The helium abundance is set to MESA’s default \(Y=0.24+2Z\). The radiative opacities are taken from the standard Type 1 OPAL opacity tables [87] based on the solar chemical compositions from [88], with the setting such that it automatically switches to Type 2 OPAL [89] when appropriate.
Axion production in a star depends mainly on the temperature \(T(r)\), screening length \(\kappa(r)\), and plasma mass \(\omega_{\rm p}(r)\) profiles in the core region of the star. Our simulations show that these quantities evolve slowly by \(O(10\%)\) over the bulk of the main sequence stage and only start to vary appreciably toward the end of the stage. To reduce the computational cost of evaluating the stellar axion production, we neglect this time dependence and extract the stellar profiles from a representative point in the stellar evolution. Whenever possible, the profiles of these stars are taken from a snapshot at the so-called intermediate age main sequence phase, namely the point when the hydrogen abundance hits \(X=0.3\). For low-mass stars \(M<M_{\odot}\) which do not reach this phase within the age of the universe due to their slow evolution, the profiles are instead extracted at half the age of the universe, \(t_{\rm U}/2=6.85\) Gyr.
We compute the axion production using the full stellar profiles from MESA, but also provide simple fits to these stellar profiles in Appendix A, which can be used for reproducing our results or for other purposes. In Fig. 1, we show the results of the integration over stellar layers (4) in terms of the energy distribution of the axion luminosity, \(dL_{a}^{\star}/d\ln E_{a}=E_{a}^{2}d\dot{N}_{a}^{\star}/dE_{a}\), for several representative stellar masses. As one would expect, almost all the axions from a star are emitted from the core region of the star. The axion emission from our solar-mass star is close to that from the Sun [5], though with small differences due to the slightly different chemical composition and stellar age assumed. Our results show that axion production from photon coalescence tends to dominate over that from the Primakoff process for massive stars and sufficiently high axion masses.
To better understand how the axion emission changes with stellar mass, we next derive rough scaling laws of the axion luminosity \(L_{a}^{\star}(M)\) with the stellar mass \(M\) using our MESA
fits (see Appendix A). In the remainder of this subsection, we focus on stars with masses \(M_{\odot}\leq M\leq 100M_{\odot}\) for which our MESA fits work well, consider axions with masses \(m_{a}\gtrsim\) keV for which the plasma masses in stellar cores8 are negligible (\(m_{a}^{2}\gg\omega_{\rm p}^{2}\)), and ignore log factors unless they have exponentially large arguments, which may occur in the expression for the axion production rate from photon coalescence (2) due to the \(\sinh\) functions. As shown in Appendix A, the temperature profiles in the cores of our MESA-generated stars are well fitted by the exponential profile \(T(r)=T_{c}e^{-r/r_{T}}\). Using the characteristic length scale of the temperature profile \(r_{T}\) as a proxy for the stellar core radius, the axion luminosity from a star can be crudely estimated as
Footnote 8: The core plasma mass varies over the stellar ensemble in the range \(\omega_{\rm p,c}\sim 0.3-0.04\,\)keV.
\[L_{a}^{*}\sim r_{T}^{3}\left(E_{a}\frac{d\dot{n}_{a}^{*}}{dE_{a}}\right)_{\rm peak }\Delta E_{a}. \tag{5}\]
The peak energy at which axions are sourced occurs at \(E_{\rm peak}\sim\max\left[3T_{c},m_{a}\right]\) for the Primakoff process and \(E_{\rm peak}\sim m_{a}\) for photon coalescence, while the width of the peak for both processes is set by the Boltzmann factor, i.e. \(\Delta E_{a}\sim T_{c}\). Hence, the axion luminosity from the
Figure 1: Axion luminosity per unit logarithmic energy range \(dL_{a}^{*}/d\ln E_{a}\) from single main sequence stars of different masses \(M\), for axions with \(g_{a\gamma\gamma}=10^{-10}\,\)GeV\({}^{-1}\) and different masses \(m_{a}\). Shown in solid and dot-dashed lines are the \(dL_{a}^{*}/d\ln E_{a}\) from the Primakoff effect (Eq.(1)) and photon coalescence (Eq.(2)), respectively. The core temperatures of the assumed \(0.1M_{\odot}\), \(M_{\odot}\), \(10M_{\odot}\), and \(100M_{\odot}\) model stars are about \(0.8\,\)keV, \(1.6\,\)keV, \(3.0\,\)keV, and \(4.4\,\)keV, respectively.
Primakoff effect and photon coalescence scale as
\[L_{a}^{*}|_{\rm Prim.} \propto g_{a\gamma\gamma}^{2}r_{T}(M)^{3}\kappa_{c}(M)^{2}T_{c}(M)^{2} \left[\max\left(3T_{c}(M),m_{a}\right)\right]^{3}e^{-m_{a}/T_{c}(M)}, \tag{6}\] \[L_{a}^{*}|_{\rm coal.} \propto g_{a\gamma\gamma}^{2}r_{T}(M)^{3}m_{a}^{5}T_{c}(M)^{2}e^{-m_{a} /T_{c}(M)}. \tag{7}\]
According to our MESA fits for the stellar mass range \(1-100M_{\odot}\), the core temperature \(T_{c}(M)\), core radius \(r_{T}(M)\), and inverse screening length \(\kappa_{c}(M)\) scale as \(T_{c}\propto M^{0.22}\), \(r_{T}\propto M^{0.61}\), and \(\kappa_{c}\propto M^{-0.76}\). These give
\[L_{a}^{*}|_{\rm Prim.} \propto\begin{cases}g_{a\gamma\gamma}^{2}M^{0.75+b(M)},&M\lesssim M _{b}\\ g_{a\gamma\gamma}^{2}M^{1.41},&M\gtrsim M_{b}\end{cases}, \tag{8}\] \[L_{a}^{*}|_{\rm coal.} \propto\begin{cases}g_{a\gamma\gamma}^{2}M^{2.27+b(M)},&M\lesssim M _{b}\\ g_{a\gamma\gamma}^{2}M^{2.27},&M\gtrsim M_{b}\end{cases}, \tag{9}\]
where
\[b(M)=\frac{1-(M/M_{\odot})^{-0.22}}{\ln(M/M_{\odot})}\frac{m_{a}}{1.83\,{\rm keV }},\qquad M_{b}=\left(\frac{m_{a}}{1.83\,{\rm keV}}\right)^{4.54}M_{\odot}. \tag{10}\]
The exponent \(b(M)\) captures the exponential suppression from the Boltzmann factor \(e^{-m_{a}/T_{c}(M)}\) which is important for \(M\lesssim M_{b}\) (for which \(m_{a}\gtrsim T_{c}(M)\)) and is a monotonically decreasing function of \(M\) which varies in the range \((0.22-0.14)\times m_{a}/1.83\,{\rm keV}\) as the stellar mass is varied from \(M_{\odot}\) to \(100M_{\odot}\).
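These scaling relations are easy to evaluate numerically; the snippet below encodes Eq. (10) and reproduces the quoted limits of \(b(M)\) at the two ends of the mass range (the \(M\to M_{\odot}\) limit is taken slightly off \(M=M_{\odot}\) to avoid the removable \(0/0\)):

```python
import numpy as np

def b_exponent(M, m_a_keV):
    """Boltzmann-suppression exponent b(M) of Eq. (10); M in units of
    the solar mass (valid for M > 1)."""
    return (1.0 - M**-0.22) / np.log(M) * m_a_keV / 1.83

def M_b(m_a_keV):
    """Stellar mass M_b of Eq. (10), in solar masses."""
    return (m_a_keV / 1.83)**4.54

print(b_exponent(1.0001, 1.83), b_exponent(100.0, 1.83))  # ~0.22, ~0.14
print(M_b(1.83))  # = 1 solar mass: suppression matters only below M_b
```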
### Stellar Axion Background
The aggregate of all the stars in the universe can source a cosmic population of axions which we refer to as the _Stellar Axion Background_ (StAB). Let us begin with a quick estimate for the largest possible energy density of the StAB. The limits on the axion-photon coupling \(g_{a\gamma\gamma}\lesssim 6.6\times 10^{-11}\,{\rm GeV}^{-1}\) for \(m_{a}\lesssim 10\,{\rm keV}\) from CAST and globular cluster observations allow a Sun-like (near solar mass, main sequence) star to emit axions with luminosity \(\lesssim 10^{-3}L_{\odot}\) [5]. This can be linked to the cosmic optical background (COB), which is dominantly sourced by Sun-like stars [90, 91, 92]. The observed energy density of the COB, \(\rho_{\rm COB}\sim 10^{-5}-10^{-4}\,{\rm meV}^{4}\) [93], sets a rough upper limit on the stellar axion background energy density
\[\rho_{\rm StAB}\lesssim 10^{-3}\rho_{\rm COB}\sim 10^{-8}-10^{-7}\,{\rm meV}^{4 }\,. \tag{11}\]
A similar estimate for the maximum \(\rho_{\rm StAB}\) can be obtained from the luminosity density of the universe, which has been measured to be \(2\times 10^{8}L_{\odot}/{\rm Mpc}^{3}\) around the optical band (\(E_{\gamma}\sim 3\,{\rm eV}\)) [94, 95, 96], implying that the cosmic density of Sun-like stars is \(\sim 10^{8}/{\rm Mpc}^{3}\). Combining this and that the axion luminosity of a Sun-like star is at most \(\sim 10^{-3}\) of its total luminosity we find
\[\rho_{\rm StAB}\lesssim\frac{10^{8}\ {\rm Sun-like\ stars}}{{\rm Mpc}^{3}} \times 10^{-3}L_{\odot}\times H_{0}^{-1}\sim 10^{-7}\ {\rm meV}^{4}. \tag{12}\]
If a substantial fraction of the StAB decays to photons, it can leave an imprint in the cosmic X-ray background (CXB) spectrum, which has been measured to have an energy density of \(\rho_{\rm CXB}\sim 10^{-8}\,{\rm meV}^{4}\) in the \(1-10\,{\rm keV}\) energy range. The above estimates suggest that there is
a potential for probing the axion parameter space below the globular cluster bound with the CXB spectrum, which will depend on not only the spectral shape of the StAB decay signal but also how well the CXB spectrum is measured and understood.
In the remainder of this subsection, we compute the spectra and evolution of the StAB more carefully. The resulting StAB and StAB-decay photon spectral energy density at the present epoch (\(z=0\)) and the redshift evolution of their total energy densities are shown in Figs. 2 and 3, for different axion masses. We consider only axion emissions from main sequence stars, neglect the metallicity- and time-dependence of the axion production rate from a single star, and neglect the backreaction of axion emission on stellar evolution. While there are many sources of uncertainties associated with the properties and distribution of stars, we find that the dominant axion sourcing is due to stars similar to the Sun (those with \(M\sim M_{\odot}\)), which are the best understood type of stars, and occurs at redshifts \(z\lesssim 2\), where the star formation rate is well established.
We now proceed with the full calculation of the StAB spectrum and evolution. The StAB _comoving_ energy spectrum is given by
\[\frac{d\rho_{a}}{d\ln E_{a}}\left(E_{a},z\right)=E_{a}^{2}\frac{dn_{a}}{dE_{a} }\left(E_{a},z\right), \tag{13}\]
with the comoving spectral axion density \(dn_{a}/dE_{a}\) evolving as
\[-\frac{d^{2}n_{a}}{dE_{a}dz}(E_{a},z)=\frac{1}{H(z)(1+z)}\left[\frac{d\dot{n}_{a}^{\rm all\ \star}}{dE_{a}}(E_{a},z)-\Gamma_{a\rightarrow\gamma\gamma}(E_{a})\frac{dn_{a}}{dE_{a}}(E_{a},z)\right], \tag{14}\]
Figure 2: The spectra of energy density per unit logarithmic energy interval \(d\rho/d\ln E\) of _StAB (dashed)_ and _StAB-decay photons (solid)_ at the current epoch (\(z=0\)) for \(g_{a\gamma\gamma}=10^{-10}\,{\rm GeV}^{-1}\) and different axion masses \(m_{a}\). As \(m_{a}\) is increased from a small value, the StAB spectrum becomes progressively suppressed due to the shorter axion decay lifetime, the kinematic energy cut (\(E>m_{a}\)), and Boltzmann suppression. The photon spectrum increases at first due to the increased fraction of decayed axions, but decreases at higher \(m_{a}\) once Boltzmann suppression of the axion production kicks in for most of the stars. The widths of the StAB and StAB-decay photon spectra are determined by many factors, with the general trend being that they are narrower for heavier axions due to the more severe kinematic cuts of the axion production and the smaller number of massive stars with sufficiently high core temperatures to efficiently source axions.
where \(H(z)\) is the Hubble rate at redshift \(z\) and \(\Gamma_{a\rightarrow\gamma\gamma}(E_{a})\) is the time-dilated decay rate of an axion of energy \(E_{a}\) in the cosmic frame (given explicitly in Eq.(3.1)). The axion number spectrum per comoving volume observed at redshift \(z\) is found by solving the above differential equation. We express this solution as an integral over the previous epochs
\[\frac{dn_{a}}{dE_{a}}(E_{a},z)=\int_{z}^{\infty}\frac{dz^{\prime}}{H(z^{\prime})(1+z^{\prime})}\frac{dE_{a}^{\prime}}{dE_{a}}\frac{d\dot{n}_{a}^{\rm all\;\star}}{dE_{a}^{\prime}}(E_{a}^{\prime},z^{\prime})e^{-\int_{z}^{z^{\prime}}\frac{dz^{\prime\prime}}{H(z^{\prime\prime})(1+z^{\prime\prime})}\Gamma_{a\rightarrow\gamma\gamma}(E_{a}^{\prime\prime})}, \tag{15}\]
where the exponential factor is the fraction of axions that do _not_ decay to photons between redshift \(z\) and \(z^{\prime}\) (\(z^{\prime}>z\)), \(E_{a}^{\prime}\) is the axion energy when it is emitted at redshift \(z^{\prime}\),
\[E_{a}^{\prime}=\sqrt{m_{a}^{2}+\left(\frac{1+z^{\prime}}{1+z}\right)^{2}(E_{a }^{2}-m_{a}^{2})}, \tag{16}\]
\(E_{a}^{\prime\prime}\) is defined similarly but with \(z^{\prime}\to z^{\prime\prime}\), and \(d\dot{n}_{a}^{\rm all\;\star}/dE_{a}^{\prime}(E_{a}^{\prime},z^{\prime})\) is the total axion production rate per unit emission energy \(E_{a}^{\prime}\) per unit comoving volume produced by all stars present at redshift \(z^{\prime}\), which is given by
\[\frac{d\dot{n}_{a}^{\rm all\;\star}}{dE_{a}}(E_{a},z) =\int_{M_{\rm min}}^{M_{\rm max}}\frac{dM}{M_{\odot}}\,\phi(M)\int_{0}^{t(z)}dt^{\prime}\,\psi[z(t^{\prime})]\frac{d\dot{N}_{a}^{\star}}{dE_{a}}(E_{a},M,t(z)-t^{\prime})\] \[=\int_{M_{\rm min}}^{M_{\rm max}}\frac{dM}{M_{\odot}}\,\phi(M)\frac{d\dot{N}_{a}^{\star}}{dE_{a}}(E_{a},M)\int_{\max[0,t(z)-t_{\rm life}(M)]}^{t(z)}dt^{\prime}\,\psi[z(t^{\prime})], \tag{17}\]
where \(\phi(M)\) is the normalized stellar initial mass function for an assumed stellar mass range \([M_{\rm min},M_{\rm max}]\), \(\psi(z)\) is the comoving star formation rate density at a given redshift \(z\), \(t_{\rm life}(M)\) is the main-sequence lifetime of a star of a given mass \(M\),\({}^{9}\) and \(d\dot{N}_{a}^{\star}/dE_{a}\) is the axion production
Figure 3: The redshift evolution of the physical energy density of _StAB_ (_dashed_) and _StAB-decay photons (solid)_ relative to the critical density of the universe \(\rho_{\rm crit}=3M_{P}^{2}H(z)^{2}\) for \(g_{a\gamma\gamma}=10^{-10}{\rm GeV}^{-1}\) and different axion masses \(m_{a}\). For low axion masses, the StAB abundance accumulates over time while a small fraction of it gradually leaks into photons. For high axion masses, the sourced axions decay promptly into photons, and consequently the StAB energy density is set by the quasi-equilibrium between its sourcing and decay. The StAB energy density in this case is proportional to and reflects the redshift dependence of the comoving star formation rate \(\psi(z)\) (see Fig. 4), while the StAB-decay photons inherit essentially all the energy of the sourced axions and accumulate over time.
rate per unit emission energy \(E_{a}\) produced by a single star, given by Eq.(4). We have assumed in going to the second line that \(d\dot{N}_{a}^{\star}/dE_{a}\) is constant and has support only during the main-sequence phase of a star
\[\frac{d\dot{N}_{a}^{\star}}{dE_{a}}(E_{a},M,t-t^{\prime})=\frac{d\dot{N}_{a}^{ \star}}{dE_{a}}(E_{a},M)\Theta\left[t_{\rm life}(M)-\left(t-t^{\prime}\right) \right], \tag{18}\]
where \(t-t^{\prime}\) is the age of the star.
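For concreteness, the following is a minimal quadrature sketch of Eq.(15); `source_rate(E, z)` stands for the stellar emissivity of Eq.(17) and `gamma_lab(E)` for the time-dilated decay rate (Eq.(3.1) below), both of which must be supplied by the caller in units consistent with `hubble(z)` — all names here are illustrative, not from a released code:

```python
import numpy as np

def E_emit(E_a, m_a, z, zp):
    """Emission-epoch axion energy of Eq.(16)."""
    return np.sqrt(m_a**2 + ((1 + zp) / (1 + z))**2 * (E_a**2 - m_a**2))

def stab_spectrum(E_a, m_a, z, source_rate, gamma_lab, hubble, z_max=10.0, nz=4000):
    """dn_a/dE_a(E_a, z) of Eq.(15) by direct quadrature over the emission redshift."""
    zp = np.linspace(z, z_max, nz)
    dt_dz = 1.0 / (hubble(zp) * (1 + zp))            # |dt/dz'|
    Ep = E_emit(E_a, m_a, z, zp)
    dEp_dE = ((1 + zp) / (1 + z))**2 * E_a / Ep      # Jacobian dE_a'/dE_a
    # decay exponent: cumulative trapezoid of Gamma(E_a'') * dt/dz'' from z to z'
    g = gamma_lab(Ep) * dt_dz
    expo = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(zp))))
    integrand = dt_dz * dEp_dE * source_rate(Ep, zp) * np.exp(-expo)
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zp))
```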
The cosmic star formation rate density \(\psi(z)\) is the total mass of stars formed per unit time per unit comoving volume at redshift \(z\), typically written in units of \(M_{\odot}/\rm yr/Mpc^{3}\). We use the simple parameterization by Madau and Dickinson (2014) [90], as updated by Madau and Fragos (2017) [97],
\[\psi(z)=0.01\frac{(1+z)^{2.6}}{1+[(1+z)/3.2]^{6.2}}\ M_{\odot}\rm yr^{-1}Mpc^{ -3}. \tag{19}\]
Note that most of the stars are produced at \(z\approx 2\), where the star formation rate \(\psi(z)\) is peaked. The above redshift-dependence of the star formation rate must be combined with the expansion history of the universe. We assume the standard \(\Lambda\)CDM model cosmology with the Hubble rate \(H(z)\) evolving with redshift \(z\) as,
\[H(z)=H_{0}\sqrt{\Omega_{\rm m}(1+z)^{3}+\Omega_{\Lambda}}, \tag{20}\]
and take \(H_{0}=70\) km/s/Mpc, \(\Omega_{\rm m}=0.3\), and \(\Omega_{\Lambda}=0.7\).
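In code, the ingredients of Eqs.(19) and (20) are one-liners; a quick check confirms that the parameterization indeed peaks near the cosmic noon (a sketch, with the parameter values quoted above):

```python
import numpy as np

H0 = 70.0                                   # km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7

def psi_sfr(z):
    """Comoving star formation rate density of Eq.(19), in M_sun/yr/Mpc^3."""
    return 0.01 * (1 + z)**2.6 / (1 + ((1 + z) / 3.2)**6.2)

def hubble(z):
    """Flat LCDM Hubble rate of Eq.(20), in km/s/Mpc."""
    return H0 * np.sqrt(OMEGA_M * (1 + z)**3 + OMEGA_L)

z = np.linspace(0.0, 8.0, 8001)
print(z[np.argmax(psi_sfr(z))])             # ~2.0, the cosmic noon
```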
The initial mass function \(\phi(M)\propto dN_{\star}/dM\) characterizes the relative abundances of stars of different masses \(M\) at birth. It is conventionally normalized such that,
\[\int_{M_{\rm min}}^{M_{\rm max}}dM\,M\phi(M)=M_{\odot}. \tag{21}\]
The commonly used Salpeter initial mass function \(\phi(M)\propto M^{-2.35}\) works well only for \(M\gtrsim 0.5M_{\odot}\). A more recent fit to various luminosity density data by Baldry and Glazebrook (2003) gives [98],
\[\phi=\phi_{0}\begin{cases}\left(\frac{M}{0.5M_{\odot}}\right)^{-1.5},&M\leq 0.5M_{\odot}\\ \left(\frac{M}{0.5M_{\odot}}\right)^{-2.2},&M\geq 0.5M_{\odot}\end{cases}, \tag{22}\]
where the prefactor \(\phi_{0}\) is determined by the above normalization condition. The minimum and maximum stellar masses that we consider are \(M_{\rm min}=0.1M_{\odot}\) and \(M_{\rm max}=100M_{\odot}\). Stars with \(M<0.1M_{\odot}\) do not ignite hydrogen burning and so do not enter the main sequence phase, while stars with \(M>100M_{\odot}\) are extremely rare.
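The normalization of Eq.(21) fixes the prefactor \(\phi_{0}\) numerically; a minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

M_MIN, M_MAX, M_BRK = 0.1, 100.0, 0.5       # in M_sun

def phi_shape(M):
    """Broken power law of Eq.(22), unnormalized."""
    M = np.asarray(M, dtype=float)
    return np.where(M <= M_BRK, (M / M_BRK)**-1.5, (M / M_BRK)**-2.2)

# Eq.(21): the mass-weighted integral of phi must equal one solar mass.
norm, _ = quad(lambda M: M * float(phi_shape(M)), M_MIN, M_MAX, points=[M_BRK])
phi0 = 1.0 / norm

def phi(M):
    """Normalized initial mass function phi(M)."""
    return phi0 * phi_shape(M)
```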
Setting aside axion decay, the total energy density of axions sourced by all the stars in the universe can be estimated as
\[\rho_{a}^{\rm all\;*}\sim\psi_{\rm peak}H_{0}^{-1}\int d\ln M\frac{M}{M_{\odot}}\phi(M)\times\min\left[t_{\rm life}(M),H_{0}^{-1}\right]\times L_{a}^{*}(M), \tag{23}\]
where the \(M\)-dependence of the combination \(\phi\times\min\left[t_{\rm life},H_{0}^{-1}\right]\) roughly approximates that of the _present day_ stellar mass function and \(\psi_{\rm peak}\approx\psi(z=2)\) is the peak star formation
rate, which occurs at the cosmic noon (\(z=2\)). For stars with \(M_{\odot}\lesssim M\lesssim 10M_{\odot}\), we have \(\phi\propto M^{-2.2}\) and \(t_{\rm life}\propto M^{-2.79}\) (and \(t_{\rm life}\lesssim H_{0}^{-1}\)), which result in stellar mass scalings that strongly inflate the importance of Sun-like stars (\(M\sim M_{\odot}\))
\[\left.\frac{d\rho_{a}^{\rm all\;*}}{d\ln M}\right|_{\rm Prim.} \propto M\phi(M)t_{\rm life}(M)\left.L_{a}^{*}\right|_{\rm Prim.}(M) \propto\begin{cases}M^{-3.24+b(M)},&M\lesssim M_{b}\\ M^{-2.58},&M\gtrsim M_{b}\end{cases}, \tag{24}\] \[\left.\frac{d\rho_{a}^{\rm all\;*}}{d\ln M}\right|_{\rm coal.} \propto M\phi(M)t_{\rm life}(M)\left.L_{a}^{*}\right|_{\rm coal.}(M) \propto\begin{cases}M^{-1.72+b(M)},&M\lesssim M_{b}\\ M^{-1.72},&M\gtrsim M_{b}\end{cases}. \tag{25}\]
At larger stellar masses \(10M_{\odot}\lesssim M\lesssim 100M_{\odot}\) the main-sequence lifetime behaves differently, \(t_{\rm life}\propto M^{-0.63}\), and consequently the \(M\)-scalings are essentially flat in this mass range. To sum up, the extremely steep present-day stellar mass function \(\propto\phi\times\min\left[t_{\rm life},H_{0}^{-1}\right]\) overcomes the relatively slow increase of the axion luminosity with stellar mass \(L_{a}(M)\), resulting in Sun-like stars being the primary source of StAB.\({}^{10}\) Therefore, our results are largely independent of the detailed properties and distribution of stars with \(M\gg M_{\odot}\).
Footnote 10: For large enough axion mass \(m_{a}\) that yields \(b(M)\gtrsim 1.72\), however, the integral for photon coalescence can be dominated by the upper bound of the integration, namely the most massive star \(M_{\rm max}\), nevertheless in that regime the overall axion emission is already strongly Boltzmann suppressed.
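The dominance of Sun-like stars can also be seen numerically by evaluating the (unnormalized) Primakoff integrand of Eq.(23) with the fits above; a rough sketch:

```python
import numpy as np

M = np.geomspace(1.0, 100.0, 500)                       # in M_sun
m_a = 1.0                                               # keV, illustrative
Tc = 1.83 * M**0.22
L_prim = M**(3*0.61 - 2*0.76) * Tc**2 * np.maximum(3*Tc, m_a)**3 * np.exp(-m_a/Tc)
t_life = np.where(M <= 10.0, 6.8e9 * M**-2.79, 5e7 * M**-0.63)   # yr
H0_inv = 1.4e10                                         # ~Hubble time in yr
weight = M * (M / 0.5)**-2.2 * np.minimum(t_life, H0_inv) * L_prim
print(M[np.argmax(weight)])                             # -> 1.0: Sun-like stars dominate
```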
## 3 X-ray from stellar axion decay
### Limits from the extragalactic X-ray background
Axions can spontaneously decay to two photons over the age of the universe with significant probabilities if their mass is relatively heavy, \(m_{a}\sim\) keV.\({}^{11}\) The time-dilated axion decay lifetime in the cosmic frame is given by
Footnote 11: For much lighter axions (lighter than or comparable to the plasma frequencies of the relevant media) efficient axion-photon conversions can also occur with the help of cosmic [99, 100], cluster [67, 68, 101], galactic [102, 103], or stellar [104, 105, 106, 107, 108, 9, 108] magnetic fields. Given the considerable uncertainties in our knowledge of these magnetic fields we choose to focus on the spontaneous decay signals of the StAB.
\[\Gamma_{a\gamma\gamma}^{-1}=\frac{64\pi E_{a}}{g_{a\gamma\gamma}^{2}m_{a}^{4}}=0.14t_{\rm U}\left(\frac{g_{a\gamma\gamma}}{10^{-10}\,{\rm GeV}^{-1}}\right)^{-2}\left(\frac{m_{a}}{1\,{\rm keV}}\right)^{-4}\left(\frac{E_{a}}{4.5\,{\rm keV}}\right), \tag{3.1}\]
where \(t_{U}=13.7\) Gyr is the age of the universe. All the StAB-decay photons share the same energy of \(\tilde{E}_{\gamma}=m_{a}/2\) in the axion rest frame but are Lorentz boosted in the cosmic frame
Figure 4: _Left:_ comoving star formation rate density as a function of redshift \(\psi(z)\) (Eq.(19)) from [97]. _Right:_ stellar initial mass function \(\phi(M)\) (Eq.(22)) from [98].
by \(v_{a}=\sqrt{1-(m_{a}/E_{a})^{2}}\) (corresponding to a Lorentz factor \(\gamma_{a}=E_{a}/m_{a}\))
\[E_{\gamma}=\gamma_{a}\left(\tilde{E}_{\gamma}+v_{a}\tilde{E}_{\gamma}\cos\tilde{\theta}_{\gamma}\right)=\frac{E_{a}}{2}\left(1+v_{a}\cos\tilde{\theta}_{\gamma}\right). \tag{3.2}\]
Thus, \(E_{\gamma}\) ranges from \(E_{\gamma,\min}=E_{a}(1-v_{a})/2\) to \(E_{\gamma,\max}=E_{a}(1+v_{a})/2\), corresponding to the axion-frame photon emission angles \(\tilde{\theta}_{\gamma}=\pi\) (backward emission) and \(\tilde{\theta}_{\gamma}=0\) (forward emission), respectively. The decay photons being isotropic in the axion frame (i.e., having a flat distribution over solid angle) and \(dE_{\gamma}\propto d\cos\tilde{\theta}_{\gamma}\propto d\tilde{\Omega}_{\gamma}\) imply that the cosmic-frame photon energy distribution from a single axion is flat in the range \([E_{\gamma,\min},E_{\gamma,\max}]\). The energy spectrum of the StAB-decay photons at energy \(E_{\gamma}\) is therefore related to that of the parent axions as\({}^{12}\)
Footnote 12: We do not include in our calculation X-ray absorption effects of the StAB decay signal in the intergalactic medium and the Milky Way. This results in only \(\lesssim 10\%\) attenuation of the signal in the \(1-10\,\)keV energy range of interest and hence is negligible at the level of precision we are aiming for [109, 110, 111].
\[\frac{d\dot{n}_{\gamma}}{dE_{\gamma}}(E_{\gamma},z) =\int_{0}^{\infty}dE_{a}\frac{dn_{a}}{dE_{a}}(E_{a},z)\Gamma_{a\gamma\gamma}(E_{a})\frac{2\Theta(E_{\gamma,\max}-E_{\gamma})\Theta\left(E_{\gamma}-E_{\gamma,\min}\right)}{E_{\gamma,\max}-E_{\gamma,\min}}\] \[=\int_{E_{\gamma}+\frac{m_{a}^{2}}{4E_{\gamma}}}^{\infty}dE_{a}\frac{dn_{a}}{dE_{a}}(E_{a},z)\frac{2\Gamma_{a\gamma\gamma}(E_{a})}{\sqrt{E_{a}^{2}-m_{a}^{2}}}. \tag{3.3}\]
The photon yield at the present epoch is then given by
\[\frac{d\rho_{\gamma}}{d\ln E_{\gamma}}(E_{\gamma},z=0)=E_{\gamma}^{2}\int_{0}^{\infty}\frac{dz^{\prime}}{H(z^{\prime})}\frac{d\dot{n}_{\gamma}}{dE_{\gamma}^{\prime}}(E_{\gamma}^{\prime},z^{\prime}). \tag{3.4}\]
The result is displayed in Figs. 2, 3, and 5, which show that the strongest StAB-decay signals occur in the parameter space where virtually all the star-produced axions decay to photons before the present epoch, in which case the StAB-decay photon energy density is completely determined by the total amount of axion energy sourced. The present day photon energy spectrum from StAB decay lies mainly in the X-ray regime and is most detectable in the \(\sim 1-10\,\)keV energy range, corresponding to axions with masses \(m_{a}\approx 0.5-30\,\)keV. For the maximum axion-photon coupling compatible with the CAST and globular cluster bounds, \(g_{a\gamma\gamma}=6.6\times 10^{-11}\,\)GeV\({}^{-1}\), the total StAB-decay photon energy density can be as high as \(\rho_{\gamma}\sim 10^{-8}\,\)meV\({}^{4}\), which amounts to X-ray fluxes per unit solid angle of \(\sim 10^{-8}\,\)erg s\({}^{-1}\)cm\({}^{-2}\)sr\({}^{-1}\), i.e. comparable to that of the observed CXB in the same energy range. Hence the StAB X-ray signal, if present, can be seen as a bump in the low-energy tail of the CXB spectrum, which is known to peak at around 30 keV energy. As we will discuss in the next subsection, the known shape of the axion decay signal enables us to disentangle it from adequately-modeled backgrounds and thereby probe the existence of the axion.
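A short sketch of Eqs.(3.1) and (3.3) may be useful here; `dn_dEa` stands for the StAB spectrum \(dn_{a}/dE_{a}\) at the redshift of interest and must be supplied by the caller, and the lifetime check reproduces the \(0.14\,t_{\rm U}\) benchmark of Eq.(3.1):

```python
import numpy as np

HBAR_KEV_S = 6.582e-19                       # hbar in keV s

def gamma_lab(E_a, m_a, g):
    """Time-dilated decay rate of Eq.(3.1), g^2 m_a^4 / (64 pi E_a), in 1/s.
    g in GeV^-1, energies in keV; 1e-12 converts keV^3/GeV^2 to keV."""
    return g**2 * m_a**4 / (64 * np.pi * E_a) * 1e-12 / HBAR_KEV_S

# lifetime check: ~0.14 t_U for g = 1e-10 GeV^-1, m_a = 1 keV, E_a = 4.5 keV
print(1.0 / gamma_lab(4.5, 1.0, 1e-10) / (13.7e9 * 3.156e7))   # ~0.14

def photon_source(E_g, m_a, dn_dEa, rate, n=4000, fac=100.0):
    """Photon emissivity of Eq.(3.3): flat box spectra integrated over E_a."""
    E_lo = E_g + m_a**2 / (4.0 * E_g)        # kinematic lower limit
    Ea = np.geomspace(E_lo * (1 + 1e-9), E_lo * fac, n)
    f = dn_dEa(Ea) * 2.0 * rate(Ea) / np.sqrt(Ea**2 - m_a**2)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(Ea))
```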
Several generations of X-ray instruments such as Chandra, HEAO, NuSTAR, Swift-XRT, and XMM-Newton have measured the CXB in the \(\sim 1-10\,\)keV energy range where the StAB decay signal is most likely to be found [16, 93, 112, 113, 114, 115, 116]. We adopt for our analysis the CXB data from Chandra [117] and NuSTAR [118], which cover the low- and high-energy parts of this rough energy range. The observed CXB spectra should be interpreted as the sum of the axion decay signal and the astrophysical background, which is known to be primarily due to active galactic nuclei. Chandra has resolved around 90% of the CXB into point-like sources with fluxes \(\gtrsim 10^{-13}\,\)erg s\({}^{-1}\)cm\({}^{-2}\) in the 1-7 keV energy range. The remaining unresolved part of the CXB in Chandra's energy range is thus only \(\sim 10\%\) of the total
Figure 5: The StAB-decay photon spectral energy flux \(\Phi\) (the local photon flux \(F\) per unit logarithmic energy \(\ln E\) interval per unit solid angle \(\Omega\), \(\Phi=d^{2}F/d\ln Ed\Omega\)) for \(g_{a\gamma\gamma}=10^{-10}\,\mathrm{GeV}^{-1}\) and different axion masses, \(m_{a}=12.37\), \(19.78\), \(24.36\), and \(30.00\) keV (one per panel). Also shown are the CXB spectrum data points from Chandra and NuSTAR. Chandra has resolved most of the CXB in the energy range it covers into point-like sources. The Chandra data for the unresolved (point-source subtracted) component of the CXB are also shown.
Figure 6: Exclusion limits on the axion parameter space at 95% confidence level derived from the chi-squared goodness of fit test against the CXB data from Chandra and NuSTAR. The axion decay lifetime \(\tau_{a}\) is calculated at the peak energy of the StAB energy spectrum which is expected to be \(\sim 4.5\) keV for \(m_{a}\leq 4.5\) keV and at \(\sim m_{a}\) for \(m_{a}>4.5\) keV. The light gray region is the constraint derived from the decay of relic axions produced via freeze-in from [64].
CXB. In most of the axion parameter space of interest, the axion decay signal is sufficiently smeared out compared to the typical size of a cluster (1-10 Mpc) that essentially all of it would contribute to the \(\sim 10\%\) unresolved component of the CXB [119; 120; 121]. Hence, we adopt for our analysis the unresolved (point-source subtracted) CXB data from Chandra as well.
We can probe a given point in the axion parameter space based on how the inclusion of the StAB X-ray signal predicted at that point affects the quality of fit to the CXB data. We define the CXB spectral energy flux \(\Phi_{\rm CXB}(E)=d^{2}F_{\rm CXB}/d\ln Ed\Omega\) (with the units of erg s\({}^{-1}\)cm\({}^{-2}\)sr\({}^{-1}\)) as the CXB photon energy flux \(F_{\rm CXB}\) per unit logarithmic energy interval per unit solid angle at energy \(E\), and quantify the goodness of fit to the CXB data \(\Phi_{\rm CXB,i}\) with the following chi-squared function
\[\chi^{2}=\sum_{i}\frac{1}{\sigma_{\Phi_{\rm CXB,i}}^{2}}\left(\Phi_{\rm CXB,i}-\left.\Phi_{\rm CXB}^{\rm model}(g_{a\gamma\gamma},m_{a},\mathbf{\theta}_{\rm bg})\right|_{E_{i}}\right)^{2}, \tag{3.5}\]
where the sum runs over the energy bins \(E_{i}\) of the X-ray telescope data; \(\Phi_{\rm CXB,i}\) and \(\sigma_{\Phi_{\rm CXB,i}}\) are respectively the CXB spectral energy flux and its associated error at energy \(E_{i}\). We model the spectral energy flux \(\Phi(E)\) of the CXB as the sum of the expected signal from StAB decay and an attenuated power law model for the CXB background [122; 123]
\[\Phi_{\rm CXB}^{\rm model}(g_{a\gamma\gamma},m_{a},\mathbf{\theta}_{\rm bg})=\frac{1}{4\pi}\left(\frac{d\rho_{\gamma}}{d\ln E}\right)_{\rm StAB}+A\left(\frac{E}{\rm keV}\right)^{1-\Gamma}e^{-E/E_{0}}, \tag{3.6}\]
where \(\mathbf{\theta}_{\rm bg}=\{A,\Gamma,E_{0}\}\). For each axion mass \(m_{a}\), we first minimize the \(\chi^{2}\) over all the parameters other than \(m_{a}\) (i.e., over both \(g_{a\gamma\gamma}\) and \(\mathbf{\theta}_{\rm bg}\)) to obtain the global best-fit chi-squared \(\left[\chi^{2}(m_{a})\right]_{\rm best}\). Then we calculate the \(\chi^{2}\) again, now minimizing over only the background parameters \(\mathbf{\theta}_{\rm bg}\) at fixed \(g_{a\gamma\gamma}\), giving \(\left[\chi^{2}(g_{a\gamma\gamma},m_{a})\right]_{\rm best}\). By Wilks' theorem [124], the difference of these two chi-squared values follows a chi-squared distribution with one degree of freedom. This allows us to infer the likelihood of a given value of \(g_{a\gamma\gamma}\) and place 95% confidence-level exclusion limits on the axion parameter space based on the following criterion
\[\left[\chi^{2}(g_{a\gamma\gamma},m_{a})\right]_{\rm best}-\left[\chi^{2}(m_{a})\right]_{\rm best}>\chi_{95\%}^{2}, \tag{3.7}\]
where \(\chi_{95\%}^{2}=2.71\).
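Schematically, this procedure can be implemented in a few lines; here `signal(E, g, m_a)` is the predicted StAB-decay flux and `(E, flux, err)` are the CXB data arrays — all user-supplied, with the global minimum over \(g_{a\gamma\gamma}\) approximated by a grid scan (a sketch, not the exact pipeline used for Fig. 6):

```python
import numpy as np
from scipy.optimize import minimize

def chi2(theta_bg, g, m_a, E, flux, err, signal):
    """Eq.(3.5) with the attenuated power-law background of Eq.(3.6); E in keV."""
    A, Gam, E0 = theta_bg
    model = signal(E, g, m_a) + A * E**(1 - Gam) * np.exp(-E / E0)
    return np.sum(((flux - model) / err)**2)

def g_limit_95(m_a, E, flux, err, signal, g_grid, bg0=(10.0, 1.4, 40.0)):
    """Profile the chi^2 over the background at each g, then apply Eq.(3.7)."""
    prof = np.array([minimize(chi2, bg0, args=(g, m_a, E, flux, err, signal),
                              method="Nelder-Mead").fun for g in g_grid])
    excluded = g_grid[prof - prof.min() > 2.71]      # Delta chi^2 > chi^2_95%
    return excluded.min() if excluded.size else None
```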
In the absence of the axion (\(g_{a\gamma\gamma}=0\)), all the data are fit reasonably well with the attenuated power law model, with a best fit chi-squared per degree of freedom \(\chi^{2}/{\rm dof}\) of 1.88 (Chandra, full CXB), 1.55 (Chandra, unresolved CXB), and 0.46 (NuSTAR, full CXB). When the axion signal is included, the data do not display significant preference toward an axion of any \(m_{a}\) and \(g_{a\gamma\gamma}\). As shown in Fig. 6, our analysis rules out at 95% confidence level a swath of axion parameter space (\(m_{a},g_{a\gamma\gamma}\)) slightly below the cooling bound. The limits that we found lie in a parameter space that is already ruled out, mainly by limits based on relic axions from the early universe [64; 66] and partially by limits from gravitationally bound axions around the Sun [125; 126]. Nevertheless, our limits are based on different assumptions from those of these earlier works. The considerations presented here can in principle provide independent and complementary tests in less minimal extended-sector models [127; 128; 129; 130; 131; 132; 133] with possibly non-standard cosmological scenarios [134], e.g. where the effective field theory parameters are time varying [135; 136; 137].
### X-rays from gravitationally bound objects
While the integrated photon signal from StAB decay over the cosmic history is approximately isotropic, the decay signal from newly produced axions traces to some degree the spatial distribution of stars at the present epoch. We expect such decay signals from smaller redshifts to be enhanced in the directions of high density of stars such as groups and clusters of galaxies. The X-ray backgrounds in those directions are typically also stronger, which means each source needs to be studied on a case-by-case basis. Given the diversity of astrophysical objects in the universe, one might be able to find objects for which there is a relative enhancement of the stellar axion decay signal over the background. We assess the prospect for setting more stringent limits on the axion parameter space below \(g_{a\gamma\gamma}\approx 10^{-11}\,\mathrm{GeV}^{-1}\) with the X-ray observations of gravitationally bound objects. Our aim here is simply to identify potential directions for future in-depth studies.
Since most of the optical photons and the StAB share the same source, namely Sun-like stars, the StAB X-ray sky should be highly correlated with the visible sky if the axions decay immediately outside the stars that source them. The finite decay lifetime of these axions, however, allows them to traverse a typical distance of \(\ell_{a}\sim v_{a}\gamma_{a}/\Gamma_{a\gamma\gamma}\) before decaying into a pair of photons, leading to a spatial smearing of the X-ray from StAB at scales smaller than \(\ell_{a}\). In the parameter space slightly below the cooling bound with \(g_{a\gamma\gamma}\approx 10^{-11}\,\mathrm{GeV}^{-1}\) and \(m_{a}\lesssim 10\,\mathrm{keV}\), we have \(\ell_{a}\gtrsim 1\) Mpc, which is always much longer than the size of a galaxy (\(\sim 1-100\) kpc) and can be comparable to the size of a galaxy cluster (\(\sim 1-10\) Mpc). A useful picture to have before we proceed is that the decay of the axions sourced by a star would take place dominantly in a spherical shell of radius \(\sim\ell_{a}\) and thickness \(\sim\ell_{a}\) around the star. For the smallest possible \(\ell_{a}\), these decay shells can be contained in a cluster, and in that case the decay photons would appear to originate from the cluster. The axion decay signals become increasingly smeared out as \(\ell_{a}\) is increased, and eventually their sum becomes almost indistinguishable from complete homogeneity and isotropy.
Depending on the axion mass, star-emitted axions can behave like warm dark matter or dark radiation. We find that in all cases that yield substantial axion decay flux, the typical axion Lorentz factor is \(\gamma_{a}\sim 1\). Hence, for simplicity, we will assume in what follows that the axion decays are isotropic. Strong relativistic beaming (\(\gamma_{a}\gg 1\)) of the decay photons may occur for axions that are orders of magnitude lighter than the typical temperature of stellar cores \(T_{c}\sim 1.5\,\mathrm{keV}\); however, such scenarios are of less interest in terms of their X-ray signals because the axions would have lifetimes much longer than the age of the universe. Slower axions with non-relativistic velocities \(v_{a}\lesssim 10^{-3}\) are produced in stars at phase-space-suppressed rates but may accumulate in gravitationally bound objects over long timescales as in [10, 125, 126, 138]. Such gravitationally-trapped axions would produce line-like decay photon signals, and potentially lead to stronger limits on the axion parameter space depending on how many axions can be trapped at a given time. The latter will depend on the stability timescale of axion orbits in a many-body gravitational potential, which is nontrivial and requires a dedicated study.
The present-day stellar mass function is strongly dominated by Sun-like stars whose axion luminosity is \(L_{a}\sim 10^{-3}\left(g_{a\gamma\gamma}/6\times 10^{-11}\,\mathrm{GeV}^{-1}\right)^{2}L_{\odot}\). Axions are produced at roughly this luminosity as long as they are sufficiently light to avoid Boltzmann suppression, i.e. \(m_{a}\lesssim 10\,\mathrm{keV}\). We would like to estimate the X-ray flux from the decaying axion cloud around an object of stellar concentration, which could be a galaxy, a galaxy group, or a cluster. Assuming the optical luminosity \(L_{\mathrm{O}}\) from that object is dominated by Sun-like stars, the axion energy density \(\rho_{a}\) at a radial position \(r\) away from the center of such an
object can be estimated as
\[\rho_{a}\sim 10^{-3}\left(\frac{g_{a\gamma\gamma}}{6\times 10^{-11}\,\text{GeV}^{ -1}}\right)^{2}L_{\text{O}}\frac{e^{-\ell(r)/\ell_{a}}}{\ell(r)^{2}}, \tag{3.8}\]
where \(\ell(r)\sim\max{(r,R)}\) is the typical distance from the point of interest to an arbitrary point in the object of size \(R\), and we have assumed \(\ell_{a}\gtrsim 1\text{ Mpc}\gtrsim R\). The flux per unit solid angle from axion decay in that object is then given by the integral of \(\rho_{a}\) along the line of sight distance \(s\), weighted by the axion decay probability per unit length, \(F_{a\rightarrow\gamma\gamma}\sim(1/4\pi)\int ds\rho_{a}(1-e^{-\ell/\ell_{a}})/\ell_{a}\), yielding
\[F_{a\rightarrow\gamma\gamma}\sim 10^{-3}\left(\frac{g_{a\gamma\gamma}}{6\times 10^ {-11}\,\text{GeV}^{-1}}\right)^{2}L_{\text{O}}\int\frac{ds}{\ell_{a}}\frac{e^{- \ell/\ell_{a}}\left(1-e^{-\ell/\ell_{a}}\right)}{4\pi\ell^{2}}. \tag{3.9}\]
For objects in which we reside, the relevant \(r\) is whichever radius dominates the X-ray flux. For distant objects, the relevant \(r\) will be set by the direction and the FoV of the X-ray telescope we are using. Below we provide crude estimates for the maximum axion-induced X-ray flux per unit solid angle from various types of astrophysical objects:
* _The Sun_ \[F_{a\rightarrow\gamma\gamma}^{\text{Sun}}\sim\left(\frac{g_{a\gamma\gamma}}{6 \times 10^{-11}\,\text{GeV}^{-1}}\right)^{2}\frac{10^{-3}L_{\odot}}{4\pi \text{AU}^{2}}\frac{\text{AU}}{\ell_{a}}\lesssim 3\times 10^{-9}\text{ erg s}^{-1}\text{cm}^{-2}\text{sr}^{-1}.\] (3.10) Here, the flux is the strongest when the distance to the sun \(r\) is minimized, i.e. at \(r\sim\text{AU}\), because the \(1/r^{2}\) decrease in the axion density is stronger than the linear increase \(\propto r\) in the decay probability.
* _The Milky Way galaxy_ \[F_{a\rightarrow\gamma\gamma}^{\text{MW}} \sim 10^{10}\text{ Sun-like stars}\times\left(\frac{g_{a\gamma\gamma}}{6\times 10^{-11} \,\text{GeV}^{-1}}\right)^{2}\frac{10^{-3}L_{\odot}}{4\pi(10\text{ kpc})^{2}}\frac{10\text{ kpc}}{\ell_{a}}\] \[\lesssim 1\times 10^{-8}\text{ erg s}^{-1}\text{cm}^{-2}\text{sr}^{ -1}.\] (3.11)
* _Distant clusters_ \[F_{a\rightarrow\gamma\gamma}^{\text{cluster}} \sim 10^{3}\text{ galaxies}\times 10^{10}\text{ Sun-like stars}\times\left(\frac{g_{a\gamma\gamma}}{6\times 10^{-11}\,\text{GeV}^{-1}}\right)^{2}\frac{10^{-3}L_{\odot}}{4\pi(1\text{ Mpc})^{2}}\frac{1\text{ Mpc}}{\ell_{a}}\] \[\lesssim 1\times 10^{-7}\text{ erg s}^{-1}\text{cm}^{-2}\text{sr}^{-1}.\] (3.12) We assume that the solid angle of the cluster \(\Omega_{\text{cluster}}\sim(\text{Mpc}/d)^{2}\) is greater than, and thus covers, the entire FoV of the instrument, meaning that the axion decay signal is not diluted. For Chandra and XMM-Newton, \(\Omega_{\text{FoV}}\sim 10^{-5}\text{ sr}\)[139, 120].
The above maximum fluxes were obtained by setting \(g_{a\gamma\gamma}=10^{-11}\,\text{GeV}^{-1}\) and the shortest axion decay length corresponding to the highest \(m_{a}\) without significant Boltzmann suppression, \(\ell_{a}\sim\text{Mpc}\). By comparison, the previously-obtained StAB flux for the \(g_{a\gamma\gamma}\) that saturates the globular cluster limit is at the level of \(\sim 10^{-8}\text{ erg s}^{-1}\text{cm}^{-2}\text{sr}^{-1}\) (comparable to the observed isotropic CXB). Hence, the X-ray signals from the directions of stellar concentration can be enhanced by not more than an order of magnitude relative to the StAB X-ray signal
in the same solid angle. This is essentially because the overall X-ray signals from these directions are not determined by the highly enhanced axion density in gravitationally bound objects. They are instead determined by the (more diluted) column densities of axions in these directions, i.e. the axion density integrated over the line of sight distance. Since the X-ray background relevant to these regions is also enhanced (or at best comparable in the periphery of these objects [139; 140; 141]) relative to that of the isotropic CXB, we expect only marginal improvements on the axion limits from what we have found previously with the isotropic CXB.
## 4 Conclusion
We have computed the spectra of axions with coupling only to electromagnetism produced in the cores of main sequence stars with masses in the range \(0.1-100M_{\odot}\), using the stellar profiles obtained from the stellar evolution code MESA. We then use these axion spectra to estimate the abundance, spectrum, and time-evolution of the diffuse axion background sourced by all the stars in the universe across cosmic history. This axion background can subsequently decay into X-rays and contribute to the cosmic X-ray background. The decay-photon spectrum has a calculable characteristic spectral shape, with a peak expected at either half the average thermal energy, \(3T_{c}/2\sim 2\,\mathrm{keV}\), or half the axion mass, \(m_{a}/2\), corresponding to relativistic and non-relativistic decays, respectively.
We provide in Appendix A simple exponential fits of the temperature, inverse screening length, and plasma mass as a function of radius, which approximate well the core profiles of \(1-100M_{\odot}\) main sequence stars used in our analysis. These fits, in conjunction with Eqs.(4), (1), and (2), allow one to estimate the axion spectrum produced in the core of any main sequence star whose mass lies in the aforementioned range. Our ensemble of benchmark stars can be made more realistic by considering effects of time-evolution, varied chemical compositions, rotations, and magnetic fields. It would be interesting to include post-main-sequence stars, and perhaps also Population III stars, in the stellar ensemble, as the core temperatures in some of these non-main-sequence stars can be considerably higher than those of the main-sequence stars. These stars can dominate the production rate of heavy axions due to the relative lack of Boltzmann suppression. The formalism we use for calculating the properties of the StAB and its decay signal can serve as a template for estimating the stellar background of other light dark sector particles such as dark photons and millicharged particles.\({}^{13}\)
Footnote 13: For particles that are produced in stars dominantly near their surfaces rather than in their cores [142; 14], one would need to capture the near-surface properties of the stars more carefully.
###### Acknowledgements.
We thank Kevin Langhoff for collaboration in the early stages of the project and Gautham Adamane Pallathadka, Peter Graham, David E. Kaplan, Xuheng Luo, Nadav Outmezguine, Surjeet Rajendran for useful discussions at various stages of the project. This work was supported by NSF Grant No. 2112699 and the Simons Foundation.
## Appendix A Simple fits to stellar properties from MESA
We fit the radial profiles of the inverse screening length \(\kappa\), temperature \(T\), and plasma mass \(\omega_{\rm p}\) in the cores of our MESA-generated stars with the following exponential functions
\[T=T_{c}(M)e^{-\frac{r}{r_{T}(M)}}, \tag{A.1}\] \[\kappa=\kappa_{c}(M)e^{-\frac{r}{r_{\kappa}(M)}}, \tag{A.2}\] \[\omega_{\rm p}=\omega_{\rm p,c}(M)e^{-\frac{r}{r_{\omega_{\rm p}}(M)}}. \tag{A.3}\]
As displayed in Figs. 10, 11, and 12, these exponential fits closely track the core profiles of the stars (where virtually all the axions are produced), but they start to fail near the surface of the stars. The stellar mass \(M\) dependence of the parameters \(T_{c}\), \(r_{T}\), \(\kappa_{c}\), \(r_{\kappa}\), \(\omega_{\rm p,c}\), \(r_{\omega_{p}}\) is shown in Fig. 7. We further fit these parameters as power laws in \(M\)
\[T_{c}(M)=(1.83\pm 0.06)\left(\frac{M}{M_{\odot}}\right)^{0.22\pm 0.01}\ {\rm keV}, \tag{A.4}\] \[r_{T}(M)=(0.86\pm 0.03)\left(\frac{M}{M_{\odot}}\right)^{0.61\pm 0.01}R_{\odot}, \tag{A.5}\] \[\kappa_{c}(M)=(10.4\pm 0.1)\left(\frac{M}{M_{\odot}}\right)^{-0.76\pm 0.02}\ {\rm keV}, \tag{A.6}\] \[r_{\kappa}(M)=(1.06\pm 0.05)\left(\frac{M}{M_{\odot}}\right)^{0.56\pm 0.01}R_{\odot}, \tag{A.7}\] \[\omega_{\rm p,c}(M)=(0.350\pm 0.004)\left(\frac{M}{M_{\odot}}\right)^{-0.60\pm 0.01}\ {\rm keV}, \tag{A.8}\] \[r_{\omega_{\rm p}}(M)=(0.79\pm 0.04)\left(\frac{M}{M_{\odot}}\right)^{0.57\pm 0.01}R_{\odot}, \tag{A.9}\]
where \(M_{\odot}\) and \(R_{\odot}\) are the mass and radius of the Sun. We also fit the main sequence lifetimes of our MESA stars as follows
\[t_{\rm life}(M)=\begin{cases}(6.8\pm 0.1)\times 10^{9}\left(\frac{M}{M_{\odot}}\right)^{-2.79\pm 0.02}\ {\rm yr},&M\lesssim 10M_{\odot}\\ (5\pm 1)\times 10^{7}\left(\frac{M}{M_{\odot}}\right)^{-0.63\pm 0.08}\ {\rm yr},&M\gtrsim 10M_{\odot}\end{cases}. \tag{A.10}\]
All the above stellar mass scalings are accurate only for stars with masses \(1-100M_{\odot}\) (which dominate the axion sourcing). Lower mass stars behave differently for at least a couple of reasons. Their nuclear burning is dominated by the p-p chain reaction instead of the CNO cycle [143]. Moreover, they evolve very slowly and consequently fail to arrive at the intermediate main sequence age (the point where the hydrogen abundance is \(X=0.3\)) within the age of the universe. As mentioned in the main text, in such cases we extract the stellar profiles at half the age of the universe instead.
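For convenience, a small helper evaluating the central values of these fits may be useful (uncertainties dropped, units as above; the function names are ours):

```python
import numpy as np

def core_params(M):
    """Central values of the fits in Eqs.(A.4)-(A.9); M in M_sun.
    Returns (T_c [keV], r_T [R_sun], kappa_c [keV], r_kappa [R_sun],
    omega_pc [keV], r_omega_p [R_sun])."""
    return (1.83 * M**0.22, 0.86 * M**0.61,
            10.4 * M**-0.76, 1.06 * M**0.56,
            0.350 * M**-0.60, 0.79 * M**0.57)

def core_profiles(M, r):
    """Exponential core profiles of Eqs.(A.1)-(A.3); r in R_sun."""
    Tc, rT, kc, rk, wpc, rw = core_params(M)
    return Tc * np.exp(-r / rT), kc * np.exp(-r / rk), wpc * np.exp(-r / rw)

def t_life_yr(M):
    """Main-sequence lifetime fit of Eq.(A.10), central values, in years."""
    M = np.asarray(M, dtype=float)
    return np.where(M <= 10.0, 6.8e9 * M**-2.79, 5e7 * M**-0.63)
```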
To verify the accuracy of our MESA simulations, we extract the stellar radii \(R\) and effective (surface) temperatures \(T_{\rm eff}\) of our MESA stars and fit them as power laws
\[R(M)=(0.80\pm 0.08)\left(\frac{M}{M_{\odot}}\right)^{0.83\pm 0.03}R_{\odot}, \tag{A.11}\] \[T_{\rm eff}(M)=(6100\pm 300)\left(\frac{M}{M_{\odot}}\right)^{0.54\pm 0.02}\ {\rm K}. \tag{A.12}\]
We stress that these fits come purely from MESA simulations without any input from the observed luminosity - mass relation. We compare these observable quantities with the existing data from [144], including 509 main sequence stars selected from the "Catalog of Stellar Parameters from the Detached Double-Lined Eclipsing Binaries in the Milky Way" by [145], and find, as shown in Fig. 9, that they agree well.
|
2304.07124 | A Dynamic Heterogeneous Team-based Non-iterative Approach for Online
Pick-up and Just-In-Time Delivery Problems | This paper presents a non-iterative approach for finding the assignment of
heterogeneous robots to efficiently execute online Pickup and Just-In-Time
Delivery (PJITD) tasks with optimal resource utilization. The PJITD assignments
problem is formulated as a spatio-temporal multi-task assignment (STMTA)
problem. The physical constraints on the map and vehicle dynamics are
incorporated in the cost formulation. The linear sum assignment problem is
formulated for the heterogeneous STMTA problem. The recently proposed Dynamic
Resource Allocation with Multi-task assignments (DREAM) approach has been
modified to solve the heterogeneous PJITD problem. At the start, it computes
the minimum number of robots required (with their types) to execute given
heterogeneous PJITD tasks. These required robots are added to the team to
guarantee the feasibility of all PJITD tasks. Then robots in an updated team
are assigned to execute the PJITD tasks while minimizing the total cost for the
team to execute all PJITD tasks. The performance of the proposed non-iterative
approach has been validated using high-fidelity software-in-loop simulations
and hardware experiments. The simulations and experimental results clearly
indicate that the proposed approach is scalable and provides optimal resource
utilization. | Shridhar Velhal, Srikrishna B R, Mukunda Bharatheesha, Suresh Sundaram | 2023-04-14T13:40:57Z | http://arxiv.org/abs/2304.07124v1 | A Dynamic Heterogeneous Team-based Non-iterative Approach for Online Pick-up and Just-In-Time Delivery Problems
###### Abstract
This paper presents a non-iterative approach for finding the assignment of heterogeneous robots to efficiently execute online Pickup and Just-In-Time Delivery (PJITD) tasks with optimal resource utilization. The PJITD assignments problem is formulated as a spatio-temporal multi-task assignment (STMTA) problem. The physical constraints on the map and vehicle dynamics are incorporated in the cost formulation. The linear sum assignment problem is formulated for the heterogeneous STMTA problem. The recently proposed Dynamic Resource Allocation with Multi-task assignments (DREAM) approach has been modified to solve the heterogeneous PJITD problem. At the start, it computes the minimum number of robots required (with their types) to execute given heterogeneous PJITD tasks. These required robots are added to the team to guarantee the feasibility of all PJITD tasks. Then robots in an updated team are assigned to execute the PJITD tasks while minimizing the total cost for the team to execute all PJITD tasks. The performance of the proposed non-iterative approach has been validated using high-fidelity software-in-loop simulations and hardware experiments. The simulations and experimental results clearly indicate that the proposed approach is scalable and provides optimal resource utilization.
keywords: Spatio-temporal tasks, time scheduling, heterogeneous resource allocation, multiagent pick-up and delivery, just-in-time
## 1 Introduction
With growing technology, robots have been used in various industrial applications. Multi-robot systems provide a distributed, reliable, and scalable approach for handling various operations. With the help of developments in IoT and Industry 4.0 technologies, just-in-time [1; 2] approaches are used in the automation industry to manage storage and inventories optimally. Warehouses are the critical connection hubs in the supply chain of the e-commerce industry, and warehouse automation is becoming very important [3; 4; 5; 6]. Customers demand quick and on-time delivery of items, and time is becoming a crucial aspect of the e-commerce industry. The time sensitivity of delivery tasks increases for perishable items such as food and beverages. Due to the ever-persistent competition in e-commerce, even for non-perishable items, the on-time delivery of items is a game-changing factor. Warehouse management and e-commerce are a few of the important applications that require online solutions and whose efficacy can be improved with the use of JIT tasks.
A typical warehouse has many objects that need picking and placing between various locations, which is currently done by autonomous robots. If items are delivered at an exact time, the subsequent processes can start immediately, improving the efficiency of operations. It will also reduce or eliminate the need for local storage space. The packaging of different items for an order is one example where all items must be at the packaging counter at the desired time. Local storage is not required if all items come to the packaging counter at the desired times; this also helps improve efficacy by reducing redundant pick and place operations. The just-in-time (JIT) management strategy is implemented in the manufacturing and automobile industries to align raw-material orders from suppliers directly with production schedules. A major concern in the JIT approach is the potential disruptions in the supply chain. In this paper, we propose the use of robots for pick-up and just-in-time delivery tasks in warehouse operations, obtaining the benefits of the JIT approach with a robust supply chain maintained by robots.
A multi-robot pick-up and delivery problem has been approached via distributed resource allocation in [7]. The cost function minimizes the total distance traveled by robots while
Figure 1: Typical Warehouse and its operations
executing pick-up and delivery tasks. In [8], an integrated approach for task assignment and path planning for capacity-constrained multi-agent pick-up and delivery problems has been presented. This approach also handles carrying multiple packets during transport. The marginal-cost-based and regret-based marginal-cost-based algorithms minimize the total travel delay while avoiding collisions. The multi-task allocation problem for final-mile delivery using drones has been solved in [9], where a drone has to pick up and deliver items one by one. In [10], adaptive task allocation in warehouse operations has been presented to handle system dynamics such as the locations of tasks, the number of robots, the replenishment of new stock, and the battery levels of robots. In [11], the complexity of this combined pick-up and delivery has been reduced by considering them as separate tasks and adding precedence constraints such that the same robot should pick up the items before delivery, for a last-mile delivery problem. But this doubles the number of tasks.
A detailed review of task assignment and scheduling with different temporal constraints has been presented in [12]. Two critical classes of problems, deadline and time-window problems [13; 14; 15; 16; 17], are well-studied in the literature for warehouse problems. Deadline tasks require local storage to keep items before they are used for the subsequent process (which will start only after the deadline). Time-window problems require the items to be delivered within a time window, and local storage is available only for that time window. The traveling salesman problem with time windows [18] provides the mathematical framework for the time-constrained TSP. The solution approach considers the pick-up and delivery as separate tasks and adds the constraints that the robot that picks up an item must also deliver it, and that delivery is allowed only after pick-up. This increases the constraints and dimensions of the optimization problem. In [19], the effects of the size of the time window were studied, and it is observed that decreasing the size of the time window increases customer satisfaction and decreases computation time, but increases the tour duration.
All the aforementioned works in warehouse automation do not consider JIT tasks and assume the feasibility of the tasks for a given team; hence, they assume a fixed-size team of robots. There is thus a strong need for an algorithm that handles heterogeneous JIT tasks with a dynamic-sized team of heterogeneous robots and computes a feasible solution utilizing the minimum resources (robots).
In this paper, we propose a heterogeneous resource allocation approach for the online pick-up and just-in-time delivery problem with heterogeneous robots in warehouse management. The cost function defined in [20] has been modified for heterogeneous robots executing heterogeneous tasks. The modified cost also accounts for the total distances traveled by robots for pick-up and delivery, as well as the loading time (at pick-up) and unloading time (at delivery) required by robots. The proposed heterogeneous resource allocation approach for PJITD tasks provides a non-iterative solution that computes the optimal trajectories for a dynamic-sized team of robots to execute the given heterogeneous spatio-temporal tasks. In the first step, the number of robots required to execute the given heterogeneous spatio-temporal tasks is computed. The required numbers of robots of each skill type are added to the active team of robots, and finally, feasible assignments for the updated team of heterogeneous robots are computed. This way, in at most two steps, one can compute the optimal assignments to execute the given heterogeneous spatio-temporal tasks with minimum resources (robots). From the solution of the heterogeneous spatio-temporal multi-task assignment (STMTA) problem, the trajectories of active robots are computed using a trajectory generation algorithm, following which the team of robots executes all the given tasks. The working of the DREAM algorithm for PJITD tasks is demonstrated in both simulations and hardware experiments. The high-fidelity simulations are carried out in a ROS2-Gazebo environment. Lab-scale hardware experiments are conducted to illustrate the working of the proposed heterogeneous resource allocation approach for PJITD problems.
The rest of the paper is organized as follows: Section 2 reviews related works. Section 3 presents the mathematical problem formulation for the online pick-up and just-in-time delivery task assignment problem. Section 4 presents the heterogeneous resource allocation approach for computing feasible task assignments. The working of the proposed approach is shown in Section 5. The paper is concluded in Section 6.
## 2 Related works
The proposed work uses the idea of JIT tasks, the spatio-temporal multi-task assignment problem, and a dynamic-sized team of robots to execute given spatio-temporal tasks. Here, we briefly review these related works.
### Just-in-Time (JIT)
The just-in-time [1; 2] approach demands that tasks be done at exact times; this helps to manage inventory and storage optimally. Recently, a new approach named zero-warehousing and smart manufacturing has been presented in [21], in which IoT-based zero-warehousing is proposed to minimize non-value-adding and redundant warehouse handling processes and also to minimize warehousing space. Recently, [22; 23] have presented the just-in-time approach for pick-up and delivery with automated guided vehicles. The cost function has been formulated to minimize the deviations from the desired pick-up and delivery times; hence, it handles the temporal constraints softly. The aforementioned works on JIT are designed for static environments where tasks are known in advance and the solution can be computed offline.
### Spatio-Temporal Multi-Task assignment (STMTA)
Chopra and Egerstedt [24; 25] have presented the multi-robot routing problem and demonstrated it using a music wall, where robots reach different note locations and play musical notes at specific exact times. As spatio-temporal tasks need to be done at the desired times, some minimum number of robots is required. The main issue in spatio-temporal task assignment is the computation of the minimum number of robots required to execute the given spatio-temporal tasks. In [24; 25], the required minimum number of robots is computed offline, in an iterative way, for given tasks. This iterative method for computing the required minimum number of robots was a big hurdle for the online use of STMTA.
### Dynamic resource allocation approaches for STMTA
The Dynamic REsource Allocation with decentralized Multi-task assignment (DREAM) approach [20] has been proposed for the spatio-temporal multi-task assignment problem. It provides a non-iterative solution that computes the number of homogeneous robots required to execute the given spatio-temporal tasks and their assignments to those tasks. The non-iterative DREAM approach has been implemented to compute collision-free trajectories for a dynamic-sized team of music-playing robots (i.e., just-in-time tasks with homogeneous robots) in [26]. The DREAM approach is limited to homogeneous agents and considers only simple routing tasks. PJITD tasks demand a solution for heterogeneous robots, so DREAM is not directly applicable to PJITD tasks.
Warehouse automation requires a non-iterative (online) solution to assign multiple complex tasks (a combination of a few sub-tasks and waiting) to an optimal-sized team of heterogeneous robots (with different speeds and payload-carrying capabilities). The DREAM algorithm provides an online solution but considers only simple routing tasks with homogeneous robots. Hence there is a need to develop an online-implementable algorithm that handles online, dynamic, and heterogeneous complex tasks in a warehouse environment.
## 3 Pick-up and Just-In-Time Delivery Tasks
Typical warehouse operations are shown in Fig. 1. The main operational objective in the warehouse is to minimize the time of dispatch of items from the warehouse once an order is received. The ordered items are dispatched from a warehouse to some local hub near the customer. All items belonging to one local hub need to be collected on priority before the scheduled leaving time of the vehicle transporting items to that hub/customer. One can use linear temporal logic approaches [27] to generate sub-tasks in automation, and bin packing algorithms [28] to compute the sequence in which items need to be packed. As all items in a single package should be packed together, the exact delivery time will help speed up the packaging process. All items from a single package can come together and be directly packed without local storage and time delay. It also eliminates the redundant pick and place operations for local storage. In this way, the efficacy of operations will be improved.
In warehouse operations, a human operator (near pick-up) or an arm system (on each robot) is required to pick up multiple items and transport them. To avoid the complexity of the co-working environment, this paper assumes that one robot can execute one task at a time. i.e., if a robot has picked one item, it has to deliver it before picking another. A robot can plan for future tasks but must complete the first task before starting the next one.
In this paper, we assume that once the order is received from customers, the sub-tasks are defined, and pick-up and just-in-time delivery (PJITD) problems are generated. This paper presents the solution to the PJITD problem while minimizing the robots required and the collective distance traveled by the dynamic-sized team of robots to execute all PJITD tasks on time. The PJITD task demands the robots with the desired skill set to pick up the items and deliver them at a specific location at a specific time; hence this task is also called a heterogeneous spatio-temporal task. One should note that the given JIT/spatio-temporal tasks will require a minimum number of robots to execute the tasks on time. The objective of this paper is to compute the assignments of robots to execute online heterogeneous tasks while minimizing the resources (i.e., the number of actively used robots) and the total distance traveled.
First, we define the notations used in the paper.
\(\mathbf{p}=(x,y)\in\mathbb{R}^{2}\): location in Cartesian coordinates
\(\mathbf{p}^{R}\): robot's location
\(\mathbf{p}^{p}\): pick-up location
\(\mathbf{p}^{D}\): delivery location
\(R_{i}\): robot number \(i\) (index \(i\) is used for robots)
\(\overline{V}_{i}^{R}\): maximum velocity of \(R_{i}\)
\(Q_{\ell}\): quality/skill set \(\ell\), \(\ell\in\mathcal{L}=\{1,2,\cdots,n_{\ell}\}\)
\(Q(R_{i})\): set of qualities/skills of robot \(R_{i}\)
\(\tau^{l}\): loading time
\(\tau^{u}\): unloading time
\(t_{j}^{D}\): delivery time of task \(T_{j}\)
\(Q(T_{j})\): set of qualities/skills required to execute the task \(T_{j}\)
\(T_{j}(\mathbf{p}_{j}^{p},\tau_{j}^{l},\mathbf{p}_{j}^{D},\tau_{j}^{u},t_{j}^{D},Q(T_{j}))\) or \(T_{j}\): \(j^{th}\) task (indices \(j\) and \(k\) are used for tasks)
\(\mu_{i}:\) sequence of tasks assigned to robot \(R_{i}\)
\(c_{ij}^{f,Q(R_{i})}\): cost for robot \(R_{i}\) (with skill set \(Q(R_{i})\)) to execute the task \(T_{j}\) as its first task
\(c_{kj}^{s,Q_{\ell}}\): cost for a robot with skill set \(Q_{\ell}\) to execute the task \(T_{j}\) just after the task \(T_{k}\) (subsequent task)
\(\delta_{ij}^{f,Q(R_{i})}\): decision variable for whether robot \(R_{i}\) (with skill set \(Q(R_{i})\)) executes the task \(T_{j}\) as its first task or not
\(\delta_{kj}^{s,Q_{\ell}}\): decision variable for whether a robot with skill set \(Q_{\ell}\) executes the task \(T_{j}\) just after task \(T_{k}\) or not
### Mathematical Formulation
Consider a set of \(N\) robots denoted as \(\mathcal{R}\), \(\mathcal{R}=\{R_{1},R_{2},\cdots,R_{N}\}\). The position of robot \(R_{i}\) is denoted as \(\mathbf{p}_{i}^{R}=(x_{i}^{R},y_{i}^{R})\). Typically, robots have different finite skills (for example, weight-carrying capacity or size-carrying capacity), represented by the skill indices \(\ell\in\mathcal{L}\). Robots will be assigned to pick-up and delivery tasks. The pick-up location for task \(T_{j}\) is denoted as \(\mathbf{p}_{j}^{p}=(x_{j}^{p},y_{j}^{p})\) and the delivery location for task \(T_{j}\) is denoted as \(\mathbf{p}_{j}^{D}=(x_{j}^{D},y_{j}^{D})\). The charging stations are placed at \(S_{i}=(x_{i}^{C},y_{i}^{C})\). A warehouse robot operates in two modes: active mode, when the robot is assigned to a task, and rest mode. In rest mode, a robot is either idle or charging its battery at a charging station.
#### 3.1.1 Pickup and just-in-time delivery task
A pickup and just-in-time delivery (PJITD) task consists of loading an item (the robot has to stop at the picking station for the loading time), traveling to the delivery location, and unloading the item (the robot has to wait for the unloading time). This execution has to be completed by the desired delivery time. If a robot starts a task and executes it on the desired delivery time, then the PJITD task is completed. If the robot reaches the delivery location after the desired delivery time, the task execution fails. Consider a PJITD task (\(T_{j}\)), in which a robot has to visit the pick-up location (\(\mathbf{p}_{j}^{p}\)) and wait for the loading time (\(\tau_{j}^{l}\)) to load items onto the robot. After picking the items, the robot should reach the desired packing/processing (delivery) counter located at \(\mathbf{p}_{j}^{D}\) and unload the items, with unloading time (\(\tau_{j}^{u}\)), on or before the delivery time (\(t_{j}^{D}\)). This task is represented by \(T_{j}(\mathbf{p}_{j}^{p},\tau_{j}^{l},\mathbf{p}_{j}^{D},\tau_{j}^{u},t_{j}^{D},Q(T_{j}))\); in the rest of the paper, this PJITD task is referred to as \(T_{j}\).
A task consists of sub-tasks listed below:
1. select a robot with the desired skills (i.e., \(Q(T_{j})\subseteq Q(R_{i})\))
2. the robot should reach the pick-up location \(\mathbf{p}_{j}^{p}\)
3. wait for \(\tau_{j}^{l}\) to load the items
4. reach the delivery/drop location \(\mathbf{p}_{j}^{D}\)
5. wait till the delivery time \(t_{j}^{D}\)
6. wait for \(\tau_{j}^{u}\) to unload the items
The spatial distance traveled in the execution of a task is the distance from the robot's current position to the pick-up location (\(\mathbf{p}_{j}^{p}\)) plus the distance from the pick-up location to the delivery station at \(\mathbf{p}_{j}^{D}\). This task has to be executed with the delivery time constraint.
Once orders are received from a customer, tasks are generated, and the task allocator assigns the tasks to the robots. Each robot has to execute its assigned tasks in sequence. Let us say the tasks assigned to robot \(R_{i}\) are \(\mu_{i}=\{T_{a},T_{b}\}\); then the robot starts from its initial position, moves to pick up the items from the pick-up location of task \(T_{a}\) (i.e., \(\mathbf{p}_{a}^{p}\)), and then delivers them to the delivery location of task \(T_{a}\) (i.e., \(\mathbf{p}_{a}^{D}\)) on or before the delivery time \(t_{a}^{D}\). Next, from the delivery point of task \(T_{a}\), the robot moves to the pick-up location of task \(T_{b}\), picks up the items, and delivers them to the delivery location \(\mathbf{p}_{b}^{D}\). In short, from the previous task's delivery location, a robot moves toward the next assigned task's pick-up location, executing the tasks in sequence.
Let the PJITD tasks available at any given time \(t\) be \(\{T_{1}(\mathbf{p}_{1}^{T},t_{1}),T_{2}(\mathbf{p}_{2}^{T},t_{2}),\cdots,T_{M_{t}}(\mathbf{p}_{M_{t}}^{T},t_{M_{t}})\}\). In general, the number of tasks \(M_{t}\) is larger than the number of robots \(N\) (\(M_{t}>N\)). Note that the number of tasks depends on the customers' orders, and a new task is added for every new order; at a given time \(t\), the number of tasks is \(M_{t}\). Assuming the given \(M_{t}\) tasks are feasible for the team of \(N\) robots, these \(N\) robots need to plan their trajectories (sets of pick-up and delivery points with respective delivery times) \(\{\mu_{1},\mu_{2},\cdots,\mu_{N}\}\) cooperatively such that, collectively, the robots complete all the PJITD tasks.
The main objective of this PJITD problem is to find the optimal assignment of multiple tasks to the robots such that all PJITD tasks are executed while minimizing the total distance traveled and optimizing resource utilization. The major challenge in this PJITD task assignment is to compute the minimum number of robots required to execute all the given PJITD tasks. Once the minimum number of robots is identified, the heterogeneous spatio-temporal multi-task assignment problem is solved to compute feasible and optimal trajectories for the robots. The details of this approach are explained in the next section.
## 4 Heterogeneous Resource Allocation Approach for the PJITD Task Assignment Problem
In the previous section, the PJITD task was defined. This section provides the algorithm for assigning robots to execute those PJITD tasks. To execute the given PJITD tasks, robots need to visit a sequence of locations at specific times. A robot can visit a pick-up location at any time, but for delivery it needs to be at the delivery location at the desired delivery time. Due to this temporal constraint, a minimum number of robots is required to execute all the given heterogeneous spatio-temporal tasks; hence, a dynamically sized team of robots is used, and only the minimum required number of robots is utilized. The proposed algorithm first computes the minimum number of robots required to execute the given PJITD tasks. Next, it computes the optimal trajectories for the robots in the updated team such that they execute all given PJITD tasks. The robots are assigned tasks to minimize the total cost; the cost of executing a PJITD task is described in the next subsection.
Each robot executes its assigned tasks: starting from its current position, it picks up the items at the pick-up point and delivers them at the exact desired delivery time at the delivery location. Next, the robot moves towards the pick-up location of the subsequently assigned task. The robot executes its assigned tasks in sequence; this sequence is referred to as the _trajectory_ of the robot (\(\mu\)), as it constitutes both the spatial locations and times. The feasible trajectories of all robots are computed from the moves of each robot starting at its current position and passing through the pick-up and delivery locations of its assigned tasks.
The binary decision vector \(\mathbf{\delta}\) has two components: the first-assigned-task decisions (\(\mathbf{\delta}_{i}^{f,Q(R_{i})}\in\mathbb{R}^{M_{t}}\)) and the subsequently-assigned-task decisions (\(\mathbf{\delta}_{k}^{s,Q_{\ell}}\in\mathbb{R}^{M_{t}}\)). The decision variables are given as \(\mathbf{\delta}_{i}^{f,Q(R_{i})}=\left[\delta_{i1}^{f,Q(R_{i})},\;\delta_{i2}^{f,Q(R_{i})},\;\cdots,\;\delta_{iM_{t}}^{f,Q(R_{i})}\right]\) for \(i=\{1,2,\cdots,N\}\), and \(\mathbf{\delta}_{k}^{s,Q_{\ell}}=\left[\delta_{k1}^{s,Q_{\ell}},\;\delta_{k2}^{s,Q_{\ell}},\;\cdots,\;\delta_{kM_{t}}^{s,Q_{\ell}}\right]\) for \(k=\{1,2,\cdots,M_{t}-1\}\) and \(\ell=\{1,2,\cdots,|\mathcal{L}|\}\), where \(\mathcal{L}\) indexes the distinct robot skill sets.
The first decision variable \(\delta_{ij}^{f,Q(R_{i})}\in\{1,0\}\) denotes whether robot \(R_{i}\) executes task \(T_{j}\) first or not. The subsequent decision variable \(\delta_{kj}^{s,Q_{\ell}}\in\{1,0\}\) denotes whether a robot with skill set \(Q_{\ell}\) executes task \(T_{j}\) just after the execution of task \(T_{k}\) or not. The decision variables \(\mathbf{\delta}^{f,Q(R_{i})}\) and \(\mathbf{\delta}^{s,Q_{\ell}}\) are optimized to minimize the cost (fuel spent), which is based on the distance traveled by the robots to execute all PJITD tasks at their respective delivery times.
### Cost Function
The cost of a task is defined as the distance that a robot with the required skills needs to travel from its location to the pick-up location and then to the delivery location before the desired delivery time. For a robot executing its first task from its initial position, the cost of the first task (\(\mathbf{C}^{f,Q(R_{i})}\)) is the distance traveled by robot \(R_{i}\) from its current position to the pick-up location and from the pick-up location to the delivery location, on or before the delivery time of the PJITD task. Mathematically,
\[d_{1}(R_{i},T_{j})=d(\mathbf{p}_{i}^{R},\mathbf{p}_{j}^{P})+d(\mathbf{p}_{j}^{P},\mathbf{p}_{j}^{D}) \tag{1}\] \[C_{ij}^{f,Q(R_{i})}=\begin{cases}d_{1}(R_{i},T_{j})&\text{if }\dfrac{d_{1}(R_{i},T_{j})}{\overline{V}_{i}^{R}}\leq t_{j}^{D}-\tau_{j}^{l}\\ \kappa&\text{if }\dfrac{d_{1}(R_{i},T_{j})}{\overline{V}_{i}^{R}}>t_{j}^{D}-\tau_{j}^{l}\\ \kappa&\text{if }Q(T_{j})\nsubseteq Q(R_{i})\end{cases}\] (2) \[\text{for }i\in\mathcal{I}=\{1,2,\cdots,N\},\quad j\in\mathcal{J}=\{1,2,\cdots,M_{t}\}\]
where \(\kappa\) is a large value, and \(d(A,B)\) is the distance along the shortest feasible path from point A to point B.
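A direct transcription of Eqs. (1)-(2) could look as follows. This is a sketch reusing the `Robot`/`PJITDTask` structures above; `dist` stands in for the shortest-feasible-path distance \(d(A,B)\) (computed with a planner in practice, approximated here by the Euclidean distance):

```
import math

KAPPA = 1000.0  # the large penalty kappa for infeasible assignments

def euclidean(a, b):
    """Stand-in for d(A, B); the paper uses shortest-feasible-path lengths."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def first_task_cost(robot, task, dist=euclidean):
    """C^{f,Q(R_i)}_{ij} of Eq. (2) for one robot-task pair."""
    d1 = dist(robot.position, task.pickup) + dist(task.pickup, task.delivery)  # Eq. (1)
    if not task.skills <= robot.skills:     # Q(T_j) is not a subset of Q(R_i)
        return KAPPA
    if d1 / robot.speed > task.delivery_time - task.load_time:  # cannot arrive in time
        return KAPPA
    return d1
```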
The cost of executing a subsequent task (\(\mathbf{C}^{s,Q_{\ell}}\)) by a robot is the distance traveled from its previous delivery location to the current pick-up location and from the current pick-up location to the current delivery location, on or before the delivery time of that subsequent task. If the robot does not have the skill set required to execute the task, the cost is set to \(\kappa\). If a task would have to be executed in negative time, the cost is set to \(\infty\).
\[d_{2}(T_{k},T_{j})=d(\mathbf{p}_{k}^{D},\mathbf{p}_{j}^{P})+d(\mathbf{p}_{j}^{P},\mathbf{p}_{j}^{D}) \tag{3}\] \[C_{kj}^{s,Q_{\ell}}=\begin{cases}d_{2}(T_{k},T_{j})&\text{if }t_{k,j}^{\ell}\leq t_{j}^{D}-(t_{k}^{D}+\tau_{k}^{u}+\tau_{j}^{l})\\ \kappa&\text{if }t_{k,j}^{\ell}>\left(t_{j}^{D}-(t_{k}^{D}+\tau_{k}^{u}+\tau_{j}^{l})\right)>0\\ \kappa&\text{if }Q(T_{j})\nsubseteq Q_{\ell}\\ \infty&\text{if }\left(t_{j}^{D}-(t_{k}^{D}+\tau_{k}^{u}+\tau_{j}^{l})\right)\leq 0\end{cases}\] (4) \[\text{for }k\in\mathcal{K}=\{1,2,\cdots,M_{t}-1\};\ j\in\mathcal{J};\ \ell\in\mathcal{L}\]
where \(t_{k,j}^{\ell}\) is the minimum time required by a robot with skill set \(Q_{\ell}\) to travel from the delivery location of task \(T_{k}\) to task \(T_{j}\), computed as
\[t_{k,j}^{\ell}=\dfrac{d_{2}(T_{k},T_{j})}{\overline{V}_{\ell}} \tag{5}\]
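The subsequent-task cost of Eqs. (3)-(5) can be transcribed in the same spirit; this is again a sketch, reusing `euclidean` and `KAPPA` from the previous snippet:

```
import math

def subsequent_task_cost(prev_task, task, speed, skills, dist=euclidean):
    """C^{s,Q_l}_{kj} of Eq. (4) for a robot class with speed V_l and skill set Q_l."""
    d2 = dist(prev_task.delivery, task.pickup) + dist(task.pickup, task.delivery)  # Eq. (3)
    # time left after the previous delivery, its unloading, and the new loading
    slack = task.delivery_time - (prev_task.delivery_time
                                  + prev_task.unload_time + task.load_time)
    if slack <= 0:                 # T_j would have to be executed in negative time
        return math.inf
    if not task.skills <= skills:  # Q(T_j) is not a subset of Q_l
        return KAPPA
    if d2 / speed > slack:         # Eq. (5): minimum travel time exceeds the slack
        return KAPPA
    return d2
```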
### Optimization Problem
A heterogeneous resource allocation algorithm assigns robots to execute multiple PJITD tasks. A robot executes its tasks in a sequence, and we denote the sequence assigned to robot \(R_{i}\) by \(\mu_{i}\). Here, the problem of computing the sequence \(\mu_{i}\) is converted into computing each move of a robot from one location to another; combining all moves yields the sequence of tasks. Each robot computes its sequence such that all PJITD tasks are executed while the distance traveled is minimized. The first decision variable \(\delta_{ij}^{f,Q(R_{i})}\) denotes whether robot \(R_{i}\) moves from position \(\mathbf{p}_{i}^{R}\) to position \(\mathbf{p}_{j}^{P}\), picks up the items, and then delivers them to the delivery station \(\mathbf{p}_{j}^{D}\) at time \(t_{j}^{\prime}\) with \(t_{j}^{\prime}\leq t_{j}^{D}\). The subsequent decision variable \(\delta_{kj}^{s,Q_{\ell}}\) denotes whether a robot moves from its previous task's delivery position \(\mathbf{p}_{k}^{D}\) (at time \(t_{k}^{\prime}\)) to the next task's pick-up position \(\mathbf{p}_{j}^{P}\) and delivers the item to the delivery station \(\mathbf{p}_{j}^{D}\) on or before the delivery time \(t_{j}^{D}\). The integer programming problem is defined as,
\[\min_{\delta_{ij}^{f,Q(R_{i})},\,\delta_{kj}^{s,Q_{\ell}}}\quad\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{J}}C_{ij}^{f,Q(R_{i})}\delta_{ij}^{f,Q(R_{i})}+\sum_{\ell\in\mathcal{L}}\sum_{k\in\mathcal{K}}\sum_{j\in\mathcal{J}}C_{kj}^{s,Q_{\ell}}\delta_{kj}^{s,Q_{\ell}} \tag{6}\] s. t. \[\delta_{ij}^{f,Q(R_{i})}\in\{0,1\}\qquad\forall(i,j)\in\mathcal{I}\times\mathcal{J}\] (6a) \[\delta_{kj}^{s,Q_{\ell}}\in\{0,1\}\qquad\forall(\ell,k,j)\in\mathcal{L}\times\mathcal{K}\times\mathcal{J}\] (6b) \[\sum_{i\in\mathcal{I}}\delta_{ij}^{f,Q(R_{i})}+\sum_{\ell\in\mathcal{L}}\sum_{k\in\mathcal{K}}\delta_{kj}^{s,Q_{\ell}}=1\quad\forall j\in\mathcal{J}\] (6c) \[\sum_{j\in\mathcal{J}}\delta_{ij}^{f,Q(R_{i})}\leq 1\quad\forall i\in\mathcal{I}\] (6d) \[\sum_{\ell\in\mathcal{L}}\sum_{j\in\mathcal{J}}\delta_{kj}^{s,Q_{\ell}}\leq 1\quad\forall k\in\mathcal{K}\] (6e)
Constraint (6c) enforces that every task is assigned, as either a first or a subsequent task, to exactly one robot. A robot can move to at most one task location just after its current location, which is enforced by Eqs. (6d) and (6e).
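Because every "move" (a first task of a robot, or a task following another task) can serve at most one task and every task must be served exactly once, problem (6) can be read as a rectangular linear assignment problem over the stacked cost matrices. The sketch below shows this reduction for the homogeneous case (a single shared \(\mathbf{C}^{s}\), as in the simulation of Section 5.1); it uses SciPy's `linear_sum_assignment`, which the paper also relies on (Section 5.1.2). The function names and post-processing are our own:

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def solve_pjitd_assignment(C_f, C_s):
    """Minimize Eq. (6) for the homogeneous case.

    C_f : (N, M)   first-task costs  C^f_{ij}
    C_s : (M-1, M) subsequent costs  C^s_{kj}

    Rows of the stacked matrix are candidate moves; the LAP uses every task
    column exactly once (constraint 6c) and every move row at most once
    (constraints 6d and 6e).
    """
    stacked = np.vstack([C_f, C_s])
    # keep the LAP finite: map the inf entries to a huge (but finite) cost
    finite = np.where(np.isinf(stacked), 1e9, stacked)
    rows, cols = linear_sum_assignment(finite)
    first = [(i, j) for i, j in zip(rows, cols) if i < C_f.shape[0]]             # delta^f
    nxt = [(i - C_f.shape[0], j) for i, j in zip(rows, cols) if i >= C_f.shape[0]]  # delta^s
    return first, nxt
```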
### Heterogeneous Resource Allocation Approach
The PJITD tasks are spatio-temporal in nature and must be executed within time constraints; due to these constraints, a given number of robots may or may not be able to execute all the tasks, and some minimum number of robots is required in the team. Hence, a dynamic resource allocation algorithm [20] is first modified for heterogeneous tasks (referred to as the heterogeneous resource allocation algorithm in the rest of the paper). The heterogeneous resource allocation algorithm computes the minimum required number of active robots with the different skill sets. Once these minimum active robots with all the required skill sets are available, the updated heterogeneous spatio-temporal multi-task assignment problem is guaranteed to be feasible. This feasible heterogeneous spatio-temporal multi-task assignment (STMTA) problem is then solved by assigning the robots to multiple tasks. A robot can be switched from rest mode to active mode whenever required; likewise, a robot that is not needed for task execution can be set to rest mode.
In the heterogeneous resource allocation algorithm, the optimization problem is first solved without any guarantee of a feasible solution (step 2 in Algorithm 1). From the computed solution (which may or may not be feasible), the infeasible assignments (cost equal to \(\kappa\)) are identified, and that many rest robots are added to active mode (step 7 in Algorithm 1). These reserve resources are selected such that they have the skill set to execute the corresponding infeasible task. The task assignment problem is then solved again with the updated team of heterogeneous robots. Once the obtained solution is feasible, robots are assigned to tasks as per the solution, and each robot computes its trajectory using the trajectory computation algorithm. If any robot is unassigned, that robot is set to rest mode.
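A compact sketch of this feasibility loop, rendered in Python from the prose above (`solve`, `is_infeasible`, and the robot lists are placeholders of our own, not the paper's API):

```
def heterogeneous_resource_allocation(active, rest, tasks, solve, is_infeasible):
    """Activate rest robots with the missing skills until all tasks are feasible."""
    while True:
        assignment, task_costs = solve(active, tasks)  # step 2: solve, maybe infeasibly
        bad = [t for t, c in zip(tasks, task_costs) if is_infeasible(c)]
        if not bad:
            return active, assignment                  # feasible: assign and stop
        for t in bad:                                  # step 7: wake matching rest robots
            r = next((r for r in rest if t.skills <= r.skills), None)
            if r is None:
                raise RuntimeError("no rest robot has the required skill set")
            rest.remove(r)
            active.append(r)
```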
#### 4.3.1 Trajectory computations
From the feasible solution of the STMTA, the sequence of tasks assigned to each robot is computed using the trajectory generation algorithm given in Algorithm 2. Each robot computes its own trajectory independently and follows it to execute its assigned PJITD tasks. The trajectory generation algorithm computes the sequence of tasks assigned to the robot starting from its own position. The first assigned task defines the first two waypoints (i.e., pick-up and delivery), as given in step 6. Note that there is no exact time constraint for the pick-up; hence, the pick-up time is denoted with \(\cdot\) (dot) to indicate any feasible time (considering the time required to travel) after the previous task and before the delivery time of the current task.
After pick-up, the next waypoint is the spatio-temporal delivery point, which defines both the delivery location and the delivery time. Once the first task is added to the trajectory, the algorithm checks whether any task is assigned from the current delivery location. If a subsequent task is assigned from the delivery location, that task is appended to the trajectory (line 9 of Algorithm 2). This step of appending the subsequent task to the trajectory is repeated until no task is assigned from the last delivery location of the updated trajectory.
```
1: Input: task assignment solution
2: for \(i=1{:}N\) do
3:   \(\mu_{i}=\{\}\)
4:   if \(\sum_{j}\delta_{ij}^{f,Q(R_{i})}=1\) then
5:     \(k^{*}=\arg_{j}(\delta_{ij}^{f,Q(R_{i})}=1)\)
6:     \(\mu_{i}=\left\{(\mathbf{p}_{k^{*}}^{P},\cdot),(\mathbf{p}_{k^{*}}^{D},t_{k^{*}}^{D})\right\}\)
7:     while \(\sum_{j}\delta_{k^{*}j}^{s,Q_{\ell}}=1\) do
8:       \(j^{*}=\arg_{j}(\delta_{k^{*}j}^{s,Q_{\ell}}=1)\)
9:       \(\mu_{i}=\{\mu_{i},(\mathbf{p}_{j^{*}}^{P},\cdot),(\mathbf{p}_{j^{*}}^{D},t_{j^{*}}^{D})\}\)
10:      \(k^{*}=j^{*}\)
11:    end while
12:  end if
13: end for
```
**Algorithm 2** Trajectory Computation
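In Python, Algorithm 2 reduces to following the chain of assigned tasks per robot. The sketch below assumes the assignment has already been post-processed into a `first` map (robot index to first task index) and a `nxt` map (task index to subsequent task index); these are our own convenience structures, not the paper's notation:

```
def compute_trajectories(n_robots, first, nxt, tasks):
    """Build mu_i for every robot from the delta^f / delta^s decisions."""
    trajectories = []
    for i in range(n_robots):
        mu = []
        k = first.get(i)                # first assigned task, if any
        while k is not None:
            t = tasks[k]
            mu.append((t.pickup, None))               # pick-up: flexible time ("." in Alg. 2)
            mu.append((t.delivery, t.delivery_time))  # spatio-temporal delivery waypoint
            k = nxt.get(k)              # follow delta^s to the subsequent task
        trajectories.append(mu)
    return trajectories
```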
## 5 Performance Evaluation
The working of the proposed heterogeneous resource allocation approach for the PJITD task assignment problem is illustrated using ROS2-Gazebo simulations and lab-scale hardware experiments.
### High-fidelity Simulation Study
#### 5.1.1 Simulation Setup
The proposed use of the DREAM algorithm for pick-up and just-in-time delivery problems is demonstrated in a simulation environment. A Gazebo simulator with _RViz2_ plugins is used, operated using ROS2 (Galactic) and Python. The _Nav2_ plugins are used for the navigation of robots in the simulations. The simulations are carried out on an Ubuntu system with an i7-8700 CPU, 16GB RAM, and an NVIDIA GT710 GPU.
The simulations are conducted for a small warehouse world designed in Gazebo; Fig 2 shows the warehouse. A total of four robots and seven PJITD tasks have been considered in the simulation.
#### 5.1.2 Architecture of Simulation
Fig. 3 represents the functioning blocks of the simulation software. The architecture consists of 4 blocks: Environment, Spatio-Temporal Task Assigner, Multi-Robot Navigator, and Common Interface.
Figure 2: Warehouse model in Gazebo
Environment. The environment block corresponds to the simulation environment. It consists of the robots and all other objects in the simulation. It also provides the sensed data from each of the robots. All robots use the SLAM algorithm to localize and get the live map of the environment. Robots operate in the environment to execute the tasks as instructed by the navigator.
Central system. This software block acts as a common interface between the environment, the multi-robot navigator block, and the spatio-temporal task assignment generator block. It receives the tasks from the user/customer and sends them to the assignment generator block. Once assignments are computed, they are received by the central system. Afterward, these assignments are shared with the navigator block to execute the tasks. Meanwhile, if any new tasks are received, the central system checks the status of the ongoing tasks, updates the future positions of the robots, and then calls the spatio-temporal task assignment routine.
Spatio-temporal task assignment service. All task information, i.e., the respective pick-up and delivery locations and delivery times, is given to the assignment service by the central system. In the spatio-temporal task assignment service, the robots compute the navigation distances to generate the cost matrices. The in-built _ComputePathToPose_ action-client service in _Nav2_ (which uses Dijkstra's algorithm to compute a feasible path for the robot from one location to another) is used to generate the feasible paths. The line integral along the generated paths then provides the distances. The cost matrices are computed by considering feasibility over the desired time using Eqs. (2) and (4). The optimization problem defined by Eq. (6) is then solved using the _linear_sum_assignment_ function from the optimize library in SciPy. Next, each robot's trajectory is computed for the obtained assignments using the trajectory generation algorithm. These trajectories are returned as the output to the central system.
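The "line integral" step amounts to summing segment lengths along the planner's waypoint list. A minimal sketch, assuming the poses have already been extracted from the _ComputePathToPose_ result into \((x,y)\) pairs:

```
import math

def path_length(poses):
    """Approximate the line integral along a planned path given (x, y) waypoints."""
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(poses, poses[1:]))
```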
Multi-robot navigator. Once the central system receives the trajectories for each robot, they are sent to the navigation service.
The _Nav2_ plugin is used in the navigation node to navigate each robot. For simultaneous operation of multiple robots, _Nav2_ services are called asynchronously. Each robot travels along its assigned trajectory. For an individual robot, a task is subdivided into four sub-tasks: reaching the pick-up station, loading the items, reaching the delivery station, and unloading the items at the desired delivery time. All these sub-tasks are executed sequentially (synchronously). However, in a team sense, all robots operate asynchronously to execute their individual tasks simultaneously.
Event. Suppose something abrupt happens in the environment, such as the failure of a robot or the closure of some roadways; then an event is triggered. After the event, the central system checks the status of all active tasks and solves the task assignment problem again for the post-event scenario. Navigation services are updated, and robots are assigned to tasks according to the updated assignments.
#### 5.1.3 Simulation Results
The simulations are conducted with seven tasks (\(T_{0}\) to \(T_{6}\)) and four robots (\(R_{0}\) to \(R_{3}\)) serving those tasks to illustrate the working of the proposed heterogeneous resource allocation approach for PJITD tasks. The PJITD tasks are given in Table 1. The robots are initialized at \(R_{0}(0)=(2,-0.35)\), \(R_{1}(0)=(1.6,2.5)\), \(R_{2}(0)=(-3.0,1.2)\), and \(R_{3}(0)=(3.6,1.5)\). To visualize the coordinates of the pick-up and delivery points, a unique color is allocated to each task, where a circle shows the pick-up point and a square dot shows the delivery point along with its delivery time. The current simulation time is displayed at the top right corner. The simulation video is available at [https://www.youtube.com/watch?v=gNCOhG4CG2A](https://www.youtube.com/watch?v=gNCOhG4CG2A).
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|} \hline & \(T_{0}\) & \(T_{1}\) & \(T_{2}\) & \(T_{3}\) & \(T_{4}\) & \(T_{5}\) & \(T_{6}\) \\ \hline Pick-up location & 4.0, 1.0 & 1.0, 3.5 & -4.0, 0.5 & 1.5, 5.0 & 1.0, 3.5 & 2.0, 1.0 & 2.0, 2.0 \\ \hline Delivery location & 1.0, 2.5 & -2.5, 2.5 & 1.0, 2.0 & -4.0, 2.5 & 4.0, 2.5 & 0.0, 2.5 & 4.0, 2.0 \\ \hline Delivery time (\(t^{D}\)) & 35 & 50 & 60 & 75 & 100 & 120 & 140 \\ \hline Loading time (\(\tau^{l}\)) & 1 & 1 & 1 & 2 & 2 & 1 & 1 \\ \hline Unloading time (\(\tau^{u}\)) & 1 & 2 & 1 & 1 & 2 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: The pick-up and delivery locations with the respective delivery, loading, and unloading times for the considered tasks
Figure 4: Typical feasible path computed using Nav2
Figure 3: Architecture for simulation
A dedicated RViz2 window shows each robot's operations and status. The action of each robot, decided using the proposed algorithm, is displayed at the top of the corresponding RViz2 window. The local navigation path computed by the robot is shown as a red-colored curve. Once navigation is initiated, the status indicates which task the robot is executing, i.e., whether the robot is performing the pick-up, delivery, loading, or unloading subtask. The status of a robot is updated after each operation. Once all the tasks allocated to a specific robot are completed, the same is highlighted in the status.
The robots are assigned the given PJITD tasks using the proposed method. For that purpose, robots compute the path distances for the given tasks. The _ComputePathToPose_ function from the _Nav2_ package has been used to compute a feasible path from one point to another in the arena. An RViz2 screenshot of the warehouse is shown in Fig 4, which also shows the feasible path between two points as a red-colored curve. The line integral along this feasible path is used to compute the distances.
The cost matrices, computed for the given tasks and robot positions using Eqs. (2) and (4), are given below in Eqs. (7) and (8), respectively. The robots considered in the simulations are homogeneous and have all the skill sets needed to execute all tasks; hence, the subsequent cost matrix is the same for all four robots, i.e., \(C^{s,0}=C^{s,1}=C^{s,2}=C^{s,3}=C^{s,\bullet}\). The solution obtained from the proposed heterogeneous resource allocation approach is the set of decision variables indicating which robots are assigned to which tasks. Boxes in the cost matrices highlight the task assignment solution, and the superscript denotes the robot to which the task is assigned.
\[\begin{bmatrix}\mathbf{C}^{f,0}\\ \mathbf{C}^{f,1}\\ \mathbf{C}^{f,2}\\ \mathbf{C}^{f,3}\end{bmatrix}=\begin{bmatrix}5.66&6.95&11.73&11.44&10.61&2.84&4.25\\ 6.13&9.67&22.72&8.56&7.77&3.11&2.60\\ 10.54&9.78&6.75&12.77&12.01&6.69&7.21\\ \cdot&\cdot&\cdot&\cdot&\cdot&\cdot&\cdot\end{bmatrix} \tag{7}\]
\[\mathbf{C}^{s,\bullet}=\begin{bmatrix}\infty&9.62&11.12&8.60&7.62&3.36&3.08\\ \infty&\infty&8.91&14.56&13.56&7.17&8.24\\ \infty&\infty&\infty&1.31&12.12&4.65&6.02\\ \infty&\infty&\infty&\infty&11.96&8.15&8.67\\ \infty&\infty&\infty&\infty&\infty&3.49&4.65\\ \infty&\infty&\infty&\infty&\infty&\infty&\cdot\\ \infty&\infty&\infty&\infty&\infty&\infty&\infty\end{bmatrix} \tag{8}\]
Next, the trajectory of each robot is obtained using the trajectory computation algorithm (Algorithm 2). The obtained trajectory for \(R_{0}\) is \(\mu_{0}=\{T_{1}\}\), for \(R_{1}\) is \(\mu_{1}=\{T_{3}\}\), for \(R_{2}\) is \(\mu_{2}=\{T_{2},T_{5},T_{6}\}\), and for \(R_{3}\) is \(\mu_{3}=\{T_{0},T_{4}\}\).
The task execution starts at the simulation time of 505 \(sec\); accordingly, all the delivery times are updated. Fig 5 shows the initial scenario at \(t=505\,sec\), where all four robots are assigned to their respective tasks. It also shows the robots' operations in their respective RViz2 windows.
Each robot has to execute its assigned tasks, which consist of reaching the pick-up location, loading the items, going to the delivery location, waiting till the desired delivery time, and then unloading the items. Each robot executes these sub-tasks sequentially and then executes the subsequently assigned tasks. Robot \(R_{0}\) has been assigned to task \(T_{1}\), so \(R_{0}\) moves towards the pick-up location \(P_{1}\) (denoted by a green dot). At the same moment, \(R_{1}\) is assigned to task \(T_{3}\), so it moves to the pick-up location \(P_{3}\). Similarly, \(R_{2}\) moves towards the pick-up location \(P_{2}\), and \(R_{3}\) moves towards the pick-up location \(P_{0}\). Fig 5 shows the snapshot at \(t=505\) (tasks are given to the algorithm at 505 sec), where the robots find feasible paths to reach their respective pick-up locations.
Robot \(R_{3}\) reaches its pick-up location \(P_{0}\) at \(t=513\) sec and waits there for 1 sec to load the items. Next, \(R_{3}\) moves towards its delivery location \(D_{0}\), reaching \(D_{0}\) at \(t=531\) sec. The delivery time of task \(T_{0}\) is 540, so it waits for the next 9 seconds. Meanwhile, the other robots \(R_{0}\), \(R_{1}\), and \(R_{2}\) have reached their respective pick-up locations, loaded the items, and are traveling towards their respective delivery locations. At \(t=540\) sec, \(R_{3}\) unloads the items in 1 sec, so \(R_{3}\) completes task \(T_{0}\) at 541 sec. After completing its first assigned task, \(R_{3}\) starts executing its next task \(T_{4}\); for that purpose, it travels towards \(P_{4}\) for pick-up.
\(R_{0}\) reaches the delivery location \(D_{1}\) at 546 sec; the delivery time for \(T_{1}\) is 555 sec, so it waits for 9 seconds and then unloads the items in 1 sec. As \(R_{0}\) is assigned only one task, it goes to rest mode after completing it. \(R_{2}\) reaches its first delivery location \(D_{2}\) at \(t=551\), waits till the delivery time, i.e., 565 sec, and unloads the item in the next 1 sec, completing task \(T_{2}\) at \(t=566\). After completing its first assigned task \(T_{2}\), \(R_{2}\) starts its next task, \(T_{5}\). Meanwhile, \(R_{1}\) reaches its first delivery location \(D_{3}\) at \(t=556\), waits there till \(t=580\), and then unloads the items in 1 sec. As \(R_{1}\) is assigned only one task, it goes to rest mode after completing it.
Now only two robots, \(R_{2}\) and \(R_{3}\), are active and executing their tasks; \(R_{0}\) and \(R_{1}\) are in rest mode. \(R_{3}\) reaches its assigned delivery location \(D_{4}\) at \(t=591\) and waits until the given delivery time of 605 sec. After \(t=605\), \(R_{3}\) unloads the items in 2 seconds, completes its last task \(T_{4}\), and goes to rest mode. \(R_{2}\) reaches the delivery location \(D_{5}\) at \(t=599\), waits till its delivery time, unloads the items after \(t=610\), and starts its new task \(T_{6}\). \(R_{2}\) picks up the item from \(P_{6}\) and reaches \(D_{6}\) at \(t=645\). After waiting for 5 sec, it unloads the item and completes its last task. From the simulation video, one can observe that the robots are able to execute all the given PJITD tasks.
#### 5.1.4 Computational Complexity
Table 2 shows the computation time required by the proposed approach for PJITD task assignments. The cost of the tasks is computed using the map of the given warehouse, and this cost matrix computation takes significant time. The assignment algorithm itself is computationally efficient, requiring almost three orders of magnitude less time than the cost matrix computation. From the table, one can observe that the total computation time increases with the number of tasks, growing roughly quadratically.
### Resource Utilization
The proposed approach requires a dynamic number of robots to execute the given spatio-temporal tasks. For this purpose, the resources (robots) required to execute the given spatio-temporal tasks are analyzed for different task arrival rates. Here, PJITD tasks are generated at random pick-up and delivery locations, with arrivals generated from a Poisson distribution. The resource utilization factor (RUF) for a given team size (\(n\)) is defined as the ratio of the total time interval during which a team of exactly \(n\) robots is active to the total simulation time:
\[\text{RUF}(n)=\frac{\text{time interval with }n\text{ active robots}}{\text{total simulation time}}\times 100 \tag{9}\]
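Given a log of how many robots are active over time, Eq. (9) can be evaluated as a simple histogram. The sketch below assumes (our own discretization) that the active-robot count has been sampled every `dt` seconds:

```
from collections import Counter

def resource_utilization(active_counts, dt, total_time):
    """RUF(n) of Eq. (9) for every team size n observed in the log."""
    hist = Counter(active_counts)  # n -> number of samples with exactly n active robots
    return {n: 100.0 * cnt * dt / total_time for n, cnt in sorted(hist.items())}
```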
Fig 6 shows the RUF for different arrival rates (\(Q\)) of spatio-temporal tasks. For an arrival rate of 0.1, at most 5 robots are required; five robots are needed for only 1.15% of the time, four for 2.68%, three for 48.35%, two for 25.61%, and one for 22.19%. From Fig 6, one can observe that as the arrival rate increases, the number of robots required increases.
### Implementability Study
The proposed approach in this paper computes the required number of heterogeneous robots and their assignments to execute given tasks. To illustrate the implementability of the proposed non-iterative and online computable solution for heterogeneous PJITD tasks is demonstrated in the lab scale experiment. Fig 8 shows the arena considered in the experiments. The arena is of dimensions \(3.6m\times 2m\) with the origin on the right top. Two cuboid shaped obstacles are added to the arena at \(\mathbf{O}_{1}=(1.37,0.96)\) and \(\mathbf{O}_{2}=(2.78,0.96)\). Two TurtleBot3 Burger robots equipped with Raspberry Pi 3B+ and Ubuntu 18.04 Server with ROS Melodic 1 have been used for hardware
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c} \hline Number of tasks & 10 & 20 & 30 & 40 & 50 & 60 & 70 & 80 & 90 & 100 & 200 & 500 \\ \hline Average computation time (msec) & 0.0481 & 0.1283 & 0.2837 & 0.5294 & 0.8929 & 1.3869 & 2.0116 & 2.7918 & 3.7553 & 4.9230 & 31.5588 & 408.9476 \\ \hline \end{tabular}
\end{table}
Table 2: Average computation time for solving the optimization problem (Eq. (6)) for different numbers of tasks
Figure 5: Initial assignments for four robots. Each sub-figure shows the robots moving from their initial location to their first assigned pick-up locations at the start of simulations at t = 505 sec.
experiments. The map of the arena is generated using the Simultaneous Localization and Mapping (SLAM) algorithm on the LDS-01 LIDAR sensor outputs, obtained by teleoperating the TurtleBot in the arena. For each TurtleBot's navigation, the in-built _Nav_ package is used.
At the start of the experiments, the turtlebots are placed in the arena, and the PJITD tasks are defined. Using the map and task information, the centralized algorithm computes the sequence of tasks assigned to each turtlebot, and this sequence is communicated to each turtlebot. A navigation ROS node drives each robot to its desired waypoints in the proper order while keeping the temporal constraints satisfied.
Fig. 8 shows the arena with the given pick-up and delivery points. Additionally, a real-time clock is placed near the arena's top right corner to highlight the temporal features of the experiments. The pick-up and delivery tasks for the experiments are given in Table 3. The robots are initialized at (1.7,0.5) and (0.4,1.12) for robot 1 and robot 2, respectively. The TurtleBot3 has a maximum speed of \(0.22m/sec\); however, it has been observed that in a small arena of this size the speed actually reached is \(0.12m/sec\) due to acceleration and deceleration limits. For this experiment, the robot's velocity is therefore taken as \(0.12m/sec\).
Now, the robots compute the path distances for the given tasks. The _Nav_ package has been used to compute a feasible path from one point to another in the arena, and the line integral along this feasible path is used to compute the distances. The cost matrices are computed using Eqs. (2) and (4) and given as,
\[\begin{bmatrix}\mathbf{C}^{f,1}\\ \mathbf{C}^{f,2}\end{bmatrix}=\begin{bmatrix}1.7278&3.0593&2.5471&4.3748&2.3141\\ 2.4100&3.0920&\cdot&\cdot&\cdot\end{bmatrix} \tag{10}\] \[\mathbf{C}^{s,1}=\begin{bmatrix}\infty&1000&1.9985&4.9158&2.6336\\ \infty&\infty&2.8275&4.0760&2.6032\\ \infty&\infty&\infty&1000&2.6336\\ \infty&\infty&\infty&\infty&1000\\ \infty&\infty&\infty&\infty&\infty\end{bmatrix} \tag{11}\] \[\mathbf{C}^{s,2}=\begin{bmatrix}\infty&1000&1.9985&4.9158&2.6336\\ \infty&\infty&2.8275&4.0760&2.6032\\ \infty&\infty&\infty&1000&2.6336\\ \infty&\infty&\infty&\infty&1000\\ \infty&\infty&\infty&\infty&\infty\end{bmatrix} \tag{12}\]
Note that the diagonal and lower-triangular elements of the cost matrices \(C^{s,1}\) and \(C^{s,2}\) are set to \(\infty\), as these tasks lie in the past. The forward-in-time infeasible assignments are given a large value of 1000. For the element \(C^{s,1}(1,2)\), i.e., robot \(R_{1}\) executing task \(T_{2}\) after the execution of task \(T_{1}\), the total time available is \(t_{2}^{D}-t_{1}^{D}=15\), and the distance to be traveled is 0.48 for pick-up and 2.05786 for delivery. The total distance to be traveled is thus 2.5378, which requires a minimum time of \(21.48\,sec\). Since the available time (\(15\,sec\)) is less than the minimum time required, the cost is set to 1000. Similarly, the costs \(C^{s,1}(3,4)\) and \(C^{s,2}(4,5)\) are also set to 1000.
The solution obtained from the DREAM approach is the set of decision variables indicating the assignment of robots to the
Figure 8: Arena for hardware experiment
Figure 6: Resource utilization with different arrival rates of PJITD tasks
Figure 7: Architecture of hardware experiment
Figure 9: **Snapshots of the turtlebots executing the PJITD tasks at various time instances.** a) \(t=20\), \(R_{1}\) is at pick-up \(P_{1}\) and \(R_{2}\) is at pick-up \(P_{2}\). b) \(t=35\), \(R_{1}\) completes its delivery at \(D_{1}\). c) \(t=40\), \(R_{2}\) reaches \(D_{2}\) and waits till the delivery time of 50 sec. d) \(t=50\), \(R_{2}\) completes its delivery at \(D_{2}\). e) \(t=83\), \(R_{2}\) reaches \(D_{4}\) and waits. f) \(t=88\), \(R_{1}\) reaches \(D_{3}\) and waits. g) \(t=90\), \(R_{1}\) delivers at \(D_{3}\). h) \(t=118\), \(R_{1}\) reaches \(D_{5}\).
tasks. In cost matrices, the task assignment solution is marked by boxes. Next, the trajectory generation algorithm is used to compute the trajectory of each robot. The obtained trajectories for turtlebot 1 and turtlebot 2 are \(\mu_{1}=\{T_{1},T_{3},T_{5}\}\) and \(\mu_{2}=\{T_{2},T_{4}\}\) respectively.
The hardware experimental run is video-recorded, and the video is available at [https://www.youtube.com/watch?v=uwL5-0VjyM](https://www.youtube.com/watch?v=uwL5-0VjyM). Here, we explain the experiment with Fig 9. \(R_{1}\) has been assigned the tasks \(T_{1},T_{3},T_{5}\), and \(R_{2}\) has been assigned the tasks \(T_{2},T_{4}\). The robots execute these tasks in sequence. As per the assignments, at \(t=0\), \(R_{1}\) starts executing \(T_{1}\) and \(R_{2}\) starts task \(T_{2}\). \(R_{1}\) navigates to \(P_{1}\) and \(R_{2}\) navigates to \(P_{2}\) for pick-up. After reaching the pick-up location, each turtlebot aligns its heading towards the positive x direction (towards the left in the video). After pick-up, the turtlebot moves toward the respective delivery location. Turtlebot 1 reaches the delivery location \(D_{1}\) and aligns to the positive x direction at \(t=35\,sec\). The desired delivery time at \(D_{1}\) is also \(35\,sec\); the turtlebot has arrived on time, and task \(T_{1}\) is executed successfully.
Next, turtlebot \(R_{1}\) starts executing its next task \(T_{3}\) and moves towards the pick-up location \(P_{3}\). Meanwhile, \(R_{2}\) reaches its delivery location \(D_{2}\) at \(t=40\); as the desired delivery time of \(T_{2}\) is \(50\,sec\), \(R_{2}\) waits at \(D_{2}\) for 10 seconds, and at \(t=50\), \(R_{2}\) completes task \(T_{2}\). Then \(R_{2}\) starts its next task \(T_{4}\) by moving towards the pick-up location \(P_{4}\). \(R_{2}\) reaches its delivery location \(D_{4}\) at \(t=83\) sec. The delivery time of \(T_{4}\) is 110 sec; hence it waits at \(D_{4}\) for the next 27 seconds and completes task \(T_{4}\).
Meanwhile, \(R_{1}\), which is executing task \(T_{3}\), reaches the delivery location \(D_{3}\) at \(t=88\,sec\), waits for 2 sec till its desired delivery time, and completes task \(T_{3}\) at \(t=90\,sec\). After completing task \(T_{3}\), \(R_{1}\) starts its next and last task \(T_{5}\): it reaches \(P_{5}\) for pick-up and then moves toward the delivery location \(D_{5}\). \(R_{1}\) reaches \(D_{5}\) at \(t=119\,sec\), waits for 1 sec, and completes its last task at the desired delivery time of \(120\,sec\). One can observe that feasible tasks were assigned to the turtlebots by the proposed algorithm, and the turtlebots executed the tasks on time.
## 6 Conclusion
In this paper, we propose a non-iterative heterogeneous resource allocation approach for multi-task assignment to handle online Pickup and Just-In-Time Delivery (PJITD) tasks with heterogeneous robots. In PJITD tasks, delivery is constrained to the desired time, so the PJITD problem is formulated as a heterogeneous spatio-temporal multi-task assignment (STMTA) problem. The cost function of the STMTA has been modified to include the traveling time, the operating times, and the heterogeneous skills required for the tasks. The proposed heterogeneous resource allocation approach utilizes the minimum number of robots to execute all the given heterogeneous PJITD tasks, and the obtained assignments are optimal (minimizing the total distance traveled by the team of robots). The PJITD tasks can be assigned to robots/agents by online computation, and this has been demonstrated using high-fidelity simulations and hardware experiments. Future work will explore the use of the spatio-temporal task assignment formulation for applications that were previously ruled out by the unavailability of online solutions, and will study combining scheduling with the spatio-temporal task assignment problem.
## Acknowledgment
The authors would like to thank Nokia Centre for Excellence in Networked Robotics, IISc, Bangalore, and Nokia CSR funds for their support.
|
2305.10771 | Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph | Recent years have witnessed the rapid development of heterogeneous graph
neural networks (HGNNs) in information retrieval (IR) applications. Many
existing HGNNs design a variety of tailor-made graph convolutions to capture
structural and semantic information in heterogeneous graphs. However, existing
HGNNs usually represent each node as a single vector in the multi-layer graph
convolution calculation, which makes the high-level graph convolution layer
fail to distinguish information from different relations and different orders,
resulting in the information loss in the message passing. To this end, we propose a novel heterogeneous graph neural
network with sequential node representation, namely Seq-HGNN. To avoid the
information loss caused by the single vector node representation, we first
design a sequential node representation learning mechanism to represent each
node as a sequence of meta-path representations during the node message
passing. Then we propose a heterogeneous representation fusion module,
empowering Seq-HGNN to identify important meta-paths and aggregate their
representations into a compact one. We conduct extensive experiments on four
widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph
Benchmark (OGB). Experimental results show that our proposed method outperforms
state-of-the-art baselines in both accuracy and efficiency. The source code is
available at https://github.com/nobrowning/SEQ_HGNN. | Chenguang Du, Kaichun Yao, Hengshu Zhu, Deqing Wang, Fuzhen Zhuang, Hui Xiong | 2023-05-18T07:27:18Z | http://arxiv.org/abs/2305.10771v2 | # Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph
###### Abstract.
Recent years have witnessed the rapid development of heterogeneous graph neural networks (HGNNs) in information retrieval (IR) applications. Many existing HGNNs design a variety of tailor-made graph convolutions to capture structural and semantic information in heterogeneous graphs. However, existing HGNNs usually represent each node as a _single_ vector in the multi-layer graph convolution calculation, which makes the high-level graph convolution layer fail to distinguish information from different relations and different orders, resulting in the information loss in the message passing. To this end, we propose a novel heterogeneous graph neural network with _sequential_ node representation, namely Seq-HGNN. To avoid the information loss caused by the single vector node representation, we first design a sequential node representation learning mechanism to represent each node as a sequence of meta-path representations during the node message passing. Then we propose a heterogeneous representation fusion module, empowering Seq-HGNN to identify important meta-paths and aggregate their representations into a compact one. We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB). Experimental results show that our proposed method outperforms state-of-the-art baselines in both accuracy and efficiency. The source code is available at [https://github.com/nobrowning/SEQ_HGNN](https://github.com/nobrowning/SEQ_HGNN).
Heterogeneous Graph, Representation Learning, Meta-path
After the first layer of graph convolution in Figure 1, the target node \(t\) and its neighbors (source nodes) are represented as \(\mathbf{H}^{(1)}[t]\), \(\mathbf{H}^{(1)}[s_{1}]\), and \(\mathbf{H}^{(1)}[s_{2}]\), respectively, which are used as the input of the next layer of graph convolution. The information of \(s_{1}\) and its own neighbors is mixed into \(\mathbf{H}^{(1)}[s_{1}]\), and the information of \(s_{2}\) and its own neighbors is mixed into \(\mathbf{H}^{(1)}[s_{2}]\). Based on \(\mathbf{H}^{(1)}[t]\), \(\mathbf{H}^{(1)}[s_{1}]\), and \(\mathbf{H}^{(1)}[s_{2}]\), the second layer of graph convolution cannot distinguish the information of \(s_{1}\) from that of its neighbors, nor the information of \(s_{2}\) from that of its neighbors.
Intuitively, the semantics learned from each layer and each relation reflect features of different granularity that strongly correlate with different tasks, while mixing all of this information together may lead to sub-optimal results on the downstream tasks.
Along this line, we propose a novel heterogeneous graph neural network with _sequential_ node representation (Seq-HGNN), which learns representations of meta-paths and fuses them into high-quality node representations. Specifically, we first propose a sequential node representation learning mechanism that performs message passing over all meta-paths within a fixed number of hops and represents each node as a sequence of meta-path representations. As Figure 1 illustrates, after the calculation of two Seq-HGNN layers, Seq-HGNN can automatically capture the information of all meta-paths and their combinations within 2 hops, stored in multiple independent vectors. These vectors form a sequence that serves as the representation of the target node \(t\) (i.e., \(\mathbf{H}^{(2)}[t]\)). The sequential representation enables higher Seq-HGNN layers to naturally distinguish messages coming from different meta-paths. Secondly, we design a heterogeneous representation fusion module to transform the sequence-based node representation into a compact one, which can be used in various downstream tasks. Seq-HGNN can also benefit the discovery of effective entities and relations by estimating the importance of different meta-paths. Finally, we conduct extensive experiments on real-world datasets; the experimental results show that Seq-HGNN achieves the best performance compared with several state-of-the-art baselines.
Our contributions can be summarized as follows:
* We propose a novel heterogeneous graph representation learning model with sequential node representation, namely Seq-HGNN. To the best of our knowledge, the Seq-HGNN is the first work to represent nodes as sequences, which can provide better representations by recording messages passing along multiple meta-paths intact.
* We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) (Hid et al., 2017) and Open Graph Benchmark (OGB) (Kang et al., 2018) to demonstrate the advantage of our model over state-of-the-art baselines.
* Our model performs good interpretability by analyzing the attention weight of meta-paths in heterogeneous graphs.
## 2. Related Work
In this section, we introduce the related work on heterogeneous graph neural networks and the applications of heterogeneous graph neural networks in the field of information retrieval.
### Heterogeneous graph neural networks
Heterogeneous graph neural networks (HGNNs) are proposed to deal with heterogeneous graph data. Some HGNNs apply graph convolution directly on the original heterogeneous graph. RGCN (Hid et al., 2017) is a widely used HGNN, which sets different transfer matrices for different relations in heterogeneous graphs. R-HGNN (Zhu et al., 2018) learned different node representations under each relation and fused the representations from different relations into a comprehensive representation. Other HGNNs used meta-paths to adapt homogeneous-graph-based methods to the heterogeneous graph. For instance, HAN (Zhu et al., 2019) utilized GAT (Zhu et al., 2019) to calculate node-level and semantic-level attention on meta-path-based sub-graphs. MAGNN (Zhu et al., 2019) introduced intra-meta-path aggregation and inter-meta-path aggregation to capture information on the heterogeneous graph. HeCo (Zhu et al., 2019) selected positive sample nodes based on meta-paths for heterogeneous graph contrastive learning. The meta-path-based methods require manually designed, meaningful meta-paths and cannot be applied to large-scale heterogeneous graphs due to their computational complexity (Zhu et al., 2018). To overcome the disadvantages of meta-paths, Het-SANN (Hid et al., 2017) aggregated multi-relational information of projected nodes by attention-based averaging. GTN (Zhu et al., 2019) and ie-HGCN (Hid et al., 2017) were designed to discover effective meta-paths for the target nodes. HGT (Hid et al., 2017) introduced the dot-product attention mechanism (Hid et al., 2017) into heterogeneous graph learning, which can learn implicit meta-paths. These methods represent each node as one single vector, which means confounding messages from different relations and orders, resulting in the loss of structural information.
Figure 1. The comparison of node representation updates. The shapes of the nodes represent different node types.
In more recent years, to learn comprehensive node representations, some researchers adopted Simplified Graph Convolutional Network (SGC) (Hendle et al., 2017)-based methods for heterogeneous graph processing (Wang et al., 2017; Wang et al., 2018; Wang et al., 2019). Their core ideas center on subgraph division and preprocessing. Specifically, these methods first divide a heterogeneous graph into several relation-driven subgraphs and then conduct simple message passing and pre-computation in the preprocessing stage. However, two main drawbacks make this design ill-suited to application scenarios. Firstly, different downstream tasks require different message-passing schemes. For instance, in link prediction tasks, models need to mask some links in the graph, so using SGC-based methods means performing multiple separate preprocessing pipelines, resulting in high computational consumption across downstream tasks. Secondly, SGC-based methods necessitate learning a distinct set of model parameters for each class of nodes in a heterogeneous graph, with no correlation between the parameters of different node types. Such approaches lack the capacity for transfer learning across diverse node types: training and optimizing on a particular node type does not contribute to performance enhancement in predicting other node types.
Unlike previous works, our model implements sequential node representation, which records messages from all meta-paths within a fixed step and achieves better performance and interpretability. Moreover, our model possesses end-to-end learning capabilities, enabling it to handle various downstream tasks with a more general and simplified workflow.
### HGNNs applications in IR
In recent years, heterogeneous graph neural networks (HGNNs) have emerged as a powerful tool for extracting rich structural and semantic information from heterogeneous graphs, and have consequently found numerous applications in information retrieval (IR) domains.
In the realm of search engines and matching, Chen et al. (Chen et al., 2018) proposed a cross-modal retrieval method using heterogeneous graph embeddings to preserve abundant cross-modal information, addressing the limitations of conventional methods that often lose modality-specific information in the process. Guan et al. (Guan et al., 2019) tackled the problem of fashion compatibility modeling by incorporating user preferences and attribute entities in their meta-path-guided heterogeneous graph learning approach. Yuan et al. (Yuan et al., 2019) introduced the Spatio-Temporal Dual Graph Attention Network (STDGAT) for intelligent query-Point of Interest (POI) matching in location-based services, leveraging semantic representation, dual graph attention, and spatiotemporal factors to improve matching accuracy even with partial query keywords. Yao et al. (Yao et al., 2019) proposed a knowledge-enhanced person-job fit approach based on heterogeneous graph neural networks, which can use structural information to improve the matching accuracy of resumes and positions.
Recommendation systems have also benefited from HGNNs. Cai et al. (Cai et al., 2018) presented an inductive heterogeneous graph neural network (HGNN) model to address the sparsity of user attributes in cold-start recommendation systems. Pang et al. (Pang et al., 2019) proposed a personalized session-based recommendation method using heterogeneous global graph neural networks (HG-GNN) to capture user preferences from current and historical sessions. Additionally, Song et al. (Song et al., 2019) developed a self-supervised, calorie-aware heterogeneous graph network (SCHGN) for food recommendation, incorporating user preferences and ingredient relationships to enhance recommendations.
HGNNs have also garnered attention from scholars in the field of question-answering systems. For example, Feng et al. (Feng et al., 2019) proposed a document-entity heterogeneous graph network (DEHG) to integrate structured and unstructured information sources, enabling multi-hop reasoning for open-domain question answering. Gao et al. (Gao et al., 2019) introduced HeteroQA, which uses a question-aware heterogeneous graph transformer to incorporate multiple information sources from user communities.
## 3. Preliminaries
**Heterogeneous Graph:** A heterogeneous graph is defined as a directed graph \(G=(V,E)\), with a node type mapping \(\tau:V\to A\) and an edge type mapping \(\phi:E\to R\), where \(V\) is the node set, \(E\) is the edge set, and \(A\) and \(R\) represent the sets of node types and edge types, respectively, with \(|A|+|R|>2\).
**Relation:** For an edge \(e=(s,t)\) linked from source node \(s\) to target node \(t\), the corresponding relation is \(r=<\tau(s),\phi(e),\tau(t)>\). A heterogeneous graph can be considered a collection of triples in which source nodes \(s\) are linked to target nodes \(t\) through edges \(e\).
**Relational Bipartite Graph:** Given a heterogeneous graph \(G\) and a relation \(r\), the bipartite graph \(G_{r}\) is defined as a graph composed of all the edges of the corresponding type of the relation \(r\). In other words, \(G_{r}\) contains all triples \(<s,e,t>\), where the relation \(\phi(e)=r\).
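As a toy illustration of these definitions, a heterogeneous graph can be stored as typed triples; the node and edge types below are made up for illustration:

```
# Node type mapping tau: V -> A
tau = {"a1": "author", "p1": "paper", "p2": "paper"}

# Edges as <source, relation, target> triples; each relation is
# <tau(s), phi(e), tau(t)>, so phi is implicit in the middle element.
edges = [("a1", ("author", "writes", "paper"), "p1"),
         ("a1", ("author", "writes", "paper"), "p2")]

# The relational bipartite graph G_r keeps only the triples of relation r.
r = ("author", "writes", "paper")
G_r = [(s, rel, t) for (s, rel, t) in edges if rel == r]
```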
**Meta-path:** Meta-path \(P\) is defined as a path with the following form: \(A_{1}\xrightarrow{r_{1}}A_{2}\xrightarrow{r_{2}}\xrightarrow{\cdots}\xrightarrow {r_{1}-1}A_{l}\) (abbreviated as \(A_{1}A_{2}\cdots A_{l}\)), where \(A_{i}\in A,r_{i}\in R\). The meta-path describes a composite relation between node types \(A_{1}\) and \(A_{l}\), which expresses specific semantics.
**Graph Representation Learning:** Given a graph \(G=(V,E)\), graph representation learning aims to learn a function \(V\rightarrow\mathbb{R}^{d},d\ll|V|\) to map the nodes in the graph to a low-dimensional vector space while preserving both the node features and the topological structure information of the graph. These node representation vectors can be used for a variety of downstream tasks, such as node classification and link prediction.
## 4. Methodology
The overview of the proposed Seq-HGNN is shown in Figure 2. The Seq-HGNN is composed of multiple **Seq-HGNN Layers** and a **Heterogeneous Representation Fusion** module. The Seq-HGNN Layers aggregate the information provided by the source nodes \(s\) and update the representation of the target node \(t\). We denote the output representation of the \(l\)-th layer as \(\mathbf{H}^{(l)}\), which is also the input of the \((l+1)\)-th layer (\(1\leq l\leq L\)). By stacking \(L\) Seq-HGNN Layers, each target node \(t\) can receive higher-order neighbor information. The Seq-HGNN Layer consists of three modules: _Sequential Node Representation_, _Transformer-based Message Passing_ and _Sequential Node Representation Update_. Among them, the _Sequential Node Representation_ module transforms each node into a sequence of representation vectors. The _Transformer-based Message Passing_ module generates neighbor messages for the target node by aggregating the information of its neighbors (source nodes). The _Sequential Node Representation Update_ module computes a new representation for \(t\) based on the representation from the previous layer and the received neighbor messages. Finally, the Heterogeneous Representation Fusion module estimates the importance of meta-paths and fuses the meta-path representations into a single vector as the node representation, which can be utilized in downstream tasks.
### Sequential Node Representation
In heterogeneous graphs, the nodes often have multiple attributes and receive messages from multiple types of nodes. For example, in a heterogeneous graph from a movie review website, a _Movie_ node usually contains multiple description attributes such as Storyline, Taglines, Release date, etc. Existing methods only support representing each node as a single vector, which means the multiple properties of each node are conflated into one vector. This causes information loss in the node representation.
Different from the above-mentioned graph representation learning methods (Hendle et al., 2015; Wang et al., 2017; Wang et al., 2018), we represent each node as a sequence of vectors, which can record multiple properties of the node and messages from multiple meta-paths intact. Concretely, given a node \(i\), we first design type-specific transform matrices \(W_{f}^{\tau(i)}\) to convert the features \(x^{i}\) of node \(i\) into the same space:
\[H_{f}^{(0)}\ [i]=W_{f}^{\tau(i)}\cdot x_{f}^{i}+b_{f}^{\tau(i)}, \tag{1}\]
where \(\tau(i)\) is the node type of node \(i\); \(1\leq f\leq F_{\tau(i)}^{(0)}\), where \(F_{\tau(i)}^{(0)}\) is the number of \(i\)'s features; \(x_{f}^{i}\) is the \(f\)-th initialized feature in the feature sequence of \(i\); \(H_{f}^{(0)}[i]\in\mathbb{R}^{d}\) is the node feature after the transform; \(b_{f}^{\tau(i)}\) is the bias; \(d\) is the dimension of the features.
Next, we concatenate the \(F_{\tau(i)}^{(0)}\) transformed representations of node \(i\) to get an input sequence \(\mathbf{H}^{(0)}\ [i]\) for the Seq-HGNN model:
\[\mathbf{H}^{(0)}[i]=\Big\Vert_{f=1}^{F_{\tau(i)}^{(0)}}H_{f}^{(0)}[i], \tag{2}\]
where \(\Vert\) is the concatenation operation and \(\mathbf{H}^{(0)}[i]\in\mathbb{R}^{F_{\tau(i)}^{(0)}\times d}\) is a sequence with the length of \(F_{\tau(i)}^{(0)}\).
It is worth noting that our proposed sequential node representation is independent of time series. During the message passing, our model always represents each node as one sequence of vectors. Each vector in the sequence can represent either the meta-path information or a specific feature attribute of the node. For a detailed description, please refer to Section 4.2 and 4.3.
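As an illustration of Equations 1 and 2, the following PyTorch sketch builds the initial sequential representation \(\mathbf{H}^{(0)}[i]\). This is a minimal sketch: the module and parameter names are ours, not those of the released implementation, and we assume all features of a node type share one input dimension.

```python
import torch
import torch.nn as nn

class SequentialNodeRepresentation(nn.Module):
    """Eq. (1)-(2): project every raw feature of a node into a shared
    d-dimensional space and stack the results into a sequence."""
    def __init__(self, num_features: dict, in_dims: dict, d: int):
        super().__init__()
        # One linear map W_f^{tau(i)} (with bias b_f^{tau(i)}) per node type and feature slot.
        self.proj = nn.ModuleDict({
            ntype: nn.ModuleList(
                [nn.Linear(in_dims[ntype], d) for _ in range(num_features[ntype])])
            for ntype in num_features})

    def forward(self, ntype: str, x: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, F, in_dim]  ->  H^(0): [num_nodes, F, d]
        return torch.stack(
            [proj(x[:, f]) for f, proj in enumerate(self.proj[ntype])], dim=1)
```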
### Transformer-based Message Passing
The message-passing module aggregates the information of neighbors (source nodes) on each relational bipartite graph to generate neighbor messages for the target node.
#### 4.2.1. Neighbor Importance Estimation.
Before the neighbor message generation, we first estimate the importance of these neighbors. We utilize the mutual attention (Hendle et al., 2015; Wang et al., 2017) to calculate the importance of source nodes to the target node. Specifically, we first project the representations of the target node \(t\) and its neighbors (source nodes \(s\)) to multiple Query vectors \(\mathbf{Q}\) and Key vectors \(\mathbf{K}\), respectively.
\[\mathbf{Q}^{(l)}[t]=\Big\Vert_{f=1}^{F_{\tau(t)}^{(l-1)}}\mathbf{W}_{\tau(t)}^{\text{Query}^{(l)}}H_{f}^{(l-1)}[t]+b_{\tau(t)}^{\text{Query}^{(l)}}, \tag{3}\]
Figure 2. The overview of our proposed Seq-HGNN. Given a heterogeneous sub-graph containing a target node \(\mathbf{M}\) and six source nodes, Seq-HGNN first learns a sequential node representation of \(\mathbf{M}\) (i.e. \(\mathbf{H}^{(L)}\) [\(\mathbf{M}\)]), and then fuses the representation \(\mathbf{H}^{(L)}\) [\(\mathbf{M}\)] for multiple downstream tasks. In the sub-graph, \(\mathbf{M}\), \(\mathbf{K}\), \(\mathbf{A}\), and \(\mathbf{D}\) represent node types _Movie_, _Keyword_, _Actor_, _Director_, respectively.
\[\mathbf{K}^{(l)}[s]=\Big\Vert_{f=1}^{F_{\tau(s)}^{(l-1)}}\mathbf{W}_{\tau(s)}^{\text{Key}^{(l)}}H_{f}^{(l-1)}[s]+b_{\tau(s)}^{\text{Key}^{(l)}}, \tag{4}\]
where \(\mathbf{W}_{\tau(t)}^{\text{Query}^{(l)}}\in\mathbb{R}^{d\times d}\) and \(\mathbf{W}_{\tau(s)}^{\text{Key}^{(l)}}\in\mathbb{R}^{d\times d}\) are type-specific trainable transformation matrices for the target node \(t\) and the source node \(s\); \(b_{\tau(t)}^{\text{Query}^{(l)}}\) and \(b_{\tau(s)}^{\text{Key}^{(l)}}\) are bias vectors. The shapes of \(\mathbf{Q}^{(l)}[t]\) and \(\mathbf{K}^{(l)}[s]\) are \(F_{\tau(t)}^{(l-1)}\times d\) and \(F_{\tau(s)}^{(l-1)}\times d\), respectively. \(F_{\tau(t)}^{(l-1)}\) and \(F_{\tau(s)}^{(l-1)}\) represent the lengths of the sequence representations of \(t\) and \(s\) in the \((l-1)\)-th layer, respectively.
We regard the attention weights of the source node \(s\) to the target node \(t\) as the importance of \(s\) to \(t\). Since nodes play different roles in different relations, we calculate the attention weights on each bipartite graph separately. More specifically, we denote the set of source nodes connected to the target node \(t\) in the bipartite graph \(G_{r}\) as \(N_{r}(t)\), where \(r\in R\). Then, the attention weights can be formulated as:
\[\mathbf{Att}_{r}^{(l)}[s,t]=\underset{\forall s\in N_{r}(t)}{\text{Softmax}} \left(\mathbf{K}^{(l)}[s]W_{r}^{\text{ATT}^{(l)}}\mathbf{Q}^{(l)}[t]^{\top} \right)\cdot\frac{1}{\sqrt{d}}, \tag{5}\]
where \(\mathbf{Att}_{r}^{(l)}[s,t]\) is the importance estimation of the source node \(s\) to the target node \(t\) on relation \(r\), and \(W_{r}^{\text{ATT}^{(l)}}\in\mathbb{R}^{d\times d}\) is the transform matrix for relation \(r\).
Unlike the existing attention-based approaches (Han et al., 2017; Wang et al., 2018; Wang et al., 2018), the attention weight \(\mathbf{Att}_{r}^{(l)}[s,t]\) is a matrix with the shape \(F_{\tau(s)}^{(l-1)}\times F_{\tau(t)}^{(l-1)}\) rather than a scalar. Each element in \(\mathbf{Att}_{r}^{(l)}[s,t]\) represents the attention weight of an item in the representation sequence of \(s\) to an item in the representation sequence of \(t\).
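A minimal sketch of Equations 3-5 for a single target node under one relation follows. The tensor and weight names are ours, and we fold the \(1/\sqrt{d}\) scaling inside the softmax, as is conventional for scaled dot-product attention.

```python
import torch

def neighbor_importance(H_t, H_src, W_q, b_q, W_k, b_k, W_att, d):
    """Eq. (3)-(5): mutual attention between the sequences of target t and
    its sources in N_r(t). H_t: [F_t, d]; H_src: [n_src, F_s, d].
    Returns Att_r[s, t] with shape [n_src, F_s, F_t] -- a matrix per source."""
    Q = H_t @ W_q + b_q                                   # Eq. (3): [F_t, d]
    K = H_src @ W_k + b_k                                 # Eq. (4): [n_src, F_s, d]
    scores = torch.einsum('sfd,de,ge->sfg', K, W_att, Q) / d ** 0.5
    return torch.softmax(scores, dim=0)                   # Eq. (5): softmax over sources
```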
#### 4.2.2. Neighbor Message Generation
According to the importance of neighbors, the Seq-HGNN aggregates the neighbor information and treats it as the neighbor messages for \(t\).
First, Seq-HGNN extracts features of the source node \(s\) in each bipartite graph \(G_{r}\) separately as follows:
\[\mathbf{Ext}_{r}^{(l)}[s]=\Big\Vert_{f=1}^{F_{\tau(s)}^{(l-1)}}W_{r}^{\text{EXT}^{(l)}}\left(\mathbf{W}_{\tau(s)}^{\text{Value}^{(l)}}H_{f}^{(l-1)}[s]+b_{\tau(s)}^{\text{Value}^{(l)}}\right), \tag{6}\]
where \(\mathbf{Ext}_{r}^{(l)}[s]\in\mathbb{R}^{F_{\tau(s)}^{(l-1)}\times d}\) is the message extracted from the source node \(s\) under the relation \(r\); \(\mathbf{W}_{\tau(s)}^{\text{Value}^{(l)}}\in\mathbb{R}^{d\times d}\) is the transformation matrix for the node type \(\tau(s)\); \(b_{\tau(s)}^{\text{Value}^{(l)}}\) is the bias; \(W_{r}^{\text{EXT}^{(l)}}\) is the transform matrix for the relation \(r\).
Then, we can obtain the neighbor messages for \(t\) under relation \(r\) as follows:
\[\mathbf{Msg}_{r}^{(l)}[t]=\sum_{\forall s\in N_{r}(t)}\left(\mathbf{Att}_{r}^ {(l)}[s,t]^{\top}\mathbf{Ext}_{r}^{(l)}[s]\right), \tag{7}\]
where \(\mathbf{Msg}_{r}^{(l)}[t]\in\mathbb{R}^{F_{\tau(t)}^{(l-1)}\times d}\) is a sequence with the same shape as the node representation \(\mathbf{H}^{(l-1)}[t]\), and \(N_{r}(t)\) is the set of neighbors (source nodes) of the target node \(t\) in the bipartite graph \(G_{r}\).
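Continuing the sketch above, Equations 6 and 7 extract per-source features and aggregate them with the attention matrices from Equation 5; again, all names are illustrative.

```python
import torch

def neighbor_message(H_src, att, W_val, b_val, W_ext):
    """Eq. (6)-(7): Ext_r[s] = W_r^EXT (W^Value H[s] + b), then
    Msg_r[t] = sum_s Att_r[s, t]^T Ext_r[s].
    H_src: [n_src, F_s, d]; att: [n_src, F_s, F_t]  ->  Msg: [F_t, d]."""
    ext = (H_src @ W_val + b_val) @ W_ext                 # Eq. (6): [n_src, F_s, d]
    return torch.einsum('sft,sfd->td', att, ext)          # Eq. (7): sum over sources
```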
### Sequential Node Representation Update
After the message passing process, the target node \(t\) receives messages \(\mathbf{Msg}_{r}^{(l)}[t]\) from multiple relations \(r\in R\). Based on the received messages and the representations from the previous layer \(\mathbf{H}^{(l-1)}[t]\), we get the updated node representation of \(t\).
First, we concatenate the message sequences from different relation types with relation-aware encoding as follows:
\[\widetilde{\mathbf{H}}^{(l)}[t]=\Big\Vert_{\forall r\in R(t)}\widetilde{\mathbf{Msg}}_{r}^{(l)}[t], \tag{8}\]
\[\widetilde{\mathbf{Msg}}_{r}^{(l)}[t]=\mathbf{Msg}_{r}^{(l)}[t]\oplus W_{r}^{\text{Enc}}, \tag{9}\]
where \(R(t)\) is the set of relation types whose target node type is \(\tau(t)\); \(W_{r}^{\text{Enc}}\in\mathbb{R}^{d}\) is the relation encoding for relation \(r\), which is a learnable vector to distinguish messages from different relation types; \(\oplus\) represents that the relation encoding is added to each vector in the sequence.
Then, we concatenate the representations of the target node from the last layer and encoded messages to obtain a new representation of the target node \(t\):
\[\mathbf{H}^{(l)}[t]=\mathbf{H}^{(l-1)}[t]\quad\parallel\quad\mathbf{W}_{\tau(t )}^{\text{Adopt}^{(l)}}\widetilde{\mathbf{H}}^{(l)}[t]\,, \tag{10}\]
where \(\mathbf{H}^{(l)}[t]\in\mathbb{R}^{F_{\tau(t)}^{(l)}\times d}\) is the updated representation of the target node \(t\); \(\mathbf{W}_{\tau(t)}^{\text{Adopt}^{(l)}}\in\mathbb{R}^{d\times d}\) is a transformation matrix corresponding to \(\tau(t)\).
We denote that the number of relation types connected to the target node \(t\) is \(\text{len}(R(t))\), then the length of the sequential representations for target node \(t\) grows according to the following:
\[F_{\tau(t)}^{(l)}=F_{\tau(t)}^{(l-1)}\times\left(\text{len}(R(t))+1\right), \tag{11}\]
where \(F_{\tau(t)}^{(l-1)}\) and \(F_{\tau(t)}^{(l)}\) represent the lengths of the sequential representation for node \(t\) in the \((l-1)\)-th and \(l\)-th layers, respectively. Referring to Equation 10 and Equation 11, we can summarize that in the sequential node representation, information from a node itself and its low-order neighbors is located at the beginning of the sequence, followed by higher-order information. As deeper Seq-HGNN Layers are applied, information from higher-order neighbors is appended to the sequence.
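The update step of Equations 8-11 can be sketched as follows; `msgs` and `rel_enc` are dictionaries keyed by relation type, and all names are ours rather than the released code's.

```python
import torch

def update_representation(H_prev, msgs, rel_enc, W_adopt):
    """Eq. (8)-(11): add a learnable relation encoding to each message
    sequence, concatenate across relations, project, and append to the
    previous representation. H_prev: [F, d]."""
    encoded = [msgs[r] + rel_enc[r] for r in msgs]        # Eq. (9): Msg ⊕ W_r^Enc
    H_tilde = torch.cat(encoded, dim=0)                   # Eq. (8): [F * len(R(t)), d]
    # The output length is F * (len(R(t)) + 1), matching Eq. (11).
    return torch.cat([H_prev, H_tilde @ W_adopt], dim=0)  # Eq. (10)
```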
### Heterogeneous Representation Fusion
After the \(L\)-layer Seq-HGNN computation, each target node \(t\) is represented by a sequence of length \(F_{\tau(t)}^{(L)}\), which comprises the representations of \(t\) from multiple meta-paths. We utilize the self-attention (Wang et al., 2018) mechanism to fuse the sequential representations of the target node \(t\) into a single vector. During the representation fusion, Seq-HGNN can identify the effective meta-paths for downstream tasks.
\[Q^{\text{fus}}[t] =\text{mean}\left(\mathbf{H}^{(0)}\;[t]\;W^{\text{FQ}}\right),\] \[K^{\text{fus}}[t] =\mathbf{H}^{(L)}\;[t]\;W^{\text{FK}},\] \[V^{\text{fus}}[t] =\mathbf{H}^{(L)}\;[t]\;W^{\text{FV}},\] \[A^{\text{fus}}[t] =\text{Softmax}\left(\frac{Q^{\text{fus}}[t]K^{\text{fus}}[t]^{ \top}}{\sqrt{d}}\right),\] \[\mathbf{H}\;[t] =A^{\text{fus}}[t]V^{\text{fus}}[t], \tag{12}\]
where \(\mathbf{H}[t]\in\mathbb{R}^{d}\) is the final representation of the target node \(t\); \(W^{\text{FQ}}\), \(W^{\text{FK}}\) and \(W^{\text{FV}}\) are all learnable matrices of dimension \(d\times d\); \(Q^{\text{fus}}[t]\) is generated from the original features of the target node \(t\); \(A^{\text{fus}}[t]\in\mathbb{R}^{F_{\tau(t)}^{(L)}}\) stands for the importance of each representation of node \(t\), which is also the importance of the meta-paths.
Referring to (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018), we adopt the multi-head attention mechanism during the message passing and representation fusion. The output of the multi-head attention is concatenated into a \(d\)-dimensional representation to enhance the stability of the model. In addition, we randomly drop out some fragments of the sequential representation of each node in training loops, which can help the Seq-HGNN model learn more meaningful node representations.
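The fusion of Equation 12 reduces the sequence to one vector; the sketch below (with our own names) is single-head, whereas the model uses multi-head attention as noted above.

```python
import torch

def fuse_representations(H0, HL, W_fq, W_fk, W_fv, d):
    """Eq. (12): fuse the final sequence H^(L)[t] into a single vector,
    with the query built from the node's original features H^(0)[t]."""
    q = (H0 @ W_fq).mean(dim=0)                       # [d]
    K, V = HL @ W_fk, HL @ W_fv                       # [F_L, d]
    att = torch.softmax(K @ q / d ** 0.5, dim=0)      # A^fus: meta-path importance
    return att @ V                                    # H[t], shape [d]
```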
## 5. Experiments
In this section, we evaluate the performance of Seq-HGNN by conducting experiments on multiple datasets.
### Datasets
We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) (Kang et al., 2018)1 and Open Graph Benchmark (OGB) (Chen et al., 2019)2. Specifically, three medium-scale datasets, DBLP, IMDB and ACM, are from HGB. A large-scale dataset MAG comes from OGB. Their statistics are shown in Table 1.
Footnote 1: [https://github.com/THUDM/HGB](https://github.com/THUDM/HGB)
Footnote 2: [https://ogb.stanford.edu/](https://ogb.stanford.edu/)
Footnote 3: [https://www.dblp.org/](https://www.dblp.org/)
* **DBLP** is a bibliography website of computer science3. This dataset contains four types of nodes: _Author, Paper, Term_ and _Venue_. On this dataset, models need to predict the research fields of authors.
* **IMDB** is extracted from the Internet Movie Database. It contains four types of nodes: _Movie, Director, Keyword_ and _Actor_. Models need to divide the movies into 5 categories: "Romance", "Thriller", "Comedy", "Action" and "Drama".
* **ACM** is also a citation network. It contains four types of nodes: _Paper, Author, Subject (Conference)_ and _Term_. The _Paper_ nodes are divided into 3 categories: "database", "wireless communication" and "data mining". The model needs to predict the category each paper belongs to.
* **MAG** is a heterogeneous academic network extracted from the Microsoft Academic Graph7, consisting of _Paper, Author, Field_ and _Institution_ nodes. Papers are published in 349 different venues, and each paper is associated with a Word2Vec feature. The model needs to predict the venues in which the papers are published. Footnote 7: [https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/](https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/)
### Results Analysis
#### 5.2.1. Results on HGB Benchmark
Table 2 shows the results of Seq-HGNN on the three datasets compared to the baselines in the HGB benchmark. Baselines are divided into two categories: meta-path-based methods and meta-path-free methods. Meta-path-based methods include RGCN (Kang et al., 2018), HetGNN (Wang et al., 2018), HAN (Wang et al., 2018) and MAGNN (Chen et al., 2019). The meta-path-free methods are RSHN (Wang et al., 2019), HetSANN (Chen et al., 2019), HGT (Kang et al., 2018), HGB (Kang et al., 2018) and SeHGNN (Wang et al., 2019). The results of the baselines are from HGB and their original papers. As shown in Table 2, our proposed method achieves the best performance on the ACM and DBLP datasets. In detail, Seq-HGNN gains improvements over the best baseline of (1.2%, 0.4%) on macro-f1 and (0.5%, 0.4%) on micro-f1, respectively. On the IMDB dataset, our method achieves the best micro-f1 score and the second-best macro-f1 score. The performance difference between IMDB and the other two datasets may be due to the following two reasons: (1) Domain difference: DBLP and ACM are datasets in the academic domain while IMDB comes from the film domain. (2) Task difference: IMDB is a multi-label classification task, but ACM and DBLP are not.
#### 5.2.2. Results on OGB-MAG
Since some types of nodes in the MAG dataset have no initial features, existing methods usually utilize unsupervised representation methods to generate node embeddings (abbreviated as emb) as initial features. For a fair comparison, we also use the unsupervised representation learning method (ComplEx (Kang et al., 2018)) to generate node embeddings. In addition, some baseline methods on the list also adopt multi-stage learning (Kang et al., 2018; Wang et al., 2018; Wang et al., 2018) (abbreviated as ms) tricks to improve the generalization ability of the model. Therefore, we also explored the performance of Seq-HGNN under the multi-stage training.
As shown in Table 3, Seq-HGNN achieves the best performance compared to the baselines on the OGB leaderboard 8. This shows that our method can not only mine the information in heterogeneous graphs more effectively, but also scales well to large graphs.
Footnote 8: [https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-mag](https://ogb.stanford.edu/docs/leader_nodeprop/#ogbn-mag)
### Ablation Study
A core design goal of Seq-HGNN is to effectively exploit the structural information in heterogeneous graphs. We therefore design three variants of our model to verify the effects of its components, namely **Seq-HGNN w/o seq**, **Seq-HGNN w/o fus**, and **Seq-HGNN w/o rel**. The performance of these variants on the
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline name & \#Nodes & \#Node & \#Edges & \#Edge & Target & \#Classes \\ & & Types & & Types & & \\ \hline DBLP & 26,128 & 4 & 239,566 & 6 & author & 4 \\ IMDB & 21,420 & 4 & 86,642 & 6 & movie & 5 \\ ACM & 10,942 & 4 & 547,872 & 8 & paper & 3 \\ MAG & 1,939,743 & 4 & 21,111,007 & 4 & paper & 349 \\ \hline \hline \end{tabular}
\end{table}
Table 1. Statistics of datasets.
HGB dataset is shown in Table 2. The details of these variants are as follows:
* **Seq-HGNN w/o seq.** It does not use the sequential node representation. After each layer of graph convolution, multiple node representations from different relationships are aggregated into a vector representation by the mean operation. Finally, the **Seq-HGNN w/o seq** concatenates the output of each graph convolutional layer as the final output for the downstream tasks. Comparing **Seq-HGNN w/o seq** and Seq-HGNN, it can be found that after introducing sequential node representation, the performance of the model can be significantly improved. It proves that sequential node representations indeed retain richer and more effective node information.
* **Seq-HGNN w/o fus.** This variant changes the final representation of the node: it drops the heterogeneous representation fusion module and instead averages the representation sequence output by the last layer of Seq-HGNN. Comparing **Seq-HGNN w/o fus** and Seq-HGNN, it can be found that the performance decreases after removing the heterogeneous fusion module. This illustrates the importance of recognizing the most contributing meta-paths.
* **Seq-HGNN w/o rel.** This variant does not add the relation-aware encoding (introduced in Equation 9, Section 4.3) when updating the node representation. As shown in Table 2, Seq-HGNN performs better than **Seq-HGNN w/o rel** on all datasets. This verifies the relation-distinguishing ability of Seq-HGNN.
### Experiment Setup Detail
We use the PyTorch Geometric framework 2.0 7 to implement the Seq-HGNN. The source code is available at [https://github.com/nobrowning/SEQ_HGNN](https://github.com/nobrowning/SEQ_HGNN). We set the node embedding dimension \(d=512\), and the number of attention heads to 8. The number of layers \(L\) is set to 2 on the DBLP, IMDB and MAG datasets and to 3 on the ACM dataset. During the training process, we set the dropout rate to 0.5, and the maximum epoch to 150. We use the AdamW optimizer (King and Ba, 2015) with a maximum learning rate of 0.0005 and tune the learning rate using the OneCycleLR strategy (King and Ba, 2015). For DBLP, ACM, and IMDB datasets, we use full batch training. For the large-scale dataset MAG, we use the HGTLoader8 subgraph sampling strategy (He et al., 2017), setting the batch size to 256, sampling depth to 3, sample number to 1800. We iterate 250 batches in each epoch.
Footnote 7: [https://www.pyg.org/](https://www.pyg.org/)
Footnote 8: [https://pytorch-geometric.readthedocs.io/en/latest/modules/loader.html](https://pytorch-geometric.readthedocs.io/en/latest/modules/loader.html)
The results of the baselines in Table 2 and Table 3 mainly come from previous works (King and Ba, 2015; Krizhevsky et al., 2014). All experiments can be conducted on a Linux machine with Intel(R) Core(R) i7 8700 CPU, 32G RAM, and a single NVIDIA GeForce RTX 3090 GPU.
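For reference, the MAG configuration above can be sketched with standard PyTorch / PyTorch Geometric APIs as follows. This is our own sketch rather than the released training script; `data` (assumed to be a `HeteroData` object for OGB-MAG) and `model` are placeholders.

```python
import torch
from torch_geometric.loader import HGTLoader

# `data` and `model` are placeholders: an OGB-MAG HeteroData object and a Seq-HGNN instance.
train_loader = HGTLoader(
    data,
    num_samples=[1800] * 3,                          # 1800 sampled nodes per hop, depth 3
    batch_size=256,
    input_nodes=('paper', data['paper'].train_mask),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-4, epochs=150, steps_per_epoch=250)
```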
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
 & & \multicolumn{2}{c}{DBLP} & \multicolumn{2}{c}{IMDB} & \multicolumn{2}{c}{ACM} \\
 & & macro-f1 & micro-f1 & macro-f1 & micro-f1 & macro-f1 & micro-f1 \\ \hline
Metapath-based & RGCN & 91.52\(\pm\)0.50 & 92.07\(\pm\)0.50 & 58.85\(\pm\)0.26 & 62.05\(\pm\)0.15 & 91.55\(\pm\)0.74 & 91.41\(\pm\)0.75 \\
methods & HetGNN & 91.76\(\pm\)0.43 & 92.33\(\pm\)0.41 & 48.25\(\pm\)0.67 & 51.16\(\pm\)0.65 & 85.91\(\pm\)0.25 & 86.05\(\pm\)0.25 \\
 & HAN & 91.67\(\pm\)0.49 & 92.05\(\pm\)0.62 & 57.74\(\pm\)0.96 & 64.63\(\pm\)0.58 & 90.89\(\pm\)0.43 & 90.79\(\pm\)0.43 \\
 & MAGNN & 93.28\(\pm\)0.51 & 93.76\(\pm\)0.45 & 56.49\(\pm\)3.20 & 64.67\(\pm\)1.67 & 90.88\(\pm\)0.64 & 90.77\(\pm\)0.65 \\ \hline
Metapath-free & RSHN & 93.44\(\pm\)0.58 & 93.81\(\pm\)0.55 & 59.85\(\pm\)3.21 & 64.22\(\pm\)1.03 & 90.50\(\pm\)1.51 & 90.32\(\pm\)1.54 \\
methods & HetSANN & 78.55\(\pm\)2.42 & 80.56\(\pm\)1.50 & 49.47\(\pm\)1.21 & 57.68\(\pm\)0.44 & 90.02\(\pm\)0.35 & 89.91\(\pm\)0.37 \\
 & HGT & 93.01\(\pm\)0.23 & 93.49\(\pm\)0.25 & 63.00\(\pm\)1.19 & 67.20\(\pm\)0.57 & 91.12\(\pm\)0.76 & 91.00\(\pm\)0.76 \\
 & HGB & 94.01\(\pm\)0.24 & 94.46\(\pm\)0.22 & 63.53\(\pm\)1.36 & 67.36\(\pm\)0.57 & 93.42\(\pm\)0.44 & 93.35\(\pm\)0.45 \\
 & SeHGNN & 95.06\(\pm\)0.17 & 95.42\(\pm\)0.17 & **67.11\(\pm\)0.25** & 69.17\(\pm\)0.43 & 94.05\(\pm\)0.35 & 93.98\(\pm\)0.36 \\ \hline
Ours & Seq-HGNN & **96.27\(\pm\)0.24** & **95.96\(\pm\)0.31** & 66.77\(\pm\)0.24 & **69.31\(\pm\)0.27** & **94.41\(\pm\)0.26** & **94.33\(\pm\)0.31** \\
 & -w/o seq & 93.79\(\pm\)0.34 & 93.51\(\pm\)0.38 & 64.32\(\pm\)0.56 & 67.04\(\pm\)0.62 & 92.44\(\pm\)0.67 & 92.17\(\pm\)0.72 \\
 & -w/o fus & 95.59\(\pm\)0.14 & 95.92\(\pm\)0.13 & 65.01\(\pm\)0.37 & 67.43\(\pm\)0.32 & 93.21\(\pm\)0.48 & 93.20\(\pm\)0.50 \\
 & -w/o rel & 95.49\(\pm\)0.23 & 95.64\(\pm\)0.18 & 64.78\(\pm\)0.41 & 69.09\(\pm\)0.39 & 93.76\(\pm\)0.43 & 93.67\(\pm\)0.46 \\ \hline \hline
\end{tabular}
\end{table}
Table 2. Experiment results on the three datasets from the HGB benchmark. The best results are in bold, and the second-best results are underlined.
\begin{table}
\begin{tabular}{l c c} \hline \hline Methods & Validation accuracy & Test accuracy \\ \hline RGCN & 48.35\(\pm\)0.36 & 47.37\(\pm\)0.48 \\ HGT & 49.89\(\pm\)0.47 & 49.27\(\pm\)0.61 \\ NARS & 51.85\(\pm\)0.08 & 50.88\(\pm\)0.12 \\ SAGN & 52.25\(\pm\)0.30 & 51.17\(\pm\)0.32 \\ GAMLP & 53.23\(\pm\)0.23 & 51.63\(\pm\)0.22 \\ \hline HGT+emb & 51.24\(\pm\)0.46 & 49.82\(\pm\)0.13 \\ NARS+emb & 53.72\(\pm\)0.09 & 52.40\(\pm\)0.16 \\ GAMLP+emb & 55.48\(\pm\)0.08 & 53.96\(\pm\)0.18 \\ SAGN+emb+ms & 55.91\(\pm\)0.17 & 54.40\(\pm\)0.15 \\ GAMLP+emb+ms & 57.02\(\pm\)0.41 & 55.90\(\pm\)0.27 \\ SeHGNN+emb & 56.56\(\pm\)0.07 & 54.78\(\pm\)0.17 \\ SeHGNN+emb+ms & 59.17\(\pm\)0.09 & 57.19\(\pm\)0.12 \\ \hline \hline Seq-HGNN+emb & 56.93\(\pm\)0.11 & 55.27\(\pm\)0.34 \\ Seq-HGNN+emb+ms & **59.21\(\pm\)0.08** & **57.76\(\pm\)0.26** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Experiment results on the large-scale dataset MAG, where “emb” means using extra embeddings and “ms” means using multi-stage training. The best results are in bold, and the second-best results are underlined.
### Training Efficiency
In Seq-HGNN, sequential node representations are computed in parallel, so Seq-HGNN achieves decent computational efficiency. To investigate this further, we conduct experiments comparing the training time of Seq-HGNN with a state-of-the-art baseline, SeHGNN. To achieve a fair comparison, we subject all models to the same validation protocol -- evaluating on the test set after every training epoch. The variation of test accuracy with training time is shown in Figure 3.
As shown in Figure 3, Seq-HGNN reaches the highest accuracy within the least training time, verifying that Seq-HGNN has good computational efficiency when dealing with heterogeneous graphs. By contrast, the baseline (SeHGNN) produces no output within the first 42 seconds of training. The reason is that SeHGNN cannot directly learn node representations on heterogeneous graphs: it requires a message-passing step before node representation generation, in which it collects the features of the target node's neighbors along all meta-paths. This makes the message-passing step highly time-consuming.
### Parameter Sensitivity Analysis
We study the sensitivity of Seq-HGNN to its hyperparameters. Specifically, we conduct experiments on the large-scale dataset OGB-MAG to explore the influence of the number of layers, the dropout rate, and the dimension of the node representation. Since the model needs to conduct sub-graph sampling on the large-scale dataset, we also explore the influence of the number of sampled nodes. To simplify the evaluation process, we opted not to employ a multi-stage training strategy in the parameter sensitivity experiments. The results are shown in Figure 4, where each subfigure shows the classification accuracy on the y-axis and the hyperparameter on the x-axis.
#### 5.6.1. Number of node samples
Since Seq-HGNN uses HGT-Loader for sampling sub-graphs in the node classification task, we explore the effect of node sampling number on the performance of Seq-HGNN. As shown in Figure 4 (a), Seq-HGNN achieves the best performance when the number of samples is set as 1800.
#### 5.6.2. Dimension of node representation
We report the experimental result varied with the dimension of node representation in Figure 4 (b). It can be seen that as the dimension increases, the performance of Seq-HGNN gradually increases. After the dimension is higher than 256, the performance improvement slows down.
#### 5.6.3. Dropout rate
We adjust the dropout rate during the model training and report the results in Figure 4 (c). We can observe that Seq-HGNN performs best when the dropout rate is 0.5. A high dropout rate would lead to underfitting and poor performance, while a low dropout rate may lead to overfitting.
#### 5.6.4. Number of layers
We explore the performance of our model while stacking from 1 to 3 Seq-HGNN Layers. The experimental results are shown in Figure 4 (d). It can be seen that Seq-HGNN achieves the best performance when it is stacked with 2 layers. On this basis, the performance of Seq-HGNN becomes worse when more layers are stacked. This may be caused by over-smoothing issues.
### Visualization of Effective Meta-Paths
As mentioned in Section 4.4, \(A^{\text{fus}}\) in the Heterogeneous Representation Fusion module indicates the importance of the different representations of a node, i.e., the importance of a node on different meta-paths. To visualize how the heterogeneous fusion module of Seq-HGNN identifies the most contributing meta-paths, we present the effective meta-paths in node representation learning on the DBLP, IMDB, ACM and MAG datasets, respectively. The most important meta-paths for these target node representations are shown in Figure 5. It is noteworthy that our model can individually identify the significant meta-paths characterizing each node. To simplify the visualization, we aggregate the propagation path weights of nodes by node type in Figure 5. Due to the large number of meta-paths, we only show the top five important paths in each sub-figure.
Comparing the four sub-figures in Figure 5, we can find that the important paths for distinct nodes are clearly different. This verifies that Seq-HGNN can estimate path importance separately for each node, rather than treating all nodes equally.
Figure 4. Parameter Sensitivity of Seq-HGNN.
Figure 3. The comparison of training efficiency.
In sub-figure (a), we can observe that the self-loop of the target node (_Author_) has a high weight (72.37%). It reveals that in the DBLP dataset, the representation of the _Author_ node mainly depends on its own attributes rather than the structural information in the graph. In contrast, the information of the target node (_Movie_) in sub-figure (b) mainly comes from its neighbor nodes. The target node types in sub-figures (c) and (d) are both _Paper_. However, there is a significant difference between them: the most important meta-path in sub-figure (c) is "_Paper-Conference_", while the information of the target node in sub-figure (d) mostly comes from meta-paths related to _Paper_, such as "_Paper-Field-Paper_", "_Paper-Paper_", and "_Paper-Author-Paper_". The difference between sub-figures (c) and (d) may be mainly caused by their downstream tasks. Specifically, the task in sub-figure (c) is to predict the field of the paper, while the task in sub-figure (d) is to predict the journal where the paper is published. This indicates that our model can utilize different aspects of graphs according to different downstream task demands. By mining important propagation paths, the model can provide deep insights and interpretability in real-world application scenarios.
## 6. Conclusion
In this paper, we proposed a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN. To avoid the information loss caused by single-vector node representations, we first designed a sequential node representation learning mechanism that represents each node as a sequence of meta-path representations during message passing. Then we proposed a heterogeneous representation fusion module, empowering Seq-HGNN to identify important meta-paths and aggregate their representations into a compact one. Third, we conducted extensive experiments on four widely used datasets from open benchmarks and clearly validated the effectiveness of our model. Finally, we visualized and analyzed the effective meta-paths in different datasets, and verified that Seq-HGNN can provide deep insights into heterogeneous graphs.
###### Acknowledgements.
This research work is supported by the National Key Research and Development Program of China under Grant No. 2019YFA0707204, the National Natural Science Foundation of China under Grant Nos. 62176014, 62276015, the Fundamental Research Funds for the Central Universities.
|
2308.01858 | Introducing $n$-Magic Groups and Characterizing $3$-Magic Finitely
Generated Abelian Groups | In this paper, we define an $n$-magic square in a group to be an $(n\times
n)$ array of group elements whose rows, columns, and diagonals have the same
product. This definition is akin to the idea of magic squares in the integers.
Groups that have an $n$-magic square are said to be $n$-magic. We begin with
some preliminary results and focus much of our attention on $3$-magic groups.
Through a series of propositions, we ultimately prove a characterization
theorem for $3$-magic finitely generated abelian groups. We then discuss some
additional results about non-abelian groups as well as $n$-magic groups where
$n>3$. | Danielle Bowerman, Nicholas Fleece, Matt Insall | 2023-08-03T16:26:57Z | http://arxiv.org/abs/2308.01858v1 | # Introducing \(n\)-Magic Groups and Characterizing 3-Magic Finitely Generated Abelian Groups
###### Abstract
In this paper, we define an \(n\)-magic square in a group to be an \((n\times n)\) array of group elements whose rows, columns, and diagonals have the same product. This definition is akin to the idea of magic squares in the integers. Groups that have an \(n\)-magic square are said to be \(n\)-magic. We begin with some preliminary results and focus much of our attention on 3-magic groups. Through a series of propositions, we ultimately prove a characterization theorem for 3-magic finitely generated abelian groups. We then discuss some additional results about non-abelian groups as well as \(n\)-magic groups where \(n>3\).
Footnote 1: [email protected] — supported by the Chancellor’s Distinguished Fellows Program
Footnote 2: [email protected] — supported by the Kummer Institute for Student Success, Research and Economic Development
Footnote 3: [email protected]
## 1 Introduction
In this section, we review previous work on magic squares, provide fundamental definitions, and present the manner in which this work is organized.
### Previous Work
Magic squares are one of the most well-known topics in recreational mathematics. The notion initially referred to a square array of distinct positive integers in which every row, column, and diagonal sum to the same number. It is well known that magic squares exist for every size larger than a \((2\times 2)\) array. In fact, many examples of these were found in antiquity.
This paper originates in our work on the problems posed in the Numberphile video "The Parker Square--Numberphile" [3]. While searching for a magic square of squares, we began considering squares in settings other than the integers. In [1] magic squares in finite fields were discussed. The work in this paper began with the goal of defining magic squares in the realm of group theory, at which point we learned that [4] had initiated an investigation of this idea. The definition we created independently of their work is very similar. However, here, we take the idea further and characterize the finitely generated abelian groups with \((3\times 3)\) magic squares.
The previously mentioned 1997 paper by Sun and Yihui [4] touches on this topic and begins to address the topics on which our work is founded. Lower bounds for the orders of the groups that admit magic squares were established in that paper, but the authors also provide results that determine several classes of groups which have \((n\times n)\) magic squares, leaving open the question of classifications of groups which do not have \((n\times n)\) magic squares. It was proved in the paper that for any positive integer \(n\geq 3\), any abelian group of order \(n^{2}\) admits an \((n\times n)\) magic square. They also showed that for any positive prime number \(p\) and any positive integer \(n\), any elementary abelian \(p-\)group of size \(p^{2n}\) admits a magic square of size \((2n\times 2n)\).
### Definitions and Notation
We begin this section with the fundamental definition of this paper.
**Definition 1.1**.: We say that group \(G\) is \(n\)**-magic** if it has an \(n\times n\) magic square. That is, there exist distinct \(g_{1,1},g_{1,2},\ldots,g_{1,n},g_{2,1},g_{2,2},\ldots,g_{2,n},\ldots,g_{n,1}, g_{n,2},\ldots,g_{n,n}\in G\) such that
\[g_{1,1}g_{1,2}\ldots g_{1,n} =g_{2,1}g_{2,2}\ldots g_{2,n}\] \[=\ldots\] \[=g_{n,1}g_{n,2}\ldots g_{n,n}\] \[=g_{1,1}g_{2,1}\ldots g_{n,1}\] \[=g_{1,2}g_{2,2}\ldots g_{n,2}\] \[=\ldots\] \[=g_{1,n}g_{2,n}\ldots g_{n,n}\] \[=g_{1,1}g_{2,2}\ldots g_{n,n}\] \[=g_{n,1}g_{n-1,2}\ldots g_{1,n}.\]
We call this common product the **magic product** of the magic square.
Note that in [4] a "magic group" refers to the automorphism group on a magic square. This greatly differs from our notion of an \(n\)-magic group. It is also worth noting that natural variants of "magic" objects can be defined similarly, such as for semi-groups or for magmas. We leave those for a future investigation.
### Organization of This Work
In the remainder of this paper, we first provide some preliminary results on 3-magic groups. We then prove a series of propositions that act as lemmas for the proof of the characterization of 3-magic finitely generated abelian groups, which is then presented. Finally, we provide some results related to nonabelian groups and discuss some ideas for future work on this topic.
## 2 Results
The following two results for a group \(G\) are immediate. The first provides a lower bound on the order of a group for it to be \(n\)-magic. The second tells us that if a group is \(n\)-magic, then any group containing it as a subgroup is also \(n\)-magic.
**Proposition 2.1**.: _If a group \(G\) is \(n\)-magic, then \(|G|\geq n^{2}\)._
**Proposition 2.2**.: _If \(H\leq G\) and \(H\) is \(n\)-magic, then \(G\) is \(n\)-magic._
We can quickly see the following.
**Theorem 2.3**.: _No groups are 2-magic._
**Proof.** Suppose \(G\) is 2-magic, and let \(\begin{bmatrix}g_{1}&g_{2}\\ g_{3}&g_{4}\end{bmatrix}\) be a magic square. Then \(g_{1}g_{2}=g_{1}g_{3}\), and through left multiplication by \(g_{1}^{-1}\), we have \(g_{2}=g_{3}\). This contradicts the assumption that the entries in a magic square are distinct. \(\square\)
**Corollary 2.4**.: _Any monobinary algebra with the cancellation property will not be 2-magic._
Given this, we can begin investigating which groups are 3-magic. From Proposition 2.1, we have that all groups of order 8 or less cannot be 3-magic. We wish to find "fundamental" classes of magic groups for which we can then apply Proposition 2.2 to find a much larger class of groups. To this end, we endeavour to characterize 3-magic finitely generated abelian groups. The next four propositions are our first steps in this direction.
**Proposition 2.5**.: _All infinite, finitely generated abelian groups are \(3\)-magic._
**Proof.** We have that \(\mathbb{Z}\) is \(3\)-magic since
\[\begin{bmatrix}8&1&6\\ 3&5&7\\ 4&9&2\end{bmatrix}\]
is a magic square. If \(G\) is an infinite finitely generated abelian group then \(\mathbb{Z}\leq G\) by the Fundamental Theorem of Finitely Generated Abelian Groups and so \(G\) is \(3\)-magic by Proposition 2.2. \(\square\)
**Proposition 2.6**.: _The cyclic group \(C_{n}\) is \(3\)-magic if and only if \(n\geq 9\)._
**Proof.** Necessity follows immediately from Proposition 2.1. For sufficiency, we have that, for \(C_{n}=\langle x\rangle\),
\[\begin{bmatrix}x^{n-3}&x^{2}&x\\ x^{4}&1&x^{n-4}\\ x^{n-1}&x^{n-2}&x^{3}\end{bmatrix}\]
forms a magic square with a magic product of \(1\). The fact that \(n\geq 9\) guarantees these entries are distinct. \(\square\)
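The check is mechanical: writing each entry as an exponent of \(x\), products become sums modulo \(n\). The following Python sketch (ours, not from the paper) verifies the square above:

```python
from itertools import chain

def is_3magic_exponents(square, n):
    """Entry e stands for x**e in C_n, so each line must sum to the same
    value mod n and the nine residues must be distinct."""
    lines = list(square)                                              # rows
    lines += [[square[i][j] for i in range(3)] for j in range(3)]     # columns
    lines += [[square[i][i] for i in range(3)],
              [square[2 - i][i] for i in range(3)]]                   # diagonals
    same_product = len({sum(line) % n for line in lines}) == 1
    distinct = len({e % n for e in chain.from_iterable(square)}) == 9
    return same_product and distinct

n = 9
print(is_3magic_exponents([[n - 3, 2, 1], [4, 0, n - 4], [n - 1, n - 2, 3]], n))  # True
```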
**Proposition 2.7**.: _The groups \(C_{n}^{2}\) and \(C_{n}^{3}\) are \(3\)-magic if and only if \(n\geq 3\)._
**Proof.** Necessity follows immediately from Proposition 2.1. For sufficiency, given \(C_{n}^{2}\), we must cover a few cases. First, note that \(C_{n}\leq C_{n}^{2}\) for all \(n\) so that, by Proposition 2.2, \(C_{n}^{2}\) is \(3\)-magic for all \(n\geq 9\). Next, for \(C_{2k+1}^{2}=\langle x,y\rangle\),
\[\begin{bmatrix}x^{k}&y^{k+1}&x^{k+1}y^{k}\\ xy^{k}&1&x^{2k}y^{k+1}\\ x^{k}y^{k+1}&y^{k}&x^{k+1}\end{bmatrix}\]
forms a magic square with a common product of \(1\). Thus, \(C_{3}^{2}\), \(C_{5}^{2}\), and \(C_{7}^{2}\) are \(3\)-magic. Furthermore, since
\(C_{3}^{2}\leq C_{6}^{2}\), by Proposition 2.2, \(C_{6}^{2}\) is also \(3\)-magic. Lastly, for \(C_{4}^{2}=\langle x,y\rangle\),
\[\begin{bmatrix}x&y^{3}&x^{3}y\\ x^{2}y&1&x^{2}y^{3}\\ xy^{3}&y&x^{3}\end{bmatrix}\]
forms a magic square with a common product of \(1\). Thus, \(C_{4}^{2}\) is \(3\)-magic. Since \(C_{4}^{2}\leq C_{8}^{2}\), by Proposition 2.2, \(C_{8}^{2}\) is also \(3\)-magic.
Finally, since \(C_{n}^{2}\leq C_{n}^{3}\) for all \(n\), Proposition 2.2 provides sufficiency for \(C_{n}^{3}\).
**Proposition 2.8**.: _The group \(C_{n}\times C_{n^{2}}\) is \(3\)-magic if and only if \(n\geq 3\)._
**Proof.** Necessity again follows from Proposition 2.1. Sufficiency follows from Proposition 2.2, after recognizing that \(C_{n^{2}}\leq C_{n}\times C_{n^{2}}\).
To proceed further, we need to define a "normalized" magic square.
**Definition 2.9**.: A **normalized \(n\)-magic square** in a group \(G\) is an \(n\)-magic square in \(G\) such that one of the entries is the identity element of \(G\). When \(n\) is odd, if the center entry of the magic square is the identity, we call it a **centered normalized \(n\)-magic square**.
It can be shown that in abelian groups we can convert any magic square to a normalized magic square with the identity in any position that we choose.
**Proposition 2.10**.: _An abelian group is \(n\)-magic if and only if for all \(i,j\in\{1,2,\ldots,n\}\) there exists an \(n\)-magic square in \(G\) whose \(g_{i,j}\) entry is the identity. Therefore, an abelian group is \(n\)-magic if and only if it admits a normalized magic square._
**Proof.** Sufficiency is clear. To prove necessity, choose an \(n\)-magic square in \(G\) with magic product \(s\). Let \(i,j\in\{1,2,\ldots,n\}\) and multiply each entry by \(g_{i,j}^{-1}\). Then the resulting square has \(1\) in the \(i,j\) position, and each row, column, and diagonal has a product of \(g_{i,j}^{-n}s\). We can guarantee that all these entries remain distinct since left multiplication by a group element is a bijection.
The next three results answer the remaining question as to whether we can find an even stronger biconditional that can be used to quickly show that an abelian group is not \(3\)-magic.
**Theorem 2.11**.: _The elements of any \(3\)-magic square in an abelian group can be generated by three elements._
**Proof.** The proof is adapted from [2] and translated into the language of groups. If \(G\) is 3-magic, let
\[\begin{bmatrix}g_{1}&g_{2}&g_{3}\\ g_{4}&g_{5}&g_{6}\\ g_{7}&g_{8}&g_{9}\end{bmatrix}\]
be a magic square in \(G\) with magic product \(s\). We then have
\[g_{1}g_{5}g_{9} =s\] \[g_{2}g_{5}g_{8} =s\] \[g_{4}g_{5}g_{6} =s\] \[g_{7}g_{5}g_{3} =s\] \[g_{1}g_{4}g_{7} =s\] \[g_{3}g_{6}g_{9} =s\] \[g_{1}g_{2}g_{3} =s\] \[g_{7}g_{8}g_{9} =s.\]
Multiplying the first four equations gives us
\[(g_{1}g_{2}g_{3})(g_{4}g_{5}g_{6})(g_{7}g_{8}g_{9})g_{5}^{3} =s^{4}\] \[s^{3}g_{5}^{3} =s^{4}\] \[g_{5}^{3} =s\]
Then, define \(a:=g_{1}g_{5}^{-1}\), \(b:=g_{3}g_{5}^{-1}\), and \(c:=g_{5}\). Now note that since \(g_{1}g_{2}g_{3}=g_{5}^{3}\) we know
\[g_{2}=g_{1}^{-1}g_{3}^{-1}g_{5}^{3},\]
since \(g_{7}g_{5}g_{3}=g_{5}^{3}\) we have
\[g_{7}=g_{3}^{-1}g_{5}^{2},\]
and since \(g_{1}g_{5}g_{9}=g_{5}^{3}\) we can state that
\[g_{9}=g_{1}^{-1}g_{5}^{2}.\]
Combining these statements allows us to say that since \(g_{1}g_{4}g_{7}=g_{5}^{3}\) we know
\[g_{4}=g_{1}^{-1}g_{5}^{3}g_{7}^{-1}=g_{1}^{-1}g_{5}^{3}(g_{3}^{-1}g_{5}^{2})^{- 1}=g_{1}^{-1}g_{3}^{-1}g_{5},\]
since \(g_{2}g_{5}g_{8}=g_{5}^{3}\) we have
\[g_{8}=g_{2}^{-1}g_{5}^{2}=(g_{1}^{-1}g_{3}^{-1}g_{5}^{3})^{-1}g_{5}^{2}=g_{1}g_ {3}g_{5}^{-1},\]
and since \(g_{3}g_{6}g_{9}=g_{5}^{3}\) we have
\[g_{6}=g_{3}^{-1}g_{5}^{3}g_{9}^{-1}=g_{3}^{-1}g_{5}^{3}(g_{1}^{-1}g_{5}^{2})^{ -1}=g_{1}g_{3}^{-1}g_{5}.\]
Combining these lets us see that
\[\begin{bmatrix}ac&a^{-1}b^{-1}c&bc\\ a^{-1}bc&c&ab^{-1}c\\ b^{-1}c&abc&a^{-1}c\end{bmatrix} =\begin{bmatrix}g_{1}g_{5}^{-1}g_{5}&(g_{1}g_{5}^{-1})^{-1}(g_{3}g _{5}^{-1})^{-1}g_{5}&g_{3}g_{5}^{-1}g_{5}\\ (g_{1}g_{5}^{-1})^{-1}g_{3}g_{5}^{-1}g_{5}&g_{5}&g_{1}g_{5}^{-1}(g_{3}g_{5}^{- 1})^{-1}g_{5}\\ (g_{3}g_{5}^{-1})^{-1}g_{5}&g_{1}g_{5}^{-1}g_{3}g_{5}^{-1}g_{5}&(g_{1}g_{5}^{ -1})^{-1}g_{5}\end{bmatrix}\] \[=\begin{bmatrix}g_{1}&g_{1}^{-1}g_{3}^{-1}g_{5}^{3}&g_{3}\\ g_{1}^{-1}g_{3}g_{5}&g_{5}&g_{1}g_{3}^{-1}g_{5}\\ g_{3}^{-1}g_{5}^{2}&g_{1}g_{3}g_{5}^{-1}&g_{1}^{-1}g_{5}^{2}\end{bmatrix}\] \[=\begin{bmatrix}g_{1}&g_{2}&g_{3}\\ g_{4}&g_{5}&g_{6}\\ g_{7}&g_{8}&g_{9}\end{bmatrix}\]
This leads to the biconditional result that will be used throughout the remainder of the paper.
**Corollary 2.12**.: _An abelian group \(G\) is \(3\)-magic if and only if there is a centered normalized \(3\)-magic square in \(G\) of the form_
\[\begin{bmatrix}a&a^{-1}b^{-1}&b\\ a^{-1}b&1&ab^{-1}\\ b^{-1}&ab&a^{-1}\end{bmatrix},\]
_where \(a,b\in G\)._
**Proof.** This result follows from multiplying every entry in
\[\begin{bmatrix}ac&a^{-1}b^{-1}c&bc\\ a^{-1}bc&c&ab^{-1}c\\ b^{-1}c&abc&a^{-1}c\end{bmatrix}\]
by \(c^{-1}\). \(\square\)
Hence, for an abelian group \(G\) to be \(3\)-magic, there must be two elements \(a,b\in G\) such that \(a\), \(a^{-1}b^{-1}\), \(b\), \(a^{-1}b\), \(ab^{-1}\), \(b^{-1}\), \(ab\), and \(a^{-1}\) are distinct. This proves to be especially useful in showing the impossibility of having a \(3\)-magic square in certain classes of groups.
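Corollary 2.12 also yields a practical search procedure: in a finite abelian group, written additively as tuples of exponents, one only needs a pair \((a,b)\) making the eight combinations and the identity distinct. A brute-force Python sketch (ours):

```python
from itertools import product

def find_3magic_pair(n_list):
    """Search C_{n_1} x ... x C_{n_k} for (a, b) realising the centered
    normalized square of Corollary 2.12; returns None if no pair exists."""
    def add(u, v):
        return tuple((x + y) % n for x, y, n in zip(u, v, n_list))
    def neg(u):
        return tuple(-x % n for x, n in zip(u, n_list))
    elems = list(product(*[range(n) for n in n_list]))
    identity = tuple(0 for _ in n_list)
    for a, b in product(elems, repeat=2):
        entries = {a, add(neg(a), neg(b)), b, add(neg(a), b), identity,
                   add(a, neg(b)), neg(b), add(a, b), neg(a)}
        if len(entries) == 9:
            return a, b
    return None

print(find_3magic_pair([3, 3]) is not None)  # True: C_3^2 is 3-magic
print(find_3magic_pair([2, 4]))              # None: |C_2 x C_4| = 8 < 9
```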
**Corollary 2.13**.: _The elements of any normalized \(3\)-magic square in an abelian group can be generated by two elements of the group._
**Proof.** From the previous proposition, every \(3\)-magic square in an abelian group is of the form
\[\begin{bmatrix}ac&a^{-1}b^{-1}c&bc\\ a^{-1}bc&c&ab^{-1}c\\ b^{-1}c&abc&a^{-1}c\end{bmatrix}.\]
If this were normalized, then one of the entries would be \(1\). This would give us an equation in which we could solve for \(c\) in terms of \(a\) and \(b\). \(\square\)
We can now make use of these results to continue determining which abelian groups are \(3\)-magic.
**Proposition 2.14**.: _The group \(C_{n}^{k}\), with \(k\geq 4\), is \(3\)-magic if and only if \(n\geq 3\)._
**Proof.** Since \(C_{n}^{3}\leq C_{n}^{k}\), Proposition 2.2 gives sufficiency for \(n\geq 3\). Now let \(n=2\). By Corollary 2.12, if \(C_{2}^{k}\) were \(3\)-magic, there would be a centered normalized \(3\)-magic square in \(C_{2}^{k}\); use the notation in that corollary for its form. Since every nontrivial element in \(C_{2}^{k}\) is an involution, \(a=a^{-1}\), which violates distinctness.
Hence, \(C_{2}^{k}\) is not \(3\)-magic for any \(k\). \(\Box\)
**Proposition 2.15**.: _The group \(C_{n}\times C_{n^{k}}\), with \(k\geq 3\), is \(3\)-magic if and only if \(n\geq 2\)._
**Proof.** The fact that \(C_{n}\times C_{n^{3}}\) is \(3\)-magic for \(n\geq 3\) follows from Proposition 2.2 after recognizing that \(C_{n^{3}}\leq C_{n}\times C_{n^{3}}\).
For \(C_{2^{3}}\times C_{2}=\langle x,y\rangle\), we have that
\[\begin{bmatrix}xy&x^{5}&x^{2}y\\ x&1&x^{7}\\ x^{6}y&x^{3}&x^{7}y\end{bmatrix}\]
forms a magic square with a common product of \(1\). For \(n=2\) and \(k\geq 4\), we can then use Proposition 2.2 in a similar manner as was previously done. \(\Box\)
The following proposition completes the search for abelian \(3\)-magic groups of odd order and leaves us only with those of even order.
**Proposition 2.16**.: _All abelian groups of odd order \(n\) with \(n\geq 9\) are \(3\)-magic._
**Proof.** Let \(G\) be a finite abelian group of odd order with \(|G|=n\geq 9\). By the Fundamental Theorem of Finite Abelian Groups, we know that \(G=C_{n_{1}}\times C_{n_{2}}\times\ldots\times C_{n_{\ell}}\), where \(\ell\) is the rank of \(G\). If \(\ell=1\), then \(G\) is \(3\)-magic by Proposition 2.6. Suppose then that \(\ell>1\). Since \(\ell\) is the rank of \(G\), by the Chinese Remainder Theorem there must be an odd prime \(p\) with \(p\mid n_{i}\) for every \(i\in\{1,2,\ldots,\ell\}\). Thus \(C_{p}^{\ell}\leq G\), and since \(p\geq 3\), \(G\) is \(3\)-magic by Propositions 2.2, 2.7, and 2.14. \(\Box\)
We now turn our attention to abelian groups of even order.
**Proposition 2.17**.: _The group \(C_{2}^{k}\times C_{4}\) is not \(3\)-magic for any \(k\)._
**Proof.** Let \(C_{2}^{k}\times C_{4}=<x_{1},x_{2},\ldots,x_{k},y>\), where \(|x_{i}|=2\) (that is, the order of \(x_{i}\) is \(2\)) for all \(i\) and \(|y|=4\). Assume there is a centered normalized \(3\)-magic square in \(C_{2}^{k}\times C_{4}\) by Corollary 2.12, and use the notation in that corollary for its form. Since \(a\) and \(b\) cannot be involutions, \(|a|=|b|=4\). Then \(a=X_{1}y^{\alpha}\) and \(b=X_{2}y^{\beta}\), where \(X_{1}=x_{i_{1}}x_{i_{2}}\ldots x_{i_{l}}\), \(X_{2}=x_{j_{1}}x_{j_{2}}\ldots x_{j_{m}}\), and \(\alpha,\beta\in\{1,3\}\).
**Case 1 (\(\alpha=\beta\)):**
Without loss of generality, we may assume that \(\alpha=\beta=1\). Then \(a^{-1}b=(X_{1}y^{3})(X_{2}y)=X_{1}X_{2}=(X_{1}y)(X_{2}y^{3})=ab^{-1}\), contradicting distinctness.
**Case 2 (\(\alpha\neq\beta\)):**
Without loss of generality, we may assume that \(\alpha=1\) and \(\beta=3\). Then \(a^{-1}b^{-1}=(X_{1}y^{3})(X_{2}y)=X_{1}X_{2}=(X_{1}y)(X_{2}y^{3})=ab\), contradicting distinctness.
In a very similar manner as was previously done, we can determine another family of groups that can be omitted from the search.
**Proposition 2.18**.: _The group \(C_{2}^{k}\times C_{3}\) is not \(3\)-magic for any \(k\)._
**Proof.** This is true for \(k=1\) since \(|C_{6}|<9\). Let \(k\geq 2\), and let \(C_{2}^{k}\times C_{3}\cong C_{2}^{k-1}\times C_{6}=<x_{1},x_{2},\ldots,x_{k-1}, y>\), where \(|x_{i}|=2\) for all \(i\) and \(|y|=6\). Assume there is a centered normalized \(3\)-magic square in \(C_{2}^{k-1}\times C_{6}\) by Corollary 2.12, and use the notation used in that corollary for its form. Since \(a\) and \(b\) cannot be involutions, \(|a|,|b|\in\{3,6\}\). Then \(a=X_{1}y^{\alpha}\) and \(b=X_{2}y^{\beta}\), where \(X_{1}=x_{i_{1}}x_{i_{2}}\ldots x_{i_{l}}\), \(X_{2}=x_{j_{1}}x_{j_{2}}\ldots x_{j_{m}}\), and \(\alpha,\beta\in\{1,2,4,5\}\).
**Case 1 (\(\alpha=\beta\)):**
Then \(a^{-1}b=(X_{1}y^{-\alpha})(X_{2}y^{\alpha})=X_{1}X_{2}=(X_{1}y^{\alpha})(X_{2 }y^{-\alpha})=ab^{-1}\), contradicting distinctness.
**Case 2 (\(\beta\equiv_{6}-\alpha\)):**
Then \(a^{-1}b^{-1}=(X_{1}y^{-\alpha})(X_{2}y^{\beta})=X_{1}X_{2}=(X_{1}y^{\alpha})( X_{2}y^{-\alpha})=ab\), contradicting distinctness.
**Case 3 (\(\{\alpha,\beta\}=\{1,2\}\)):**
Without loss of generality, we may assume \(\alpha=1\) and \(\beta=2\). Then \(a^{-1}b^{-1}=(X_{1}y^{5})(X_{2}y^{4})=X_{1}y^{3}X_{2}=(X_{1}y)(X_{2}y^{2})=ab\), contradicting distinctness.
**Case 4 (\(\{\alpha,\beta\}=\{1,4\}\)):**
Without loss of generality, we may assume \(\alpha=1\) and \(\beta=4\). Then \(a^{-1}b=(X_{1}y^{5})(X_{2}y^{4})=X_{1}y^{3}X_{2}=(X_{1}y)(X_{2}y^{2})=ab^{-1}\), contradicting distinctness.
**Case 5 (\(\{\alpha,\beta\}=\{2,5\}\)):**
Without loss of generality, we may assume \(\alpha=2\) and \(\beta=5\). Then \(a^{-1}b=(X_{1}y^{4})(X_{2}y^{5})=X_{1}y^{3}X_{2}=(X_{1}y^{2})(X_{2}y)=ab^{-1}\), contradicting distinctness.
**Case 6 (\(\{\alpha,\beta\}=\{4,5\}\)):**
Without loss of generality, we may assume \(\alpha=4\) and \(\beta=5\). Then \(a^{-1}b^{-1}=(X_{1}y^{2})(X_{2}y)=X_{1}y^{3}X_{2}=(X_{1}y^{4})(X_{2}y^{5})=ab\), contradicting distinctness.
The next proposition may seem less impactful than the previous ones, but its usefulness will be seen in the next section. It turns out that whether \(C_{4}\times C_{8}\) is \(3\)-magic is an exceptional case in the proof of the characterization theorem; the next proposition handles this case.
**Proposition 2.19**.: _The group \(C_{4}\times C_{8}\) is \(3\)-magic._
**Proof.** Let \(C_{4}\times C_{8}=\langle x,y\rangle\), where \(|x|=4\) and \(|y|=8\). Then
\[\begin{bmatrix}y&x^{3}y^{6}&xy\\ x&1&x^{3}\\ x^{3}y^{7}&xy^{2}&y^{7}\end{bmatrix}\]
is a magic square with common product \(1\). \(\Box\)
## 3 A Characterization Theorem
We now present the characterization of finitely generated abelian \(3\)-magic groups, which is built upon many of the previous results of narrower scope.
**Theorem 3.1**.: _(Characterization of the of Finitely Generated Abelian \(3\)-magic Groups)_
_Let \(G\) be a finitely generated abelian group. If \(G\) is infinite, then it is \(3\)-magic. If \(|G|=n\), then we have the following:_
1. _If_ \(n\) _is odd, we know_ \(G\) _is_ \(3\)_-magic if and only if_ \(n\geq 9\)_._
2. _If_ \(n\) _is even, we can write its Sylow-\(2\)-subgroup in the form_ \(C_{2}^{\alpha_{1}}\times C_{2^{2}}^{\alpha_{2}}\times C_{2^{3}}^{\alpha_{3}}\times\ldots\times C_{2^{l}}^{\alpha_{l}}\)_, which lets us say that:_
   1. _If_ \(G\) _is a_ \(2\)_-group, then_
      1. _if_ \(\alpha_{i}\neq 0\) _for some_ \(i\geq 4\)_, we have_ \(G\) _is_ \(3\)_-magic, else_
      2. _if_ \(\alpha_{2}\geq 2\) _or if_ \(\alpha_{3}\geq 2\)_, we have_ \(G\) _is_ \(3\)_-magic, or else_
      3. \(G\) _is_ \(3\)_-magic if and only if (_\(\alpha_{1}\neq 0\) _or_ \(\alpha_{2}=1\)_) and_ \(\alpha_{3}=1\)_._
   2. _If_ \(G\) _is not a_ \(2\)_-group, then_
      1. _if_ \(p|n\)_, where_ \(p\geq 5\)_, we have that_ \(G\) _is_ \(3\)_-magic, or else_
      2. _if there is no prime_ \(p\geq 5\) _such that_ \(p|n\)_, then_
         1. _if_ \(\alpha_{i}\neq 0\) _for some_ \(i\geq 2\)_, we know_ \(G\) _is_ \(3\)_-magic, or else_
         2. \(G\) _is_ \(3\)_-magic if and only if_ \(9|n\)_._
**Proof.** Let \(G\) be a finitely generated abelian group. If \(G\) is infinite then \(G\) is \(3\)-magic by Proposition 2.5, so let \(G\) be an abelian group with \(|G|=n\). Then we have:
1. If \(n\) is odd, then \(G\) is \(3\)-magic if and only if \(n\geq 9\) by Proposition 2.16.
2. If \(n\) is even, write its Sylow-\(2\)-subgroup in the form \(C_{2}^{\alpha_{1}}\times C_{2^{2}}^{\alpha_{2}}\times C_{2^{3}}^{\alpha_{3}}\times\ldots\times C_{2^{l}}^{\alpha_{l}}\).
   1. If \(G\) is a \(2\)-group, then
      1. if \(\alpha_{i_{0}}\neq 0\) for some \(i_{0}\geq 4\), then \(C_{2^{i_{0}}}\leq G\), which is \(3\)-magic by Proposition 2.6.
      2. if \(\alpha_{i}=0\) for all \(i\geq 4\), then we know that if \(\alpha_{2}\geq 2\) or \(\alpha_{3}\geq 2\), then \(C_{4}^{2}\leq G\) or \(C_{8}^{2}\leq G\), both of which are \(3\)-magic by Proposition 2.7. Suppose then that \(\alpha_{i}=0\) for all \(i\geq 4\), \(\alpha_{2}\leq 1\), and \(\alpha_{3}\leq 1\).
      3. Otherwise, we have the following:
         1. if \(\alpha_{1}\neq 0\) and \(\alpha_{3}=1\), we have \(C_{2}\times C_{8}\leq G\), which is \(3\)-magic by Proposition 2.15.
         2. if \(\alpha_{2}=1\) and \(\alpha_{3}=1\), then \(C_{4}\times C_{8}\leq G\), which was shown to be \(3\)-magic in Proposition 2.19.
         3. if \(\alpha_{3}=0\), then \(G\) is not \(3\)-magic by Propositions 2.14 and 2.17.
   2. If \(G\) is not a \(2\)-group, then we can say that:
      1. if \(p|n\), where \(p\geq 5\), then \(C_{2p}\leq G\), which is \(3\)-magic by Proposition 2.6.
      2. if there is no prime \(p\geq 5\) such that \(p|n\), then \(3|n\) in order for \(G\) to not be a \(2\)-group. It follows that:
         1. if \(\alpha_{i_{0}}\neq 0\) for some \(i_{0}\geq 2\), then \(C_{3\cdot 2^{i_{0}}}\leq G\), which is \(3\)-magic by Proposition 2.6.
         2. if \(\alpha_{i}=0\) for all \(i\geq 2\), then if \(9|n\), we have \(C_{9}\leq G\) or \(C_{3}^{2}\leq G\), both of which are \(3\)-magic by Propositions 2.6 and 2.7, respectively, and if \(9\nmid n\), then \(G\) is not \(3\)-magic by Proposition 2.18.
\(\Box\)
## 4 Nonabelian Groups
We now briefly turn our attention to nonabelian groups and first present some sufficient conditions for \(3\)-magic groups.
**Proposition 4.1**.: _Let \(G\) be a group with \(|G|=n\). If any of the following are true, then \(G\) is \(3\)-magic._
1. _There is a prime_ \(p\geq 11\) _such that_ \(p|n\)_._
2. _There is a prime_ \(p\neq 2\) _such that_ \(p^{2}|n\)_._
**Proof.**
1. By Cauchy's Theorem, \(C_{p}\leq G\), which is \(3\)-magic by Proposition 2.6.
2. As a consequence of the Sylow Theorems, there is an \(H\leq G\) with \(|H|=p^{2}\). This means that \(H\cong C_{p^{2}}\) or \(H\cong C_{p}^{2}\), which are \(3\)-magic by Propositions 2.6 and 2.7, respectively.
Next we present a nonabelian group that is \(3\)-magic but not covered by the previous proposition.
**Example 4.2**.: The nonabelian semidirect product \(C_{7}\rtimes C_{3}\), in which \(C_{3}\) acts faithfully on \(C_{7}\), is \(3\)-magic.
**Proof.** When the action is faithful, we can write \(C_{7}\rtimes C_{3}=\left\langle a,b\mid a^{7}=b^{3}=1,\ bab^{-1}=a^{4}\right\rangle\). Then
\[\begin{bmatrix}a&ab&a^{3}b^{2}\\ a^{2}b^{2}&1&a^{6}b\\ a^{2}b&a^{5}b^{2}&a^{6}\end{bmatrix}\]
is a magic square with magic product \(1\). \(\Box\)
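The relation \(bab^{-1}=a^{4}\) gives the multiplication rule \((a^{i_{1}}b^{j_{1}})(a^{i_{2}}b^{j_{2}})=a^{i_{1}+4^{j_{1}}i_{2}}b^{j_{1}+j_{2}}\), so the square can be verified mechanically. A Python sketch (ours), encoding \(a^{i}b^{j}\) as the pair \((i,j)\):

```python
def mul(p, q):
    """Multiplication in C_7 ⋊ C_3 with b a b^{-1} = a^4."""
    return ((p[0] + pow(4, p[1], 7) * q[0]) % 7, (p[1] + q[1]) % 3)

S = [[(1, 0), (1, 1), (3, 2)],
     [(2, 2), (0, 0), (6, 1)],
     [(2, 1), (5, 2), (6, 0)]]        # the square above, a^i b^j -> (i, j)

def line_product(line):
    out = (0, 0)                      # the identity element
    for g in line:
        out = mul(out, g)
    return out

lines = S + [list(col) for col in zip(*S)] + \
        [[S[0][0], S[1][1], S[2][2]], [S[2][0], S[1][1], S[0][2]]]
assert all(line_product(l) == (0, 0) for l in lines)   # every product is 1
assert len({g for row in S for g in row}) == 9         # entries are distinct
```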
## 5 Summary
In this paper, we defined a magic group by likening it to a magic square using the language of groups. We were then able to characterize the finitely generated abelian \(3\)-magic groups. Beyond this, we also discussed whether some nonabelian groups are \(3\)-magic.
## 6 Future Endeavors
There are multiple topics of study for future investigations. For example, we could explore "magic cubes" or "magic tesseracts" in groups, potentially as an extension of the concept briefly touched on in [4]. Another idea is to investigate whether various structures (e.g. groups, graphs, etc.) have \(3\)-magic automorphism groups. Finally, in our future study, we would like to find groups that are \(n\)-magic for \(n>3\).
|
2305.19033 | Enhanced triplet superconductivity in next generation ultraclean UTe2 | The unconventional superconductor UTe$_2$ exhibits numerous signatures of
spin-triplet superconductivity -- a rare state of matter which could enable
quantum computation protected against decoherence. UTe$_2$ possesses a complex
phase landscape comprising two magnetic field-induced superconducting phases, a
metamagnetic transition to a field-polarised state, along with pair- and
charge-density wave orders. However, contradictory reports between studies
performed on UTe$_2$ specimens of varying quality have severely impeded
theoretical efforts to understand the microscopic origins of the exotic
superconductivity. Here, we report a comprehensive suite of high magnetic field
measurements on a new generation of pristine quality UTe$_2$ crystals. Our
experiments reveal a significantly revised high magnetic field superconducting
phase diagram in the ultraclean limit, showing a pronounced sensitivity of
field-induced superconductivity to the presence of crystalline disorder. We
employ a Ginzburg-Landau model that excellently captures this acute dependence
on sample quality. Our results suggest that in close proximity to a
field--induced metamagnetic transition the enhanced role of magnetic
fluctuations -- that are strongly suppressed by disorder -- is likely
responsible for tuning UTe$_2$ between two distinct spin-triplet
superconducting phases. | Z. Wu, T. I. Weinberger, J. Chen, A. Cabala, D. V. Chichinadze, D. Shaffer, J. Pospisil, J. Prokleska, T. Haidamak, G. Bastien, V. Sechovsky, A. J. Hickey, M. J. Mancera-Ugarte, S. Benjamin, D. E. Graf, Y. Skourski, G. G. Lonzarich, M. Valiska, F. M. Grosche, A. G. Eaton | 2023-05-30T13:51:20Z | http://arxiv.org/abs/2305.19033v3 | # Enhanced triplet superconductivity in next generation ultraclean UTe\({}_{2}\)
###### Abstract
The spin-triplet superconductor UTe\({}_{2}\) exhibits a myriad of exotic physical phenomena, including the possession of three distinct superconducting phases at ambient pressure for magnetic field \(\mu_{0}H\leq\) 40 T aligned in certain orientations. However, contradictory reports between studies performed on UTe\({}_{2}\) specimens of varying quality have severely impeded theoretical efforts to understand the microscopic properties of this material. Here, we report high magnetic field measurements on a new generation of ultraclean UTe\({}_{2}\) crystals grown by a salt flux technique, which possess enhanced superconducting critical temperatures and fields compared to previous sample generations. Remarkably, for \(H\) applied close to the hard magnetic \(b\) direction, we find that the angular extent of magnetic field-reinforced superconductivity is significantly increased in these pristine quality crystals. This suggests that in close proximity to a field-induced metamagnetic transition the enhanced role of magnetic fluctuations - that are strongly suppressed by disorder - is likely responsible for tuning UTe\({}_{2}\) between two distinct spin-triplet superconducting phases. Our results reveal a strong sensitivity to crystalline disorder of the field-reinforced superconducting state of UTe\({}_{2}\).
## I Introduction
A superconducting state is attained when a material exhibits macroscopic quantum phase coherence. Conventional (BCS) superconductors possess a bosonic coherent quantum fluid composed of pairs of electrons that are weakly bound together by phononic mediation to form a Cooper pair [1; 2]. The condensation of Cooper pairs also drives superconductivity in unconventional superconductors, but in these materials the pairing glue originates not from phonons but instead from attractive interactions typically found on the border of density or magnetic instabilities [3]. The majority of known unconventional superconductors exhibit magnetically mediated superconductivity located in close proximity to an antiferromagnetically ordered state, comprising Cooper pairs in a spin-singlet configuration that have a total charge of \(2e\) and zero net spin [4; 5].
The discovery of superconductivity in the ferromagnetic metals UGe\({}_{2}\)[6], URhGe [7], and UCoGe [8] was surprising because most superconducting states are fragile to the presence of a magnetic field, as this tends to break apart the Cooper pairs that compose the charged superfluid. However, an alternative pairing mechanism was proposed for these materials, involving two electrons of the same spin combined in a triplet configuration, for which ferromagnetic correlations may thus enhance the attractive interaction [9].
The discovery of superconductivity below 1.6 K in UTe\({}_{2}\)[10] was also met with surprise, as although this material also exhibits several features characteristic of spin-triplet pairing, it possesses a paramagnetic rather than ferromagnetic groundstate. Two of the strongest observations in favor of triplet superconductivity in UTe\({}_{2}\) include a negligible change in the NMR Knight shift on cooling through the superconducting critical temperature (\(T_{\rm c}\)), and large upper critical fields along each crystallographic axis that are considerably higher than the Pauli-limit for spin-singlet Cooper pairs [11]. Notably, for a magnetic field, \(H\), applied along the hard magnetic \(b\) direction, superconductivity persists to \(\mu_{0}H\approx\) 35 T - over an order of magnitude higher than the Pauli limit [12; 13], at which point it is sharply truncated by a first-order metamagnetic (MM) transition into a field-polarised phase [14; 15]. Remarkably, this field-polarised state hosts a magnetic field-reentrant superconducting phase over a narrow angular range of applied field, which onsets at \(\mu_{0}H\approx\) 40 T [14; 16; 17] and appears to persist to \(\mu_{0}H\approx\) 70 T [18].
Careful angle-dependent resistivity measurements in high magnetic fields, for field applied in close proximity to the \(b\)-axis, observed that there appear to be two
distinct superconducting phases over the field interval of \(0\) T \(\leq\mu_{0}H\lessapprox\) 35 T [14; 15]. This interpretation has recently been corroborated by bulk thermodynamic measurements at this field orientation, indicating the presence of a distinct field-reinforced superconducting state for \(\mu_{0}H\gtrapprox 15\) T [19]. Throughout this report we shall refer to the zero field superconducting state as SC1, to the field-reinforced phase for field applied close to the \(b\) direction as SC2, and to the very high magnetic field-reentrant phase, located at \(\mu_{0}H\gtrapprox 40\) T for inclined angles in the \(b-c\) rotation plane, as SC3.
Several early studies of the superconducting properties of UTe\({}_{2}\) observed two superconducting transitions in the temperature dependence of the specific heat (in zero applied magnetic field) [20; 10; 21], leading to speculation regarding a possible multi-component nature of the superconducting order parameter at ambient pressure and magnetic field. However, subsequent reports demonstrated that this was perhaps instead an artifact of sample inhomogeneity [11; 22], with higher quality samples found to exhibit a singular sharp superconducting transition [23; 24; 25]. Kerr effect measurements on samples exhibiting two specific heat transitions yielded evidence for time reversal symmetry breaking [20]; however, this observation could not be reproduced on higher quality samples [26]. Theoretical efforts to understand the microscopic details of the remarkable superconducting properties of UTe\({}_{2}\) have thus been stymied by these discrepancies between experimental studies performed on samples of varying quality.
In this work we report measurements on a new generation of UTe\({}_{2}\) crystals grown by a molten salt flux (MSF) method, using starting materials of elemental uranium refined by the solid state electrotransport technique [27] and tellurium pieces of 6N purity. The pristine quality of the resulting single crystals is evidenced by their high \(T_{\rm c}\) values of up to 2.10 K, low residual resistivities down to 0.48 \(\upmu\Omega\) cm, and the observation of magnetic quantum oscillations at high magnetic fields and low temperatures [25]. Concomitant with the enhancement in \(T_{\rm c}\), the upper critical fields (\(H_{\rm c2}\)) of SC1 along the \(a\) and \(c\) directions are also enhanced in comparison to samples with lower \(T_{\rm c}\) values. Notably, we also find that the angular extent of SC2 - that is, the rotation angle away from \(b\) over which a zero resistance state is still observed at low temperatures for \(\mu_{0}H\approx\) 30 T - is significantly enhanced for this new generation of high purity crystals. We find that this can be well described by considering the enhanced role of magnetic fluctuations close to the MM transition.
By contrast, we find that the MM transition to the field polarised state still sharply truncates superconductivity at \(\mu_{0}H_{m}\approx\) 35 T in MSF samples. This indicates that while the SC1 and SC2 superconducting phases of UTe\({}_{2}\) are highly sensitive to the effects of crystalline disorder, the first-order phase transition to the high magnetic field polarised paramagnetic state is an intrinsic magnetic feature of the UTe\({}_{2}\) system, and is robust against disorder. We also find that the formation of the SC3 phase in ultraclean MSF samples appears to follow the same field-angle profile found in prior sample generations grown by the chemical vapor transport (CVT) method.
## II Experimental details
UTe\({}_{2}\) single crystals were grown by the MSF technique [28] using the methodology detailed in ref. [25]. Electrical transport measurements were performed using the standard four-probe technique, with current sourced along the \(a\) direction. Electrical contacts on single crystal samples were formed by spot-welding gold wires of 25 \(\upmu\)m diameter onto the sample surface. Wires were then secured in place with a low temperature epoxy. All electrical transport measurements reported in this study up to maximal magnetic field strengths \(\leq\) 14 T were performed in a Quantum Design Ltd. Physical Properties Measurement System (QD PPMS) at the University of Cambridge, down to a base temperature of 0.5 K. Electrical transport measurements up to applied magnetic field strengths of 41.5 T were obtained in a resistive magnet at the National High Magnetic Field Lab, Florida, USA, in a \({}^{3}\)He cryostat with a base temperature of 0.35 K.
Skin depth measurements were performed using the proximity detector oscillator (PDO) technique [29]. This is achieved by measuring the resonant frequency, \(f\), of an LC circuit connected to a coil of wire secured in close proximity to a sample, in order to achieve a high effective filling factor, \(\eta\). As the magnetic field is swept, the resulting change in the resistivity, \(\rho\), and magnetic susceptibility, \(\chi_{s}\), of the sample induce a change in the inductance of the measurement coil. This in turn shifts the resonant frequency of the PDO circuit, which may be expressed as
\[\frac{\Delta f}{f}\approx-\eta\frac{\delta}{d}\left(\mu_{r}\frac{\Delta\rho}{ \rho}+\Delta\chi_{s}\right), \tag{1}\]
where \(d\) is the sample thickness, \(\mu_{r}=\chi_{s}+1\), and the skin depth \(\delta\) may be written as \(\delta=\sqrt{\frac{2\rho}{\mu_{r}\mu_{0}\omega}}\), for excitation frequency \(\omega\) [29; 30]. Thus, the PDO measurement technique is sensitive to changes in both the electrical resistivity and the magnetic susceptibility of the sample.
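As a rough illustration of the magnitudes entering Eq. (1), the short sketch below evaluates the skin depth and the resulting PDO frequency shift. All numerical values here are hypothetical placeholders, not parameters from this study.

```python
# Minimal sketch of Eq. (1): Delta f / f = -eta (delta/d) (mu_r Drho/rho + Dchi),
# with delta = sqrt(2 rho / (mu_r mu_0 omega)). Illustrative values only.
import numpy as np

MU0 = 4 * np.pi * 1e-7                 # vacuum permeability (H/m)

def skin_depth(rho, mu_r, f_exc):
    omega = 2 * np.pi * f_exc          # angular excitation frequency
    return np.sqrt(2 * rho / (mu_r * MU0 * omega))

def pdo_shift(f0, eta, d, rho, mu_r, drho_over_rho, dchi):
    delta = skin_depth(rho, mu_r, f0)
    return -eta * (delta / d) * (mu_r * drho_over_rho + dchi) * f0

rho = 1e-8                             # 1 uOhm cm, in Ohm m (hypothetical)
print(f"skin depth at 25 MHz: {skin_depth(rho, 1.0, 25e6) * 1e6:.1f} um")
print(f"shift for a 1% resistivity change: "
      f"{pdo_shift(25e6, 0.3, 0.5e-3, rho, 1.0, 0.01, 0.0):.0f} Hz")
```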
Steady (dc) field PDO measurements were performed at the National High Magnetic Field Lab, Florida, USA. One set of measurements was performed in an all-superconducting magnet utilising a dilution fridge sample space, over the temperature- and field-ranges of 20-100 mK and 0-28 T. Higher temperature, higher field measurements were obtained using a resistive magnet fitted with a \({}^{3}\)He sample environment. Pulsed magnetic field PDO measurements were performed at Hochfeld-Magnetlabor Dresden, Germany, down to a base temperature of 0.6 K and up to a maximum applied field strength of 70 T.
## III Enhancement of \(T_{\rm c}\) and \(H_{\rm c2}\) of SC1
Figure 1 shows the temperature dependence of the electrical resistivity, \(\rho(T)\), for three MSF samples (colored points) of varying quality. Data for \(\rho(T)\) of a CVT sample reported in ref. [10] are plotted in gray for comparison. A clear trend is apparent, with samples exhibiting higher \(T_{\rm c}\) values also possessing higher residual resistivity ratios (RRRs), where the RRR is the ratio of \(\rho(T=300\) K) to the residual resistivity, \(\rho_{0}\).
Table 1 collates the data presented in Fig. 1, and also includes data from other studies as indicated. Here, the correlation between \(T_{\rm c}\) and RRR is further emphasised, with samples exhibiting high \(T_{\rm c}\) values also possessing low residual resistivities (and thus high RRRs). A high RRR is indicative of high sample purity [23], as samples containing less crystalline disorder will have lower scattering rates for the charge carriers partaking in the electrical transport measurement. Characterising sample quality by comparison of RRR values is a particularly effective methodology, as it is agnostic with regard to the source of the crystalline disorder - be it from grain boundaries, vacancies, or impurities, from some other source of disorder, or indeed a combination of several types. The presence of any such defects will lead to an increase in the charge carrier scattering rate, thereby yielding a lower resultant RRR.
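For concreteness, the sketch below illustrates this characterisation procedure on synthetic data, using the quadratic fit \(\rho=AT^{2}+\rho_{0}\) described in the caption of Fig. 1; all numbers are invented for illustration.

```python
# Minimal sketch of the RRR extraction: fit the low-temperature normal-state
# resistivity with rho = A*T^2 + rho0 and form RRR = rho(300 K) / rho0.
# The data below are synthetic stand-ins for a measured rho(T) curve.
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(2.2, 10.0, 40)                            # K, just above Tc
rho = 0.02 * T**2 + 1.1 + rng.normal(0.0, 0.02, T.size)   # uOhm cm (synthetic)
rho_300K = 450.0                                          # uOhm cm (synthetic)

A, rho0 = np.polyfit(T**2, rho, 1)     # linear in T^2: slope A, intercept rho0
print(f"rho0 = {rho0:.2f} uOhm cm, RRR = {rho_300K / rho0:.0f}")
```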
Figure 2 shows a comparison of the extent of superconductivity for CVT and MSF samples. For magnetic field applied along the crystallographic \(a\) and \(c\) directions, \(H_{c2}\) is clearly enhanced for the cleaner MSF samples, in good agreement with ref. [33]. Along the hard magnetic \(b\) direction, \(T_{\rm c}(H)\) is also enhanced for all temperatures measured. The effect of magnetic field-reinforced superconductivity along this direction is observed as a kink in the \(T_{\rm c}(H)\) curve at \(\mu_{0}H\approx 15\) T, as reported previously [14; 19] - but this feature occurs at higher temperature in the case of MSF-grown UTe\({}_{2}\) compared to CVT samples. We also find that the lower critical field (\(H_{c1}\)) is enhanced for MSF samples, consistent with a recent report [34], as shown in Appendix B.
This observation of increased sample purity leading to an enhancement of \(T_{\rm c}\) and \(H_{c}\) is not uncommon for
Figure 1: Electrical resistivity, \(\rho\), as a function of temperature, \(T\), for three samples grown by the molten salt flux (MSF) technique (colored points), plotted alongside data reported for a chemical vapor transport (CVT) specimen in ref. [10]. \(T_{\rm c}\) values were determined by zero resistivity. Residual resistivity ratios (RRRs) were computed by fitting the low temperature normal state resistivity with the dashed curves, of functional form \(\rho=AT^{2}+\rho_{0}\) for constant \(A\), to extract the residual normal state resistivity \(\rho_{0}\). The dimensionless RRR value is defined as \(\rho(T=300\) K)/\(\rho_{0}\).
| Growth method | \(T_{\rm c}\) (K) | \(\rho_{0}\) (\(\mu\Omega\) cm) | RRR | Reference |
| --- | --- | --- | --- | --- |
| MSF | 2.10 | 0.48 | 904 | This study |
| MSF | 2.08 | 1.1 | 406 | This study |
| MSF | 2.02 | 4.7 | 105 | This study |
| MSF | 2.06 | 1.7 | 220 | Aoki et al. (2022) [24] |
| MSF | 2.10 | - | 1000 | Sakai et al. (2022) [28] |
| CVT | 2.00 | 7 | 88 | Rosa et al. (2022) [23] |
| CVT | 1.95 | 9 | 70 | Rosa et al. (2022) [23] |
| CVT | 1.85 | 12 | 55 | Rosa et al. (2022) [23] |
| CVT | 1.44 | 16 | 40 | Ran et al. (2019) [10] |
| CVT | 1.55 - 1.60 | 19 | 35 | Aoki et al. (2019) [31] |
| CVT | 1.55 - 1.60 | 16 | 35 - 40 | Helm et al. (2022) [18] |
| CVT FIB | 1.55 - 1.60 | 27 | 25 - 30 | Helm et al. (2022) [18] |

Table 1: Comparison of critical superconducting temperature (\(T_{\rm c}\)), residual resistivity (\(\rho_{0}\)), and the residual resistivity ratio (RRR) for UTe\({}_{2}\) samples grown by the MSF and CVT techniques from various reports as indicated. In all cases, \(T_{\rm c}\) is defined by zero resistivity, which we identify as the first measurement point to fall below 0.1 \(\mu\Omega\) cm on cooling. \(\rho_{0}\) is determined by a quadratic fitting at low temperatures, as depicted in Figure 1, to give the expected normal state resistivity value at 0 K in the absence of superconductivity. RRR is the ratio between \(\rho(T=300\) K) and \(\rho_{0}\). FIB stands for focused ion beam. Note that in Sakai et al. [28] the authors stated that their RRR = 1000 sample was too small to accurately determine the resistivity – therefore a value for \(\rho_{0}\) was not obtained.
unconventional superconductors, with a strong correlation between \(T_{\rm c}\) and \(\rho_{0}\) previously reported, for example, in studies of ruthenates [35], cuprates [36], and heavy fermion superconductors [37; 38]. A quantitative analysis of the effect of crystalline disorder can often be achieved by utilizing the Abrikosov-Gor'kov theory [39]. However, it has been suggested that this approach does not appear to be valid for the case of UTe\({}_{2}\)[40], indicating a complex dependence of superconductivity on the presence of disorder, as may be expected for a \(p\)-wave superconductor.
The high purity of UTe\({}_{2}\) samples investigated in this study is further underlined by their ability to exhibit the de Haas-van Alphen (dHvA) and Shubnikov-de Haas (SdH) effects at high magnetic fields and low temperatures. All measurements reported in this study were performed on crystals from the same batch as those previously reported [25] to exhibit high frequency quantum oscillations, indicative of a long mean free path and thus
Figure 2: Magnetic field–temperature superconducting phase diagram of UTe\({}_{2}\). For field oriented along each crystallographic axis, \(T_{\rm c}(H)\) is enhanced for MSF samples (bold symbols) in comparison to CVT samples (pale symbols). Lines are given as a guide to the eye. Contacted (contactless) resistivity measurements from this study are represented by solid diamonds (circles). Raw resistivity data used in part to construct this figure are given in Appendix B. The procedure for determining error bars for contactless resistivity points is detailed in Appendix A. All contacted resistivity measurements were performed on the RRR = 406 sample from Table 1. Additional MSF resistivity data along the \(b\) direction are reproduced from ref. [32]. CVT resistivity data are given by up (down) triangles, reproduced from ref. [10] (ref. [31]). We identify the normal-superconducting transition temperature by the point at which zero resistivity is first attained, as defined in Table 1.
Figure 3: (a) PDO measurement of the skin depth of UTe\({}_{2}\) for magnetic field applied along the \(c\) direction at various temperatures (strictly, this is a measurement of \(\Delta f/f\) as per Eq. 1, which we refer to as skin depth for succinctness). The derivative of the 0.1 K curve is also plotted (\(\frac{\partial f_{\rm PDO}}{\partial H}\)), identifying the superconducting transition out of the SC1 state. These data form part of Fig. 2. (b) Skin depth for field oriented along the \(b\) direction (dark blue curve) and tilted \(15^{\circ}\) from \(c\) towards \(b\) (ocher curve). The inset shows a zoomed view of the \(H\parallel b\) data, with an arrow marking the location of an anomalous feature that appears to indicate the boundary between SC1 and SC2. (c) Oscillatory component of the PDO signal at 20 mK, showing prominent quantum oscillations of frequencies \(\approx 3.5\) kT, consistent with prior studies [24; 25]. All data in this figure were collected on the same sample.
high crystalline quality.
Figure 3 shows the PDO response of UTe\({}_{2}\) at low temperatures up to intermediate magnetic field strengths. Note that the response of the PDO circuit is expressed in full in Eq. 1 - for brevity, we shall refer to this throughout as the skin depth, as aspects of both \(\rho\) and \(\chi_{s}\) are important. Fig. 3(a) maps the superconducting phase boundary for \(H\parallel c\). In Fig. 3(c) the oscillatory component (\(\Delta f_{\text{PDO}}\)) of the PDO signal at \(T\) = 20 mK is isolated, which exhibits clear quantum oscillations. The observation of quantum oscillations in a material requires \(\omega_{c}\tau\gtrsim 1\), where \(\omega_{c}\) is the cyclotron frequency and \(\tau\) is the quasiparticle lifetime [41]. Therefore, the manifestation of quantum oscillations in our samples indicates that the mapping of the UTe\({}_{2}\) phase diagram presented in this study gives an accurate description of the UTe\({}_{2}\) system in the clean quantum limit.
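As an aside on methodology, the sketch below shows how oscillation frequencies such as the \(\approx 3.5\) kT peaks of Fig. 3(c) are conventionally extracted: a smooth background is subtracted, the signal is interpolated onto a grid uniform in \(1/B\), and a Fourier transform is taken. The input here is a synthetic signal with an assumed 3.5 kT component, standing in for the measured PDO data.

```python
# Minimal sketch of quantum-oscillation frequency extraction (synthetic data).
import numpy as np

B = np.linspace(15.0, 28.0, 4000)                             # field (T)
signal = 1e-3 * B**1.5 + 5e-6 * np.sin(2 * np.pi * 3500.0 / B)

background = np.polyval(np.polyfit(B, signal, 3), B)          # smooth trend
osc = signal - background                                     # oscillatory part

inv_B = np.linspace(1 / B.max(), 1 / B.min(), B.size)         # uniform 1/B grid
osc_u = np.interp(inv_B, 1 / B[::-1], osc[::-1])
osc_u = osc_u * np.hanning(osc_u.size)                        # reduce leakage

freqs = np.fft.rfftfreq(osc_u.size, d=inv_B[1] - inv_B[0])    # in tesla
amplitude = np.abs(np.fft.rfft(osc_u))
mask = freqs > 500                                            # skip residual trend
print(f"dominant frequency: {freqs[mask][np.argmax(amplitude[mask])]:.0f} T")
```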
## IV Pronounced angular enhancement of SC2
One of the most remarkable features of the UTe\({}_{2}\) phase diagram (at ambient pressure) is the presence of three distinct superconducting phases for magnetic field aligned along certain orientations [14]. For \(H\) applied along the \(b\) direction, at low temperatures (\(T\) \(<\) 0.5 K) zero resistance is observed all the way up to 34.5 T [16]. Remarkably, at higher temperatures (\(T\)\(\approx\) 1 K) and for field applied at a slight tilt angle away from \(H\parallel b\), measurements of CVT samples have shown that rather than a single superconducting state persisting for 0 T \(\leq\)\(\mu_{0}H\)\(\leq\) 34.5 T, there are instead two distinct superconducting phases present over this field interval [19], with the higher-field phase (SC2) having been referred to as a "field-reinforced" superconducting state [11].
Figure 4 shows the skin depth of UTe\({}_{2}\) measured in pulsed magnetic fields up to 70 T, for field applied along the hard magnetic \(b\) direction. The MM transition to the polarised paramagnetic state is clearly observed by a sharp step in the skin depth at \(\mu_{0}H_{m}\)\(\approx\) 35 T for all temperatures [11]. An interesting aspect of our PDO measurements is the presence of an anomalous kink feature, marked with arrows in Fig. 4(a) (and in the inset of Fig. 3(b)), which appears to demarcate the phase boundary between SC1 and either SC2 or the normal state, depending on the temperature. These points are plotted as purple circles in Fig. 4, along with resistivity and specific heat data from previous reports [10; 16; 32; 19]. By Eq. 1 the change in frequency of the PDO circuit is sensitive to both the electrical resistivity and the magnetic susceptibility of the sample. Thus, this observation appears consistent with recent reports [17; 32] in which a kink in the magnetic susceptibility has been attributed to marking the termination of SC1, which is visible in our skin depth measurements even though the resistivity remains zero as the material passes from SC1 to SC2.
Figure 5 shows the resistivity of MSF-grown UTe\({}_{2}\) measured in a resistive magnet over the field interval 0 T \(\leq\)\(\mu_{0}H\)\(\leq\) 41.5 T at \(T\) = 0.4 K for various magnetic field tilt angles as indicated. Data in the \(b-c\) plane were taken on the RRR = 406 sample from Table 1 while those in the \(b-a\) plane are from the RRR = 105 sample.
At \(T\) = 0.4 K, for small tilt angles within 5\({}^{\circ}\) from the \(b\) direction in both rotation planes, zero resistivity persists until the magnetic field strength exceeds 34.0 T, whereupon the resistivity increases rapidly at the MM transition as SC2 terminates and the polarised paramagnetic state is entered. In the \(b-c\) rotation plane, this remains the case for angles up to 19\({}^{\circ}\) away from \(b\); however, by 25\({}^{\circ}\) nonzero resistivity is observed at \(\mu_{0}H\) as low as 20 T (Fig. 5(a)). Above 20 T the resistivity at this angle then remains small but nonzero up to 38 T. At this point the SC3 phase is accessed and zero resistivity is observed up to this measurement's highest applied field strength of 41.5 T.
Figure 6 compares the angular extent of SC2 by collating selected angles from Fig. 5 alongside prior CVT studies. In the \(b-c\) rotation plane, CVT measurements
Figure 4: (a) PDO measurements for \(H\parallel b\) at indicated temperatures. The 0.1 K curve is the same data as in Fig. 3b, measured in a dc magnet; all other data were obtained in a pulsed magnet. Arrows indicate the anomalous feature in the PDO signal displayed in Fig. 3b, marked by purple circles in panel (b), which indicates a magnetic field-induced transition between two superconducting states (SC1 and SC2). (b) Field-temperature phase diagram comparing the phase-space of CVT and MSF UTe\({}_{2}\) samples for \(H\parallel b\). Points are from refs [10; 16; 32; 19] as indicated. Lines are as a guide to the eye. Two distinct superconducting phases are observed at low temperatures for this field orientation, which we label as SC1 and SC2. The extent of both SC1 and SC2 in temperature is clearly enhanced for MSF samples compared to CVT specimens. However, both types of samples see the SC2 phase sharply truncated by a MM transition to a field polarised state at \(\mu_{0}H_{m}\)\(\approx\) 35 T.
reported by Knebel et al. [15] found that for a rotation angle of 8\({}^{\circ}\) away from \(b\), zero resistivity persisted up to their highest accessed field strength of 35 T. However, at 12\({}^{\circ}\) this was no longer the case, with nonzero resistance observed over the field interval of 14 T \(\lessapprox\mu_{0}H\lessapprox\) 25 T. The resistivity then returned to zero for 25 T \(\lessapprox\mu_{0}H\lessapprox\) 30 T, above which it increased up until 35 T (Fig. 6(a)).
By contrast, our measurements on MSF-grown UTe\({}_{2}\) yield zero resistivity over the entire field interval 0 T \(\leq\mu_{0}H\lessapprox\) 34.5 T for successive tilt angles up to and including 19\({}^{\circ}\) away from \(b\) towards \(c\). Notably, our measurements in the \(b-c\) plane were performed in a \({}^{3}\)He system, at a temperature an order of magnitude higher than those reported by Knebel et al. [15]. This indicates a remarkable angular expansion of SC2 resulting from the enhancement of purity in this new generation of crystals.
A similar trend is found in the \(b-a\) rotation plane. Prior measurements on a CVT specimen reported by Ran et al. [14] found a strong sensitivity of the extent of SC2 within a very small angular range of only 0.3\({}^{\circ}\), with markedly different \(\rho(H)\) observed for 4.7\({}^{\circ}\) compared to 5.0\({}^{\circ}\) (Fig. 6(b)). By comparison, at 5\({}^{\circ}\) we observed zero resistance persisting to \(\mu_{0}H>\) 34 T, while at 9\({}^{\circ}\) and 10\({}^{\circ}\) the resistive transition is notably sensitive to such a small change in angle, indicating that the boundary of SC2 for MSF samples lies close to this angle. Interestingly, the angular extent of SC2 in both rotation planes appears to be approximately doubled for MSF compared to CVT samples - in the \(b-c\) plane from approximately 12\({}^{\circ}\) to between 19\({}^{\circ}\)-25\({}^{\circ}\), and in the \(b-a\) plane from 5\({}^{\circ}\) to around 10\({}^{\circ}\).
Figure 5: Angular dependence of resistivity for rotation in (a) the \(b-c\) plane and (b) the \(b-a\) plane. 0\({}^{\circ}\) corresponds to \(H\parallel b\) for both panels. Insets give a zoomed view of the magnetic field interval over which the MM transition is located. The data in panel (a) were recorded on the RRR = 406 sample from Table 1 while those in panel (b) are from the RRR = 105 sample. All data were obtained at \(T=0.4\) K.
Figure 6: Comparison of UTe\({}_{2}\)\(\rho(H)\) data for MSF and CVT samples in (a) the \(b-c\) rotation plane and (b) the \(b-a\) rotation plane. Insets give a zoomed view of the main panels. MSF curves for selected angles are reproduced from Fig. 5. CVT data in (a) are reproduced from ref. [15] while those in (b) are from ref. [14].
## V Field-angle phase space of UTe\({}_{2}\)
The previous sections have demonstrated that the critical fields of SC1, and the angular extent of SC2, have been enhanced for this new generation of pristine quality UTe\({}_{2}\) crystals. We turn our attention now to consider the behavior of the field polarised state, which is instructive as it is this phase into which SC2 is abruptly quenched, and out of which SC3 emerges.
Fig. 4 shows a clear step in the skin depth for \(H\parallel b\) at \(\mu_{0}H\approx 35\) T. Extensive prior high magnetic field measurements on CVT-grown samples have identified this feature as a first-order MM transition to a polarised paramagnetic state at which the magnetization of the material abruptly jumps by \(\approx 0.5\)\(\mu_{\rm B}\) per formula unit [11; 14; 42; 43].
Figure 7 tracks the MM transition as the orientation of the magnetic field is rotated away from \(b\) towards \(c\), and compares with prior PDO measurements on a CVT specimen reported in ref. [14]. At \(\theta=\{0^{\circ},20^{\circ}\}\) the sharp rise in the skin depth - caused by the abrupt increase in resistivity characteristic of entering the polarised paramagnetic phase - occurs at the same value of \(H\) for both CVT and MSF samples (within experimental resolution). At \(\theta=33^{\circ}\), again both samples see a jump in the skin depth at the same field strength - but here the jump is in the opposite direction, due to the presence of SC3.
Figure 8 depicts the phase space of UTe\({}_{2}\) for applied magnetic fields oriented in the \(b-c\) and \(b-a\) planes, at strengths up to 70 T, combining our MSF data with prior CVT studies. CVT \(\rho\) from Knebel et al. [15] was reportedly measured at \(T=30\) mK; our MSF PDO points tracking the termination of SC1 were measured at \(T=0.1\) K. All our \(\rho\) points in this figure were measured at \(T=0.4\) K in steady fields, while the \(\rho\) and PDO measurements reported by Ran et al. [14] were performed both in steady and pulsed fields, at \(T\approx 0.4\)-\(0.5\) K. Our pulsed field PDO measurements tracking the field polarised state, and the \(\rho\) measurements reported in Helm et al. [18], were performed at \(T\approx 0.6\)-\(0.7\) K.
Upon inspecting Figs. 7 and 8, there appears to be negligible difference between measurements of the MM transition for MSF and CVT samples. This indicates that this transition is an intrinsic property of the UTe\({}_{2}\) system that, unlike SC1 and SC2, is insensitive to crystalline disorder. Furthermore, we find that the temperature evolution of the MM transition tracks very similarly between MSF and CVT samples, implying that the associated energy scale is unchanged under the improvement of sample quality (see Figure 16 in Appendix B for steady field data up to \(T=34\) K) [16; 44].
## VI Modelling the origin of SC2
The mechanism behind, and the precise form of, the superconducting order parameter in UTe\({}_{2}\) remain the subject of much theoretical debate [45; 46; 47; 48; 49; 50; 51; 52]. The current consensus appears to be that at zero external field a triplet order parameter is stabilized by some form of magnetic fluctuations, giving rise to the SC1 phase [11]. The experimental data suggest, however, that the SC2 phase has a rather different character, as evidenced by its acute sensitivity to the field direction, its starkly different NMR spectra, and by the observation of \(T_{c}\) growing with increasing field aligned along the \(b\)-axis [19; 52; 53].
These observations suggest that the SC2 phase likely has a very different pairing mechanism compared to SC1, with a distinct possibility being that it is driven by MM fluctuations. Such a mechanism for magnetic field-reinforced superconductivity has previously been considered in the case of the ferromagnetic superconductors URhGe and UCoGe [54; 55; 56; 9]. We theoretically model this scenario (taking \(k_{\rm B}=\hbar=1\) throughout) for the case of UTe\({}_{2}\) by first considering a Ginzburg-Landau theory describing the MM phase transition [56; 57; 58]:
\[\mathcal{F}[\mathbf{M}](\mathbf{H})=\frac{1}{2}\chi_{i}^{-1}M_{i}^{2}+\frac{1}{4}\beta_{ij}M_{i}^{2}M_{j}^{2}+\frac{1}{6}\gamma M_{y}^{6}-\mathbf{M}\cdot\mathbf{H}+\kappa_{j}(\partial_{j}M_{j})^{2} \tag{2}\]
where \(i,j=x,y,z\), \(\mathbf{M}\) is the magnetic order parameter, and \(\chi_{i}^{-1},\beta_{ij},\gamma\), and \(\kappa_{j}\) are Ginzburg-Landau parameters. Good agreement with the experimental data is obtained only if \(\beta_{xy}\) is non-zero (see caption of Fig. 9 for parameter values). We chose the parameters such that at zero applied field, the free energy has two minima: a global
Figure 7: Angular evolution of the MM transition at high fields in the \(b-c\) plane; \(\theta=0^{\circ}\) corresponds to \(H\parallel b\). Notably, we find that the location of the MM transition is unchanged between MSF (solid curves) and CVT (dashed curves from ref. [14]) samples, including for the onset of re-entrant superconductivity (SC3) at \(\theta=33^{\circ}\).
minimum at \(\mathbf{M}=0\), and a minimum with higher energy at \(\mathbf{M}=\mathbf{M}_{*}\) pointing along the \(b\) direction. As the field is applied, the minimum at \(\mathbf{M}_{*}\) decreases until it becomes the new global minimum at the metamagnetic phase transition point \(H_{m}\). We denote the energy at this minimum as \(\Omega_{*}(\mathbf{q})\). We find that with the free energy Eq. (2), for magnetic fields aligned within the crystallographic \(ab\) and \(bc\) planes, a good fit is given by
\[\Omega_{*}(\mathbf{q})\approx g(H_{m}-H_{y}+\alpha H_{x}^{2})+\sum_{j}\kappa_{ j}q_{j}^{2}, \tag{3}\]
where \(g\) is a constant with dimensions of the magnetic field, and \(\alpha\) is a dimensionless constant (in particular, within this approximation \(\Omega_{*}(\mathbf{q})\) is independent of \(H_{z}\) when \(H_{z}\neq 0\) and \(H_{x}=0\)). To include the effect of fluctuations on superconductivity about this minimum, we quantize the associated mode as a bosonic field \(m_{\mathbf{q}}\), a massive magnon we refer to as a "metamagnon," with Hamiltonian \(\mathcal{H}_{M}=\sum_{\mathbf{q}}\Omega_{*}(\mathbf{q})m_{\mathbf{q}}^{\dagger}m_{\mathbf{q}}\). The metamagnon couples to the electron spin \(\mathbf{S}(\mathbf{q})=\sum_{\mathbf{k},s_{1},s_{2}}c_{\mathbf{k}+\mathbf{q},s_{1}}^{\dagger}\left(\mathbf{\sigma}\right)_{s_{1}s_{2}}c_{\mathbf{k}s_{2}}\) (where \(s_{1},s_{2}=\uparrow,\downarrow\) are spin indices) as \(\mathcal{H}_{m,el}=\mu_{e}\sum_{\mathbf{q}}(m_{\mathbf{q}}+m_{-\mathbf{q}}^{\dagger})S_{\parallel}(\mathbf{q})M_{*}\), where \(S_{\parallel}(\mathbf{q})=\mathbf{S}(\mathbf{q})\cdot\mathbf{M}_{*}/M_{*}\), and \(\mu_{e}\) is the electron magnetic moment. Integrating out the metamagnon \(m_{\mathbf{q}}\) (see Appendix C for details) gives rise to the usual ferromagnetic spin-fluctuation interactions \(\mathcal{H}_{int}=\sum_{\mathbf{q}}J(\mathbf{q})S_{\parallel}(\mathbf{q})S_{\parallel}(-\mathbf{q})\), where
\[J(q)=-\frac{\mu_{e}^{2}M_{*}^{2}\Omega_{*}(\mathbf{q})}{\Omega_{*}^{2}( \mathbf{q})+\Gamma_{m}^{2}}.\]
Here we account for disorder via the metamagnon decay rate \(\Gamma_{m}\) (see Appendix C for details). Crucially, \(J(\mathbf{q})<0\), and its magnitude grows with \(H_{y}\), such that \(|J(0)|\) is maximized at the metamagnetic phase transition.
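To illustrate the structure of this model, the sketch below locates the first-order metamagnetic transition of a single-component (\(M_{y}\) only) version of Eq. (2). The coefficients are hypothetical values in model units, chosen only to produce the two-minimum structure described above; this is a method illustration, not the anisotropic fit used for Fig. 9.

```python
# Minimal sketch: sweep H and find where the large-M minimum of the
# Ginzburg-Landau free energy becomes global (the metamagnetic transition).
import numpy as np

chi_inv, beta, gamma = 1.0, -2.0, 0.8     # hypothetical model-unit values

def free_energy(M, H):
    # F(M) = (1/2) chi^-1 M^2 + (1/4) beta M^4 + (1/6) gamma M^6 - H*M
    return 0.5 * chi_inv * M**2 + 0.25 * beta * M**4 + gamma * M**6 / 6 - H * M

M = np.linspace(0.0, 3.0, 30001)
small_well = M < 0.85                     # separatrix between the two wells

for H in np.linspace(0.0, 0.1, 2001):
    F = free_energy(M, H)
    if F[~small_well].min() < F[small_well].min():   # first-order jump
        M1 = M[small_well][F[small_well].argmin()]
        M2 = M[~small_well][F[~small_well].argmin()]
        print(f"H_m ~ {H:.4f} (model units): M jumps {M1:.2f} -> {M2:.2f}")
        break
```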
Solving the linearized gap equation, we find that the superconducting order parameter expressed in the \(\mathbf{d}\)-vector notation is \(\Delta(\mathbf{p})=\mathbf{d}(\mathbf{p})\cdot\mathbf{\sigma}i\sigma^{y}\), with \(d_{y}=0\), \(d_{x}=-id_{z}\), and \(d_{x}(\mathbf{p})=p_{j}\), where \(j=x,y,z\) corresponds to the largest \(\kappa_{j}\) parameter (see Appendix C for details). We do not speculate which \(\kappa_{j}\) is the largest as there are insufficient data to determine it; however, we note that possible forms of the order parameter we find include the non-unitary paired state proposed for UTe\({}_{2}\) in [59] (belonging to the \(B_{1u}+iB_{3u}\) irreducible representation of \(D_{2h}\)), as well as that considered in [52] in order to explain the field direction sensitivity of the SC2 phase.
For any of these forms of the order parameter, the critical temperature for SC2 is given by
\[T_{c}^{(SC2)}(\mathbf{H})=1.13\Lambda\exp\left[-\frac{\left(\Omega_{*}^{2}(0) +\Gamma_{m}^{2}\right)^{2}}{8\nu\tilde{\kappa}\mu_{e}^{2}M_{*}^{2}\Omega_{*}^ {2}(0)}\right], \tag{4}\]
Figure 8: Angular magnetic field phase diagram of UTe\({}_{2}\) for \(\mu_{0}H\leq 70\) T. The phase boundary between SC1 and the normal state is located at higher magnetic field strengths for MSF samples compared to prior studies on CVT specimens (blue region). Furthermore, the angular extent of SC2 is greatly enhanced for MSF samples (pink region). The polarised paramagnetic state (orange region) is found to have the same angular profile for both types of samples. Lines and shading are as a guide to the eye. CVT data points from refs. [14; 15; 18].
where \(\nu\) is the density of states, \(\Lambda\) is the energy cutoff, and \(\tilde{\kappa}\) is equal to the largest \(\kappa_{j}\) times some form factor with units of momentum squared coming from integration over momentum. The corresponding \(T_{c}\) vs \(H_{y}\) plot is shown in Fig. 9(a), which also shows a cartoon picture of \(T_{c}^{(SC1)}\) in the SC1 phase. Importantly, we assume that SC1 is driven by some spin fluctuations that do not involve the metamagnon and have a strength that is independent of the applied field. We model the corresponding critical temperature for the SC1 phase as
\[\log\frac{T_{c}^{(SC1)}}{T_{c0}^{(SC1)}}=-(1-c(\theta,\phi))F\left(\frac{ \Gamma_{e}}{T_{c}}\right)-c(\theta,\phi)F\left(\frac{\Gamma_{e}+ih}{T_{c}}\right) \tag{5}\]
with \(F(x)=\text{Re}\left[\psi\left(\frac{1}{2}+\frac{x}{2\pi}\right)-\psi\left( \frac{1}{2}\right)\right]\), where \(h=\mu_{e}H=\mu_{B}g_{e}H/2\), \(T_{c0}^{(SC1)}\) is the critical temperature of SC1 without disorder in zero magnetic field (that we take to be 2.1 K), \(0<c<1\) is a phenomenological form factor that depends on the direction of the applied field, \(\Gamma_{e}\) is the electron decay rate due to disorder, and \(\psi(x)\) is the digamma function. We derive this equation under the assumption that pairing is mediated by generic spin fluctuations that are insensitive to the applied field, with some further simplifying assumptions (see Appendix C). Note that in Fig. 9(a) we extrapolated Eq. (4) all the way up to \(H_{y}=H_{m}\), though the formula is not strictly valid at that point as the coupling becomes strong.
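To make Eqs. (4) and (5) concrete, a minimal numerical sketch follows. It treats \(H\parallel b\) only, sets \(\Gamma_{m}=0\), and reads the SC2 caption parameter of Fig. 9 as the combination \(8\nu\tilde{\kappa}\mu_{e}^{2}M_{*}^{2}/g^{2}=500\) (in T\({}^{2}\)); both this reading and all numerical values are assumptions for illustration, and the complex digamma function is taken from mpmath.

```python
# Minimal sketch: Tc of SC1 from the pair-breaking formula Eq. (5), solved by
# bisection, and Tc of SC2 from Eq. (4) in the clean limit (Gamma_m = 0).
import numpy as np
from mpmath import digamma, mpc

MU_B = 0.6717                            # Bohr magneton in K/T
Tc0, mu_e, c, Gamma_e = 2.1, 0.2 * MU_B, 0.7, 0.0

def F(x):
    """F(x) = Re[psi(1/2 + x/(2 pi)) - psi(1/2)], for real or complex x."""
    return float((digamma(0.5 + x / (2 * np.pi)) - digamma(0.5)).real)

def Tc_SC1(H, tol=1e-4):
    h = mu_e * H                         # Zeeman energy in kelvin
    def g(T):
        return (np.log(T / Tc0) + (1 - c) * F(Gamma_e / T)
                + c * F(mpc(Gamma_e, h) / T))
    lo, hi = 1e-3, Tc0                   # g(hi) >= 0 always
    if g(lo) >= 0:
        return 0.0                       # superconductivity fully suppressed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def Tc_SC2(H, Lam=1.5, Hm=35.0, coupling=500.0):
    # With Omega_*(0) = g*(Hm - H) and Gamma_m = 0, the exponent of Eq. (4)
    # reduces to (Hm - H)^2 / coupling, where 'coupling' is the assumed
    # caption parameter 8 nu kappa mu_e^2 M*^2 / g^2 (in T^2).
    return 0.0 if H >= Hm else 1.13 * Lam * np.exp(-(Hm - H)**2 / coupling)

for H in (0.0, 10.0, 20.0, 30.0, 34.0):
    print(f"H = {H:4.1f} T:  Tc_SC1 = {Tc_SC1(H):.2f} K,  Tc_SC2 = {Tc_SC2(H):.2f} K")
```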
In modelling the effects of disorder, we find that it is crucial that the metamagnon decay rate \(\Gamma_{m}\) depends on the direction of the applied magnetic field, in particular if the decay is dominated by two-magnon scattering and/or Gilbert damping processes [60; 61; 62]. The exact functional form depends on the precise decay mechanism, but we find phenomenologically that the data are well described with \(\Gamma_{m}=\gamma_{x}\sin^{4}\phi+\gamma_{z}\sin^{4}\theta\), where \(\phi\) and \(\theta\) are the angles between the direction of magnetic field and the \(b-\)axis in the \(ab-\) and \(bc-\)crystallographic planes, respectively. The resulting phase diagram in Fig. 9(b) is in good qualitative agreement with the experimental data.
Here we neglected several other effects that give SC2 additional dependence on the direction and strength of the magnetic field. First, fields pointing away from the \(b\)-axis have a component parallel to the \(\mathbf{d}\)-vector, and therefore suppress SC2; we find, however, that this effect does not significantly alter the phase diagram. Second, the magnetization \(\mathbf{M}_{*}\) of the polarized paramagnetic phase is itself a function of the applied field and changes both magnitude and direction, which in turn alters the direction of the \(\mathbf{d}\)-vector. Third, we have neglected any mixing between SC1 and SC2, which necessarily occurs due to the breaking of crystalline symmetries by fields aligned away from the \(b\)-axis. And finally, we assumed the high energy cutoff is independent of the applied field, though it is likely a function of \(\Omega_{*}\).
## VII Discussion and Outlook
It is likely that a significant contributory factor to the enhancement of \(T_{\text{c}}\) for MSF-grown UTe\({}_{2}\) is the minimization of uranium vacancies. Recent x-ray diffraction (XRD) studies on UTe\({}_{2}\) specimens of varying quality found that CVT samples with 1.5 K \(\leq T_{\text{c}}\leq\) 2.0 K possessed uranium site defects of between \(\approx\) 1-3%, while low quality samples that did not exhibit (SC1) superconductivity at temperatures down to 0.45 K showed uranium vacancies of \(\approx\) 4-5% [28; 40; 63]. By contrast, an MSF
Figure 9: (a) Magnetic field dependence of critical temperatures for superconducting phases SC1 and SC2 for \(\mathbf{H}\) oriented along the \(b-\)axis estimated from Eqs. (4) and (5). We used \(T_{c}=2.1\) K, \(\mu_{e}=0.2\mu_{\text{B}}\), \(c(0)=0.7\) for SC1 (see Eq. (5)) and \(\Lambda=1.5\) K, \(\frac{8\nu\tilde{\kappa}\mu_{e}^{2}M_{*}^{2}}{g^{2}}=500\), \(H_{m}=35\) T for SC2. The green dashed line is an envelope of the two transition lines that is measured experimentally. (b) Calculated angular magnetic field phase diagram. The color coding is the same as for the experimental phase diagram in Fig. 8. The MM phase transition is obtained from Eq. (2) and is well fit with \(\chi_{y}^{-1}=1\), \(\chi_{z}^{-1}=0.5\), \(\chi_{x}^{-1}=0.01\), \(\beta_{xx}=0.02\), \(\beta_{yy}=-2\), \(\beta_{zz}=2\), \(\beta_{xy}=20\), and \(\gamma=0.8\) (all other parameters set to zero), with magnetic field measured in Tesla. For SC1, we took \(c=0.7+0.1\sin^{2}\theta\), \(T=0.35\) K (0.035 K) and \(\Gamma_{e}=0\) (0.2) to model the clean (dirty) sample. (We neglect the anisotropy of \(H_{c}\) seen in experiment in this case.) For SC2 we use Eq. (4) with \(\Omega_{*}\) from Eq. (3), with resulting parameters \(H_{m}=35\) T, \(g=1.6\times 10^{-3}\) and \(\alpha=0.024\). For the metamagnon decay rate \(\Gamma_{m}=\gamma_{x}\sin^{4}\phi+\gamma_{z}\sin^{4}\theta\), we took \(\gamma_{x}=0.4\) and \(\gamma_{z}=0.007\) to model the MSF samples and \(\gamma_{x}=4\) and \(\gamma_{z}=0.07\) to model the CVT samples (i.e. \(\Gamma_{m}(\text{CVT})=10\Gamma_{m}(\text{MSF})\)), and we took \(T=0.1\) K. A good agreement with experimental data is observed for both panels.
specimen with \(T_{\rm c}=2.1\) K exhibited no uranium deficiency within the experimental resolution of the XRD instrument [28].
Therefore, the enhancement of \(T_{\rm c}(H)\) of the SC1 phase for field applied along each crystallographic direction, as reported for measurements of MSF samples in ref. [33] and reproduced here in Section III, is likely to be due to the minimization of uranium site vacancies for this alternative growth process utilizing a salt flux. Our striking observation of the enhanced angular profile of the SC2 phase, which we detailed in Sections IV & V, can be very well described by considering the effects of disorder on MM fluctuations, as we outlined in Section VI.
It has been proposed in ref. [19] that the SC2 phase may be spin-singlet in character, rather than spin-triplet as widely considered by other studies [11; 45; 46; 52; 53; 54; 55; 56]. The authors of ref. [19] argue in favor of a singlet pairing mechanism for SC2 based on the profile of their high field specific heat measurements performed on CVT specimens. However, recent NMR measurements up to applied field strengths of 24.8 T argue strongly in favor of SC1 and SC2 both being spin-triplet [53]. Interestingly, the field dependence of the \({}^{125}\)Te-NMR intensity reported in ref. [53] indicates that in the SC1 phase the dominant spin component of the triplet pair points along the \(a\)-axis, while measurements at higher fields show that in the SC2 state the spins are instead aligned along the \(b\)-axis. This scenario is fully consistent with our MM fluctuation model presented in Section VI. We note that the broader profile of the SC2 superconducting transition (compared to that of SC1) observed in specific heat measurements in ref. [19] fits this picture of strong magnetic fluctuations near \(H_{m}\) driving the formation of the SC2 phase, with the broader heat capacity anomaly being analogous to prior studies of superconducting states driven by nematic fluctuations [66; 67]. Indeed, such a profile of a broad specific heat anomaly for magnetic fluctuation-induced field-reinforced superconductivity has recently been considered for the case of the ferromagnetic superconductor URhGe [55]. More empirical guidance, particularly from thermodynamic probes, is urgently needed to carefully unpick the microscopics underpinning the remarkable magnetic field-reinforced SC2 superconducting phase of UTe\({}_{2}\).
An interesting question posed by the observation of higher \(T_{\rm c}(H)\) for the SC1 phase of MSF UTe\({}_{2}\), and the purity-driven enhancement of the angular range of the SC2 phase, concerns the dependence of the SC3 state on the crystalline disorder. It has recently been observed that a very low quality sample with a RRR of 7.5, which does not exhibit SC1 superconductivity down to \(T\approx 0.5\) K, nevertheless exhibits SC3 superconductivity at high magnetic fields [68]. This robustness to disorder of the SC3 phase implies that it is likely very different in character to the SC2 phase, which as we showed in Section IV is highly sensitive to crystalline quality.
Since the optimization of the MSF growth technique for high quality UTe\({}_{2}\) specimens in 2022 [28], a number of experiments on this new generation of samples have helped clarify important physical properties of this system. These include dHvA and SdH effect measurements that reveal the Fermi surface geometry [24, 25], NMR and thermal conductivity measurements that give strikingly different results to prior CVT studies [69, 70] - providing a new perspective on the possible gap symmetry - along with Kerr rotation and specific heat measurements that also differ from prior observations and interpretations of studies on CVT specimens [11, 26]. We are therefore hopeful that continued experimental investigation of this new generation of higher quality crystals will provide the empirical impetus to enable more detailed theoretical models of this intriguing material to soon be attained.
In summary, we have performed a detailed comparative study of UTe\({}_{2}\) crystals grown by the molten salt flux (MSF) and chemical vapor transport (CVT) techniques. We found that the higher critical temperatures and lower residual resistivities of our ultraclean MSF crystals translated into higher critical field values compared to prior CVT studies. Comparatively, the properties of the metamagnetic (MM) transition, located at \(\mu_{0}H_{m}\approx 35\) T for \(H\parallel b\), appeared the same for both types of samples. This implies that the MM transition is a robust feature of the UTe\({}_{2}\) system that is insensitive to crystalline disorder, unlike the superconductivity. Strikingly, we found that the magnetic field-reinforced superconducting state close to this MM transition (SC2) has a significantly enhanced angular range for the cleaner MSF crystals. This observation can be well described by considering the enhanced role of magnetic fluctuations in proximity to the MM transition, thereby underpinning this intriguing field-reinforced superconducting phase, which is then quenched upon passing through the MM transition. This interpretation is consistent with recent NMR measurements, which taken together strongly imply that the field-reinforced phase (SC2) is markedly different in character compared to the zero-field superconducting state (SC1).
###### Acknowledgements.
We are grateful to N.R. Cooper, H. Liu, A.B. Shick, P. Opletal, H. Sakai, Y. Haga, and A.F. Bangura for stimulating discussions. We thank T.J. Brumm, S.T. Hannahs, E.S. Choi, T.P. Murphy, T. Helm, and C. Liu for technical advice and assistance. This project was supported by the EPSRC of the UK (grant no. EP/X011992/1). A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1644779 and the State of Florida. We acknowledge support of the HLD at HZDR, a member of the European Magnetic Field Laboratory (EMFL). The EMFL also supported dual-access to facilities at MGML, Charles University, Prague, under the European Union's Horizon 2020 research and innovation programme through
the ISABEL project (No. 871106). Crystal growth and characterization were performed in MGML (mgml.eu), which is supported within the program of Czech Research Infrastructures (project no. LM2023065). We acknowledge financial support by the Czech Science Foundation (GACR), project No. 22-22322S. Z.W. acknowledges studentship support from the Cambridge Trust (www.cambridgetrust.org) and the Chinese Scholarship Council (www.chinesescholarshipcouncil.com). T.I.W. and A.J.H. acknowledge support from EPSRC studentships EP/R513180/1 & EP/M506485/1. T.I.W. and A.G.E. acknowledge support from QuantEmX grants from ICAM and the Gordon and Betty Moore Foundation through Grants GBMF5305 & GBMF9616. D.V.C. acknowledges financial support from the National High Magnetic Field Laboratory through a Dirac Fellowship, which is funded by the National Science Foundation (Grant No. DMR-1644779) and the State of Florida. A.G.E. acknowledges support from the Henry Royce Institute for Advanced Materials through the Equipment Access Scheme enabling access to the Advanced Materials Characterisation Suite at Cambridge, grant numbers EP/P024947/1, EP/M000524/1 & EP/R00661X/1; and from Sidney Sussex College (University of Cambridge).
## Appendix A Sample Characterization and Calibration
In this section we present characterization measurements of the sample that was measured by the PDO technique at high fields (the data presented in Fig. 4). We measured the superconducting transition of this sample by three different methods: (a) PDO, (b) dc superconducting quantum interference device (SQUID), and (c) specific heat. The PDO response was measured by connecting the same coil as was later used in the 70 T pulsed magnet onto a homemade low temperature probe by a coaxial cable. This was then measured on cooling to the base temperature (1.8 K) of a PPMS system at 0.02 K/min in zero applied field. The dc magnetic moment, \(M\), was measured by a QD Magnetic Property Measurement System (MPMS). The curve shown in Fig. 10(b) was measured on warming with a 10 Oe field applied after a zero-field cool-down. Heat capacity (\(C_{p}\)) was measured by a standard QD PPMS heat capacity module.
Figure 11 shows contacted and contactless resistivity measurements performed simultaneously on the same sample.
Figure 11: Simultaneous measurement of (a) contacted resistivity and (b) contactless resistivity performed on the same sample. The derivative of the contactless resistivity data is given in (c).
Figure 10: Superconducting transition, in zero applied magnetic field, of the sample that was measured to high fields in Fig. 4. The superconducting transition was measured by (a) PDO, (b) SQUID, and (c) heat capacity.
A Gaussian is fitted to the derivative, with dashed lines marking the location of the Gaussian midpoint, 0.5\(\sigma\), and 1\(\sigma\). We find in Fig. 2 that very good correspondence between PDO and contacted resistivity measurements is observed by empirically taking the Gaussian centre of the derivative of the PDO signal plus 0.5\(\sigma\). The PDO error bars in Fig. 2 are each of length 1\(\sigma\) (to represent an approximate uncertainty of \(\pm\) 0.5\(\sigma\)).
Crystallographic orientation was calibrated by Laue diffraction as shown in Fig. 12. For pulsed field measurements, the data at 20\({}^{\circ}\) and 33\({}^{\circ}\) shown in Fig. 7 were obtained by mounting the sample on wedges of PEEK machined to the desired angles. The rotation study in dc magnetic fields presented in Fig. 5 was performed with a single-axis rotation probe utilizing a gear mechanism, with the rotation angle calibrated using a Hall sensor.
## Appendix B Phase Mapping of UTe\({}_{2}\)
All contacted resistivity measurements to determine the upper critical field (\(H_{c2}\)) of the SC1 phase, presented as solid diamonds in Fig. 2, were obtained on the RRR = 406 sample from Table 1. This sample was oriented by Laue diffractometry and then securely mounted on a G10 sample board to enable easy orientation along each crystallographic axis. Figure 13 shows the raw data from which Fig. 2 is partly constructed. These data were obtained using the dc electrical transport module of a QD PPMS down to a base temperature of 0.5 K. Each data point was obtained by stabilizing the temperature and averaging over several measurements. \(T_{c}(H)\) was defined by zero resistivity, which we identify as the first measurement point to fall below 0.1 \(\mu\Omega\) cm on cooling. The excitation current for measurements with field applied along the \(a\)- and \(c\)-axes was 100 \(\mu\)A; the excitation current for measurements with field applied along the \(b\)-axis was 200 \(\mu\)A. Small applied currents were required to maintain the temperature stability, due to low cooling power for \(T\lessapprox\) 1 K.
Figure 14 shows the PDO signal for rising and falling magnetic field over the duration of a pulsed field measurement. Due to the high \(\partial H/\partial t\) of a pulsed magnet, some amount of heating (from eddy currents and vortex motion) is inevitable [71; 72; 14]. On inspecting the up- and down-sweeps in Fig. 14, the location of the kink feature - which identifies the transition from SC1 to SC2 - has clearly moved to lower field on the down-sweep. This is highly likely to be an effect of heating during the pulse. Therefore, in Fig. 4 we use only the up-sweep data of each PDO measurement, to mitigate this effect.
Figure 14: Comparison of pulsed field PDO data, with \(H\parallel b\) at 1.0 K, for the up-sweep (solid) and down-sweep (dashed) of a magnetic field pulse. Arrows indicate the direction of field sweep, with markers indicating the anomalous kink feature displayed in Fig. 4(b). We note that the sharp feature at the start of the up-sweep is likely due to a flux line moving in the SC1 state – similar features have been observed in prior pulsed field studies of superconductors [71; 72; 14]. The overlap of the rising and falling traces is very good above 35 T in the field polarised state, which is known to have minimal temperature dependence around 1 K, but is noticeably different below 35 T when superconductivity returns, where the temperature dependence is much more sensitive. This observation is consistent with heating effects from eddy currents and/or vortex motion during the pulse.
Figure 13: Resistivity curves as a function of temperature for the RRR = 406 sample from Table 1 at intermediate magnetic fields with \(H\) applied along the (a) \(a\)-axis, (b) \(b\)-axis, and (c) \(c\)-axis. The strength of the applied field is indicated by the color scale. The corresponding profile of \(T_{c}(H)\) is given in panels (d-f). A comparison is made between MSF-grown and CVT-grown UTe\({}_{2}\) using CVT data points from ref. [10].
Magnetization measurements to determine the lower critical field (\(H_{c1}\)) were obtained using the helium-3 option of a QD MPMS, for which the data are presented in Figure 15. The sample was mounted inside a Kapton tube, with the field aligned along the \(a\)-axis. For each isothermal field sweep, the sample was first warmed up above its critical temperature and the magnet was turned off at high temperature. Then the sample was cooled down to the assigned temperature in zero field. dc magnetic moment measurements were then performed with stabilized magnetic field.
When a sample is in the Meissner phase, it will be in a diamagnetic state of constant susceptibility [74]. In terms of moment versus field, a straight line is thus expected within the Meissner state. The lower critical field may therefore be identified as the lowest field value where the \(M\) vs \(H\) curve deviates from linearity (with a correction for the demagnetization effect, as detailed in e.g. refs. [75; 76]). We fit a linear function to the data below 5 Oe at each temperature, which is then subtracted from each curve. The background-subtracted data for each temperature are shown in Fig. 15(b). The flux penetration field \(H_{p}\) is then extracted by finding the first point that deviates from the flat line at each temperature. Following the discussion in ref. [77], \(H_{c1}\) may be related to \(H_{p}\) via the expression:
\[H_{c1}=\frac{H_{p}}{\tanh\sqrt{0.36t/w}}, \tag{12}\]
Figure 16: (a) dc field resistivity data tracking the evolution of the metamagnetic transition at \(H_{m}\) (indicated with markers). (b) Comparison between the progression in temperature of \(H_{m}\) from panel (a) with that reported for a CVT sample in ref. [44]. The red symbols indicate the first measured temperature point of each study at which the MM transition is no longer observed (which at elevated temperatures is identified as a broad maximum).
Figure 15: Magnetization measurements at low temperatures and magnetic fields. Measurements were performed on the RRR = 105 sample from Table 1. (a) Isothermal measurements of the dc magnetic moment, \(M\), as a function of magnetic field strength for \(H\parallel a\). Temperature points are indicated by the color scale. (b) Magnetic moment versus field after subtracting a linear fit to the data for \(\mu_{0}H<\) 5 Oe, as described in the text. Each curve is offset for clarity. (c) Magnetic field–temperature phase diagram of the Meissner state of UTe\({}_{2}\) for \(H\parallel a\). A comparison of CVT data is included from ref. [73].
where \(t\) is the sample thickness and \(w\) is the sample width. For this measurement, with \(H\parallel a\), \(t=3.46\) mm (along the \(a\) direction) and \(w=0.51\) mm.
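The \(H_{p}\) extraction and the correction of Eq. (12) are straightforward to automate. Below is a minimal Python sketch of the steps described above; the deviation criterion (a fixed multiple of the scatter of the low-field residuals) is our assumption, and the published analysis may use a different threshold:

```python
import numpy as np

def h_c1(H, M, fit_max=5.0, n_sigma=3.0, t=3.46, w=0.51):
    """Estimate the flux-penetration field H_p as the first field at which
    M(H) deviates from the low-field linear (Meissner) response, then apply
    the correction of Eq. (12). Units: H in Oe; t, w in mm."""
    meissner = H < fit_max
    slope, intercept = np.polyfit(H[meissner], M[meissner], 1)
    residual = M - (slope * H + intercept)        # background-subtracted moment
    noise = residual[meissner].std()              # scatter within the Meissner state
    deviated = np.flatnonzero(np.abs(residual) > n_sigma * noise)
    H_p = H[deviated[0]] if deviated.size else np.nan
    return H_p / np.tanh(np.sqrt(0.36 * t / w))   # Eq. (12)
```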
We find that \(H_{c1}\) is enhanced for this new generation of higher quality samples (Fig. 15(c)), similar to the higher \(H_{c2}\) values shown in Fig. 2. We note that the \(H_{c1}\) value of \(\approx 20\) Oe we observe for \(H\parallel a\) agrees well with a recent report of a similar study on MSF-grown UTe\({}_{2}\)[34].
Figure 4 shows the temperature evolution of the MM transition up to 3 K, over which interval it displays little change. We also tracked the evolution of \(H_{m}\) to higher temperatures, as shown in Figure 16. Whereas at low temperature \(H_{m}\) is very clearly visible by the sudden increase of the resistivity, at high temperatures this feature rounds out into a broad maximum, indicated with markers in Fig. 16(a).
In Fig. 16(b) we compare the temperature evolution of the MM transition between our study on a MSF sample with that reported previously in ref. [44] for a CVT specimen. Note that our study and that of ref. [44] are performed at different angles, so the location of \(H_{m}\) (at equivalent temperature) is slightly different. However, as we show in Figs. 7 & 8 the angular evolution of \(H_{m}\) is the same for MSF and CVT samples. Furthermore, from the comparison in Fig. 16(b), it is clear that \(H_{m}\) displays a very similar temperature dependence for both types of sample. This indicates that the energy scale of the MM transition is unchanged between the two types of samples. We note that the MM transition at \(H_{m}\) is still observed even in very low quality samples that do not show SC1 superconductivity down to temperatures \(\approx 0.5\) K [68]. Given that we observe no change in the profile of this transition for this new generation of ultraclean crystals, we conclude that the MM transition is an intrinsic feature of the UTe\({}_{2}\) system, and unlike the superconductivity, is insensitive to the presence of crystalline disorder.
## Appendix C Details of theoretical calculations
### Model for SC2
Here we present the details of the derivation of the spin-fluctuation interactions by integrating out the metamagnons. The action is given by
\[\begin{split}\mathcal{S}=-\sum_{n,\mathbf{q}}(i\Omega_{n}-\Omega_{*}(\mathbf{q})+i\Gamma_{m}\,\text{sgn}\,\Omega_{n})m_{\mathbf{q}}^{\dagger}m_{\mathbf{q}}\\ +\sum_{\mathbf{q}}\mu_{e}(m_{\mathbf{q}}+m_{-\mathbf{q}}^{\dagger})M_{*}S_{\parallel}\end{split} \tag{10}\]
with \(\Omega_{*}(\mathbf{q})\) given in Eq. (3) and where we use the Matsubara formalism; \(\Omega_{n}=2\pi nT\) are bosonic Matsubara frequencies. We also introduced the decay term \(\Gamma_{m}\) to account for the finite lifetime of the metamagnon. Here we assume that the metamagnon is a degree of freedom that stems from localized magnetic moments and not the itinerant fermionic degrees of freedom; this is consistent with recent theories of metamagnetic phase transitions in Kondo lattices [78; 79]. This assumption allows us to integrate the bosons out in the standard way, which gives an effective dimensionless action for the fermions
\[\mathcal{S}_{M}[c,c^{\dagger}]=\beta\sum_{n,\mathbf{q}}J(i\Omega_{n},\mathbf{ q})S_{\parallel}(i\Omega_{n},\mathbf{q})S_{\parallel}(-i\Omega_{n},-\mathbf{q}), \tag{11}\]
where
\[J(i\Omega_{n},\mathbf{q})=-\frac{\mu_{e}^{2}M_{*}^{2}\Omega_{*}(\mathbf{q})}{ \Omega_{*}^{2}(\mathbf{q})+\Omega_{n}^{2}+\Gamma_{m}^{2}} \tag{12}\]
are the effective ferromagnetic-fluctuation type interactions. This is equivalent to writing the interaction Hamiltonian as
\[\mathcal{H}_{int}=\sum_{n,\mathbf{q},\mathbf{k},\mathbf{p}}J(i\Omega_{n}, \mathbf{q})c_{\mathbf{k}+\mathbf{q}s_{1}}^{\dagger}\left(\mathbf{\sigma}\cdot \hat{\mathbf{M}}_{*}\right)_{s_{1}s_{2}}c_{\mathbf{k}s_{2}}c_{\mathbf{p}- \mathbf{q}s_{3}}^{\dagger}\left(\mathbf{\sigma}\cdot\hat{\mathbf{M}}_{*}\right)_{s _{3}s_{4}}c_{\mathbf{p}s_{4}}. \tag{13}\]
A proper treatment of the frequency dependence of the interaction would require solving the Eliashberg equations [80; 81; 82]. However, because close to the metamagnetic phase transition the interaction strength has a similar frequency dependence as in the case of phonons, i.e., the attraction happens mostly at low frequency, we can approximate \(J(i\Omega_{n},\mathbf{q})\approx J(0,\mathbf{q})\). This gives us the form of the interactions as stated in Section VI. We note that there are additional corrections due to the fact that the metamagnons, unlike phonons, are massive excitations away from the metamagnetic transition. This would likely modify the low energy cutoff of the theory in a more rigorous treatment, but the effect should be small close to the metamagnetic transition.
To get the gap equation and obtain the expression for \(T_{c}\) we first need to recast the interaction in the singlet/triplet pairing channels using the Pauli matrix completeness relation
\[2\delta_{s_{1}s_{2}}\delta_{s_{3}s_{4}}=\sum_{\mu=0,x,y,z}\sigma_{s_{1}s_{3}}^{\mu}\sigma_{s_{2}s_{4}}^{\mu}=\sum_{\mu}(\sigma^{\mu}i\sigma^{y})_{s_{1}s_{3}}\left[(\sigma^{\mu}i\sigma^{y})^{\dagger}\right]_{s_{2}s_{4}} \tag{14}\]
that yields (using the four-momentum notation \((i\omega_{n},{\bf p})\), etc.)
\[H_{int}=\sum_{\mu,p,k}V_{\mu}(p;k)\left(c^{\dagger}_{-ks_{1}}(\sigma^{\mu}i\sigma^{y})_{s_{1}s_{3}}c^{\dagger}_{ks_{3}}\right)\left(c_{ps_{4}}(\sigma^{\mu}i\sigma^{y})^{*}_{s_{4}s_{2}}c_{-ps_{2}}\right) \tag{10}\]
with \(\mu=0\) and \(\mu=j=x,y,z\) corresponding to singlet and triplet pairing channels respectively, where
\[V_{0}(p;k) =-J_{x}^{(S)}(p-k)-J_{y}^{(S)}(p-k)-J_{z}^{(S)}(p-k)\] \[V_{x}(p;k) =-J_{x}^{(A)}(p-k)+J_{y}^{(A)}(p-k)+J_{z}^{(A)}(p-k)\] \[V_{y}(p;k) =J_{x}^{(A)}(p-k)-J_{y}^{(A)}(p-k)+J_{z}^{(A)}(p-k)\] \[V_{z}(p;k) =J_{x}^{(A)}(p-k)+J_{y}^{(A)}(p-k)-J_{z}^{(A)}(p-k) \tag{11}\]
and where \(J_{j}^{(S/A)}=J^{(S/A)}(q)\hat{M}_{*j}\) is proportional to the \(j^{th}\) component of \(\hat{\bf M}_{*}\), with
\[J^{(S/A)}(p-k)=\frac{J(p-k)\pm J(p+k)}{2}\,. \tag{12}\]
The functions \(J^{(S/A)}(p-k)\) can be decomposed into terms transforming according to particular irreducible representations of the crystalline point group symmetries. To leading order, this can be achieved by expanding \(J^{(S/A)}(p-k)\) in momentum and keeping only the leading term. This yields
\[J^{(S)}({\bf p-k}) \sim-\frac{\mu_{e}^{2}M_{*}^{2}\Omega_{*}(0)}{\Omega_{*}^{2}(0)+ \Gamma_{m}^{2}},\] \[J^{(A)}({\bf p-k}) \sim-\frac{2\mu_{e}^{2}M_{*}^{2}\Omega_{*}^{2}(0)}{(\Omega_{*}^{ 2}(0)+\Gamma_{m}^{2})^{2}}\sum_{j}\kappa_{j}p_{j}k_{j}. \tag{13}\]
We next introduce the gap functions via a Hubbard-Stratonovich transformation:
\[H_{\Delta}=\sum_{\mu,p}\Delta^{(\mu)}(p)(\sigma^{\mu}i\sigma^{y})_{s_{1}s_{2}}c^{\dagger}_{ps_{1}}c^{\dagger}_{-ps_{2}}. \tag{14}\]
The corresponding linearized gap equation (valid in the weak coupling approximation) reads
\[\Delta^{(\mu)}(p)=-T\sum_{k}V_{\mu}(p;k)\Pi_{\mu\mu^{\prime}}(k)\Delta^{(\mu^ {\prime})}(k) \tag{15}\]
where
\[\Pi_{\mu\mu^{\prime}}(k)=\text{Tr}\left[\sigma^{\mu}i\sigma^{y}G(k)(\sigma^{ \mu^{\prime}}i\sigma^{y})^{*}G^{T}(-k)\right] \tag{16}\]
is the particle-particle bubble (before the Matsubara sum), the trace is over spin indices, and
\[G(k)=\frac{1}{i\omega_{n}-\varepsilon({\bf k})-\mu_{e}{\bf H}\cdot\boldsymbol{\sigma}}=\frac{i\omega_{n}-\varepsilon({\bf k})+\mu_{e}{\bf H}\cdot\boldsymbol{\sigma}}{(i\omega_{n}-\varepsilon({\bf k}))^{2}-\mu_{e}^{2}H^{2}} \tag{17}\]
Figure 17: Angular magnetic field phase diagram for MSF-grown UTe\({}_{2}\) for \(\mu_{0}H\leq 70\) T. Lines are guides to the eye. Comparative data points from pulsed field studies on CVT samples are from refs. [14; 18].
is the Green's function that includes the Zeeman term.
For the special case of \(\mathbf{H}\) along the \(y\) axis, since we can assume that the Fermi surfaces are spin polarized close to the phase transition, we take \(G(k)\propto\frac{1}{2}(\pm 1+\sigma^{y})\) (with \(\pm\) corresponding to the two spin split Fermi surfaces). One can then check that the \(\mu=0,y\) channels vanish while
\[\Pi_{xx}(k)=\Pi_{zz}(k) =\frac{1}{\omega_{n}^{2}-(\varepsilon(\mathbf{k})\pm\mu_{e}H)^{2}}\] \[\Pi_{xz}(k)=\pm\Pi_{zx}(k) =\frac{i}{\omega_{n}^{2}-(\varepsilon(\mathbf{k})\pm\mu_{e}H)^{2}} \tag{101}\]
with the \(\pm\) in the denominator corresponding to the Fermi surface with spin aligned against and with the magnetic field, respectively. Note that the relevant interactions are thus \(V_{x}=V_{z}=J_{y}^{(A)}\), so that the \(\mathbf{d}=d(1,0,\pm i)\) vector is interestingly non-unitary, similar to that proposed in [59].
Combining with our knowledge of the \(\mathbf{d}\)-vector, we obtain the final equation for \(\Delta\) with \(\Delta^{(x)}(\mathbf{k})=-i\Delta^{(z)}(\mathbf{k})=\Delta(\mathbf{k})\) (summing over both spin polarized Fermi surfaces for additional factor of two, another factor of two from the eigenvalue of \(\Pi\) matrix, and neglecting form factors from Fermi surface shapes):
\[\Delta(\mathbf{p})=\frac{8\nu\mu_{e}^{2}M_{*}^{2}\Omega_{*}^{2}(0)}{(\Omega_{*}^{2}(0)+\Gamma_{m}^{2})^{2}}\log\frac{1.13\Lambda}{T_{c}}\sum_{j}\kappa_{j}\int p_{j}k_{j}\Delta(\mathbf{k})dS_{FS} \tag{102}\]
where the surface integral is taken over the Fermi surface. The solutions are thus \(\Delta(\mathbf{p})\propto p_{j}\), with different \(j\) belonging to different irreps. Possibilities include the \(B_{1u}+iB_{3u}\) combination of \(D_{2h}\) irreps for \(j=y\) (corresponding to the \(B_{u}\) irrep of \(C_{2h}\) in the presence of a magnetic field along the \(b\) axis) as proposed in [59]; or, for either \(j=x\) or \(j=z\), the \(A_{u}+iB_{2u}\) combination (\(A_{u}\) irrep of \(C_{2h}\)) that was considered in [52]. Regardless of the form of the order parameter, within weak coupling we obtain an expression for \(T_{c}\) in Eq. (4) with a parameter \(\kappa\) that accounts for any form factors resulting from the integration over the Fermi surface. As the form of the order parameter is still under debate, we simply treat \(\kappa\) as a phenomenological parameter.
### Model for SC1
To model SC1, let us assume that the FM (or AFM) fluctuation-induced interaction at zero external field has the form
\[H_{int}=V\left(c_{-ks_{1}}^{\dagger}(\hat{\mathbf{d}}(\mathbf{k})\cdot \boldsymbol{\sigma}i\sigma^{y})_{s_{1}s_{3}}c_{ks_{3}}^{\dagger}\right)\left(c _{ps_{4}}(\hat{\mathbf{d}}(\mathbf{p})\cdot\boldsymbol{\sigma}i\sigma^{y})_{s _{4}s_{2}}^{*}c_{-ps_{2}}\right) \tag{103}\]
where \(V\) is a constant. Unlike the SC2 model, here we assume that \(V\) is independent of the applied field and arises from intrinsic spin fluctuations present in the ground state of the system in the absence of any field. It is then easy to see that the self-consistent gap functions have the form \(\Delta(\mathbf{p})=\mathbf{d}(\mathbf{k})\cdot\boldsymbol{\sigma}i\sigma^{y}\). For simplicity, let us quantize spin along the direction of \(\mathbf{H}\), so that
\[G(k)=\left(\begin{array}{cc}\frac{1}{i\omega_{n}-\varepsilon(\mathbf{k})-h+ i\Gamma_{e}\text{sgn}\omega_{n}}&0\\ 0&\frac{1}{i\omega_{n}-\varepsilon(\mathbf{k})+h+i\Gamma_{e}\text{sgn}\omega_{ n}}\end{array}\right)\equiv\left(\begin{array}{cc}G_{\uparrow}(k)&0\\ 0&G_{\downarrow}(k)\end{array}\right) \tag{104}\]
where \(h=\mu_{e}H=\mu_{B}g_{e}H/2\) and we introduced the electron decay rate \(\Gamma_{e}\) to account for disorder. Evaluating the trace we then obtain the following self-consistency gap equation (cf. [83; 84]):
\[1=-VT\sum_{\mathbf{k}}\left[(|d_{x}(\mathbf{k})|^{2}+|d_{y}(\mathbf{k})|^{2})( G_{\uparrow}(k)G_{\uparrow}(-k)+G_{\downarrow}(k)G_{\downarrow}(-k))+|d_{z}( \mathbf{k})|^{2}(G_{\uparrow}(k)G_{\downarrow}(-k)+G_{\uparrow}(-k)G_{ \downarrow}(k))\right]. \tag{105}\]
We can generalize to any orientation of the magnetic field by using a coordinate-free notation:
\[1=-VT\sum_{\mathbf{k}}\left[|d_{\perp}(\mathbf{k})|^{2}(G_{\uparrow}(k)G_{ \uparrow}(-k)+G_{\downarrow}(k)G_{\downarrow}(-k))+|d_{\parallel}(\mathbf{k})| ^{2}(G_{\uparrow}(k)G_{\downarrow}(-k)+G_{\uparrow}(-k)G_{\downarrow}(k))\right]. \tag{106}\]
The Matsubara sums for the \(d_{\perp}\) part are the same as without magnetic field (these components are thus insensitive to the magnetic field), and in the absence of disorder we obtain the usual logarithmic term. With disorder, we get
\[\sum_{n}\int d\varepsilon G_{\uparrow}(k)G_{\uparrow}(-k)=\sum_{n}\int d \varepsilon G_{\downarrow}(k)G_{\downarrow}(-k)=\log\frac{1.13\Lambda}{T}-\psi \left(\frac{1}{2}+\frac{\Gamma_{e}}{2\pi T}\right)+\psi\left(\frac{1}{2} \right). \tag{107}\]
Evaluating the sum for the \(d_{\parallel}\) term, on the other hand, gives (assuming \(h\ll\Lambda\))
\[\sum_{n}\int d\varepsilon G_{\uparrow}(k)G_{\downarrow}(-k)=\log\frac{1.13 \Lambda}{T}-\text{Re}\left[\psi\left(\frac{1}{2}+\frac{\Gamma_{e}+ih}{2\pi T} \right)-\psi\left(\frac{1}{2}\right)\right] \tag{100}\]
where \(\psi\) is the digamma function. After doing the sum over \(\mathbf{k}\) in Eq. (101), we then have
\[1=-\tilde{V}\left[\log\frac{1.13\Lambda}{T}-(1-c(\theta,\phi))\left[\psi\left( \frac{1}{2}+\frac{\Gamma_{e}}{2\pi T}\right)-\psi\left(\frac{1}{2}\right) \right]-c(\theta,\phi)\text{Re}\left[\psi\left(\frac{1}{2}+\frac{\Gamma_{e}+ih }{2\pi T}\right)-\psi\left(\frac{1}{2}\right)\right]\right]\]
with
\[\tilde{V}=2V\nu\int|\mathbf{d}(\mathbf{k})|^{2}dS_{FS}\]
and
\[0<c(\theta,\phi)=\frac{\int|d_{\parallel}(\mathbf{k})|^{2}dS_{FS}}{\int| \mathbf{d}(\mathbf{k})|^{2}dS_{FS}}<1\]
is a form factor that we can treat as a phenomenological parameter that only depends on the direction of the field \(\mathbf{H}\). This is most conveniently re-written as
\[\log\frac{T_{c}(\mathbf{H},\Gamma_{e})}{T_{c0}}=-(1-c(\theta,\phi))F\left( \frac{\Gamma_{e}}{T_{c}}\right)-c(\theta,\phi)F\left(\frac{\Gamma_{e}+ih}{T_{c }}\right)\]
where \(F(x)=\text{Re}\left[\psi\left(\frac{1}{2}+\frac{x}{2\pi}\right)-\psi\left( \frac{1}{2}\right)\right]\), leading to the expression in the main text that was used to obtain the plots in Fig. 9.
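This implicit equation for \(T_{c}(\mathbf{H},\Gamma_{e})\) is straightforward to evaluate numerically. The sketch below is our own minimal implementation (the parameter values in the example call are arbitrary, and the grid-scan root search is just one simple choice; `mpmath` is used because its digamma function accepts complex arguments):

```python
import numpy as np
from mpmath import digamma

def F(x):
    """F(x) = Re[psi(1/2 + x/(2*pi)) - psi(1/2)], for real or complex x."""
    return float((digamma(0.5 + x / (2 * np.pi)) - digamma(0.5)).real)

def tc(h, gamma_e, c, tc0=1.0, n_grid=4000):
    """Scan T downward from Tc0 and return the first self-consistent solution
    of log(Tc/Tc0) = -(1-c) F(Gamma_e/Tc) - c F((Gamma_e + i h)/Tc).
    h and gamma_e are in the same (temperature) units as tc0; 0 < c < 1."""
    for t in np.linspace(tc0, 1e-3 * tc0, n_grid):
        lhs = np.log(t / tc0)
        rhs = -(1 - c) * F(gamma_e / t) - c * F(complex(gamma_e, h) / t)
        if lhs <= rhs:          # residual changes sign: self-consistent Tc
            return t
    return 0.0                  # superconductivity fully suppressed

# Example: pair breaking by field and weak disorder for c = 0.5 (arbitrary values).
print(tc(h=0.3, gamma_e=0.05, c=0.5))
```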
|
2310.14979 | ACTOR: Active Learning with Annotator-specific Classification Heads to
Embrace Human Label Variation | Label aggregation such as majority voting is commonly used to resolve
annotator disagreement in dataset creation. However, this may disregard
minority values and opinions. Recent studies indicate that learning from
individual annotations outperforms learning from aggregated labels, though they
require a considerable amount of annotation. Active learning, as an annotation
cost-saving strategy, has not been fully explored in the context of learning
from disagreement. We show that in the active learning setting, a multi-head
model performs significantly better than a single-head model in terms of
uncertainty estimation. By designing and evaluating acquisition functions with
annotator-specific heads on two datasets, we show that group-level entropy
works generally well on both datasets. Importantly, it achieves performance in
terms of both prediction and uncertainty estimation comparable to full-scale
training from disagreement, while saving up to 70% of the annotation budget. | Xinpeng Wang, Barbara Plank | 2023-10-23T14:26:43Z | http://arxiv.org/abs/2310.14979v1 | ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation
###### Abstract
Label aggregation such as majority voting is commonly used to resolve annotator disagreement in dataset creation. However, this may disregard minority values and opinions. Recent studies indicate that learning from individual annotations outperforms learning from aggregated labels, though they require a considerable amount of annotation. Active learning, as an annotation cost-saving strategy, has not been fully explored in the context of learning from disagreement. We show that in the active learning setting, a multi-head model performs significantly better than a single-head model in terms of uncertainty estimation. By designing and evaluating acquisition functions with annotator-specific heads on two datasets, we show that group-level entropy works generally well on both datasets. Importantly, it achieves performance in terms of both prediction and uncertainty estimation comparable to full-scale training from disagreement, while saving up to 70% of the annotation budget.
## 1 Introduction
An important aspect of creating a dataset is asking for multiple annotations and aggregating them in order to derive a single _ground truth_ label. Aggregating annotations, however, implies a single golden ground truth, which is not applicable to many subjective tasks such as hate speech detection (Ovesdotter Alm, 2011). A human's judgement on subjective tasks can be influenced by their perspective and beliefs or cultural background (Waseem et al., 2021; Sap et al., 2022). When annotators disagree, aggregating their annotations by majority vote could result in the viewpoints of the minority being overlooked (Suresh and Guttag, 2019).
In order to address this issue, many works have been proposed to directly learn from the annotation disagreements in subjective tasks. There are two major approaches to achieving that: learning from the _soft label_(Peterson et al., 2019; Uma et al., 2020; Fornaciari et al., 2021) and learning from the _hard label_ of individual annotators (Cohn and Specia, 2013; Rodrigues and Pereira, 2018; Davani et al., 2022).
In a recent work, Davani et al. (2022) show that modelling the individual annotators by adding annotator-specific classification heads in a multi-task setup outperforms the traditional approach that learns from a majority vote. However, training such a model requires a huge amount of data with multiple annotations to model the opinions and beliefs of the individual annotators.
On another line, Active Learning (AL) is a framework that allows learning from limited labelled data by querying the data to be annotated. In this paper, we propose to take the best of both worlds: active learning and human label variation, to mitigate the high cost of the annotation budget needed for training the model. In particular, we propose a novel active learning setting, where the multi-head model actively selects the annotator and the sample to be labelled. Our results show this effectively reduces annotation costs while at the same time allowing for modelling individual perspectives.
Figure 1: For each sample that needs to be labelled, our model actively selects specific annotators for annotations to learn from the label variation.
Key FindingsWe made several key observations:
* The multi-head model works significantly better than the single-head model on uncertainty estimation in the active learning setting.
* The use of group-level entropy is generally recommended. Individual-level entropy methods perform differently depending on the dataset properties.
* The multi-head model achieves a performance comparable to full-scale training with only around 30% annotation budget.
## 2 Related Work
### Learning from Disagreement
There is a growing body of work that studies irreconcilable differences between annotations (Plank et al., 2014; Aroyo and Welty, 2015; Pavlick and Kwiatkowski, 2019; Uma et al., 2021). One line of research aims at resolving the variation by aggregation or filtering (Reidsma and Carletta, 2008; Beigman Klebanov et al., 2008; Hovy et al., 2013; Gordon et al., 2021). Another line of research tries to embrace the variance by directly learning from the raw annotations (Rodrigues and Pereira, 2018; Peterson et al., 2019; Fornaciari et al., 2021; Davani et al., 2022), which is the focus of our paper.
### Active Learning
In active learning, many different methods for selecting data have been proposed to save annotation cost, such as uncertainty sampling (Lewis, 1995) based on entropy (Dagan and Engelson, 1995) or approximate Bayesian inference (Gal and Ghahramani, 2016). Other approaches focus on the diversity and informativeness of the sampled data (Sener and Savarese, 2017; Gissin and Shalev-Shwartz, 2019; Zhang and Plank, 2021). Herde et al. (2021) proposed a probabilistic active learning framework in a multi-annotator setting, where the disagreement is attributed to errors. Recent work by Baumler et al. (2023) accepted the disagreement in the active learning setting, and they showed improvement over the passive learning setting using the single-head model. Our work shows the advantage of the multi-head model and compares it with traditional single-head active learning methods.
## 3 Method
Multi-head ModelWe use a multi-head model where each head corresponds to one unique annotator, following Davani et al. (2022). In the fine-tuning stage, annotations are fed to the corresponding annotator heads, adding their losses to the overall loss. During testing, the F1 score is calculated by comparing the majority votes of the annotator-specific heads with the majority votes of the annotations.
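A minimal PyTorch sketch of this architecture is given below. It is our own illustration of the described setup (the hyperparameters, the [CLS] pooling choice, and the loss routing are assumptions, not the authors' released code):

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class MultiHeadClassifier(nn.Module):
    """Shared BERT encoder with one classification head per annotator."""

    def __init__(self, n_annotators, n_classes=2, encoder="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder)
        hidden = self.encoder.config.hidden_size
        self.heads = nn.ModuleList(nn.Linear(hidden, n_classes)
                                   for _ in range(n_annotators))

    def forward(self, input_ids, attention_mask):
        cls = self.encoder(input_ids,
                           attention_mask=attention_mask).last_hidden_state[:, 0]
        # One set of logits per annotator head: shape (n_heads, batch, n_classes).
        return torch.stack([head(cls) for head in self.heads])

def multihead_loss(logits, labels, head_ids, class_weights=None):
    """Cross-entropy of each (sample, annotation) pair, routed to the head of
    the annotator who produced it; head_ids is a LongTensor of head indices."""
    picked = logits[head_ids, torch.arange(labels.size(0))]  # (batch, n_classes)
    return nn.CrossEntropyLoss(weight=class_weights)(picked, labels)
```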
### Multi-head Acquisition Functions
We study five acquisition functions for the multi-head model. Since our model learns directly from the annotations, we care about which annotator should give the label. So we query the instance-annotation pair \((x_{i},y_{i}^{a})\) together with its annotator ID \(a\). In this way, our data is duplicated by the number of annotations available.
Random Sampling (Rand.)We conduct random sampling as a baseline acquisition method where we randomly sample \(K\) (_data, annotation, annotator ID_) triples from the unlabeled data pool \(U\) at each active learning iteration.
Individual-level Entropy (Indi.)Intuitively, the annotator-specific heads model the corresponding annotators. Therefore, we can calculate the entropy of the classification head to measure the specific annotator's uncertainty. Given the logits of the head \(a\): \(\mathbf{z}^{a}=[z_{1}^{a},...,z_{n}^{a}]\), the entropy is calculated as following: \(H_{indi}(p^{a}|x)=-\sum_{i=1}^{n}p_{i}^{a}(x)\log(p_{i}^{a}(x))\), where \(p_{i}^{a}(x)=\mathrm{softmax}(z_{i}^{a}(x))\). Then we choose the (_instance_, _annotator_) pair with the highest entropy: \(\mathrm{argmax}_{x\in U,a\in A}H_{indi}(p^{a}|x)\), where \(U\) denotes the unlabeled set and \(A\) denotes the annotator pool. We compute entropy only for the remaining annotators who have not provided annotations for the instance.
Group-level Entropy (Group)Instead of looking at the individual's uncertainty, we can also query the data by considering the group-level uncertainty. One way to represent the uncertainty of the group on a sample is to calculate the entropy based on the aggregate of each annotator-specific head's output. Therefore, we normalize and sum the logits of each head at the group level: \(\mathbf{z}_{group}=[z_{1},...,z_{n}]=\sum_{h=1}^{H}\mathbf{z}_{norm}^{h}\), and calculate the group-level entropy as follows: \(H_{group}(x)=-\sum_{i=1}^{n}p_{i}(x)\log(p_{i}(x))\), where \(p_{i}(x)=\mathrm{softmax}(z_{i}(x))\). We then query the data with the highest uncertainty.
Vote Variance (Vote)Another way to measure the uncertainty among a group is by measuring
the variance of the votes. Given the prediction \(y^{h}\) of classification head \(h\), we calculate the vote variance: \(\mathrm{Var}=\frac{1}{H}\sum_{i=1}^{H}(y^{h}-\mu)^{2}\), where \(\mu=\frac{1}{H}\sum_{h=1}^{H}y^{h}\). This approach can be applied to binary classification or regression problems.
**Mixture of Group and Indi. Entropy (Mix.)** We also consider a variant which combines the group-level and individual-level entropy by simply adding the two: \(H_{mix}=H_{indi}+H_{group}\).
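To make the scoring concrete, the following NumPy sketch evaluates all four uncertainty-based scores from the per-head logits of a candidate pool. It is a schematic re-implementation under our own assumptions (in particular, we take the per-head logit normalization \(\mathbf{z}_{norm}^{h}\) to be an L2 normalization, which the text does not fully specify):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def acquisition_scores(logits):
    """logits: array of shape (n_heads, n_pool, n_classes).
    Returns Indi. (per head and sample), Group, Vote, and Mix. scores."""
    p = softmax(logits)                                  # per-head probabilities
    h_indi = entropy(p)                                  # (n_heads, n_pool)
    z_norm = logits / np.linalg.norm(logits, axis=-1, keepdims=True)
    h_group = entropy(softmax(z_norm.sum(axis=0)))       # (n_pool,)
    votes = p.argmax(axis=-1)                            # head predictions
    vote_var = votes.var(axis=0)                         # (n_pool,)
    h_mix = h_indi + h_group                             # broadcasts over heads
    return h_indi, h_group, vote_var, h_mix
```

A query step then selects the \((x_{i},a)\) pair maximizing the individual-level score (restricted to annotators who have not yet labelled \(x_{i}\)), or the instances maximizing one of the group-level scores.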
## 4 Experiments
DatasetWe selected two distinct hate speech datasets for our experiments: **Hate Speech on Brexit (HS-Brexit)**(Akhtar et al., 2021) and **Gab Hate Corpus (GHC)**(Kennedy et al., 2022). We split the raw annotation dataset according to the split of the aggregated version dataset provided. The **HS-Brexit** dataset includes 1,120 English tweets relating to Brexit and immigration, where a total of six individuals were involved in annotating each tweet. As each tweet contains all six annotations, we refer to HS-Brexit as _densely_ annotated. In **GHC**, 27,665 social-media posts were collected from the public corpus of Gab.com (Gaffney, 2018). From a set of 18 annotators, each instance gets at least three annotations. Therefore, GHC is _sparsely_ annotated. Both datasets contain binary labels \(y\in[0,1]\) and have almost the same positive/negative raw annotation ratio (\(0.15\)).
**Single-head Model Baselines** We implement four acquisition methods for single-head model active learning for comparison: Random sampling **(Rand.)**, Max-Entropy **(Ent.; Dagan and Engelson, 1995)**, Bayesian Active Learning by Disagreement **(BALD; Houlsby et al., 2011)** and Discriminative Active Learning **(DAL; Gissin and Shalev-Shwartz, 2019)**. We compare them with the multi-head approach with random sampling which has an average performance among the five multi-head acquisition methods we investigated.
Two different single-head model approaches are considered: Learning from the Majority Vote **(Single-Majority)** and Learning from Raw annotations **(Single-Annotation)**. In the first setting, all annotators' annotations are queried, and the majority vote is used to train the model. In the second setting, we train the model with individual annotations without aggregation, following the repeated labelling approach by Sheng et al. (2008).
**Experimental Setup** We follow the setup of Davani et al. (2022) for modelling and evaluation. We initialize the layers before the heads with a BERT-base model (Devlin et al., 2019). To balance training data, we do oversampling following Kennedy et al. (2022). Moreover, we use class weights on the loss function for multi-head model training, which makes it more stable. It is not used for the single-head model as it degrades performance.
To evaluate the model, we first report the F1 score against the majority vote. Secondly, we also compute individual F1 scores, measuring annotator-specific heads against annotator labels. Thirdly and importantly, we are interested in gauging how well the model can predict the data uncertainty
Figure 2: Comparison of the Multi-head model and Single-Majority (upper row) and Single-Annotation (bottom row). Results are averaged over 4 runs. All the methods have the same annotation cost of the seed dataset and the queried batch at each round.
by calculating the Pearson correlation between the model's uncertainty and the annotation disagreement measured by the variance of the annotations on the same instance. For the single-head model, we use _Prediction Softmax Probability_ proposed by Hendrycks and Gimpel (2017) as the uncertainty estimation of the model. For the multi-head model, we follow Davani et al. (2022) and calculate the variance of the prediction of the heads as the model's uncertainty.
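Concretely, the uncertainty-alignment metric for the multi-head model can be computed as in the following sketch (our own illustration; shapes and names are assumptions):

```python
import numpy as np
from scipy.stats import pearsonr

def uncertainty_alignment(head_preds, annotations):
    """Pearson correlation between the model uncertainty (variance of the
    per-head predictions) and the human disagreement (variance of the raw
    annotations), both computed per instance. head_preds: (n_heads, n_samples),
    annotations: (n_annotators, n_samples), binary labels."""
    model_unc = head_preds.var(axis=0)
    human_dis = annotations.var(axis=0)
    r, _ = pearsonr(model_unc, human_dis)
    return r
```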
## 5 Result
Single-head vs Multi-head ModelFigure 2 shows the comparison of the multi-head model and the single-head model during the active learning process. In the upper row, we compare the _multi-head_ approach with the _single-majority_ approach on majority F1 score and uncertainty estimation. In terms of predicting the majority vote, the _multi-head_ model performs on par with the best-performing _single-head_ method on both datasets, such as BALD. For uncertainty estimation measured against annotator disagreement, the _multi-head_ model outperforms the _single-head_ model by a large margin.
We make the same observation when comparing with the _single-annotation_ model, shown in the bottom row. Therefore, we recommend using a _multi-head_ model in a subjective task where humans may disagree and uncertainty estimation is important.
Label Diversity vs. Sample DiversityFor _group-level_ uncertainty-based acquisition functions (**Group** and **Vote**), we tested two approaches to determine which annotator to query from: _Label Diversity First_ and _Sample Diversity First_. In _Label Diversity First_, we query from all the available annotators to prioritize the label diversity of a single sample. In the _Sample Diversity First_ approach, we only randomly choose one of the annotators for annotation. Given the same annotation budget for each annotation round, _Label Diversity First_ queries fewer samples but more annotations per sample than the _Sample Diversity First_ approach. In our preliminary results, _Label Diversity First_ shows stronger performance in general. Therefore, we adopt this approach for the following experiments.
Comparison of Multi-head acquisition functionsTo compare different strategies to query for annotations, we compare the five proposed acquisition functions from Section 3.1 in Fig. 3. **Group** performs generally well on both datasets. We also see a trend here that HS-Brexit favours acquisition functions based on _group-level_ uncertainty (**Vote**), while _individual-level_ uncertainty (**Indi.**) works better on the GHC dataset. For HS-Brexit, **Group** is the best-performing method based on the majority F1 score. When evaluated on raw annotations (F1 indi. score), both the vote-variance and group-level entropy methods perform well. For uncertainty estimation, random sampling is slightly better than the group-level entropy approach. On the GHC dataset, both **Indi.** and **Group** perform well on uncertainty estimation and raw annotation prediction. However, we do not see an obvious difference among the acquisition functions on the majority vote F1 score.
Annotation CostIn terms of saving annotation cost, we see that the F1 score slowly goes into a
Figure 3: Comparison of multi-head acquisition functions. Results are averaged over 4 runs. The group-level entropy method (**Group**) performs generally well on both datasets on all three metrics. Individual-level uncertainty (**Indi.**) only performs well on GHC.
plateau after around 25 rounds on both datasets in Fig 3, which is around 30% usage of the overall dataset (both datasets are fully labelled at around 90 rounds). For example, _Vote_ achieves the majority F1 score of 52.3, which is 94% of the performance (55.8) of the full-scale training (round 90).
## 6 Conclusion
We presented an active learning framework that embraces human label variation by modelling the annotator with annotator-specific classification heads, which are used to estimate the uncertainty at the individual annotator level and the group level. We first showed that a multi-head model is a better choice over a single-head model in the active learning setting, especially for uncertainty estimation. We then designed and tested five acquisition functions for the annotator-heads model on two datasets. We found that group-level entropy works generally well on both datasets and is recommended. Depending on the dataset properties, the individual-level entropy method performs differently.
## Limitations
The multi-head approach is only viable when the annotator IDs are available during the active learning process, since we need to ask the specific annotator for labelling. Furthermore, the annotators should remain available for a period of time in order to provide enough annotations to be modelled by the specific head successfully. Note that we simulate the AL setup here. The multi-head approach is good at estimating the uncertainty based on the annotators it trains on; however, whether this uncertainty also aligns with a different pool of people is still an open question.
Further analysis is needed to understand why GHC and HS-Brexit favour different acquisition functions. Besides the difference between _dense_ and _sparse_ annotation, factors such as the _diversity_ of the topics covered and annotator-specific annotation statistics are also important, which we leave as future work.
## Acknowledgements
We thank the anonymous reviewers for their feedback. This research is supported by the ERC Consolidator Grant DIALECT 101043235.
|
2305.06113 | Thermal masses and trapped-ion quantum spin models: a self-consistent
approach to Yukawa-type interactions in the $λ\!φ^4$ model | The quantum simulation of magnetism in trapped-ion systems makes use of the
crystal vibrations to mediate pairwise interactions between spins, which are
encoded in the internal electronic states of the ions, and measured in
experiments that probe the real-time dynamics. These interactions can be
accounted for by a long-wavelength relativistic theory, where the phonons are
described by a coarse-grained Klein-Gordon field $\phi(x)$ locally coupled to
the spins that acts as a carrier, leading to an analogue of pion-mediated
Yukawa interactions. In the vicinity of a structural transition of the ion
crystal, one must go beyond the Klein-Gordon fields, and include additional
$\lambda\phi^4$ terms responsible for phonon-phonon scattering. This leads to
quantum effects that can be expressed by Feynman loop integrals that modify the
range of the Yukawa-type spin interactions; an effect that could be used to
probe the underlying fixed point of this quantum field theory (QFT).
Unfortunately, the rigidity of the trapped-ion crystal makes it challenging to
observe genuine quantum effects, such as the flow of the critical point with
the quartic coupling $\lambda$. We hereby show that thermal effects, which can
be controlled by laser cooling, can unveil this flow through the appearance of
thermal masses in interacting QFTs. We perform self-consistent calculations
that resum certain Feynman diagrams and, additionally, go beyond mean-field
theory to predict how measurements on the trapped-ion spin system can probe key
properties of the $\lambda\phi^4$ QFT. | Pablo Viñas Martínez, Esperanza López, Alejandro Bermudez | 2023-05-10T12:59:07Z | http://arxiv.org/abs/2305.06113v4 | # Thermal masses and trapped-ion quantum spin models:
###### Abstract
The quantum simulation of magnetism in trapped-ion systems makes use of the crystal vibrations to mediate pairwise interactions between spins, which are encoded in the internal electronic states of the ions, and measured in experiments that probe the real-time dynamics. These interactions can be accounted for by a long-wavelength relativistic theory, where the phonons are described by a coarse-grained Klein-Gordon field \(\phi(x)\) locally coupled to the spins that acts as a carrier, leading to an analogue of pion-mediated Yukawa interactions. In the vicinity of a structural transition of the ion crystal, one must go beyond the Klein-Gordon fields, and include additional \(\lambda\phi^{4}\) terms responsible for phonon-phonon scattering. This leads to quantum effects that can be expressed by Feynman loop integrals that modify the range of the Yukawa-type spin interactions; an effect that could be used to probe the underlying fixed point of this quantum field theory (QFT). Unfortunately, the rigidity of the trapped-ion crystal makes it challenging to observe genuine quantum effects, such as the flow of the critical point with the quartic coupling \(\lambda\). We hereby show that thermal effects, which can be controlled by laser cooling, can unveil this flow through the appearance of thermal masses in interacting QFTs. We perform self-consistent calculations that resum certain Feynman diagrams and, additionally, go beyond mean-field theory to predict how measurements on the trapped-ion spin system can probe key properties of the \(\lambda\phi^{4}\) QFT.
###### Contents
* I **Introduction**
* II **Quantum simulation of Yukawa-type models**
* II.1 From trapped ions to \(\lambda\phi^{4}\) quantum fields
* II.2 Yukawa-type interactions and real-time spin dynamics
* II.3 Effect of \(\lambda\phi^{4}\) term and non-zero temperatures
* III **Self-consistency beyond mean field theory**
* III.1 Perturbative generating functional of \(\lambda\phi^{4}\) fields
* III.2 Feynman diagrams and self-consistent equations
* III.3 Non-zero temperature and thermal field theories
* IV **Critical line and trapped-ion spin-spin couplings**
* IV.1 Numerical estimate of the critical line
* IV.2 Estimates for the trapped-ion quantum simulator
* V **Conclusions and outlook**
* A **Thermal effects and Matsubara mode sums**
* B **Analytical estimate of the critical ratio**
* C **Critical line crossings**
## I Introduction
The field of quantum technologies, which aims at developing quantum devices that provide novel functionalities with a quantifiable advantage with respect to their classical counterparts, has become a promising area of research in both the academic sector and the technological industry, e.g. [1; 2]. Regarding the application of these technologies to quantum computation [3], there has been remarkable recent progress [4; 5; 6; 7; 8] towards the long-term goal of a large-scale fault-tolerant error-corrected device [9] that can outperform classical computers in relevant tasks [10]. However, before these large-scale devices become available, one is restricted to operating with small- to mid-scale prototypes in the so-called noisy intermediate-scale quantum (NISQ) era [11]. Here, one aims at realizing specific circuits, or even prototype quantum algorithms [12], on the largest possible number of qubits and gates, evading the overhead of active quantum error correction. Remarkably, even in the presence of noise, these NISQ devices have already enabled the demonstration of quantum advantage [13; 14], that is, using a quantum device to solve a problem that would require an unfeasible amount of time with a classical machine. A current research goal is to extend these demonstrations of quantum advantage to problems that can be of practical relevance in various areas of science.
In this respect, the simulation of quantum many-body models is a problem of considerable interest in different disciplines, ranging from quantum chemistry, to condensed matter and high-energy physics. As originally emphasised by Richard Feynman [15], the characteristic exponential scaling of the size of the Hilbert space of a quantum many-body system hints at the inherent complexity of this type of problems, and the inefficiency of a brute-force numerical approach based on classical computers. Although various numerical methods have been developed over the years to overcome these difficulties, there are still many open questions regarding real-time dynamics, finite-fermion densities and, generally, strongly-correlated phenomena. The idea of quantum simulations [16; 17; 18] is to use a quantum device instead of a classical one, which can be controlled so as to reproduce the equilibrium properties and dynamics of the model of interest. This has already found several applications in the aforementioned areas [19; 20; 21; 22; 23].
A quantum simulation proceeds by, first, encoding the degrees of freedom of the target model into those of the quantum device, and then preparing a specific initial state. This can be achieved in two ways. One can use the same building blocks as in quantum computers, i.e. qubits, which are then acted upon by a sequence of quantum logic gates. This sequence reproduces approximately the real-time dynamics of the model under a Suzuki-Trotter expansion [24], and leads to the so-called digital quantum simulations [25]. Note that, in spite of working with qubits, one can simulate fermionic and bosonic degrees of freedom at the expense of an overhead in the number of gates and/or qubit-register size, e.g. [26]. Alternatively, one can use special-purpose quantum simulators that already have spins, fermions, bosons, or combinations thereof, as the relevant degrees of freedom. This advantage comes at the price of certain limitations in the range of models that can be simulated since, in general, one cannot realize an arbitrary unitary on the exponentially-large Hilbert space. In fact, these quantum simulators are not acted upon by concatenating gates drawn from a universal gate set, but rather by letting the system evolve continuously in time under approximate effective Hamiltonians with a restricted set of terms. By tuning the strength of these terms, one can mimic approximately the target model in a specific parameter regime. These devices are known as analog quantum simulators [27].
In contrast to the accumulation of errors in digital quantum simulators, which arise from both the imperfect operations in a gate sequence and the approximations inherent to the Suzuki-Trotter expansion, it is more difficult to account for the growth of errors in analog quantum simulators. Nonetheless, one expects that their accumulation in time will not be as detrimental as in a generic digital approach, especially when one is interested in recovering intensive observables [16]. Accordingly, the common expectation is that one will be able to demonstrate quantum advantage using near-term experiments with these analog quantum simulators [20; 21]. In fact, some experiments have already been able to track real-time dynamics of a many-body model, going beyond the capabilities of current classical computers with state-of-the-art numerical algorithms [28]. Even if it is difficult to provide mathematical proofs of quantum advantage, as one is departing from the quantum-computing framework in which the scaling of required resources for a target accuracy is routinely estimated, there has been recent progress in this direction [29; 30].
In this manuscript, we are interested in the use of trapped-ion quantum simulators [21; 22] for high-energy physics, which leverages the developments of this experimental platform for frequency standards and quantum computing. The potential advantage of working with a larger number of laser-cooled ions for frequency standards and clocks led to the development of linear Paul traps [31]. Here, one can store ion chains along the symmetry axis of the trap while, simultaneously, minimizing micromotion and the associated shifts of the atomic transitions used in the clocks [32]. As first realised in [33], these ion crystals can also serve as registers to realize a quantum computer, in which the quantum information is encoded in the electronic levels of the ions, and processed by a universal gate set that uses additional lasers that also excite their collective vibrations. Building on this seminal proposal, the subsequent experimental and theoretical efforts have turned trapped ions into one of the leading platforms in the quest of building a large-scale fault-tolerant quantum computer [34]. In addition to the experiments that we have already mentioned [4; 7], which contribute to the continuous effort [35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55] towards trapped-ion quantum error correction [48], a variety of NISQ algorithms have also been realized over the years [49; 50; 51; 52; 53; 54; 55]. The success of these implementations relies on the very accurate performance of the universal gate set [56; 57; 58; 59; 60; 61]. This also includes high-accuracy two-qubit gates which, typically, are the most challenging part of the gate set in any platform. Moreover, either by exploiting ion shuttling [62; 63; 64] or individually-addressed laser beams [65; 66; 67], one can implement these entangling gates between arbitrary qubit pairs in the crystal. This provides a programmable connectivity that has enabled the demonstration of complex protocols with current NISQ error rates, e.g. [4; 7], which would not be possible if the connectivity were local.
As advanced before, trapped-ion quantum simulators benefit from all of these developments. In the digital approach, the high-fidelity gate set has been exploited for quantum simulations of small-scale spin models in condensed matter, either following a Hamiltonian [68; 69] or a Lindbladian [70; 71] time evolution. Additionally, digital quantum simulations of a lattice gauge theory [72; 73; 74] have also been performed [75; 76], in which the gauge fields are eliminated by exploiting Gauss' law, whereas the matter fermions are simulated by qubits/spins. This elimination leads to an effective spin model with long-range interactions mediated by the gauge fields. We note that the digital methods can also be combined with classical variational methods in a hybrid approach, which has found applications in quantum chemistry [77; 78] and also lattice gauge theories [79]. Particularly in the context of lattice field theories, where one eventually aims at recovering the continuum limit, increasing the size of these quantum simulators will be important in the near future. In addition, including both the matter particles and interaction carriers in the quantum simulation will also be important, allowing one to get closer to the higher-dimensional non-Abelian gauge theories of the standard model, which is one of the longer-term goals [80; 81; 82; 83; 84; 85; 86; 87]. From this perspective, the prospects of analog quantum simulators are very promising, as the milder error accumulation mentioned above may allow one to simulate larger and more complex models. Moreover, the different microscopic constituents available in these simulators may be exploited to efficiently encode the matter particles and interaction carriers. On the other hand, these simulators are more limited in the type of achievable models, i.e. non-universality, which can hinder the quantum simulation of the full standard model of particle physics.
Let us briefly review the advances in analog quantum simulations based on trapped ions that are relevant for the present work. Regarding the spin models mentioned above, instead of using laser-driven schemes for single- and two-qubit gates [88; 89] and concatenating them following specific Suzuki-Trotter circuits, one may alternatively obtain effective spin models for the whole ion crystal by acting with
always-on far-detuned lasers, which typically lead to long-range spin-spin interactions mediated by the phonons [90]. This idea has turned out to be extremely fruitful for the analog quantum simulation of magnetism [91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101]. In all of these experiments, the focus lies on the interacting spins, while the phonons are mere auxiliary degrees of freedom. In contrast, as advocated in the recent works [102; 103], including these bosonic degrees of freedom in the quantum simulation offers a more complete picture, and provides a neat understanding of some characteristics of the spin models. More importantly, it allows one to move ahead and connect to the phenomenon of renormalization and real-time dynamics of QFTs in high-energy physics. Here, the phonons are represented by a relativistic scalar field theory that mediates interactions between the matter spins, having a clear analogy with the aforementioned interactions carried by bosons in gauge theories.
As first noted in the experimental works [93, 94], by controlling the detuning of the laser beams used to generate the long-range interactions in a trapped-ion crystal, the decay of the spin-spin couplings with the inter-ion distance can be approximately fitted to a power law with a tunable exponent. A more-accurate microscopic description without any fitting parameter is that of a dipolar decay with an additional exponential tail that is controlled by the value of the laser detuning [104, 105]. The origin of this particular distance dependence is clarified by a long-wavelength description of the model [103], in which the phonons are described as quantised sound waves in terms of a relativistic Klein-Gordon field \(\phi(x)\)[106, 107]. This scalar field has a Yukawa-type coupling to the spins, and can be used to mediate the spin-spin interactions and generate entanglement during a real-time evolution [103, 108]. As discussed in detail in [103], the exponential part of the spin-spin interactions is the typical contribution of a Yukawa interaction in \(D=1+1\) dimensions [109, 110]. On the other hand, the physical lattice regularization of this QFT stemming from the trapped-ion microscopic model also includes long-range couplings that modify the dispersion relation. This leads to a branch-cut discontinuity that is responsible for an additional dipolar-decaying part in the spin-spin couplings. As discussed in [103], the effective coarse-grained description of these Yukawa-type interactions is very accurate already for moderate-size chains with tens of ions, considering realistic parameters and inhomogeneous ion crystals.
Since the distance dependence of the spin-spin couplings can be inferred from various experimental techniques that measure the real-time dynamics of the effective spin model [93, 95, 96], it follows that one could use the spins as probes of the underlying effective relativistic QFT. One can thus use experiments in which the crystal sizes are sufficiently large to admit a continuum limit as quantum simulators of a relativistic Yukawa-type problem. In analogy to the simulations of lattice gauge theories [72, 73, 74] mentioned above, the trapped-ion spins would mimic the matter degrees of freedom since, under a Jordan-Wigner transformation [111], they can be interpreted as fermionic matter. In the present case, this matter would be quenched, and sit on the sites of a real physical lattice corresponding to the ion crystal. Instead of having a gauge field to mediate the interactions, which must be defined on the links of the lattice to allow for a local symmetry, here one has a global inversion symmetry of a real scalar field \(\phi(x)\) defined on the lattice sites. Note that the continuum limit is not recovered by sending the physical lattice spacing to zero, but rather by working in parameter regimes where only the long-wavelength properties are of relevance. These regimes correspond to the vicinity of a second-order phase transition.
The interesting question is to push this QFT analogy further, and move to situations beyond the simple Klein-Gordon QFT. Inspired by the fermion-Higgs sector of the electroweak theory [112], our trapped-ion quantum simulator would become more interesting in the presence of \(\lambda\phi^{4}\) interactions [113]. As discussed in [103], this type of self-interactions become the relevant ones in the vicinity of a structural phase transition [114], and lead to a variety of scattering events for the bosonic excitations of the scalar field. i.e. the phonons in the trapped-ion crystal. During the propagation of the bosons between a pair of spins, scattering takes place modifying the Yukawa-type interactions. In the language of QFTs, this scattering changes the boson Feynman propagator due to renormalization of the low-energy modes by the high-energy ones. These quantum effects can be expressed in terms of Feynman loop diagrams and lead to a physical mass for the scalar bosons that differs from the bare original mass. Interestingly, this can affect the effective spin-spin interactions of the trapped-ion quantum simulator as one approaches the structural phase transition, opening an original route for probing the underlying effective QFT via the real-time dynamics of the spins. Unfortunately, for typical realizations, the rigidity of the ion chain tends to mask these quantum effects [103, 115], such that the flow of the critical point with the quartic coupling \(\lambda\) is predicted to be very small. In the present manuscript, we argue that one can overcome this limitation and unveil this flow by exploiting thermal effects via the so-called thermal masses and the phenomenon of symmetry restoration in QFTs [116, 117]. We present a non-perturbative self-consistent approach to predict the Yukawa interactions at finite temperatures, and discuss how these can be used to probe the underlying thermal QFT.
Our presentation is organised as follows. In Sec. II, we discuss how to describe a structural phase transition in a trapped-ion chain by an effective \(\lambda\phi^{4}\) QFT, and how the trapped-ion quantum simulators of spin models can be understood in light of this QFT as a Yukawa-type problem of interactions mediated by a scalar field. We finish by discussing qualitatively the effects of self-interactions of the scalar field and non-zero temperatures in these Yukawa-type spin-spin interactions. In Sec. III, we derive a set of equations to deal quantitatively with self-interactions and non-zero temperatures in this field theory, which requires resuming certain types of Feynman diagrams in a non-perturbative approach. We emphasise how to deal with ultraviolet and infrared divergences in this approach, both of which arise for the low spacetime dimensions relevant to the trapped-ion problem. In Sec. IV, we discuss in detail how to adapt these techniques to the specific details of trapped ions, and describe our numerical approach to solve the aforementioned equations. Finally, we discuss how these results predict a temperature-dependent contribution to the physical
mass of the scalar particles, which changes the range of the Yukawa-type spin-spin interactions, giving concrete predictions for a realistic trapped-ion experiment. Finally, in Sec. V, we present our conclusions and outlook.
## II Quantum simulation of Yukawa-type models
For the shake of completeness, and to fix our notation, we review in this section some concepts about trapped ions, as well as the connection of the phonon-mediated spin-spin interactions in the vicinity of a structural transition to the quantum simulation of a Yukawa-type QFT at non-zero temperatures.
### From trapped ions to \(\lambda\phi^{4}\) quantum fields
As advanced in the introduction, we consider a system of \(N\) atomic ions of charge \(e\) confined in a linear Paul trap [118; 34], and assume that the symmetry axis of this trap lies along the \(x\) direction. In this kind of device, ions are trapped using a combination of AC and DC potentials, and can reach a stable crystalline distribution for hours or even days, depending on the trap design, and the cooling and vacuum conditions [119]. In the pseudo-potential approximation [120; 31], the secular motion of the ions can be described by an effective quadratic potential with constant trap frequencies \(\{\omega_{\alpha}\}\) that aim at confining the ions along each axis \(\alpha\in\{x,y,z\}\). Since the ions are charged, there is a competition between this overall trapping potential and the inter-particle Coulomb repulsion, leading to
\[H=\sum_{i,\alpha}\left(\frac{1}{2m_{a}}p_{i,\alpha}^{2}+\frac{1}{2}m_{a}\omega _{\alpha}^{2}r_{i,\alpha}^{2}\right)+\frac{1}{2}\sum_{i\neq j}\frac{e^{2}}{4 \pi\epsilon_{0}}\frac{1}{|\mathbf{r}_{i}-\mathbf{r}_{j}|}, \tag{1}\]
where \(\mathbf{r}_{i},\mathbf{p}_{i}\) are the canonical position-momentum operators of the ions with mass \(m_{a}\) and charge \(e\), and \(\epsilon_{0}\) is the vacuum permittivity. This competition leads to a set of equilibrium positions \(\{\mathbf{r}_{i}^{0}=l\bar{\mathbf{r}}_{i}^{0}\}_{i=1}^{N}\), where we have introduced a constant with the units of length \(l\) that fulfills \(l^{3}=e^{2}/4\pi\epsilon_{0}m_{a}\omega_{x}^{2}\)[121; 122], which are obtained by solving the non-linear equations
\[\bar{r}_{i,\alpha}^{0}-\kappa_{\alpha}\sum_{j\neq i}\frac{\bar{r}_{i,\alpha}^{ 0}-\bar{r}_{j,\alpha}^{0}}{|\bar{\mathbf{r}}_{i}^{0}-\bar{\mathbf{r}}_{j}^{0}|^{3}}=0, \tag{2}\]
where \(\kappa_{\alpha}=(\omega_{x}/\omega_{\alpha})^{2}\). For \(\omega_{x}\ll\omega_{y},\omega_{z}\), these equilibrium positions lie along the symmetry axis, \(\mathbf{r}_{i}^{0}=x_{i}^{0}\mathbf{e}_{x}\), and present a typical lattice spacing that is rather homogeneous in the bulk of the ion chain, \(|\mathbf{r}_{i}^{0}-\mathbf{r}_{i+1}^{0}|\approx d=\min_{i\neq j}\{|\mathbf{r}_{i}^{0}-\mathbf{r}_{j}^{0}|\}\).
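These coupled equations are readily solved numerically with a standard root finder. The following is a minimal Python sketch for the axial direction (\(\alpha=x\), for which \(\kappa_{x}=1\)), with all positions expressed in units of \(l\); the choice of equally spaced initial guesses is our own convenience:

```python
import numpy as np
from scipy.optimize import fsolve

def equilibrium_chain(n_ions):
    """Dimensionless axial equilibrium positions of a linear chain,
    solving Eq. (2) for alpha = x (kappa_x = 1)."""
    def force(u):
        d = u[:, None] - u[None, :]          # d_ij = u_i - u_j
        np.fill_diagonal(d, np.inf)          # exclude the self-interaction
        coulomb = np.sign(d) / d**2          # repulsion of ion j on ion i
        return u - coulomb.sum(axis=1)       # Eq. (2): trap force minus Coulomb
    u0 = np.linspace(-(n_ions - 1) / 2, (n_ions - 1) / 2, n_ions)
    return fsolve(force, u0)

print(equilibrium_chain(10))   # roughly homogeneous spacing in the bulk
```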
After expanding the Coulomb potential in Eq. (1) to quadratic order in the small displacements \(\mathbf{r}_{i}(t)=\mathbf{r}_{i}^{0}+\mathbf{u}_{i}(t)\), the secular ion motion is accurately described by \(H\approx H_{2}\) with
\[H_{2}=\sum_{i,\alpha}\left(\frac{\pi_{i,\alpha}^{2}}{2m_{a}}+\frac{1}{2}k_{\alpha}u_{i,\alpha}^{2}\right)+\frac{1}{4}\sum_{i\neq j,\alpha}k_{i,j}^{\alpha}(u_{i,\alpha}-u_{j,\alpha})^{2}. \tag{3}\]
Here, \(\pi_{i,\alpha}=m_{a}\partial_{t}u_{i,\alpha}\) are the momenta conjugate to the displacement operators \([u_{i,\alpha}(t),\pi_{j,\beta}(t)]=\mathrm{i}\hbar\delta_{ij}\delta_{\alpha, \beta}\), and we have introduced the effective spring constants
\[k_{i,j}^{z}=k_{i,j}^{y}=-\frac{1}{2}k_{i,j}^{x}=-\frac{e^{2}}{4\pi\epsilon_{0}}\frac{1}{|\mathrm{x}_{i}-\mathrm{x}_{j}|^{3}}. \tag{4}\]
According to this approximation, the small displacements of the ion crystal are governed by a model that resembles a simple elastic/harmonic chain [123]. In comparison to this textbook example, we note that the effective spring constants do not restrict to nearest neighbors and, moreover, one also finds an additional local elastic term \(k_{\alpha}=m_{a}\omega_{\alpha}^{2}\). For the longitudinal displacements \(u_{i,x}\), these differences do not introduce qualitative changes in the physics: the ions vibrate in the form of collective excitations in analogy to the acoustic phonons in a solid, which are associated with quantised compressional sound waves. In contrast, the different sign of the spring constants (4) that couple to the transverse displacements \(u_{i,z}\) can actually lead to a situation that differs markedly from the quantised shear sound waves of an elastic solid. In fact, as one raises the ratio of the Paul-trap frequencies \(\kappa_{z}=(\omega_{x}/\omega_{z})^{2}\) above a critical value \(\kappa_{z}>\kappa_{z,c}(N_{1})\)[124, 125], there is a structural transition from the linear chain into what is known as a zigzag ladder, as first observed experimentally in [31]. For a specific ion number and frequency ratio, one can numerically solve the system of non-linear equations for the equilibrium positions (2), leading to the configurations displayed in Fig. 1. The critical value decreases with the number of ions as a power law \(\kappa_{z,c}(N_{1})=aN_{1}^{b}\) for certain constants \(a>0\) and \(b<0\)[124]; a scaling verified in experiments [126].
These structural changes are the finite-size precursors of a quantum phase transition, which can be characterised by the spontaneous breaking of a discrete inversion symmetry with respect to the trap axis. Even working far away from the thermodynamic limit \(N_{1}\rightarrow\infty\), these mesoscopic structural transitions have a well-defined soft mode [114], which allows one to derive a low-energy/long-wavelength approximation that goes beyond the elastic limit. This requires going beyond the quadratic Hamiltonian in Eq. (3) by considering higher orders in the expansion of the Coulomb interaction (1). This long-wavelength theory corresponds to
Figure 1: **Trapped-ion chain and zigzag ladder:** The equilibrium positions have been calculated by numerically solving the non-linear system of equations (2) for a chain with \(N_{1}=30\) ions, to illustrate the linear and zigzag configurations. **(a)** Linear-chain configuration for a trap-frequency ratio \(\kappa_{z}=(\omega_{x}/\omega_{z})^{2}=10^{-4}\). **(b)** Zigzag-ladder distribution for a trap frequency ratio of \(\kappa_{z}=0.02\).
the \(\lambda\phi^{4}\) model QFT, in which the aforementioned inversion symmetry is \(\phi(x)\rightarrow-\phi(x)\) and gets spontaneously broken at a certain critical point. In the Hamiltonian formulation, this QFT can be written as
\[H\!=\!\int\!\!\mathrm{d}x\left(\frac{1}{2}\pi^{2}(x)+\frac{1}{2}(\partial_{x} \phi(x))^{2}+\frac{m_{0}^{2}}{2}\phi^{2}(x)+\frac{\lambda_{0}}{4!}\phi^{4}(x) \right)\!, \tag{5}\]
where \(x=(t,x)\) are the Minkowski spacetime coordinates, and \(m_{0},\lambda_{0}\) are the bare mass and bare coupling constants, respectively. In this expression, we have used natural units \(\hbar=c=1\), as customary in high-energy physics. For the connection to trapped ions, it is more appropriate to work in SI units [127], such that
\[H\!=\!\int\!\!\mathrm{d}x\!\left(\frac{c^{2}\pi^{2}(x)}{2\hbar^{2}}+\frac{ \hbar^{2}}{2}(\partial_{x}\phi(x))^{2}+\frac{m_{0}^{2}c^{2}}{2}\phi^{2}(x)+ \frac{\lambda_{0}}{4!}\phi^{4}(x)\right)\!. \tag{6}\]
In comparison to natural units, where the field operator and its conjugate momentum have scaling dimensions \([\phi]=\mathrm{L}^{0}\), \([\pi]=\mathrm{L}^{-1}\), the dimensional analysis in SI units leads to the following scaling \([\phi]=1/\mathrm{M}^{1/2}\mathrm{L}^{1/2}\), whereas \([\pi]=\mathrm{M}^{3/2}\mathrm{L}^{3/2}/\mathrm{T}\).
Let us discuss first how the quadratic Klein-Gordon part of the QFT (6) arises through a coarse-graining procedure of Eq. (1), when focusing on the quadratic approximation (3). In this way, we will highlight the important differences with respect to the effective QFT for compressional sound waves in an elastic chain [123]. By focusing on the bulk of the ion crystal, we can use periodic boundary conditions, and move to a Fourier representation for the small displacements
\[u_{i,z}=\frac{1}{\sqrt{N_{1}}}\sum_{\mathrm{k}\in\mathrm{BZ}}\mathrm{e}^{\mathrm{i}\mathrm{k}di}u_{z}(\mathrm{k}). \tag{7}\]
Here, the quasi-momentum is \(\mathrm{k}=\frac{2\pi}{dN_{1}}n_{1}\) for \(n_{1}\in\{1,\cdots,N_{1}\}\), and thus lies within the first Brillouin zone \(\mathrm{BZ}=(0,2\pi/d]\), where \(d\) is the lattice spacing in the bulk of the ion chain, which is approximately constant (see Fig. 1**(a)**). Using this transformation, and focusing first on the quadratic part (3), one can derive the dispersion relation
\[\omega(\mathrm{k})=\omega_{z}\sqrt{1-\kappa_{z}\left(\frac{l}{d}\right)^{3}\sum_{r=1}^{\frac{1}{4}N_{1}}\!\frac{4}{r^{3}}\sin^{2}\!\left(\frac{\mathrm{k}\,dr}{2}\right)}. \tag{8}\]
This dispersion relation resembles that of a standard lattice regularization of the Klein-Gordon QFT in Eq. (6). In a Hamiltonian lattice field theory [73], the spatial components are discretised as \(\mathrm{x}\mapsto\mathrm{x}_{i}=ia\), where \(i\in\{1,...,N_{1}\}\), which requires introducing an artificial lattice spacing \(a\) that regularises the QFT by an ultra-violet cutoff \(\Lambda_{\mathrm{c}}=\pi/a\). Following the lattice approach [128], one performs the following substitutions
\[\int\!\mathrm{d}x\mapsto a\!\sum_{i},\quad\partial_{x}\phi(x)\mapsto\frac{1}{ a}\!\left(\phi(t,(i+1)a)-\phi(t,ia)\right)\!. \tag{9}\]
By applying a similar Fourier transformation (7) to the discretised QFT (6), one finds the dispersion relation
\[\omega(\mathrm{k})=\sqrt{\frac{m_{0}^{2}c^{4}}{\hbar^{2}}+\frac{4c^{2}}{a^{2} }\sin^{2}\left(\frac{\mathrm{k}a}{2}\right)}, \tag{10}\]
where \(\mathrm{k}=\frac{2\pi}{aN_{1}}n_{1}\) for \(n_{1}\in\{1,\cdots,N_{1}\}\). Although this expression clearly resembles the trapped-ion case (8), there are important differences. The most apparent one is that the dispersion relation in Eq. (8) contains a dipolar tail due to the long-range nature of the effective spring constants (4). In addition, there is a sign difference with respect to (10) that will be crucial for the long-wavelength coarse-graining.
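To visualise this comparison, one may evaluate both branches numerically. In the following sketch (ours; the values of \(\kappa_{z}\), \(l/d\) and the lattice mass \(m_{0}\) are illustrative placeholders), the zigzag softening at \(\mathrm{k}=\pi/d\) of Eq. (8) can be contrasted with the minimum at small \(\mathrm{k}\) of Eq. (10):

```python
import numpy as np

N1, d = 30, 1.0                                   # lattice units d = 1
omega_z, kappa_z, l_over_d, m0 = 1.0, 0.02, 0.5, 0.1   # illustrative values

k = 2 * np.pi * np.arange(1, N1 + 1) / (d * N1)   # first Brillouin zone
r = np.arange(1, N1 // 4 + 1)                     # truncated dipolar sum of Eq. (8)

# Eq. (8): transverse branch with the dipolar tail; softest at k = pi/d
S = np.array([np.sum(4.0 / r**3 * np.sin(ki * d * r / 2)**2) for ki in k])
omega_ion = omega_z * np.sqrt(1.0 - kappa_z * l_over_d**3 * S)

# Eq. (10): nearest-neighbor Klein-Gordon lattice (natural units, a = d, c = 1)
omega_kg = np.sqrt(m0**2 + (4.0 / d**2) * np.sin(k * d / 2)**2)

print(k[np.argmin(omega_ion)], np.pi / d)         # zigzag minimum sits at pi/d
```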
The dispersion relation of the Klein-Gordon QFT \(\omega^{2}(\mathrm{k})=m_{0}^{2}c^{4}/\hbar^{2}+c^{2}\mathrm{k}^{2}\) is recovered from Eq. (10) by considering \(|\mathrm{k}|\ll\Lambda_{\mathrm{c}}\). This particular low-energy limit corresponds to a coarse-graining approximation, where the relevant scale is much larger than the lattice spacing \(\xi_{0}\propto 1/m_{0}\gg a\), and one says that the continuum QFT is recovered by sending \(a\to 0\). The coarse graining in the trapped-ion case (8) is slightly different. Due to the sign difference in Eq. (8), the lowest-energy mode corresponds to the so-called zigzag mode, and one has to expand around \(\mathrm{k}=\pi/d+\delta\mathrm{k}\). From this expansion, one can readily recover the same Klein-Gordon dispersion, identifying the speed of the transversal sound waves as
\[c_{\mathrm{t}}^{2}=d^{2}\omega_{x}^{2}\left(\frac{l}{d}\right)^{3}\!\eta_{N_{ 1}}(1), \tag{11}\]
which plays the role of an effective speed of light in the quantum simulator of the QFT (6). In this expression, we have used a truncated version of the Dirichlet eta function, namely
\[\eta_{N_{1}}(s)=\sum_{r=1}^{\frac{1}{2}N_{1}}\frac{(-1)^{r+1}}{r^{s}}, \tag{12}\]
such that \(\eta_{N_{1}}(1)\rightarrow\log 2\) in the thermodynamic limit \(N_{1}\rightarrow\infty\). In addition, one obtains the effective bare mass
\[m_{0}^{2}=:\frac{\hbar^{2}\omega_{\mathrm{zz}}^{2}}{c_{\mathrm{t}}^{4}}=\frac{ \hbar^{2}}{c_{\mathrm{t}}^{4}}\!\left(\omega_{z}^{2}-\omega_{x}^{2}\frac{7}{2} \left(\frac{l}{d}\right)^{3}\!\zeta_{N_{1}}(3)\right), \tag{13}\]
where \(\omega_{\mathrm{zz}}\) is the frequency of the zigzag mode, and we have introduced a truncated version of the Riemann zeta function
\[\zeta_{N_{1}}(s)=\sum_{r=1}^{\frac{1}{4}N_{1}}\frac{1}{r^{s}}, \tag{14}\]
such that \(\zeta_{N_{1}}(3)\to 1.202\) in the thermodynamic limit \(N_{1}\rightarrow\infty\), corresponding to Apery's constant.
At this point, we emphasise that the coarse-graining procedure in a physical trapped-ion lattice of spacing \(d\) does not require sending the spacing to \(d\to 0\). Alternatively, one can tune the parameters close to a critical point where the bare mass (13) would vanish \(m_{0}\to 0\), such that the effective Compton wavelength fulfills \(\xi_{0}=\hbar/m_{0}c_{\mathrm{t}}\gg d\), and the low-energy physics does not depend on microscopic lattice details, but rather on the universal properties captured by a QFT. As it
can be checked from Eq. (13), this critical point \(m_{0}^{2}|_{\rm c}=0\) coincides precisely with the linear-to-zigzag transition at
\[1-\kappa_{z,c}\,\frac{7}{2}\biggl{(}\frac{l}{d}\biggr{)}^{3}\zeta_{N_{1}}(3)=0. \tag{15}\]
This critical point \(\kappa_{z,c}(N_{1})\) has a scaling with the number of ions that agrees with the previously-mentioned power laws [124; 125], already for moderate ion numbers [129].
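Since the truncated sums (12) and (14) converge rapidly with \(N_{1}\), the coarse-grained parameters can be evaluated directly. The following dimensionless sketch (ours; the ratio \(l/d\) is an input whose precise value depends on the trap and on the inhomogeneous spacing, so the printed numbers are only indicative) collects Eqs. (11), (12), (14) and (15):

```python
import numpy as np

def eta_N(N1, s):    # truncated Dirichlet eta function, Eq. (12)
    r = np.arange(1, N1 // 2 + 1)
    return np.sum((-1.0)**(r + 1) / r**s)

def zeta_N(N1, s):   # truncated Riemann zeta function, Eq. (14)
    r = np.arange(1, N1 // 4 + 1)
    return np.sum(1.0 / r**s)

N1, l_over_d = 30, 2.5                    # l/d is an assumed, trap-dependent input
ct = np.sqrt(l_over_d**3 * eta_N(N1, 1))  # c_t in units of d * omega_x, Eq. (11)
kappa_zc = 1.0 / (3.5 * l_over_d**3 * zeta_N(N1, 3))   # critical ratio, Eq. (15)
print(eta_N(N1, 1), zeta_N(N1, 3))        # approach log(2) and Apery's constant
print(ct, kappa_zc, 1.0 / np.sqrt(kappa_zc))           # last entry: omega_zc / omega_x
```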
So far, our discussion has focused on the elastic/quadratic terms and the dispersion relation, but we have not identified the fields yet. In order to find their trapped-ion analogue, we need to separate the rapid oscillations around the zigzag mode [130; 103] from a slowly-varying envelope that will play the role of the coarse-grained field [123; 131]. In addition, one has to rescale the position and momentum operators in Eq. (3) to achieve the correct scaling dimensions below Eq. (6)
\[\phi(x)=\frac{(-1)^{i}}{\sqrt{m_{a}d^{3}}}u_{i,z}(t),\quad\pi(x)=(-1)^{i} \sqrt{m_{a}d}\ \pi_{i,z}(t), \tag{16}\]
such that one recovers the canonical algebra \([\phi(t,{\rm x}),\pi(t,{\rm y})]=\mathrm{i}\hbar\delta_{ij}/d\to\mathrm{i}\hbar\delta({\rm x}-{\rm y})\). Since the coarse-grained fields vary slowly, one can perform a gradient expansion \(\phi(t,{\rm y})\approx\phi(t,{\rm x})+({\rm y}-{\rm x})\partial_{\rm x}\phi(t,{\rm x})\), and obtain the Klein-Gordon part of Eq. (6) from the original microscopic theory (3).
Let us note that, at the level of the effective QFT (6), the critical point \(m_{0}^{2}=0\) is stable thanks to the additional quartic potential with \(\lambda_{0}>0\). In the trapped-ion case, one needs to extend the microscopic theory (3) to quartic order [122], and apply again the gradient expansion to identify the analogue of the bare quartic coupling. In this procedure, the rigidity of the trapped-ion chain becomes relevant. As occurs with other long-wavelength descriptions in condensed-matter and high-energy physics [132; 133; 134], one can use thermodynamic arguments when constructing the effective field theory. In the present case, where the coarse-grained QFT can be directly obtained from the trapped-ion model via the gradient expansion, one finds that, in addition to the effective speed of light (11) and bare mass (13), the rigidity modulus of the trapped-ion chain [135; 136] gives rise to an additional dimensionless Luttinger parameter [103], namely
\[K_{0}=\frac{m_{a}dc_{\rm t}}{\hbar}. \tag{17}\]
This parameter quantifies the rigidity of the trapped-ion chain under a shear strain that aims at deforming it transversely, and becomes important when identifying the coupling constant of the \(\lambda\phi^{4}\) model. In fact, after expanding Eq. (1) to fourth order and identifying the terms that are more important for the structural phase transition, one finds \(H\approx H_{2}+H_{4}\) where
\[H_{4}=\frac{1}{2}\sum_{i\neq j}\frac{\beta_{i,j}^{z}}{4!}(u_{i,z}-u_{j,z})^{4}, \tag{18}\]
and we have introduced the quartic coupling matrix
\[\beta_{i,j}^{z}=\frac{e^{2}}{4\pi\epsilon_{0}}\frac{9}{|x_{i}-x_{j}|^{5}}. \tag{19}\]
At this point, one can perform again the gradient expansion below Eq. (16), which allows one to find the final microscopic expression for the \(\lambda\phi^{4}\) coupling
\[\lambda_{0}=\frac{729\zeta_{N_{1}}(5)}{2K_{0}^{4}}m_{a}^{3}\omega_{x}^{2}l^{3}. \tag{20}\]
We have thus discussed how a trapped-ion chain in the vicinity of a structural phase transition serves as a quantum simulator of a regularised self-interacting QFT (6) with the effective speed of light in Eq. (11), and the bare parameters in Eqs. (13), (17) and (20). In this context, note that the critical point (15) is obtained by setting the bare mass (13) to zero, and thus corresponds to the classical field-theory calculation in which the minimum of the quartic potential underlying Eq. (6) changes from a single to a double well. At this point, the \(\mathbb{Z}_{2}\) inversion symmetry of the real scalar field \(\phi\to-\phi\) gets spontaneously broken. From the perspective of QFTs, one knows that the classical quartic potential gets quantum contributions in terms of Feynman loop diagrams [137; 138], such that the change from a single to a double well will no longer be located at the classical critical point. Instead, the excitations are dressed leading to a physical mass \(m_{0}^{2}\to m_{\rm P}^{2}\), and one finds that the phase transition \(m_{\rm P}^{2}=0\) yields a critical line in parameter space \((m_{0}^{2},\lambda_{0})\) that separates the symmetry-broken and symmetry-preserved regions. Going back to the trapped-ion problem, the critical point (15) will flow with the coupling strength \(\kappa_{z,c}\to\kappa_{z,c}(\lambda_{0})\), defining a line that separates the linear chain \(\kappa_{z}<\kappa_{z,c}(\lambda_{0})\) from the zigzag ladder \(\kappa_{z}>\kappa_{z,c}(\lambda_{0})\).
Note that, if one takes the continuum limit \(a\to 0\) in the lattice field approach (9), the UV divergences of the loop integrals must be subtracted from the bare mass in order to get finite parameters and draw a meaningful phase diagram. In the trapped-ion case, on the contrary, the lattice spacing remains constant as one approaches the critical point, and it is the physical Compton wavelength which becomes very large \(\xi_{0}\mapsto\xi_{\rm P}\gg d\), justifying the long-wavelength description. Accordingly, one can stick to the bare parameters without any additional subtraction, and still find a meaningful phase diagram. One must bear in mind that the critical line will depend on the lattice spacing and other non-universal microscopic properties. In contrast, as one approaches this critical line, the universal properties of the phase transition, i.e. scaling critical exponents, should be controlled by the fixed point of the continuum QFT (6). This corresponds to the so-called Wilson-Fisher fixed point, which can be characterised by an \(\epsilon\)-expansion in higher dimensions [139]. In \(D=1+1\) dimensions, however, the perturbative renormalization-group techniques [113] underlying this \(\epsilon\)-expansion break down, as all perturbations are relevant in the renormalization-group sense. Localising the critical line of the lattice model, as well as the critical exponents of the corresponding fixed point, requires using non-perturbative techniques, such as Monte Carlo or tensor-network methods [140; 141; 142; 143; 144; 145; 146; 147; 148]. The continuum limit of these studies is consistent with a QFT presenting a second-order phase transition where the \(\mathbb{Z}_{2}\) symmetry gets broken [149], and where the fixed point lies in the universality class of the two-dimensional Ising model.
### Yukawa-type interactions and real-time spin dynamics
As advanced in the introduction, we are interested in the use of trapped ions as quantum simulators of Yukawa-type spin models, and how the real-time dynamics of the spins can serve to probe the underlying interacting QFT. So far, however, we have only discussed the motional degrees of freedom of the ions, leading to the effective QFT (6) in the vicinity of the linear-to-zigzag transition (15). As noted in the introduction, the ions also have an atomic structure with many electronic levels, among which one can select a pair of long-lived states to encode the degrees of freedom of a chain of spins \(\{\ket{\uparrow_{i}},\ket{\downarrow_{i}}\}_{i=1}^{N_{1}}\). These two states can correspond to the so-called optical, hyperfine or Zeeman qubits in trapped-ion quantum computing [34]. In the analog approach to quantum simulation [91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101], these spins are coupled to the transverse phonons of the ion crystal by applying an off-resonant spin-dependent dipole force. This force can be obtained by a bichromatic laser beam with a pair of tones \(n=1,2\) of frequency (wave-vector) \(\omega_{\text{L},n}(k_{\text{L},n})\), which are either _(i)_ symmetrically detuned with respect to the red and blue motional sidebands [88; 89], or _(ii)_ far-detuned from the atomic transition [150; 151]. In the following, we consider the second scheme although, as discussed later, the formalism also applies to the former, albeit in a different spin basis.
Working in the Lamb-Dicke regime [151], the light-matter interaction of the ions with the bichromatic laser beam leads to a local interaction between the spins and the ion displacements. If the laser illuminates the entire ion chain, this reads
\[V(t)=\sum_{i}g\sin(\Delta\omega_{\text{L}}t-\Delta k_{\text{L},x}\mathbf{x}_{ i})\sigma_{i}^{z}u_{i,z}(t), \tag{21}\]
where we have introduced the beatnote frequency (wave-vector) \(\Delta\omega_{\text{L}}=\omega_{\text{L},1}-\omega_{\text{L},2}\) (\(\Delta k_{\text{L}}=k_{\text{L},1}-k_{\text{L},2}\)), and the Pauli operator \(\sigma_{i}^{z}=\ket{\uparrow_{i}}\bra{\uparrow_{i}}-\ket{\downarrow_{i}}\bra{\downarrow_{i}}\). In the above expression, the force strength reads \(g=\hbar\Omega_{\text{L}}\Delta k_{\text{L},z}\), where \(\Omega_{\text{L}}\) is the differential ac-Stark shift between the two electronic states [151]. For simplicity, we will assume that \(\Delta\mathbf{k}_{\text{L}}\parallel\mathbf{e}_{z}\), such that \(\Delta k_{\text{L},x}=0\) from now on. Working in the weak-force regime
\[\frac{|g|}{\sqrt{2m_{a}\omega(\text{k})}}\ll|\omega(\text{k})-\Delta\omega_{ \text{L}}|, \tag{22}\]
it is possible to obtain an effective spin model with long-range interactions mediated by the transverse phonons [90], which governs the slower dynamics of the spins [91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101]. Using the coarse-graining in Eq. (16), this coupling can be expressed as
\[V(t)=-\sum_{i}J(t,\mathbf{x}_{i})\phi(t,\mathbf{x}_{i})\sigma_{i}^{z}, \tag{23}\]
which can be understood as a Yukawa-type coupling if one writes \(\sigma_{i}^{z}=2\psi^{\dagger}(t,\mathbf{x}_{i})\psi(t,\mathbf{x}_{i})-1\) in terms of a local fermionic field. Here, we have introduced the harmonic source terms
\[J(t,\mathbf{x}_{i})=\frac{(-1)^{i+1}}{\sqrt{m_{a}d^{3}}}\frac{g}{K}\sin( \Delta\omega_{\text{L}}t-\Delta k_{\text{L},x}\mathbf{x}_{i}). \tag{24}\]
Let us now address an important point by discussing when the coarse-grained description is expected to capture the properties of the long-range spin-spin interactions. The idea is that, whenever the harmonic sources (24) oscillate at a frequency that is close to the frequency \(\omega(\text{k})\) (8) of the lowest zigzag mode at \(\text{k}=\pi/d\), namely \(\Delta\omega_{\text{L}}\approx\omega(\pi/d)\), then the long-wavelength approximation will provide reliable results. This is actually more general than working in the vicinity of the structural phase transition, which is a low-energy approximation since the zigzag mode becomes the soft mode of the transition \(\omega(\pi/d)\approx 0\). Accordingly, the long-wavelength approximation is also valid at other parameter regimes far from the structural transition, in which the additional non-linearities are unimportant. It is in these regimes, in which the elastic terms (3) suffice to describe the problem, where most of the experimental trapped-ion quantum simulators work [91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101].
In this case, the coarse-grained theory corresponds to a Klein-Gordon field with the dispersion relation of Eq. (8), and the real-time dynamics of the spins is governed by a unitary evolution operator with an effective Ising Hamiltonian
\[U_{\text{eff}}(t)\approx\text{e}^{-\text{i}(t_{\text{f}}-t_{0})H_{\text{eff}}/ \hbar},\quad H_{\text{eff}}=\frac{1}{2}{\sum_{i,j}}J_{ij}\sigma_{i}^{z}\sigma_ {j}^{z}, \tag{25}\]
where we have introduced the spin-spin coupling strengths
\[J_{ij}=-\hbar\Omega_{\text{L}}^{2}\eta_{x}^{2}2\omega_{x}\,G_{m_{\text{eff}}}^{\text{E}}(\mathrm{x}_{i}-\mathrm{x}_{j}). \tag{26}\]
In this expression, the distance decay of the spin-spin couplings is controlled by a dimensionally-reduced Euclidean propagator of the Klein-Gordon field, which is obtained after integrating the temporal components in \(x_{1}-x_{2}=(\tau,\mathbf{x}_{i}-\mathbf{x}_{j})\), and reads as follows
\[G_{m_{\text{eff}}}^{\text{E}}(\mathrm{x}_{i}-\mathrm{x}_{j})=d\!\int_{0}^{\frac{2\pi}{d}}\!\frac{\mathrm{d}\mathrm{k}}{2\pi}\,\frac{\mathrm{e}^{\mathrm{i}\mathrm{k}(\mathrm{x}_{i}-\mathrm{x}_{j})}}{\omega^{2}(\mathrm{k})-\Delta\omega_{\text{L}}^{2}}. \tag{27}\]

In the vicinity of the zigzag mode, this propagator is governed by an effective mass \(m_{\text{eff}}\), whose associated Compton wavelength

\[\xi_{\text{eff}}=\frac{\hbar}{m_{\text{eff}}c_{\text{t}}}=\frac{c_{\text{t}}}{\sqrt{\omega_{\text{zz}}^{2}-\Delta\omega_{\text{L}}^{2}}} \tag{28}\]

leads to exponentially-decaying
spin-spin couplings with a typical decay length controlled by the Compton wavelength [103]. For a standard lattice regularization of the Klein-Gordon QFT, which only has nearest-neighbor couplings leading to Eq. (10), the previous integral can be evaluated by extending the momentum to the complex plane \(\mathrm{k}d\mapsto z\in\mathbb{C}\), and by noticing that the integrand contains a simple pole that contributes with an exponential distance decay [109]. However, for the trapped-ion regularization, the dispersion relation (8) also presents a branch cut, which contributes with an additional term with a dipolar distance decay. Altogether, the spin-spin couplings read
\[J_{ij}=J_{\mathrm{eff}}\Bigg{(}\frac{\omega_{x}^{4}\eta_{N_{1}}(1)}{(\omega_{z}^{2}-\Delta\omega_{\mathrm{L}}^{2})^{2}}\frac{l^{3}}{|\mathrm{x}_{i}-\mathrm{x}_{j}|^{3}}-(-1)^{i-j}\frac{\xi_{\mathrm{eff}}d^{2}}{l^{3}}\mathrm{e}^{-\frac{|\mathrm{x}_{i}-\mathrm{x}_{j}|}{\xi_{\mathrm{eff}}}}\Bigg{)}, \tag{29}\]
where we have introduced an effective strength
\[J_{\mathrm{eff}}=\frac{\hbar\Omega_{\mathrm{L}}^{2}\eta_{x}^{2}}{\omega_{x}} \frac{2}{\eta_{N_{1}}(1)}, \tag{30}\]
and \(\eta_{x}=k_{\mathrm{L},x}\sqrt{\hbar/2m_{a}\omega_{x}}\) is the Lamb-Dicke parameter. In the final expression (29), one can substitute the inhomogeneous equilibrium positions \(\mathrm{x}_{i}\) that stem from the solution of Eq. (2). Given the relation of the bare mass to the zigzag-mode frequency in Eq. (13), we see that the effective Compton wavelength (28) that sets the range of the interactions depends on how close the laser beatnote is to this vibrational mode.
To test the validity of this expression (29) in a specific realistic setup, one may consider crystals of up to \(N=51\) atomic \({}^{40}\)Ca\({}^{+}\) ions in a linear Paul trap, forming a chain with an overall length of \(L\approx 250\,\mathrm{\mu m}\)[152]. Reference [152] reviews several aspects of the coherent manipulation of such long strings. It is also shown that the stability of the chain is characterised by a lifetime of approximately \(27\,\mathrm{s}\), after which the collisions of the ions with the background gas become important and can even melt the crystal [153, 154]. This sets an upper bound for the possible time of the experimental runs, and demands fast and efficient laser cooling in order to reach a low vibrational state of all axial and transverse modes, as required for high-fidelity quantum control. We note that these constraints can be met by using resolved-sideband cooling for the transverse modes [155], and polarization-gradient cooling for the axial ones [156]. For the transverse modes relevant for our work, we will assume that the mean phonon number ranges between 1 and 30, which will become important below.
The spins are encoded as optical qubits in the long-lived electronic states \(\downarrow\!=\!4\mathrm{S}_{1/2}(m=+1/2)\) and \(\uparrow\!=\!4\mathrm{D}_{5/2}(m=+5/2)\), which have long coherence (\(T_{2}=64\,\mathrm{ms}\)) and decay (\(T_{1}=1\,\mathrm{s}\)) times [152]. The motional degrees of freedom depend on the average lattice spacing \(d\approx 5\,\mathrm{\mu m}\), and the transverse and axial trap frequencies, which have typical values of \(\omega_{y}/2\pi=2.93\,\mathrm{MHz}\), \(\omega_{z}/2\pi=2.89\,\mathrm{MHz}\) and \(\omega_{x}/2\pi=127\,\mathrm{kHz}\) in most of the experiments discussed in [152]. In the following, we assume a \(N_{1}=30\) ion chain, in which the classical estimate of the linear-to-zigzag transition (15) for a fixed axial trap frequency of \(\omega_{x}/2\pi=127\,\mathrm{kHz}\) corresponds to the critical transverse frequency \(\omega_{z,c}/2\pi\approx 2.7\,\mathrm{MHz}\). We thus consider modifying this trap frequency in the range \(\omega_{z}/2\pi\in[2.75,2.89]\,\mathrm{MHz}\) to cover the regimes in the symmetry-preserved phase, i.e. a stable chain configuration, which are well described by either the Klein-Gordon effective QFT for \(\omega_{z}\gg\omega_{z,c}\), or otherwise by the full \(\lambda\phi^{4}\) model for \(\omega_{z}\gtrsim\omega_{z,c}\).
In the experiments discussed in [152], they use a bichromatic laser scheme with a pair of beams symmetrically detuned with respect to the red and blue motional sidebands [88, 89]. This leads to a state-dependent dipole force that acts in a different spin-basis with respect to Eq. (21). As discussed in [152, 90, 157], the spin-spin interactions mediated by the transverse phonons are given by the following expression
\[H_{\mathrm{eff}}=\frac{1}{2}\!\sum_{i,j}\!J_{ij}\sigma_{i}^{x} \sigma_{j}^{x},\quad J_{ij}=|\Omega_{\mathrm{L}}|^{2}E_{\mathrm{R}}\!\sum_{n_{ 1}=1}^{N_{1}}\!\frac{\mathcal{M}_{in_{1}}\mathcal{M}_{jn_{1}}}{\Delta\omega_{ \mathrm{L}}^{2}-\omega_{z,n_{1}}^{2}}, \tag{31}\]
where the recoil energy is \(E_{\mathrm{R}}=(\hbar\Delta k_{\mathrm{L}})^{2}/2m_{a}\), and the laser beatnote is now referenced to the qubit transition \(\omega_{\mathrm{L},1}-\omega_{\mathrm{L},2}=\omega_{0}+\Delta\omega_{\mathrm{L}}\), with \(\omega_{0}/2\pi=411.5\,\mathrm{THz}\). Also, in contrast to the previous case (21), \(\Omega_{\mathrm{L}}\) is now the quadrupole Rabi frequency [158] instead of the differential ac-Stark shift. In this expression (31), we have introduced the normal-mode frequencies \(\omega_{z,n_{1}}\) and mode vectors \(\mathcal{M}_{in_{1}}\) of the transverse phonons [122]. We note that Eq. (31) does not rely on the approximations used to obtain the coarse-grained QFT prediction (29). The only common implicit assumption is that one neglects the higher-order quartic terms, as well as other off-resonant contributions beyond the dipole force (21).
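Numerically, Eq. (31) only requires diagonalising the transverse Hessian built from the spring constants (4). The following sketch (ours; it assumes the array xbar_eq of dimensionless equilibrium positions from the earlier sketch is in scope, and the recoil energy is a placeholder that merely sets the overall scale of \(J_{ij}\)) implements this sum over modes:

```python
import numpy as np

def transverse_modes(xbar, omega_x, omega_z):
    """Normal modes of the transverse (z) displacements, from Eqs. (3)-(4).
    Positions xbar are in units of l, so e^2/(4 pi eps0 |x|^3) -> omega_x^2/|xbar|^3."""
    dx = xbar[:, None] - xbar[None, :]
    r3 = np.abs(dx)**3
    np.fill_diagonal(r3, np.inf)                 # removes the i = j term below
    A = omega_x**2 / r3                          # off-diagonal couplings
    np.fill_diagonal(A, omega_z**2 - omega_x**2 * np.sum(1.0 / r3, axis=1))
    w2, modes = np.linalg.eigh(A)                # ascending order; lowest = zigzag
    return np.sqrt(w2), modes

omega_x, omega_z = 2 * np.pi * 127e3, 2 * np.pi * 2.89e6   # values quoted in the text
w, M = transverse_modes(xbar_eq, omega_x, omega_z)

Omega_L = 2 * np.pi * 1e6        # Rabi frequency quoted for Fig. 2
E_R = 1.0                        # placeholder recoil energy: only sets the scale
dw_L = 2 * np.pi * 2.43e6        # beatnote, red-detuned from the zigzag mode
J = Omega_L**2 * E_R * (M / (dw_L**2 - w**2)) @ M.T        # Eq. (31)
```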
Considering the experimental trap frequencies \(\omega_{z}/2\pi=2.89\,\mathrm{MHz}\) and \(\omega_{x}/2\pi=127\,\mathrm{kHz}\), and setting \(\omega_{\mathrm{L},1}-\omega_{\mathrm{L},2}=\omega_{0}+\Delta\omega_{\mathrm{L}}\) where \(\Delta\omega_{\mathrm{L}}/2\pi=2.43\,\mathrm{MHz}\) is red-detuned with respect to the zigzag mode at \(\omega_{z,N_{1}}/2\pi=2.7\,\mathrm{MHz}\), we compare in Fig. 2 the spin interactions in a trapped-ion chain of
Figure 2: **Spin-spin interactions and Yukawa-type QFT:** We represent the spin-spin interactions \(J_{ij}\) in log-scale for half of a chain of \(N_{1}=30\) ions (the remaining half is related by inversion symmetry). The green bars represent the exact expression (31), whereas the red lines stand for the coarse-grained approximation of Yukawa-type interactions mediated by an effective Klein-Gordon field (29).
\(N_{1}=30\) ions obtained from the exact expressions of the inhomogeneous crystal (31) with those of the coarse-grained Yukawa-mediated expression (29). In this figure, we have set \(\Omega_{\mathrm{L}}/2\pi=1\,\mathrm{MHz}\), such that \(J_{i_{0},i_{0}+1}/h\approx 1.4\,\mathrm{kHz}\) for the ion at the center of the chain \(i_{0}=15\). The agreement of the coarse-grained QFT prediction, which has no fitting parameter, is rather remarkable for such moderate-size chains. As expected, there are some deviations at distances around the UV lattice cutoff, but the accuracy becomes very good as the ion distance increases.
In summary, we can conclude that trapped-ion quantum simulators of spin models, like those of the experiments in [94; 95; 96; 97; 98; 99; 100; 101] but performed on chains with a few tens of ions and a spin-dependent force that is red-detuned with respect to the transverse zigzag mode, can be used to probe the physics of a relativistic QFT when interpreted in the light of a Yukawa-type interaction. Seen through the lens of QFT, the problem becomes more interesting in the presence of quartic interactions and non-zero temperatures, as discussed below.
### Effect of \(\lambda\phi^{4}\) term and non-zero temperatures
Let us now discuss how to take into account the non-linearities (6) that go beyond the harmonic approximation (3), as well as a non-zero temperature, both of which become relevant in a trapped-ion experiment. In the parameter regime underlying Fig. 2, the trapping conditions are far from the linear-to-zigzag critical point \(\omega_{z}\gg\omega_{z,c}\), such that the harmonic-crystal approximation (3) is an accurate description of the collective phonons. In terms of the coarse-grained QFT (6), the bare quartic term (20) is negligible in comparison to the bare mass (13), and the effective QFT can be reduced to that of a real Klein-Gordon field. Temperature enters in the conditions that must be imposed on the strength of the Yukawa-type coupling (23), which we recall had to fulfil the weak-coupling constraint (22) in the zero-temperature limit. In the presence of thermal fluctuations, there can be some bosonic enhancement of the Yukawa-type coupling, and the condition (22) must be upgraded to
\[\frac{|g|\sqrt{1+2\overline{n}(\mathrm{k})}}{\sqrt{2m_{a}\omega(\mathrm{k})}} \ll|\omega(\mathrm{k})-\Delta\omega_{\mathrm{L}}|, \tag{32}\]
where \(\overline{n}(\mathrm{k})\) is the mean excitation number of the scalar field. In this regime, the real-time evolution of the spins is still described by a long-range Ising model (25) in complete analogy to the zero-temperature case. Once again, the spin-spin couplings (26) are proportional to a dimensionally-reduced Euclidean propagator (27). The important difference is that this is no longer the free propagator of a Klein-Gordon QFT, and can get a temperature-dependent contribution as one goes beyond the harmonic non-interacting limit.
As mentioned previously, one can control \(\overline{n}(\mathrm{k})\) by means of laser cooling. For a single trapped ion [159], one can show that the resulting state corresponds to a thermal Gibbs state, and that the mean-phonon number can be controlled by adjusting the ratio of the cooling and heating rates, which depend themselves on the detuning and intensity of the cooling laser. For a trapped-ion chain, the steady state that results from the cooling will depend on the laser-cooling scheme. For instance, in the resolved-sideband limit, one can individually cool each of the transverse modes to a target mean excitation number \(\overline{n}(\mathrm{k})\). In this work, we assume that, after such cooling stage, the trapped-ion crystal is allowed to thermalize, and one can then define a single effective temperature according to the Bose-Einstein distribution of a thermal Gibbs state
\[T=\frac{\hbar\omega(\mathrm{k})}{k_{\mathrm{B}}\log(1+1/\overline{n}(\mathrm{ k}))}. \tag{33}\]
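For reference, Eq. (33) is straightforward to evaluate; a one-function sketch with SI constants and a mean phonon number within the 1-30 range quoted above reads:

```python
import numpy as np

kB, hbar = 1.380649e-23, 1.054571817e-34      # SI units

def effective_temperature(nbar, omega_k):
    """Eq. (33): temperature assigned to a mean phonon number nbar at omega(k)."""
    return hbar * omega_k / (kB * np.log(1.0 + 1.0 / nbar))

# e.g. the zigzag mode at 2.7 MHz with nbar = 10 phonons
print(effective_temperature(10, 2 * np.pi * 2.7e6))   # roughly 1.4 mK
```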
Given the nature of the coarse-grained approximation underlying (6), we believe that, even if there are deviations to this idealised thermalization, and the effective temperature varies for the different modes, the only relevant quantity is the effective temperature around the zigzag mode \(\mathrm{k}=\pi/d\).
Let us now discuss why we need to go beyond the harmonic limit. In this limit, increasing the temperature only results in more stringent conditions for the Yukawa-coupling strength (32), which in turn result in a weaker magnitude for the spin-spin interactions (29). The situation becomes more interesting in the presence of non-linearities, such as the quartic coupling (20), which become more important as one approaches the linear-to-zigzag transition \(\omega_{z}\to\omega_{z,c}\). At zero temperature [103], a path integral can be used to show that the spin-spin couplings are still described by Eq. (26), but this time controlled by a dressed propagator and renormalised sources. This propagator depends on all of the possible scattering processes of the self-interacting scalar field, as its excitations propagate between a pair of distant ions. These interaction effects will shift the physical mass of the carrier from \(m_{0}^{2}\mapsto m_{\rm P}^{2}\), changing the effective Compton wavelength (28), which will in turn change the range of the Yukawa-type interactions (29). This can be inferred experimentally from changes of Fig. 2 as one approaches the structural transition.
Perturbatively, one expects a zero-temperature shift of the bare mass that scales with the quartic coupling and stems from the so-called tadpole Feynman diagram \(\delta m_{0}^{2}|_{T=0}\propto\lambda_{0}\)[109; 110], which will be discussed at length below. As estimated in [103], these types of effects are inhibited by the very large rigidity of the ion chain, as the quartic coupling scales with the inverse fourth power of the rigidity modulus (20). Moreover, given the fact that the effective coarse-grained parameters in Eqs. (11), (13), (17) and (20) depend on the microscopic experimental parameters in a convoluted manner, it is not straightforward to modify \(\lambda_{0}\) independently of the others to see its effect on the range of the interactions.
In addition to the above zero-temperature shift of the bare mass, the tadpole diagram also contributes to the so-called thermal mass [160; 161]. Perturbatively, this reads as follows
\[\delta m_{0}^{2}|_{T}\propto\frac{\lambda_{0}}{2}\int\frac{\mathrm{dk}}{2\pi} \frac{\overline{n}(\mathrm{k})}{\omega(\mathrm{k})}, \tag{34}\]
where the proportionality stems from changes that would be required to convert from natural units into SI units. Regardless of the proportionality factor, the important result of thermal field theory is that, in spite of having a small quartic
coupling, the above integral can lead to a shift proportional to some power of the ratio \(\delta m_{0}^{2}|_{T}\propto\lambda_{0}(T/m_{0})^{\alpha}\)[160]. For \(D=1+1\) dimensions, we find \(\delta m_{0}^{2}|_{T}\propto\lambda_{0}(T/|m_{0}|)\) in the high-temperature regime \(T\gg m_{0}\), which can thus amount to a large shift even for a perturbative \(\lambda_{0}\). In the present context, this thermal shift will change the Compton wavelength (28), and the range of the spin-spin interactions (29); an effect that will be characterised non-perturbatively below.
Coming back to the trapped-ion regularization of the \(\lambda\phi^{4}\) QFT (6), the classical critical point (15) of the \(\mathbb{Z}_{2}\)-breaking phase transition, obtained by setting the bare mass (13) to zero \(m_{0}^{2}=0\), will get contributions from the thermal masses. Hence, the physical mass will be dressed with both temperature and quartic-coupling contributions, such that the phase transition at \(m_{\rm P}^{2}(\lambda_{0},T)|_{\rm c}=0\) now corresponds to a critical surface in parameter space \((m_{0}^{2},\lambda_{0},T)\). Determining how the critical point flows with temperature and coupling strength in the lattice model is a non-perturbative problem that requires going beyond the previous discussion and will be addressed in the following sections. In general, if one starts in a symmetry-broken phase \(m_{\rm P}^{2}(\lambda_{0},T)<0\), by solely increasing the temperature, symmetry restoration can take place \(m_{\rm P}^{2}(\lambda_{0},T+\delta T)>0\)[116; 117], such that one ends in a symmetry-preserved phase. In terms of a linear-to-zigzag phase transition at finite temperatures [162], an analogue of this restoration of symmetry has actually been observed already in experiments with small trapped-ion chains [163; 164; 165]. To the best of our knowledge, the connection to relativistic QFTs and thermal masses has not been previously noticed in the trapped-ion literature. In these experiments, the cooling lasers are not only used to prepare an initial thermal state, but are actually applied continuously during the experiment, such that one is exploring the steady state of a driven-dissipative system. In spite of these differences, the observations resemble the phenomenon of thermal masses and the restoration of symmetry. As discussed in [163], a linear shift of the critical ratio \(\kappa_{z,{\rm c}}\) (15) with temperature has been reported, which is somewhat reminiscent of the previous thermal mass shift. In the following section, we will present a self-consistent non-perturbative method that can be used to derive quantitative predictions of how the critical point flows, and how the range of the spin-spin interactions changes with temperature.
## III Self-consistency beyond mean field theory
In this section, we present a detailed account of our self-consistent prediction for the range of the spin-spin interactions, and how it changes with temperature as one approaches the linear-to-zigzag transition. We start by reviewing the functional approach to the self-interacting scalar field theory, and then move on to discuss our approach to get a set of finite self-consistent equations in spite of UV and IR divergences.
### Perturbative generating functional of \(\lambda\phi^{4}\) fields
In this subsection, we review the functional approach to the diagrammatic perturbation theory of the self-interacting scalar field [110; 166; 109]. The central object in this approach is the generating functional, obtained by adding a source term to the path integral. In its Euclidean version, where \(\mathbf{x}=(\tau,{\rm x})\) is obtained from \(x=(t,{\rm x})\) after a Wick rotation \(\tau={\rm i}t\), the generating functional is given by
\[Z[J]=\int\!\mathcal{D}\phi{\rm e}^{-\int\!\!\mathrm{d}^{2}x\left(\mathcal{L}- J(\mathbf{x})\phi(\mathbf{x})\right)}\,, \tag{35}\]
where \(\mathcal{L}\) is the Lagrangian density associated with the Hamiltonian field theory in Eq. (5), provided one works in imaginary time. The generating functional gives the \(n\)-point Green's functions upon functional differentiation with respect to the sources
\[\langle\phi(\mathbf{x}_{1})...\phi(\mathbf{x}_{n})\rangle=\frac{\delta^{n}\mathcal{Z} [J]}{\delta J(\mathbf{x}_{1})...\delta J(\mathbf{x}_{n})}\,\bigg{|}_{J=0}\,, \tag{36}\]
where the expectation value is taken on the vacuum, and \(\mathcal{Z}[J]=Z[J]/Z[0]\) is the normalized generating functional.
In the absence of interactions, the Lagrangian \(\mathcal{L}=\mathcal{L}_{0}\) reduces to a real Klein-Gordon theory quadratic in the fields, and the normalised generating functional finds a simple analytical expression
\[\mathcal{Z}_{0}[J]={\rm e}^{\frac{1}{2}\int\!\!\mathrm{d}^{2}x\int\!\!\mathrm{ d}^{2}yJ(\mathbf{x})\Delta_{0}(\mathbf{x}-\mathbf{y})J(\mathbf{y})}\, \tag{37}\]
where \(\Delta_{0}(\mathbf{x}-\mathbf{y})=\langle\phi(\mathbf{x})\phi(\mathbf{y})\rangle\) is the Euclidean propagator of a free scalar field. The propagator is most conveniently written in terms of its Fourier decomposition
\[\Delta_{0}(\mathbf{x})=\!\int\!\!\frac{\mathrm{d}^{2}k}{(2\pi)^{2}}\,\tilde{\Delta }_{0}(\mathbf{k}){\rm e}^{{\rm i}\mathbf{k}\cdot\mathbf{x}},\quad\tilde{\Delta}_{0}(\mathbf{k}) =\frac{1}{\mathbf{k}^{2}+m_{0}^{2}}\,, \tag{38}\]
where the Euclidean momentum \(\mathbf{k}=(k_{0},{\rm k})\) is related to the 2-momentum \(k=(\omega,{\rm k})\) in Minkowski spacetime by \(k_{0}=-{\rm i}\omega\). In an interacting theory \(\mathcal{L}=\mathcal{L}_{0}+\mathcal{L}_{\rm int}\), such as \(\mathcal{L}_{\rm int}=\frac{\lambda_{0}}{4!}\phi^{4}\) in our case, the generating functional can be obtained with the help of the following identity in functional analysis
\[\mathcal{Z}[J]=\frac{\exp\left[-\int\!\!\mathrm{d}^{2}z\mathcal{L}_{\rm int} \left(\frac{\delta}{\delta J(\mathbf{z})}\right)\right]\!\mathcal{Z}_{0}[J]}{\exp \left[-\int\!\!\mathrm{d}^{2}z\mathcal{L}_{\rm int}\left(\frac{\delta}{\delta J (\mathbf{z})}\right)\right]\!\mathcal{Z}_{0}[J]\bigg{|}_{J=0}}. \tag{39}\]
This expression can be expanded in a power series in \(\lambda_{0}\) to the desired perturbative order. The expansion can be graphically represented in terms of Feynman diagrams. Taking into account that the denominator in Eq. (39) cancels out the so-called vacuum diagrams, i.e. those diagrams that do not contain any external source terms, the final expression is a sum of Feynman diagrams, which we label Eq. (40).
In this diagrammatic expansion (40), the crosses \(\times=J(\mathbf{x})\) represent sources, the blobs \(\bullet=\lambda_{0}\) interaction vertices, and the different free propagators obey \(\times\!-\!\bullet=\Delta_{0}(\mathbf{x}-\mathbf{z}_{1})\), \(\circlearrowleft=\Delta_{0}(0)\) for a closed tadpole loop, and \(\bullet\!-\!\bullet=\Delta_{0}(\mathbf{z}_{1}-\mathbf{z}_{2})\). As usual, we recall that one has to integrate over the locations of the interaction vertices, here denoted by \(\{\mathbf{z}_{i}\}\).
### Feynman diagrams and self-consistent equations
The Feynman diagrams in Eq. (40) provide quantum corrections to the \(n\)-point correlation functions of the QFT. For instance, the propagator of the interacting theory, which corresponds to the 2-point function, can be written as
\[\tilde{\Delta}(\mathbf{k})=\frac{1}{\mathbf{k}^{2}+m_{0}^{2}+\Sigma(\mathbf{k})}\, \tag{41}\]
where \(\Sigma(\mathbf{k})\), the self-energy [167; 168; 169], contains the contributions of all 1-particle irreducible diagrams with two external legs. We recall that these diagrams are those that cannot be separated into disconnected pieces by cutting an internal propagator [110]. At order \(\lambda_{0}^{2}\), the self-energy is given by the first, second and fourth diagrams in Eq. (40). The third diagram is not 1-particle irreducible and, thus, does not contribute to \(\Sigma(\mathbf{k})\). The first three diagrams belong to the tadpole type, namely, each loop integral depends on a single internal momentum while being independent of the external momenta.
Evaluating the self-energy non-perturbatively is of course out of reach. It is however possible to resum the tadpole family to all orders in the quartic coupling. The result is encoded in the self-consistent equation
\[\Sigma_{\rm td}=\frac{\lambda_{0}}{2}\!\int\!\!\frac{\mathrm{d}^{2}k}{(2\pi)^ {2}}\,\frac{1}{\mathbf{k}^{2}+m_{0}^{2}+\Sigma_{\rm td}}. \tag{42}\]
This approximation to the self-energy becomes exact for an \(O(N)\) vector model in the limit \(N\to\infty\)[170]. In condensed matter, it corresponds to the Hartree method of mean-field theory [171]. At this point, we must discuss the occurrence of divergences in the QFT (6). In fact, the integral in Eq. (42) contains a UV logarithmic divergence in \(D=1+1\) dimensions, which must be regularized by introducing a cutoff. Since we ultimately want to use this self-consistent resummation of Feynman diagrams to predict changes of the Yukawa-type spin-spin interactions in the trapped-ion chain (29), the regularisation scheme should correspond to a lattice. In the standard approach of lattice field theories, where the spatial derivatives are exchanged for discrete differences (9) that only lead to nearest-neighbor couplings, the continuum propagator appearing in Eq. (41) with the tadpole-resummed self-energy (42), must be substituted by
\[\tilde{\Delta}_{\rm td}(\mathbf{k})=\frac{1}{k_{0}^{2}+\hat{k}^{2}+\mu^{2}}, \tag{43}\]
where the analogue of the spatial momentum is
\[\hat{\rm k}=\frac{2}{a}\sin\left(\frac{\mathrm{k}a}{2}\right). \tag{44}\]
As already mentioned above, the quasi-momentum lies within the first Brillouin zone \(\mathrm{k}=\frac{2\pi}{N_{1}a}n_{1}\) for \(n_{1}\in\{1,\cdots,N_{1}\}\). In the propagator, we have also defined the tadpole-renormalised mass through the following self-consistent equation, sometimes referred to as the gap equation,
\[\mu^{2}=m_{0}^{2}+\Sigma_{\rm td}=m_{0}^{2}+\frac{\lambda_{0}}{2}\!\int\!\! \frac{\mathrm{d}^{2}k}{(2\pi)^{2}}\tilde{\Delta}_{\rm td}(\mathbf{k}). \tag{45}\]
We note that the integrals over quasi-momenta are to be understood as mode sums \(\int\!\frac{\mathrm{dk}}{2\pi}\to\frac{1}{N_{1}a}\sum_{n_{1}}\). In the thermodynamic limit, one sends \(N_{1}\to\infty\), such that \(\mathrm{k}\in[0,\frac{2\pi}{a})\), and the corresponding integral can be evaluated analytically
\[\mu^{2}=m_{0}^{2}+\frac{\lambda_{0}}{4\pi}\frac{1}{\sqrt{1+\frac{1}{4}\mu^{2} a^{2}}}\,\mathsf{K}\!\left(\frac{1}{1+\frac{1}{4}\mu^{2}a^{2}}\right). \tag{46}\]
Here, \(\mathsf{K}(x)=\int_{0}^{\pi/2}\!\mathrm{d}\theta(1-x\sin^{2}\theta)^{-1/2}\) is the complete elliptic integral of the first kind [172], which is finite, showing that the UV divergence is regularised by the non-zero lattice spacing. For our scalar field theory, this is the only UV divergence. In Sec. IV, we will discuss how this self-consistent equation, as well as the expressions that follow, can be adapted to the trapped-ion case, where the dispersion relation (8) includes not only nearest neighbors but also the full dipolar tail. For the moment, however, we continue with the standard Hamiltonian lattice regularization, and the corresponding propagator (43).
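Numerically, the gap equation (46) can be solved by a damped fixed-point iteration. A minimal sketch follows (ours, in lattice units \(a=1\); the values of \(m_{0}^{2}\) and \(\lambda_{0}\) are illustrative, and scipy.special.ellipk follows the same parameter convention as \(\mathsf{K}(x)\) above):

```python
import numpy as np
from scipy.special import ellipk          # K(m) with the convention of Eq. (46)

def gap_equation_T0(m0sq, lam0, a=1.0, mixing=0.5, tol=1e-12):
    """Damped fixed-point iteration of the T = 0 gap equation (46)."""
    musq = 0.1                            # positive seed for the iteration
    for _ in range(10_000):
        x = 1.0 / (1.0 + 0.25 * musq * a**2)
        rhs = m0sq + lam0 / (4 * np.pi) * np.sqrt(x) * ellipk(x)
        if abs(rhs - musq) < tol:
            break
        musq = mixing * musq + (1.0 - mixing) * rhs
    return musq

print(gap_equation_T0(m0sq=-0.1, lam0=2.0))   # illustrative couplings
```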
Let us now discuss the connection of this renormalised mass with the aforementioned phase transition. The physical mass of the interacting QFT is determined by the pole of the quantum-corrected propagator (41) in Minkowski spacetime. The analytic continuation is simply achieved by replacing \(k_{0}\to-\mathrm{i}\omega\), such that the propagator in Minkowski spacetime
\(\tilde{\Delta}(k)\) is obtained from \(-{\rm i}\tilde{\Delta}(-{\rm i}\omega,{\rm k})\to\tilde{\Delta}(k)\). In the lattice version (43), the Lorentz invariance between energy and spatial momentum is broken. The physical mass is then defined as the on-shell energy at vanishing spatial momentum \(m_{\rm P}^{2}=\tilde{\omega}^{2}\), and determined by the pole of the propagator
\[\tilde{\omega}^{2}-m_{0}^{2}-\Sigma(-{\rm i}\tilde{\omega},0)=0. \tag{47}\]
In the mean-field tadpole approximation, the physical mass would thus be \(m_{\rm P}^{2}=\mu^{2}\), which implies that the classical critical point for the \(\mathbb{Z}_{2}\)-breaking phase transition \(m_{0}^{2}=0\) will flow to a different value that can be obtained by solving for \(\mu^{2}=0\). When trying to solve this equation, one faces a different type of divergence in the QFT, namely an infra-red (IR) divergence. Note that the elliptic integral in Eq. (46) inherits the logarithmic IR divergence of the tadpole (42) when \(\mu^{2}=0\), since \(\lim_{x\to 1}\mathsf{K}(x)=\infty\). This prevents criticality from being achieved for any finite negative value of the bare mass \(m_{0}^{2}\), independently of the value of the quartic coupling \(\lambda_{0}\) and the non-zero lattice spacing. Accordingly, the vacuum of the 1+1 \(\lambda\phi^{4}\) theory would always remain in the unbroken phase if one sticks to this mean-field tadpole approximation, which is clearly wrong in light of other studies [140; 141; 142; 143; 144; 145; 146; 147; 148]. We note that this caveat only appears in \((1+1)\) dimensions, and forces us to go beyond the mean-field approximation.
In order to circumvent this problem, and obtain a line of critical points in parameter space \((m_{0}^{2},\lambda_{0})\), further quantum corrections need to be included in the self-energy. We will make the self-energy exact at second order in the quartic coupling by adding the sunrise contribution, which has a diagrammatic representation given by the fourth Feynman diagram in Eq. (40). Contrary to the previous tadpole terms, this contribution depends on the external momenta of the propagator, making a full self-consistent treatment that includes tadpole- and sunrise-like diagrams to all orders of the quartic coupling impractical. We can, however, include all tadpole decorations in the internal propagators of the sunrise diagram, leading in this way to an improved self-energy \(\Sigma(-{\rm i}\omega,{\rm k})\). Paralleling our discussion around Eq. (47), the critical line determined by \(m_{\rm P}^{2}=0\) will thus be defined by the new condition
\[\mu^{2}+\Sigma_{\rm sr}(\mathbf{0})=0\, \tag{48}\]
where tadpole corrections come from Eq. (45), and
\[\Sigma_{\rm sr}(\mathbf{0})=-\frac{\lambda_{0}^{2}}{6}\int\!\!\frac{{\rm d}^{2}k\,{\rm d}^{2}q}{(2\pi)^{4}}\tilde{\Delta}_{\rm td}(\mathbf{k})\tilde{\Delta}_{\rm td}(\mathbf{q})\tilde{\Delta}_{\rm td}(\mathbf{k}+\mathbf{q}). \tag{49}\]
Eventually, we will also be interested in moving out of criticality, since it is the non-zero value of the physical mass \(m_{\rm P}^{2}\) via its associated effective Compton wavelength \(\xi_{\rm eff,P}\), which controls the range of the spin-spin interactions (29). Considering that the effective \(\lambda\phi^{4}\) model is only relevant close to the structural phase transition of the ion chain, we can assume that in the region of interest for the experiment, the physical mass \(m_{\rm P}^{2}\) will be small. In order to solve for the pole of the improved propagator in Eq. (47), it will then suffice to consider
\[\Sigma_{\rm sr}(-{\rm i}\omega,0)\approx\Sigma_{\rm sr}(\mathbf{0})-\left.\frac{ \partial\Sigma_{\rm sr}}{\partial k_{0}^{2}}\right|_{\mathbf{0}}\omega^{2}\, \tag{50}\]
where the Taylor expansion of the sunrise diagram yields
\[\left.\frac{\partial\Sigma_{\rm sr}}{\partial k_{0}^{2}}\right|_{\mathbf{0}}=\frac{\lambda_{0}^{2}}{6}\int\!\!\frac{{\rm d}^{2}p\,{\rm d}^{2}q}{(2\pi)^{4}}\tilde{\Delta}_{\rm td}(\mathbf{p})\tilde{\Delta}_{\rm td}(\mathbf{q})\left(\tilde{\Delta}_{\rm td}^{2}(\mathbf{p}+\mathbf{q})-\left(p_{0}+q_{0}\right)^{2}\tilde{\Delta}_{\rm td}^{3}(\mathbf{p}+\mathbf{q})\right). \tag{51}\]
Close to this pole, the propagator with contributions from all tadpoles and the sunrise diagram has the expression
\[-{\rm i}\tilde{\Delta}(-{\rm i}\omega,0)\to\tilde{\Delta}(\omega,0)\approx\frac{{\rm i}\mathsf{z}}{\omega^{2}-m_{\rm P}^{2}}, \tag{52}\]
where we have rotated back to Minkowski spacetime. Here, one can readily identify the physical mass to be
\[m_{\rm P}^{2}=\mathsf{z}\left(\mu^{2}+\Sigma_{\rm sr}|_{\mathbf{0}}\right). \tag{53}\]
In addition to the additive contribution of these Feynman diagrams to the physical mass, one also observes the appearance of a multiplicative contribution from the residue at the pole
\[\mathsf{z}=\left(1+\frac{\partial\Sigma_{\rm sr}}{\partial k_{0}^{2}}(\mathbf{0}) \right)^{-1}. \tag{54}\]
We thus see that quantum corrections do not only modify the mass of the theory, but also the normalization of the field itself. This is the physical meaning of \(\mathsf{z}\), the so-called wavefunction renormalization. In order to have the field canonically normalized, it is necessary to rescale \(\phi(x)\to\sqrt{\mathsf{z}}\phi(x)\). This explains the appearance of \(\mathsf{z}\) in the physical mass (53).
Let us now discuss the improved prediction of the critical line, which is obtained by solving \(m_{\rm P}^{2}=0\). This is no longer impeded by the IR divergence of the tadpole integrals, as the resummed contributions to the mass \(\mu^{2}\) are no longer required to be zero. In some sense, the lattice regularization provides a cutoff at large momenta that allows us to get finite results in the UV limit, while the addition of the sunrise diagram provides an effective "mass cutoff" at low energies that allows us to get finite results in the IR limit. In summary, the equation that must be solved is given by
\[m_{\rm P}^{2}=\mathsf{z}\left(\mu^{2}-\frac{\lambda_{0}^{2}}{6}\int\!\!\frac{{\rm d}^{2}k\,{\rm d}^{2}q}{(2\pi)^{4}}\tilde{\Delta}_{\rm td}(\mathbf{k})\tilde{\Delta}_{\rm td}(\mathbf{q})\tilde{\Delta}_{\rm td}(\mathbf{k}+\mathbf{q})\right), \tag{55}\]
where the wavefunction renormalization \(\mathsf{z}\) is given by Eqs. (54) and (51), and the tadpole contribution \(\mu^{2}\) is that of Eqs. (45) and (42). Even though Eq. (55) is not a proper self-consistency equation when written in this form, we opt for simplicity and refer to the whole set of equations (45) and (55) as the self-consistency equations. In the following sections, we will present numerical solutions of this set of equations applied to the Yukawa-type interactions in trapped ions. Let us, however, first discuss how the formalism of thermal field theories can account for non-zero temperatures.
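As an illustration of how these equations can be solved in practice, the following sketch (ours, at \(T=0\) and in lattice units \(a=1\); it assumes the function gap_equation_T0 from the previous sketch is in scope, truncates and discretizes the continuous \(k_{0}\) integrals, and keeps the coefficients exactly as printed in Eqs. (49) and (51)) evaluates the physical mass of Eq. (55):

```python
import numpy as np

def physical_mass_sq(m0sq, lam0, N1=30, M=201, K0MAX=30.0):
    """Sketch of Eq. (55) at T = 0. The k0 integral is truncated at |k0| < K0MAX
    and discretized with M points; both are accuracy knobs of this illustration."""
    musq = gap_equation_T0(m0sq, lam0)                 # tadpole mass, Eq. (45)
    k0 = np.linspace(-K0MAX, K0MAX, M)
    dk0 = k0[1] - k0[0]
    k1 = 2 * np.pi * np.arange(N1) / N1
    prop = lambda w, k: 1.0 / (w**2 + (2 * np.sin(k / 2))**2 + musq)

    W, K = np.meshgrid(k0, k1, indexing="ij")          # (M, N1) grids
    D = prop(W, K)                                     # tadpole-dressed propagator
    meas = dk0 / (2 * np.pi * N1)                      # measure per (k0, k1) mode
    sr0 = dsr = 0.0
    for i in range(M):                                 # loop over q, vectorized in k
        for j in range(N1):
            Dkq = prop(W + k0[i], K + k1[j])
            sr0 += D[i, j] * np.sum(D * Dkq)                                 # Eq. (49)
            dsr += D[i, j] * np.sum(D * (Dkq**2 - (W + k0[i])**2 * Dkq**3))  # Eq. (51)
    sigma_sr0 = -lam0**2 / 6 * meas**2 * sr0
    z = 1.0 / (1.0 + lam0**2 / 6 * meas**2 * dsr)      # Eq. (54)
    return z * (musq + sigma_sr0)                      # Eq. (55)

print(physical_mass_sq(m0sq=-0.1, lam0=2.0))
```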
### Non-zero temperature and thermal field theories
As stated in Subsec. II.3, the main goal of this work is to explore the effect of non-zero temperatures in the Yukawa-mediated spin-spin interactions of a trapped-ion quantum simulator (25). We argued that, in the vicinity of a structural
phase transition, the non-linearities will lead to a shift of the physical mass of the effective \(\lambda\phi^{4}\) QFT, which will change the distance decay of the spin-spin couplings (29). Moreover, there can be additional contributions at non-zero temperatures related to the perturbative thermal mass of Eq. (34). In this subsection, we discuss how to generalise the previous self-consistency equations (55) to a non-zero temperature using the formalism of thermal field theories [160, 161].
According to a path integral approach [109] in Euclidean time for a system at thermal equilibrium [173], the generating functional of Eq. (35) corresponds to the partition function of the \(\lambda\phi^{4}\) model in the limit of a vanishing temperature. For non-zero temperatures \(T>0\), the integrals in Eq. (37) must be modified, as the spacetime \(x=(\tau,\mathrm{x})\) is no longer the Euclidean plane. The field in the path integral must fulfill periodic boundary conditions
\[\phi(\tau,\mathrm{x})=\phi(\tau+\beta,\mathrm{x}), \tag{56}\]
where \(\beta=1/T\) is the inverse temperature in natural units, such that the Euclidean plane is effectively compactified into a cylinder \(\mathbb{R}^{2}\mapsto S_{r}\times\mathbb{R}\) of radius \(r=\beta/2\pi\). Given the periodicity of the scalar field (56), the propagator obeys the Kubo-Martin-Schwinger condition [174, 175], namely \(\Delta_{0}(\tau,\mathrm{x})=\Delta_{0}(\tau+\beta,\mathrm{x})\). Accordingly, the transformation to momentum space (38) requires using a Fourier series [160, 161, 176] in the temporal coordinate instead of a Fourier transform
\[\Delta_{0}(\tau,\mathrm{x})=\frac{1}{\beta}\sum_{n_{0}\in\mathbb{Z}}\int\! \frac{\mathrm{d}\mathrm{k}}{2\pi}\ \tilde{\Delta}_{0}(\omega_{n_{0}},\mathrm{k})\mathrm{e}^{ \mathrm{i}(\omega_{n_{0}}\tau+\mathrm{k}\mathrm{x})}, \tag{57}\]
where \(\omega_{n_{0}}=2\pi n_{0}/\beta\) are the bosonic Matsubara frequencies. The important aspect of the Matsubara formalism is that the generating functional of the equilibrium \(n\)-point functions of the thermal field theory has the same functional form as the \(T=0\) case (39), provided one substitutes the frequency integrals by a series in the Matsubara frequencies. Accordingly, one can use the previous diagrammatic results (40), as well as the self-consistency equations (55) with the lattice propagator (43), by substituting: _(i)_ the 2-momentum by \(k\to\left(\frac{2\pi}{\beta}n_{0},\frac{2\pi}{aN_{1}}n_{1}\right)\), where \(n_{0}\in\mathbb{Z}\) and \(n_{1}\in\{1,\cdots,N_{1}\}\); _(ii)_ the Euclidean lattice propagator by the Matsubara propagator \(\Delta_{0}(k)\to\Delta_{0}(\omega_{n_{0}},\mathrm{k})\); and _(iii)_ the momentum integrals by mode sums \(\int\!\frac{\mathrm{d}^{2}k}{(2\pi)^{2}}\to\frac{1}{\beta}\sum_{n_{0}\in\mathbb{Z}}\frac{1}{aN_{1}}\sum_{n_{1}=1}^{N_{1}}\).
Let us illustrate this procedure by considering the lattice-regularised tadpole contribution (42). At finite temperatures, we have
\[\Sigma_{\mathrm{td}}=\frac{\lambda_{0}}{2}\frac{T}{aN_{1}}\!\sum_{n_{0},n_{1 }}\!\frac{1}{(2\pi Tn_{0})^{2}+\hat{\mathrm{k}}^{2}+\mu^{2}}. \tag{58}\]
The Matsubara sum can be performed explicitly, yielding
\[\Sigma_{\mathrm{td}}=\frac{\lambda_{0}}{4N_{1}}\!\sum_{n_{1}=1}^{N_{1}}\frac{ \coth\!\left(\frac{\sqrt{\hat{\mathrm{k}}^{2}+\mu^{2}}}{2T}\right)}{a\sqrt{ \hat{\mathrm{k}}^{2}+\mu^{2}}}. \tag{59}\]
This contribution is the mean field counterpart of Eq. (34), as detailed in Appendix A. We plot in Fig. 3**(a)** the dimensionless quotient \(\Sigma_{\mathrm{td}}/\lambda_{0}\) for \(N_{1}=30\) as a function of the temperature, measured in lattice units, and for the value of the tadpole renormalized mass \(\mu a=1\). The dotted line in that figure corresponds to the zero temperature thermodynamic limit, evaluated in Eq. (46). We observe only a small deviation from the thermodynamic limit already for the moderate number of sites we have chosen.
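For illustration, Eq. (59) amounts to a single finite sum over the spatial modes; a minimal sketch (function name and lattice units \(a=1\) are our choices, reusing the numpy import above):

```python
def tadpole_shift(lam0, T, N1, mu2):
    """Sigma_td of Eq. (59): the Matsubara sum is done in closed form, a = 1."""
    w = np.sqrt(4.0 * np.sin(np.pi * np.arange(1, N1 + 1) / N1) ** 2 + mu2)
    return lam0 / (4.0 * N1) * np.sum(1.0 / (np.tanh(w / (2.0 * T)) * w))
```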
Remarkably, the Matsubara mode sums involved in the sunrise contribution \(\Sigma_{\mathrm{sr}}(\mathbf{0})\) (53) can also be performed analytically. The resulting expression can be found in Eq. (A6) of Appendix A. The dimensionless combination \(\mu^{2}\Sigma_{\mathrm{sr}}(\mathbf{0})/\lambda_{0}^{2}\) is shown in Fig. 3**(b)**, where again a small deviation from the zero temperature thermodynamic limit is found for \(N_{1}=30\). Conversely, the wavefunction renormalization (54) at finite temperature needs to be computed numerically, as no analytical expression for the Matsubara mode sums was found. This
Figure 3: **Tadpole and sunrise mass shifts: (a)** The tadpole renormalised mass \(m_{0}^{2}\to\mu^{2}=m_{0}^{2}+\Sigma_{\mathrm{td}}\) can be expressed in terms of a dimensionless mass shift \(\Sigma_{\mathrm{td}}/\lambda_{0}\) independent of the coupling. We set \(\mu a=1\), and depict both the analytical expression for the \(T=0\) mass shift (46) (black dotted line), obtained in the thermodynamic limit \(N_{1}\to\infty\), and the finite-temperature expression (red dashed line), obtained by performing the Matsubara sum explicitly (59) for a lattice with \(N_{1}=30\) sites. **(b)** The sunrise renormalised mass \(\mu^{2}\to m_{\mathrm{P}}^{2}=\mu^{2}+\Sigma_{\mathrm{sr}}(\mathbf{0})\) is proportional to a dimensionless ratio \(\mu^{2}\Sigma_{\mathrm{sr}}(\mathbf{0})/\lambda_{0}^{2}\) that is, again, independent of the quartic coupling. We compare the exact \(T=0\) result in the thermodynamic limit (black dotted line), discussed in Appendix B, with the finite-temperature expression obtained by performing the Matsubara sums explicitly (A6) (see Appendix A), for a lattice with \(N_{1}=30\) sites (red dashed line).
requires the truncation of the Matsubara sums to a finite number of modes \(n_{0}\in\{-N_{0},\cdots,N_{0}\}\), and thus raises the issue of the convergence with \(N_{0}\). This is studied in Appendix A, where we find a fast convergence that will be important for the efficiency of our numerical approach, even in the typically more demanding low temperature limit.
## IV Critical line and trapped-ion spin-spin couplings
In this section, we will numerically solve the self-consistent equations (45) and (55) at non-zero temperatures. We will obtain results for the standard Hamiltonian lattice discretization of the scalar field in Eqs. (43)-(44), but also introduce the dipolar tail of the dispersion (8) to be able to make explicit predictions for the trapped-ion case.
### Numerical estimate of the critical line
As discussed above, the \(\mathbb{Z}_{2}\)-breaking phase transition in the \(\lambda\phi^{4}\) model is characterised by a classical critical point at \(m_{0}^{2}=0\), which will flow with temperature and quartic coupling due to thermal and quantum effects that shift the pole of the propagator to the physical mass \(m_{0}^{2}\to m_{\mathrm{P}}^{2}\). As a consequence, the classical critical point will become a critical surface in parameter space \((m_{0}^{2}a^{2},\lambda_{0}a^{2},Ta)\) determined by solving the equation \(m_{\mathrm{P}}^{2}(\lambda_{0}a^{2},Ta)=0\). Fixing the temperature to a specific value, we want to obtain the corresponding critical line for the bare mass \(m_{0}^{2}|_{\mathrm{c}}\) as a function of the bare coupling strength \(\lambda_{0}\). This line will separate the symmetry-broken \(m_{0}^{2}<m_{0}^{2}|_{\mathrm{c}}\) from the symmetry-preserved \(m_{0}^{2}>m_{0}^{2}|_{\mathrm{c}}\) phase at the given temperature \(Ta\).
In order to numerically obtain these critical lines for various non-zero temperatures, one needs to impose the condition \(m_{\mathrm{P}}^{2}=0\) in Eq. (55), and solve the self-consistency equations to extract the bare critical parameters \((m_{0}^{2}a^{2},\lambda_{0}a^{2})|_{\mathrm{c}}\). We note that the wavefunction renormalization, contributing multiplicatively to the physical mass (53), does not play any role in the determination of the critical points. Then, the routine followed to compute the critical line reads as follows
Routine R1
1 Select an interval \((\mu_{1}^{2}a^{2},\mu_{2}^{2}a^{2})\) in the \(\mu^{2}a^{2}\) axis
2 For each \(\mu^{2}a^{2}\) in \((\mu_{1}^{2}a^{2},\mu_{2}^{2}a^{2})\) and a given \(Ta\):
3 Impose \(m_{\mathrm{P}}^{2}\!=\!0\) in the lattice counterpart of Eq. (55) using Matsubara sums to obtain \(\lambda_{0}a^{2}\) as a function of \(\mu a\), \(Ta\) and \(N_{1}\)
4 Substitute \(\mu^{2}a^{2}\) in Eq. (45) to obtain \(m_{0}^{2}a^{2}|_{\mathrm{c}}\).
5 Return the list of critical points \(\{(m_{0}^{2}a^{2},\lambda_{0}a^{2})\,|_{\mathrm{c}}\}\)
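A minimal numerical sketch of Routine R1 (our own illustration, in lattice units \(a=1\), reusing `tadpole_shift` from above): for simplicity the sunrise shift is evaluated from the truncated double mode sum of Eq. (A5) rather than the closed form of Eq. (A6), and the critical condition \(m_{\mathrm{P}}^{2}=0\), which by the argument above reduces to \(\mu^{2}+\Sigma_{\mathrm{sr}}(\mathbf{0})=0\), is solved for \(\lambda_{0}\) at each \(\mu^{2}\):

```python
def sunrise_shift0(T, N1, mu2, N0=30):
    """-Sigma_sr(0)/lam0^2 from the truncated double mode sum of Eq. (A5), a = 1."""
    n0 = np.arange(-N0, N0 + 1)
    n1 = np.arange(N1)                          # momentum labels modulo N1
    w2 = 4.0 * np.sin(np.pi * n1 / N1) ** 2 + mu2
    D = 1.0 / ((2.0 * np.pi * T * n0)[:, None] ** 2 + w2[None, :])
    s0 = n0[:, None] + n0[None, :]              # summed Matsubara labels n0 + l0
    s1 = (n1[:, None] + n1[None, :]) % N1       # momentum label wraps around N1
    D3 = 1.0 / ((2.0 * np.pi * T * s0)[:, :, None, None] ** 2
                + w2[s1][None, None, :, :])
    return T ** 2 / (6.0 * N1 ** 2) * np.einsum('ab,cd,acbd->', D, D, D3)

def routine_R1(T, N1, mu2_grid, N0=30):
    """Routine R1: list of critical points (m0^2, lam0) where m_P^2 = 0."""
    line = []
    for mu2 in mu2_grid:
        G_sr = sunrise_shift0(T, N1, mu2, N0)   # positive by construction
        lam0 = np.sqrt(mu2 / G_sr)              # imposes mu^2 - lam0^2 G_sr = 0
        m02 = mu2 - lam0 * tadpole_shift(1.0, T, N1, mu2)   # invert Eq. (45)
        line.append((m02, lam0))
    return line
```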
In Fig. 4, we represent the critical line for different temperatures. For a fixed value of the temperature \(Ta\), the region above the critical line represents a symmetric phase, adiabatically connected to the Klein-Gordon thermal state, whereas the region below is the symmetry-broken phase in which the scalar field acquires a non-zero expectation value and can no longer be adiabatically connected to the Klein-Gordon vacuum. As can be observed in this figure, the critical lines take lower values of the quartic coupling as the temperature increases. This behaviour is consistent with the appearance of a thermal mass (34) and the phenomenon of restoration of symmetry [116; 117]. For a fixed value of the coupling strength \(\lambda_{0}a^{2}\), if one starts at a point \((m_{0}^{2}a^{2},T_{1}a)\) in which the symmetry is broken (i.e. a point below the corresponding critical line of Fig. 4), one may increase the temperature towards \(T_{2}>T_{1}\), such that the corresponding equilibrium state now lies above the critical line, and thus belongs to the symmetry-preserved phase. Let us note that the dependence of the critical line on the quartic coupling is similar to the zero-temperature results obtained using other numerical methods, such as Monte Carlo [140; 143]. If one aims at exploring regimes beyond those of relevance in the trapped-ion realization (see our discussion in Sec. II.3), there are certain limitations of the current approach that are discussed in detail in Appendix C.
Once the critical lines have been derived, a more informative figure for the connection to the trapped-ion case and the Yukawa-mediated interactions discussed in the following section would be a contour plot of the non-zero values of the physical mass \(m_{\mathrm{P}}^{2}a^{2}\) in the plane \((m_{0}^{2}a^{2},Ta)\) at a fixed quartic coupling \(\lambda_{0}a^{2}\). This will be the first step to make a connection with the distance decay of the spin-spin interactions in future sections. Achieving this goal requires incorporating the wavefunction renormalization (54), which contributes multiplicatively to the physical mass (55), into a different numerical
Figure 4: **Critical lines for the parity-breaking phase transition:** We solve numerically the self-consistency equations (45) and (55) for the critical point \(m_{\mathrm{P}}^{2}|_{\mathrm{c}}=0\). We consider a lattice regularization (43) with \(N_{1}=30\) sites and use the analytical Matsubara mode sums discussed in Appendix A along with the Routine R1 described in the text. When the temperature increases, the effect of a thermal mass makes the critical line take lower values for the quartic coupling. For each colored line, the upper region corresponds to the symmetry-preserved ground state (SP-vacuum), whereas the region below corresponds to the ground state with spontaneous breaking of the parity symmetry (SSB-vacuum).
routine
Routine R2
1 Select the interval of dimensionless temperatures \((T_{1}a,T_{2}a)\) and a fixed \(\lambda_{0}a^{2}\) value
2 For each \(Ta\) in \((T_{1}a,T_{2}a)\):
3 Initialize \(\mu^{2}a^{2}=\mu_{0}^{2}a^{2}>0\) and \(m_{\mathrm{P}}^{2}a^{2}=1\)
4 Set \(\varepsilon\)
5 While \(m_{\mathrm{P}}^{2}a^{2}\mathrm{z}^{-1}\geq 0\):
6 Update \(m_{\mathrm{P}}^{2}a^{2}\mathrm{z}^{-1}\) and compute \(m_{0}^{2}a^{2}\) from Eq. (55) and Eq. (45) respectively
7 Compute z using (54)
8 \(\mu\leftarrow\mu-\varepsilon\)
9 Using z, return the \(m_{\mathrm{P}}^{2}a^{2}\) values for \(\mu^{2}a^{2}\) in \((\mu_{0}^{2}a^{2},\mu_{j}^{2}a^{2})\), together with the corresponding \(m_{0}^{2}a^{2}\) values and temperatures in \((T_{1}a,T_{2}a)\)
In practice, the same \(\varepsilon\) variation of \(\mu\) leads to increments of different sizes in \(m_{\mathrm{P}}^{2}a^{2}\) and \(m_{0}^{2}a^{2}\), depending on \(Ta\) and the previous \(m_{\mathrm{P}}^{2}a^{2}\) value. To obtain uniform increments independently of the position in parameter space, we heuristically define an adapted step \(\varepsilon_{i}\propto m_{\mathrm{P},\mathrm{i}-1}^{2}a^{2}/(1+Ta)^{(1/3)}\). Now, we can run a simulation based on this numerical routine to obtain a contour plot of the physical mass. The numerical results are shown in Fig. 5, where the region in parameter space with a non-zero physical mass is coloured according to the scale specified in the rightmost inset. The white region corresponds to the symmetry-broken phase, where the physical mass would become negative, leading to a transition from a single to a double well in the effective potential [137, 138]. This potential, via the self-consistency equations, has quantum corrections stemming from all the tadpole diagrams, as well as perturbative corrections of the sunrise diagram. We can see how the symmetry-broken region changes with temperature for the various quartic couplings. In these contour plots, the restoration of symmetry by increasing temperature becomes very clear, as it corresponds to any vertical line connecting the symmetry-broken and symmetry-preserved phases.
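A sketch of Routine R2 along these lines, reusing the helpers above (lattice units \(a=1\); we read Eq. (55) as \(m_{\mathrm{P}}^{2}=\mathsf{z}\,(\mu^{2}+\Sigma_{\mathrm{sr}}(\mathbf{0}))\), which is our assumption, and the evaluation of \(\mathsf{z}\) from Eq. (54) is left as a pluggable callable defaulting to the free-field value \(\mathsf{z}=1\)):

```python
def routine_R2(lam0, T, N1, mu2_start, eps0=1e-2, N0=30, z_of=lambda mu2: 1.0):
    """Routine R2: scan mu^2 downwards until m_P^2 z^{-1} turns negative.

    `z_of` is a placeholder for the wavefunction renormalization of Eq. (54)."""
    mu2, out = mu2_start, []
    for _ in range(10 ** 5):                                 # hard iteration cap
        mP2_over_z = mu2 - lam0 ** 2 * sunrise_shift0(T, N1, mu2, N0)
        if mP2_over_z < 0.0:
            break
        m02 = mu2 - lam0 * tadpole_shift(1.0, T, N1, mu2)    # Eq. (45)
        out.append((m02, z_of(mu2) * mP2_over_z, T))
        # heuristically adapted step, cf. eps_i ~ m_P^2 a^2 / (1 + T a)^(1/3)
        mu2 -= eps0 * max(z_of(mu2) * mP2_over_z, 1e-8) / (1.0 + T) ** (1.0 / 3.0)
    return out
```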
As explained before, our self-consistent equations are exact to order \(\lambda_{0}^{2}\), but miss many higher-order contributions. It is therefore important to benchmark the performance of our approach. We will consider a specific universal quantity, namely the ratio \(f_{\mathrm{c}}=\lambda_{0}/\mu^{2}|_{\mathrm{c}}\) at the \(T=0\) critical point of the \(\lambda\phi^{4}\) QFT in the continuum, which has also been computed with other numerical approaches. In the continuum, the UV divergence of the tadpole contribution (42) would make the dimensionless ratio \(\lambda_{0}/m_{0}^{2}|_{\mathrm{c}}\) vanish. It is then customary to replace the bare mass by the UV-finite tadpole renormalised mass \(\mu^{2}\) (45). Using our lattice discretization, we can access the critical ratio as a function of the lattice spacing \(f_{\mathrm{c}}(a)\), or, alternatively, as a function of the dimensionless coupling, \(f_{\mathrm{c}}(\lambda_{0}a^{2})\). Its value in the continuum QFT can thus be obtained by extrapolating \(f_{\mathrm{c}}(\lambda_{0}a^{2})\) to vanishing dimensionless coupling strength \(\lambda_{0}a^{2}\to 0\).
To compute \(f_{\mathrm{c}}\), one can make use of Routine R1 with some minor modifications. For a fixed number of spatial lattice sites \(N_{1}\), and using a small-enough interval of values for \(\mu^{2}a^{2}\), one can naively extract the critical ratio by plotting \(\lambda_{0}a^{2}|_{\mathrm{c}}\) vs \(\mu^{2}a^{2}|_{\mathrm{c}}\), and fitting the graph to a linear function in order to extract the slope. We find \(f_{\mathrm{c}}\simeq 20.1\) using \(N_{1}=2000\). As discussed in detail in [177], logarithmic contributions must be taken into account in order to get a more precise estimate. Guided by this work, we employ three logarithmic models to fit our results, as depicted in Fig. 6. As shown in this figure, the three models give the same critical ratio value up to the second decimal place, \(f_{\mathrm{c}}=20.11\). We again use \(N_{1}=2000\), as we appreciate a clear convergence to the continuum (see the inset in Fig. 6). Within our approach, the universal ratio can actually be computed directly in the continuum. From equations (48) and (49), we have \(f_{\mathrm{c}}=\sqrt{6(2\pi)^{4}/I}\) with
\[I=\int\!\!\mathrm{d}^{2}k\mathrm{d}^{2}q\frac{1}{(k^{2}+1)(q^{2}+1)((k+q)^{2} +1)}\;. \tag{60}\]
In Appendix B we evaluate this integral analytically with the help of Feynman parameters, obtaining \(f_{\mathrm{c}}=20.1055\), which certifies the good behaviour of the continuum limit of the lattice discretization.
The quantum critical point of the \(\lambda\phi^{4}\) theory in 1+1 dimensions has been studied with a variety of non-perturbative methods, such as Tensor Networks [177, 178, 179] or Monte Carlo [146], among others [180, 181]. All these works find values near \(f_{\mathrm{c}}\simeq 66.0\) with increasing levels of precision. The deviation of our self-consistent approach with respect to these more-accurate estimates was to be expected, and is common to any self-consistent mean-field treatment of a quantum many-body model that displays a quantum critical point. We can say that the simplicity of our method, which reduces drastically the computational complexity to a solution of self-consistency equations, comes at a price. On the other hand,
Figure 5: **Physical mass for the \(\lambda\phi^{4}\) model on a lattice:** Contour plot for non-zero values of the tadpole and sunrise contributions to the physical mass \(m_{\rm P}^{2}a^{2}\) within the symmetric ground state (coloured region). The physical mass is obtained by solving the self-consistent equations (45) and (55). The numerical solution uses the Matsubara mode sums discussed in Appendix A and the Routine R2, described in the text, considering \(N_{1}=30\) lattice sites. The dashed arrow indicates the phenomenon of symmetry restoration by increasing the temperature.
the interest of our approach lies in being the most economic one capable of coping with the divergences while still detecting the phase transition and studying the phase diagram. This simplicity will be crucial in the next section, where we will apply it to the trapped-ion quantum simulators of spin models [92, 94, 95, 96, 97, 98, 99, 100, 101]. Indeed, the non-standard dispersion relation (8), responsible for the dipolar tail of the spin-spin interactions in Eq. (29), might be difficult to treat accurately with more precise methods, such as the aforementioned tensor networks, which can approximate long-range couplings by using so-called matrix-product operators. Moreover, since we are ultimately interested in dealing with non-zero temperatures, this would further increase the computational complexity of tensor-network methods, requiring again to work with matrix-product operators. Finally, in the context of the trapped-ion implementation, addressing the complete problem of the Yukawa-type spin-spin interactions would require solving the real-time dynamics of the spins locally coupled to the phonons, which evolve on a much slower timescale in comparison to the effective \(\lambda\phi^{4}\) model. This would require simulating long-time dynamics, which is notably difficult for tensor networks, and out of reach for Monte Carlo methods due to the sign problem. Our self-consistent treatment should thus be seen as a proof of concept for the proposal to use thermal effects to study interacting QFTs with trapped-ion simulators.
### Estimates for the trapped-ion quantum simulator
Let us now discuss how the previous results can be connected to the trapped-ion quantum simulators of spin models [92, 94, 95, 96, 97, 98, 99, 100, 101]. In Sec. II, we described in detail how, far from the linear-to-zigzag structural phase transition (15), the spin-spin couplings mediated by the transverse phonons (31) of a trapped-ion chain can be described accurately by the Yukawa-type interactions mediated by a Klein-Gordon field (29) (see the comparison in Fig. 2). As the trap frequencies are modified and one gets closer to the structural phase transition, the Coulomb non-linearities start to play a bigger role, and transform this QFT into the \(\lambda\phi^{4}\) model (6) with an effective speed of light in Eq. (11), and bare parameters in Eqs. (13), (17) and (20). We argued that the specific dispersion relation \(\omega(\mathrm{k})\) for the transverse modes of the trapped-ion chain (8) is very similar to that of a discretized scalar field (10), which underlies the Feynman propagator (43) we used in the solution of the self-consistent equations (55) of the previous subsection. Note that this propagator, as well as the results in Figs. 3-6, have all been obtained using natural units \(\hbar=c=k_{\mathrm{B}}=1\). Moreover, in the lattice discretization at finite temperature, these self-consistency equations (45) and (55) are rewritten in terms of finite mode sums, and depend on dimensionless parameters obtained through a specific power of the lattice constant: \(m_{0}^{2}a^{2},\lambda_{0}a^{2},Ta\).
In the trapped-ion case, the effective speed of light \(c_{\mathrm{t}}\) (11) only appears after the gradient expansion leading to Eq. (6). This gradient expansion cannot account for the branch-cut discontinuity of the dispersion relation (8) when extended to the complex plane, and would thus miss the dipolar part (29) of the Yukawa-mediated interactions. Therefore, rather than setting \(\hbar=c_{\mathrm{t}}=k_{\mathrm{B}}=1\) in the coarse-grained trapped-ion case, it would be better to work with the full phonon propagator prior to the long-wavelength approximation. This requires reformulating the self-consistency equations (45) and (55) in terms of dimensionless trapped-ion parameters, which can no longer be obtained by multiplying the microscopic parameters with a power of the ion lattice spacing \(d\), as we need to use SI units.
In order to find such a formulation, we start by revisiting the tadpole-resummed propagator on the lattice (43). For trapped ions, the tadpole-resummed propagator is analogous to Eq. (43), but has inverse squared-energy dimension
\[\tilde{\Delta}_{\mathrm{td}}(k_{0},\mathrm{k})=\frac{1}{(\hbar k_{0})^{2}+( \hbar\hat{\mathrm{k}})^{2}+(\mu c_{\mathrm{t}}^{2})^{2}}, \tag{61}\]
where \(k_{0}\) has dimensions of inverse time, and will be substituted by the Matsubara frequencies for a non-zero temperature. Therefore, the trapped-ion analogue of the spatial lattice momentum \(\hat{\mathrm{k}}\) in Eq. (44) must also have units of inverse time. Moreover, in the absence of quartic interactions, the effective bare mass in Eq. (13) should appear as a pole in this propagator \(\mu^{2}=m_{0}^{2}\) upon the substitution of \(k_{0}\to-\mathrm{i}\omega\). Taking into account these two conditions, we find that the analogue of the lattice spatial momentum (44) that appears for the nearest-neighbor discretization of the scalar field (9) is
\[\hat{\mathrm{k}}^{2}=\frac{7}{2}\omega_{x}^{2}\left(\frac{l}{d}\right)^{3} \zeta_{N_{1}}(3)-\omega_{x}^{2}\left(\frac{l}{d}\right)^{3}\sum_{r=1}^{3}\frac {4}{r^{3}}\sin^{2}\!\left(\frac{\mathrm{k}dr}{2}\right), \tag{62}\]
where we have used the truncated Riemann zeta in Eq. (14). Accordingly, \(\hat{\mathrm{k}}\) has the desired dimension of inverse time, and
Figure 6: **Critical ratio as a function of \(\lambda_{0}\):** We use Routine R1 and \(N_{1}=2000\) to compute \(\lambda_{0}a^{2}\) and \(f_{c}\) for each \(\mu^{2}a^{2}\) value. Three different logarithmic models are fitted (blue line, green dashed line and red dotted line respectively), obtaining \(f_{1,c}=20.11(4),\quad f_{2,c}\simeq f_{3,c}=20.11(5)\). The inset shows how the the critical ratio \(f_{3,c}\) converges to the continuum for different \(N_{1}\) values to check the validity of the discretization used.
one sees how the dipolar tail of the phonon dispersion relation (8) enters the propagator.
Once the propagator has been identified, we can re-scale the tadpole self-consistency equation (45) with a certain power of the effective speed of light and Planck's constant, such that the equation has the right dimension of energy squared. In SI units, the Matsubara frequencies in Eq. (57) are expressed in terms of \(\omega_{n_{0}}=2\pi n_{0}/\beta\hbar\), where \(\beta=1/k_{\text{B}}T\), such that the trapped-ion tadpole self-consistency equation reads
\[\mu^{2}c_{\text{t}}^{4}=m_{0}^{2}c_{\text{t}}^{4}+c_{\text{t}}^{4}\frac{\lambda_{0}}{2}\frac{k_{\text{B}}T}{dN_{1}}\sum_{n_{0},n_{1}}\tilde{\Delta}_{\text{td}}\left(\omega_{n_{0}},\frac{2\pi}{dN_{1}}n_{1}\right). \tag{63}\]
Considering that the quartic coupling (20) in SI units has dimensions \([\tilde{\lambda}_{0}]=\text{M}^{3}\text{L}^{3}\text{T}^{-2}\), one can check that the above equation (63) has the desired squared-energy dimension. In order to get an equation involving only dimensionless parameters, one can simply divide by the squared energy associated to the motional quanta \((\hbar\omega_{x})^{2}\), which allows us to identify the following dimensionless renormalized parameters
\[\overline{\mu}^{2}=\left(\frac{\mu c_{\text{t}}^{2}}{\hbar\omega_{x}}\right) ^{2},\quad\overline{T}=\frac{k_{B}T}{\hbar\omega_{x}} \tag{64}\]
as well as the following dimensionless bare couplings
\[\overline{m}_{0}^{2} =\left(\frac{m_{0}c_{\text{t}}^{2}}{\hbar\omega_{x}}\right)^{2} =\left(\frac{\omega_{z}}{\omega_{x}}\right)^{2}-\frac{7}{2}\bigg{(}\frac{l}{d }\bigg{)}^{3}\zeta_{N_{1}}(3),\] \[\overline{\lambda}_{0} =\left(\frac{\epsilon_{\text{t}}}{\hbar}\right)^{3}\frac{\lambda _{0}}{\omega_{x}^{2}} =\frac{729\zeta_{N_{1}}(5)}{2}\frac{\hbar}{m_{a}\omega_{d}d^{2}} \bigg{(}\frac{l}{d}\bigg{)}^{\frac{3}{2}}(\eta_{N_{1}}(1))^{-\frac{1}{2}}. \tag{65}\]
Finally, one can rewrite the trapped-ion tadpole self-consistency Eq. (63) in terms of dimensionless quantities as
\[\overline{\mu}^{2}=\overline{m}_{0}^{2}+\frac{\overline{\lambda}_{0}}{2}\frac{\overline{T}}{N_{1}}\sum_{n_{0},n_{1}}\frac{(l/d)^{\frac{3}{2}}\left(\eta_{N_{1}}(1)\right)^{\frac{1}{2}}}{(2\pi\overline{T}n_{0})^{2}+(\hat{\mathrm{k}}/\omega_{x})^{2}+\overline{\mu}^{2}}, \tag{66}\]
which has the same mathematical structure as the lattice self-consistent equation previously found (58). Again, the Matsubara mode sums are to be performed analytically as in (59). The only difference, apart from the pre-factor \(\left(l/d\right)^{3/2}(\eta_{N_{1}}(1))^{1/2}\), is that the part that depends on the spatial momentum \((\hat{\mathrm{k}}/\omega_{x})\) in the propagator (61) now contains the dipolar terms (62). Following this procedure, we can find the trapped-ion analogue of all the self-consistency equations that include the sunrise diagram (55), which now depend on dimensionless parameters that can be numerically adjusted.
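For illustration, Eq. (66) can be solved by a simple fixed-point iteration once the Matsubara sum is performed in closed form as in Eq. (59); in this sketch (names and the choice of solver are ours, not the routine used for the figures) `khat2` is an array of \((\hat{\mathrm{k}}/\omega_{x})^{2}\) values built from Eq. (62), and `prefac` stands for \((l/d)^{3/2}(\eta_{N_{1}}(1))^{1/2}\):

```python
def mu2_bar_tadpole(m02_bar, lam0_bar, T_bar, khat2, prefac, tol=1e-12):
    """Fixed-point iteration of the dimensionless tadpole equation (66)."""
    mu2 = abs(m02_bar) + 1e-6                   # positive starting guess
    N1 = len(khat2)
    for _ in range(10 ** 4):
        W = np.sqrt(khat2 + mu2)                # Omega/omega_x for each mode
        new = m02_bar + lam0_bar * prefac / (4.0 * N1) * np.sum(
            1.0 / (np.tanh(W / (2.0 * T_bar)) * W))
        if abs(new - mu2) < tol:
            return new
        mu2 = new
    return mu2
```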
The numerical routines introduced in the previous subsection can be applied to the trapped-ion case directly. We can now use this numerical simulation to approximate the value of the physical mass (55) as the trap frequencies are modified, and the trapped-ion chain gets closer to the linear-to-zigzag phase transition. With this value, we can estimate the effect of the interactions on the effective Compton wavelength (67) \(\xi_{\text{eff}}\to\xi_{\text{eff,P}}\), as well as on the spin-spin coupling strength \(J_{\text{eff}}\to J_{\text{eff,P}}\). The former is due to the quantum and thermal contributions to the physical mass (53), whereas the latter comes from the field rescaling with the wavefunction renormalization (54). Both expressions would enter in a renormalised version of the spin-spin couplings of Eq. (29), as we recall that these are mediated by the excitations of the self-interacting scalar field, which are controlled by the physical pole and the full propagator. In particular, we find
\[J_{\text{eff,P}}=J_{\text{eff}}\sqrt{\mathsf{z}},\quad\xi_{\text{eff,P}}=\frac{(l/d)^{3/2}(\eta_{N_{1}}(1))^{1/2}}{\sqrt{(m_{\mathrm{P}}c_{\text{t}}^{2}/\hbar\omega_{x})^{2}-(\Delta\omega_{\text{L}}/\omega_{x})^{2}}}d. \tag{67}\]
In this way, one sees that quantum and thermal effects in the \(\lambda\phi^{4}\) model will change the spin-spin interactions of the trapped-ion quantum simulator. This opens a very interesting perspective, allowing future experiments to probe the nature of the fixed point of this effective QFT by measuring the dynamics of the spins. In fact, the distance dependence of the spin-spin couplings has been inferred using various experimental techniques in recent years [93; 95; 96]. Using these techniques while gradually approaching the linear-to-zigzag phase transition would allow one to infer the flow of the critical point, and address universal properties of the QFT in a quantum simulator. For the sake of completeness, we also note that the renormalization of the quartic coupling (20), and the associated four-point functions, would give rise to four- and higher-spin interactions [103]. These are, however, much weaker and negligible in a trapped-ion experiment given the constraints considered in this work (32).
To make the numerical results closer to the trapped-ion language, one can also make use of the Bose-Einstein distribution (33) to obtain a contour plot of the mean number of phonons \(\overline{n}(\pi/d)\) for the relevant zigzag mode. The final result is that of Fig. 7 where, rather than plotting the Compton wavelength as a function of the dimensionless bare mass, we plot it as a function of the radial trap frequency, which is the standard experimental parameter used to control the shape of the ion crystal. We can see in Fig. 7**(a)** how the classical critical point (15) flows with the temperature and with the radial trap frequency (blue dotted line). In the coloured region, which lies well within the symmetry-preserved phase (i.e. linear ion chain), we see how the effective Compton wavelength (67) entering the spin-spin couplings (29) changes as one approaches the critical line. Therefore, using some of the experimental techniques based on probing the real-time dynamics of the spins [93; 95; 96], one can extract the distance decay and test how the effective Compton wavelength gets renormalised by quantum and thermal effects. In Fig. 7**(b)**, we also represent the average number of phonons in the zigzag mode as a function of the temperature and the transverse trap frequency. This sets the target detuning of the laser cooling on the ion crystal used to control the contribution of the thermal masses, and the renormalization of the spin-spin couplings in an experiment.
Note that, in order to comply with the constraint (32) and still have spin-spin coupling strengths that are not too small in comparison to additional experimental sources of noise such as dephasing or motional heating/decoherence, one should avoid getting extremely close to the structural phase transition.
There, the renormalised version of the zigzag mode (13), which is proportional to the physical mass \(m_{\text{P}}\) of the QFT, softens \(\omega_{\text{zz,P}}\to 0\). In light of the constraint in Eq. (32), the laser beatnote \(\Delta\omega_{\text{z}}<\omega_{\text{zz,P}}\) must be very small, leading to very slow spin dynamics. For this reason, our plot in Fig. 7 restricts to the colored regions, and does not consider all the parameters down to the critical line. In fact, the renormalised zigzag frequency, which changes according to Fig. 7**(c)**, is always higher than \(\omega_{\text{zz,P}}/2\pi\geq 318\,\text{kHz}\), leaving enough frequency space for the spin-dependent dipole force to fulfill Eq. (32) and still lead to sufficiently-fast spin dynamics in the 0.5-10ms scale. Altogether, Fig. 7 provides a quantitative prediction of how the range of the spin-spin interactions changes as a function of the temperature, and we recall that this is a direct consequence of the underlying interactions and Feynman diagrams of the coarse-grained \(\lambda\phi^{4}\) QFT, and cannot be accounted for if one truncates the description of the system at the quadratic order for the phonons.
## V Conclusions and outlook
In this manuscript, we have presented a self-consistent approach to estimate non-zero temperature effects in the trapped-ion quantum simulators of spin models. We have argued that the range of the spin-spin interactions mediated by the transverse phonons of the ion chain can be accurately captured by an effective QFT of a real scalar field that has a Yukawa-type coupling to the spins. In the vicinity of a linear-to-zigzag transition of the ion chain, \(\lambda\phi^{4}\) interactions must be included in this QFT, which can modify the nature of the Yukawa interactions through phonon-phonon scattering. In light of the renormalizability of this QFT, these interaction effects can be recast in a renormalization of the bare quartic coupling, bare mass, and wavefunction renormalization. We have argued that the latter two effects yield a renormalization of the range and magnitude of the Yukawa-type spin-spin couplings \(J_{ij}\), which could be inferred from trapped-ion experiments that reconstruct \(J_{ij}\) from the real-time dynamics of the spins. Accordingly, the trapped-ion quantum simulator could be used to probe the renormalization of this paradigmatic QFT.
To find a quantitative prediction of these effects, we have presented a self-consistent approach that resums tadpole-like Feynman diagrams to all orders of the quartic coupling (42). This so-called self-consistent Hartree approximation is afflicted by an infra-red divergence when trying to use it to determine thermal and quantum effects of the critical point of the linear-to-zigzag transition. We have thus extended our approach beyond mean-field theory by also considering the sunrise diagram, and its additive and multiplicative contributions to the renormalized mass of the QFT (55). We have discussed how this self-consistent approach can be applied to the trapped-ion case at non-zero temperatures, which requires using a specific propagator (61) that includes a dipolar regularization (62) of the QFT stemming from a multipole expansion of the Coulomb interactions among the trapped ions. Moreover, we have also discussed how to apply the Matsubara formalism in Euclidean time to account for non-zero temperatures in the experiment, i.e. laser cooling to a non-zero mean phonon number. Using realistic parameters for recent experiments with long ion chains [152], we have been able to derive specific quantitative predictions of thermal effects in the spin-spin interactions, which have been depicted in Fig. 7. As an outlook, we note that these predictions can serve as a guide to future trapped-ion experiments that aim at exploring the present connection between spin-model quantum simulators and this relativistic Yukawa-type problem. We note that the experimental quantum simulation, when working in
Figure 7: **Renormalization of the range of the spin-spin interactions in a trapped-ion chain:****(a)** Contour plots of the effective Compton wavelength \(\xi_{\text{eff,P}}/d\) that controls the range of the Yukawa-type spin-spin interactions. In the expression of the spin-spin couplings \(J_{ij}\) (29), one substitutes \(J_{\text{eff}}\to J_{\text{eff,P}}\) and \(\xi_{\text{eff}}\to\xi_{\text{eff,P}}\), due to the quantum and thermal contributions in Eq. (67). Thermal effects are encoded in the dependence of the effective Compton wavelength on the temperature. In the \(x\) axis, we represent the radial frequency, while the \(y\) axis is reserved for the dimensionless temperature (64). We use analytical Matsubara mode sums and \(N_{1}=30\) spatial points. Experimental parameters for these simulations have been considered according to a string of \({}^{40}\text{Ca}^{+}\) ions [152], fixing the axial trap frequency to \(\omega_{\text{z}}/2\pi=0.45\,\text{MHz}\), and the laser detuning with respect to the qubit transition to \(\Delta\omega_{\text{z}}/2\pi=0.318\,\text{MHz}\). The explored region in \((\omega_{x},\overline{T})\) parameter space avoids getting extremely close to the structural phase transition, here represented by the blue dotted line, since the constraint (32) for red detunings \(\Delta\omega_{\text{z}}<\omega(\mathbf{k})\) would imply very slow spin dynamics (see our discussion in the main text). **(b)** Mean number of phonons \(\overline{n}(\pi/d)=(\mathrm{e}^{\hbar\omega_{\text{zz,P}}/k_{\text{B}}T}-1)^{-1}\), where \(\omega_{\text{zz,P}}\) refers to the zigzag-mode frequency. **(c)** Renormalised zigzag mode frequency \(\omega_{\text{zz,P}}=\overline{m}_{\text{P}}\omega_{x}\).
the long-wavelength regime discussed in this work, will actually go beyond our approximations, and effectively compute the contributions to the Compton wavelength stemming from all possible Feynman diagrams. Moreover, the scaling of this quantity with the experimentally-tunable parameters will unveil the critical exponents of the phase transition of the \(\lambda\phi^{4}\) model, which should correspond to those of the Ising universality class. Finally, we would like to mention that an interesting problem for future study would be to go beyond the schemes for the spin models of the Ising type, and consider further terms that can lead to an effective relativistic QFT of Dirac fermions Yukawa-coupled to the self-interacting scalar field. This trapped-ion quantum simulator would be closer to a lower-dimensional version of the fermion-Higgs sector of the electroweak interactions, and can provide a way to go beyond semi-classical calculations for the fractionalization of charge in fermion-scalar QFTs [182].
###### Acknowledgements.
We acknowledge support from PID2021-127726NB-I00 (MCIU/AEI/FEDER, UE), from the Grant IFT Centro de Excelencia Severo Ochoa CEX2020-001007-S, funded by MCIN/AEI/10.13039/501100011033, and from the CSIC Research Platform on Quantum Technologies PTI-001. P. V. and A. B. acknowledge support from the EU Quantum Technology Flagship grant AQTION under grant number 820495. The project leading to this application/publication has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101114305 ("MILLENION-SGA1" EU Project).
## Appendix A Thermal effects and Matsubara mode sums
In this Appendix, we evaluate the thermal corrections resulting from the tadpole and sunrise diagrams in the lattice regularized \(\lambda\phi^{4}\) QFT. The contribution of the resummed tadpole family to the self-energy is
\[\Sigma_{\text{td}}=\frac{\lambda_{0}}{2}\frac{T}{aN_{1}}\sum_{n_{0},n_{1}}\frac{1}{(2\pi Tn_{0})^{2}+\hat{\text{k}}^{2}+\mu^{2}}. \tag{A1}\]
The sum over the Matsubara frequencies runs on \(n_{0}\in\mathbb{Z}\) and \(n_{1}=1,..,N_{1}\), with \(N_{1}\) the number of sites of the lattice. The lattice analogue of the spatial momentum \(\hat{\text{k}}\) is given in (44). The Matsubara sum can be explicitly performed with the help of Cauchy's theorem [161], obtaining
\[\Sigma_{\text{td}}=\frac{\lambda_{0}}{4N_{1}}\sum_{n_{1}=1}^{N_{1}}\frac{1}{a\omega(n_{1})}\coth\frac{\omega(n_{1})}{2T} \tag{A2}\]
where
\[\omega(n_{1})=\sqrt{\frac{4}{a^{2}}\sin^{2}\frac{\pi n_{1}}{N_{1}}+\mu^{2}}. \tag{A3}\]
This can be rewritten as
\[\Sigma_{\text{td}}=\frac{\lambda_{0}}{2N_{1}}\sum_{n_{1}=1}^{N_{1}}\frac{1}{a\omega(n_{1})}\left(\frac{1}{2}+\frac{1}{\text{e}^{\omega(n_{1})/T}-1}\right). \tag{A4}\]
Being independent of the external momenta, the tadpole diagrams represent a shift of the bare mass. The first term in the parenthesis is the zero temperature contribution. The second term, which is the mean field analogue of (25), contains the thermal effects. As explained in Sec. III.2, the sunrise diagram contributes both to the mass and wave function renormalization. The finite temperature lattice version of the mass shift (49) is given by
\[\Sigma_{\text{sr}}(\mathbf{0})=-\frac{\lambda_{0}^{2}}{6}\frac{T^{2}}{a^{2}N_{1}^{2}}\sum_{n_{0},n_{1}}\sum_{l_{0},l_{1}}\frac{1}{(2\pi Tn_{0})^{2}+\omega(n_{1})^{2}}\frac{1}{(2\pi T(n_{0}+l_{0}))^{2}+\omega(n_{1}+l_{1})^{2}}\frac{1}{(2\pi Tl_{0})^{2}+\omega(l_{1})^{2}}. \tag{A5}\]
Figure 8: **Convergence of tadpole and sunrise Matsubara mode sums for the mass shifts:** Convergence of the Matsubara sums displayed in Fig. 3 as a function of the number of modes \(N_{0}\), both for the tadpole **(a)** and the sunrise contributions **(b)**.
The Matsubara sums can again be performed explicitly, with the result
\[\Sigma_{\text{sr}}(\mathbf{0})=-\frac{\lambda_{0}^{2}T^{2}}{24N_{1}^{2}}\ \sum_{n_{1},l_{1}}\frac{1}{\omega_{1}\omega_{2}\omega_{3}\big{(}\sum_{i}\omega_{i}\big{)}}\left(1+\sum_{i<j}\frac{(\omega_{i}^{2}+\omega_{j}^{2}-\omega_{ij}^{2})\omega_{ij}}{\prod_{k<l}(\omega_{k}+\omega_{l}-\omega_{kl})}\Big{(}\coth\frac{\omega_{i}}{2T}\coth\frac{\omega_{j}}{2T}-1\Big{)}\right)\, \tag{A6}\]
where we have defined \(\omega_{1}=\omega(n_{1})\), \(\omega_{2}=\omega(l_{1})\) and \(\omega_{3}=\omega(n_{1}+l_{1})\), and \(\omega_{ij}=\omega_{k}\) with \(k\neq i,j\) and \(i,j=1,2,3\). The first term in the parenthesis is the zero temperature contribution, and the second one the thermal correction.
Finally, let us address the contribution to the wave function renormalization of the sunrise diagram, Eq. (51). At finite temperature, we have
\[\frac{\partial\Sigma_{\text{sr}}}{\partial k_{0}^{2}}(\mathbf{0})=\frac{\lambda_{0}^{2}}{6}\frac{T^{2}}{a^{2}N_{1}^{2}}\ \sum_{n_{0},n_{1}}\sum_{l_{0},l_{1}}\ \frac{1}{(2\pi Tn_{0})^{2}+\omega(n_{1})^{2}}\frac{1}{(2\pi Tl_{0})^{2}+\omega(l_{1})^{2}}\frac{\omega(n_{1}+l_{1})^{2}}{\big{(}(2\pi T(n_{0}+l_{0}))^{2}+\omega(n_{1}+l_{1})^{2}\big{)}^{3}}. \tag{A7}\]
Unfortunately, we have not found an analytical expression for the Matsubara sums. They have to be evaluated numerically, which implies truncating the sums to a finite domain \(n_{0},l_{0}=-N_{0},\cdots,N_{0}\). It is thus important to analyze the convergence of the truncated sums with \(N_{0}\). For the sake of completeness, we compare in Fig. 8 the numerical evaluation of the tadpole and sunrise mass shifts with the analytical results of Eqs. (A2) and (A6). Whereas the sunrise contribution converges very fast for moderate \(N_{0}\) at the temperatures of interest, an accurate estimate of the tadpole shift requires much larger values. Therefore, incorporating these expressions in the numerical routines that solve the self-consistent equations makes them much more efficient. The result of the numerical evaluation of the wavefunction renormalization contribution (A7) is shown in Fig. 9. As was the case for the sunrise mass shift, the convergence is very good even in the low-temperature limit for moderate values of \(N_{0}\).
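As a quick self-check of these convergence statements, one can compare the truncated tadpole sum against its closed form, reusing the sketches introduced in the main text (our own construction):

```python
def tadpole_truncated(lam0, T, N1, mu2, N0):
    """Eq. (A1) with the Matsubara sum truncated to |n0| <= N0, a = 1."""
    return lam0 / 2.0 * T / N1 * matsubara_propagator(T, N1, mu2, N0).sum()

for N0 in (10, 100, 1000):
    error = tadpole_truncated(1.0, 0.5, 30, 1.0, N0) - tadpole_shift(1.0, 0.5, 30, 1.0)
    print(N0, error)    # the error decays only slowly, like 1/N0
```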
## Appendix B Analytical estimate of the critical ratio
We evaluate here the \(T=0\) critical point of the \(\lambda\phi^{4}\) theory in the continuum. As explained in the main text, it is customary to describe the critical point in terms of the dimensionless ratio \(\lambda_{0}/\mu^{2}\), with \(\mu^{2}\) the UV-finite tadpole renormalized mass (45). Within our self-consistent approach, the critical point is determined by Eqs. (48) and (49)
\[\mu^{2}=\frac{\lambda_{0}^{2}}{6}\!\int\!\!\frac{\mathrm{d}^{2}k\mathrm{d}^{2}q}{(2\pi)^{4}}\frac{1}{k^{2}+\mu^{2}}\frac{1}{q^{2}+\mu^{2}}\frac{1}{(k+q)^{2}+\mu^{2}}. \tag{B1}\]
Rescaling the momenta in the integral \(k,q\to\mu k,\mu q\), we obtain
\[f_{\text{c}}=\left.\frac{\lambda_{0}}{\mu^{2}}\right|_{\text{c}}=\sqrt{\frac{6(2\pi)^{4}}{I}}\, \tag{B2}\]
with
\[I=\int\!\!\mathrm{d}^{2}k\,\mathrm{d}^{2}q\,\frac{1}{k^{2}+1}\frac{1}{q^{2}+1}\frac{1}{(k+q)^{2}+1}. \tag{B3}\]
We evaluate this integral making use of Feynman parameters [110], namely
\[I=\int\mathrm{d}^{2}k\,\mathrm{d}^{2}q\,\frac{1}{k^{2}+1}\int_{0}^{1}\mathrm{d}x\,\mathrm{d}y\,\frac{\delta(1-x-y)}{(q^{2}+k^{2}xy+1)^{2}}. \tag{B4}\]
Integrating over \(q\) we obtain
\[I=\pi\int\mathrm{d}^{2}k\int_{0}^{1}\mathrm{d}x\frac{1}{k^{2}+1}\frac{1}{k^{2}x(1-x)+1}. \tag{B5}\]
Figure 9: **Convergence of the Matsubara mode sums for the wavefunction renormalization:** Convergence of the Matsubara sums as a function of the number of modes \(N_{0}\) for the quantity \(\partial_{k_{0}^{2}}\Sigma_{\text{sr}}(\mathbf{0})/\lambda_{0}^{2}\) appearing in the wavefunction renormalization contribution.
Finally, we use Feynman parameters again to integrate over \(k\), with the result
\[I =\pi^{2}\int_{0}^{1}\mathrm{d}x\mathrm{d}z\frac{1}{1-z(1-x(1-x))}=\] \[=\frac{\pi^{2}}{18}\left(\psi^{1}\bigg{(}\frac{1}{6}\bigg{)}+\psi^{ 1}\bigg{(}\frac{1}{3}\bigg{)}-\psi^{1}\bigg{(}\frac{2}{3}\bigg{)}-\psi^{1} \bigg{(}\frac{5}{6}\bigg{)}\right),\]
where we have introduced the PolyGamma functions defined in terms of derivatives of Euler's gamma function \(\Gamma(z)=\int_{0}^{\infty}\mathrm{d}t\,t^{z-1}\mathrm{e}^{-t}\), namely \(\psi^{n}(z)=\mathrm{d}^{n+1}\log\Gamma(z)/\mathrm{d}z^{n+1}\).
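The closed form can be checked numerically in a few lines (a sketch using scipy, where \(\psi^{1}(z)\) is `polygamma(1, z)`):

```python
import numpy as np
from scipy.special import polygamma

I = np.pi ** 2 / 18.0 * (polygamma(1, 1 / 6) + polygamma(1, 1 / 3)
                         - polygamma(1, 2 / 3) - polygamma(1, 5 / 6))
f_c = np.sqrt(6.0 * (2.0 * np.pi) ** 4 / I)
print(f_c)    # ~ 20.1055, the value quoted in the main text
```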
## Appendix C Critical line crossings
In this Appendix, we describe certain limitations of the current approach that can become important away from the regime of interest discussed in the main text, namely that of large couplings and masses. As can be seen in Fig. 10, the critical lines separating the broken and unbroken phases have crossing points in the \((m_{0}^{2}a^{2},\lambda_{0}a^{2})\) plane. Actually, the critical lines of any two temperatures always cross for sufficiently negative \(m_{0}^{2}\) and large \(\lambda_{0}\). This behavior is an artefact that stems from the approximations underlying our procedure, as not only are many Feynman diagrams discarded in the loop expansion, but the self-consistency resummations also make tadpole-like diagrams prevail upon the rest. Fig. 11 shows that these crossings are however not manifest when the critical lines are plotted as a function of the tadpole renormalized mass \(\mu^{2}\). In Fig. 12, we also present the contour plots of the physical mass as a function of \((\mu^{2}a^{2},Ta)\), which show a very regular behavior for large values of the couplings and masses. On the other hand, when displaying these contour plots in terms of the bare mass, we encounter an unphysical re-entrance of the symmetry-broken phase at low temperatures, which becomes more evident as one increases the quartic coupling (see Fig. 13). This re-entrance is again a consequence of the aforementioned crossings of critical lines.
We now show that equations (45) and (49) indeed allow for pairs of parameters \(\mu_{1}\), \(\mu_{2}\) and \(T_{1}\), \(T_{2}\) such that \(m_{0,1}^{2}=m_{0,2}^{2}\) and \(\lambda_{0,1}=\lambda_{0,2}\). Let us define the dimensionless combinations \(\Gamma_{\mathrm{td}}=\Sigma_{\mathrm{td}}/\lambda_{0}\) and \(\Gamma_{\mathrm{sr}}=-\Sigma_{\mathrm{sr}}(0)/\lambda_{0}^{2}a^{2}\), which are functions of \(\mu a\) and \(Ta\). From (45) and (49), at the crossing we have
\[\left(\frac{\mu_{1}}{\mu_{2}}\right)^{2}=\frac{\Gamma_{\mathrm{sr},1}}{\Gamma_{\mathrm{sr},2}}\;, \tag{C1}\]
together with
\[\mu_{1}^{2}-\lambda_{0}\Gamma_{\mathrm{td},1}=\mu_{2}^{2}-\lambda_{0}\Gamma_{\mathrm{td},2}, \tag{C2}\]
where \(\Gamma_{\mathrm{sr},1}\) and \(\Gamma_{\mathrm{sr},2}\) refer to \(\Gamma_{\mathrm{sr}}(\mu_{1}a,T_{1}a)\) and \(\Gamma_{\mathrm{sr}}(\mu_{2}a,T_{2}a)\), and equivalently for \(\Gamma_{\mathrm{td}}\). Once \(T_{1}\) and \(T_{2}\) are chosen, the first equation determines \(\mu_{2}=\mu_{2}(\mu_{1})\), while the second singles out a value for \(\mu_{1}\). On the contrary, it is immediate to see that crossings are absent when the critical values of the coupling are plotted as a function of \(\mu a\). From Eq. (C1), such a crossing would imply \(\Gamma_{\mathrm{sr},1}=\Gamma_{\mathrm{sr},2}\). Since the sunrise mass shift is a monotonic function of the temperature (see Fig. 3), this condition is never satisfied.
Figure 11: **Critical lines with no crossings:****(a)** We solve the self-consistent equations for the critical point \(m_{\mathrm{P}}^{2}|_{\mathrm{c}}=0\) and plot \(\lambda_{0}a^{2}\) with respect to \(\mu^{2}a^{2}\), which leads to non-crossing critical lines. **(b)** We also plot the bare mass \(m_{0}^{2}a^{2}\) as a function of \(\mu^{2}a^{2}\) to confirm that no crossings occur.
Figure 10: **Critical lines with crossings:** Two critical lines for different temperatures show crossings in the \((m_{0}^{2}a^{2},\lambda_{0}a^{2})\) plane as a consequence of the approximations in our procedure.
Figure 12: **Finite-physical mass of the \(\lambda\phi^{4}\) model on a lattice with respect to \(\mu^{2}a^{2}\):** Contour plot for finite values of the tadpole and sunrise contributions to the physical mass \(m_{\rm P}^{2}a^{2}\), which are obtained by solving the self-consistent equation (55). The numerical solution uses analytical Matsubara mode sums and \(N_{1}=30\) spatial points to calculate the tadpole and sunrise diagrams. The white region corresponds to the symmetry-broken phase, and the coloured region to the symmetric one. As \(\lambda_{0}a^{2}\) increases, the critical line \(m_{\rm P}^{2}=0\) separating both phases is folded to the right. As shown, no crossings occur when plotting with respect to \(\mu^{2}a^{2}\).
Figure 13: **Finite-physical mass of the \(\lambda\phi^{4}\) model on a lattice with respect to \(m_{0}^{2}a^{2}\):** Contour plot for finite values of the tadpole and sunrise contributions to the physical mass \(m_{\rm P}^{2}a^{2}\), which are obtained by solving the self-consistent equation (55). The numerical solution uses analytical Matsubara mode sums and \(N_{1}=30\) spatial points to calculate the tadpole and sunrise diagrams. The white region corresponds to the symmetry-broken phase, and the coloured region to the symmetric one. As \(\lambda_{0}a^{2}\) increases, the critical line \(m_{\rm P}^{2}=0\) separating both phases is shifted to the left. Crossings can be observed at low temperatures.
2307.08747 | Asymptotic Safety Guaranteed at Four Loop | We investigate a family of four-dimensional quantum field theories with
weakly interacting ultraviolet fixed points up to four loop order in
perturbation theory. Key new ingredients are the three loop gauge contributions
to quartic scalar beta functions, which we compute in the
$\overline{\text{MS}}$ scheme for a template $SU(N_c)$ gauge theory coupled to
$N_f$ fundamental fermions and elementary scalars. We then determine fixed
point couplings, field and mass anomalous dimensions, and universal scaling
exponents up to the first three non-trivial orders in a small Veneziano
parameter. The phase diagram and UV-IR connecting trajectories are found and
contrasted with asymptotic freedom. Further, the size of the conformal window,
unitarity, and mechanisms leading to the loss of conformality are investigated.
Our results provide blueprints for concrete 4d non-supersymmetric conformal
field theories with standard model-like field content, and invite further model
building. | Daniel F. Litim, Nahzaan Riyaz, Emmanuel Stamou, Tom Steudtner | 2023-07-17T18:00:03Z | http://arxiv.org/abs/2307.08747v3 | # Asymptotic Safety Guaranteed at Four Loop
###### Abstract
We investigate a family of four-dimensional quantum field theories with weakly interacting ultraviolet fixed points up to four loop order in perturbation theory. Key new ingredients are the three loop gauge contributions to quartic scalar beta functions, which we compute in the \(\overline{\text{MS}}\) scheme for a template \(SU(N_{c})\) gauge theory coupled to \(N_{f}\) fundamental fermions and elementary scalars. We then determine fixed point couplings, field and mass anomalous dimensions, and universal scaling exponents up to the first three non-trivial orders in a small Veneziano parameter. The phase diagram and UV-IR connecting trajectories are found and contrasted with asymptotic freedom. Further, the size of the conformal window, unitarity, and mechanisms leading to the loss of conformality are investigated. Our results provide blueprints for concrete 4d non-supersymmetric conformal field theories with standard model-like field content and invite further model building.
DO-TH 22/13
###### Contents
* I Introduction
* II Asymptotically Safe Gauge Theory
* II.1 Model
* II.2 Veneziano Limit
* II.3 Systematics
* II.4 Fixed Points
* III Computing Beta Functions
* III.1 Computational Strategy
* III.2 Treatment of \(\gamma_{5}\)
* III.3 Consistency Checks
* III.4 Higher Orders
* IV Results
* IV.1 Beta Functions
* IV.2 Anomalous Dimensions
* IV.3 Fixed Point
* IV.4 Scaling Exponents
* IV.5 Bounds from Series Expansions
* IV.6 Unitarity
* IV.7 Scales and Phase Diagram
* V Discussion and Outlook
* A Tensor Structures for Three-Loop Quartic RGEs
* B Finite-\(N\) Beta Functions
* C Gauge-dependent Anomalous Dimensions
## I Introduction
Ultraviolet (UV) fixed points play a central role in the fundamental definition of quantum field theory (QFT). They ensure that theories are UV-complete, meaning well-defined and predictive up to highest energies. This is quite different from effective field theories that tend to break down above a certain energy. Moreover, and much like critical points in systems with continuous phase transitions, fixed points in particle physics also relate to an underlying conformal field theory (CFT). The existence of free UV fixed points, known as asymptotic freedom, has been established long ago [1; 2]. The more recent discovery that high-energy fixed points can also be interacting [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14], known as asymptotic safety [15; 16], has opened up new territory to look for UV-complete extensions of the Standard Model, and for genuinely new phenomena beyond the paradigms of asymptotic freedom or effective field theory [17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36].
A role model for a UV-complete particle theory with a weakly interacting fixed point is given by \(N_{f}\) fermions coupled to \(SU(N_{c})\) gauge fields and elementary scalars through gauge and Yukawa interactions [3]. Crucially, in the regime where asymptotic freedom is absent, quantum fluctuations ensure that the growth of the gauge coupling is countered by Yukawa couplings, leading to an interacting fixed point in the UV (see Fig. 1). In the large-\(N\) limit, the fixed point is under strict perturbative control, and specifics of the theory can be extracted systematically in perturbation theory by using \(\epsilon=N_{f}/N_{c}-11/2\) as a small control parameter. Thus far, critical couplings, universal exponents, and the size of the conformal window have been determined up to the second non-trivial order in \(\epsilon\), including finite \(N\) corrections [3; 13; 8; 17].
In this paper, we extend the study of the UV critical theory to the complete third order in \(\epsilon\). The rationale for this is that while the fixed point occurs for the first time at the leading order in \(\epsilon\)[3], a bound on the UV conformal window \(\epsilon<\epsilon_{\text{max}}\) arises for the first time at the complete second order in \(\epsilon\)[8; 13]. In this context, it has also been noted that \(\epsilon_{\text{max}}\) is numerically small, suggesting that the entire UV conformal window could be within the perturbative domain.1 The validation of this picture
warrants a study up to the third non-trivial order in \(\epsilon\). To achieve this goal, the four-loop gauge, three-loop Yukawa and quartic \(\beta\) functions, and three-loop anomalous dimensions are required as input. Some of these can be extracted from generic expressions for \(\beta\) functions of gauge-Yukawa theories [40; 41]. The missing pieces, however, are the three-loop contributions to scalar \(\beta\) functions containing gauge interactions, which we compute using standard techniques in the \(\overline{\text{MS}}\) scheme, and which is one of the central results of this work. In addition, we provide fixed point couplings and conformal data of the UV critical theory up to cubic order in \(\epsilon\), and look into the loss of conformality, the range of perturbativity, and UV-IR connecting trajectories in comparison with asymptotic freedom.
Footnote 1: The \(\beta\) functions are defined as \(\beta_{i}=\mathrm{d}\alpha_{i}/\mathrm{d}\ln\mu\), with \(\mu\) the renormalisation group scale.
\[\alpha_{x}=\frac{x^{2}\,N_{c}}{(4\pi)^{2}}\,,\qquad\alpha_{u}=\frac{u\,N_{f}}{(4\pi)^{2}}\,,\qquad\alpha_{v}=\frac{v\,N_{f}^{2}}{(4\pi)^{2}}\,, \tag{2}\]
where \(x=g,y\). Notice that the gauge, Yukawa, and single-trace scalar couplings scale linearly, while the double-trace scalar coupling scales quadratically with matter field multiplicity. In the Veneziano limit, any explicit dependence on \((N_{c},N_{f})\) drops out after the rescaling (2), and leaves us with a dependence on \(\epsilon\),
\[\epsilon\equiv\frac{N_{f}}{N_{c}}-\frac{11}{2}\,. \tag{3}\]
Moreover, the parameter (3) becomes continuous in this limit, taking values in the entire range \(\epsilon\in[-\frac{11}{2},\,\infty)\). We are particularly interested in the regime
\[|\epsilon|\ll 1 \tag{4}\]
where it serves as a small control parameter for perturbativity. The virtue of the parameter (3) is that it is proportional to the one-loop coefficient of the gauge \(\beta\) function, which is at the root of perturbatively controlled fixed points in any 4d quantum field theory [4; 5].
This last point can be illustrated by expanding a gauge \(\beta\) function to second loop order,
\[\beta_{g}\big{|}_{\rm null}=\tfrac{4}{3}\epsilon\,\alpha_{g}^{2}+C\,\alpha_{g }^{3}+\mathcal{O}(\epsilon\,\alpha_{g}^{3},\alpha_{g}^{4})\,. \tag{5}\]
If other couplings \(\alpha_{i}\) are present, we project them onto their nullclines (\(\beta_{i}=0\)). The coefficient \(C\), generically of order unity, relates to the gauge two-loop coefficient, possibly modified through Yukawa interactions by the nullcline projection [4]. Consequently, a non-trivial fixed point arises from a cancellation between the parametrically suppressed one-loop term and the two-loop term,
\[\alpha_{g}^{*}=-\frac{4\,\epsilon}{3C}+\mathcal{O}(\epsilon^{2})\,, \tag{6}\]
yielding a power series in the control parameter \(\epsilon\), with higher loop orders contributing subleading corrections in \(\epsilon\).2 If other couplings are present, the nullcline conditions dictate that their fixed points are \(\alpha_{i}^{*}\propto\alpha_{g}^{*}\). We conclude that strict perturbativity of fixed points in non-abelian gauge theories can always be guaranteed for sufficiently small \(\epsilon\to 0\)[4; 5]. For examples of gauge theories where interacting UV fixed points exist non-perturbatively, including away from a Veneziano limit and at large \(\epsilon\), we refer to [14].
Footnote 2: Physicality of the fixed point requires that \(\epsilon\cdot C<0\).
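This cancellation mechanism is easily made explicit. The following one-liner (a minimal sympy sketch for illustration, not the paper's code) solves the truncation of (5) for its roots:

```python
# Sketch (sympy; illustrative): the non-trivial zero of the truncated gauge
# beta function (5) reproduces the fixed point (6) at leading order.
import sympy as sp

eps, C, a = sp.symbols("epsilon C alpha_g")
beta_g = sp.Rational(4, 3)*eps*a**2 + C*a**3      # truncation of Eq. (5)
print(sp.solve(sp.Eq(beta_g, 0), a))              # -> [0, -4*epsilon/(3*C)]
# physicality (alpha_g^* > 0) requires eps*C < 0, cf. footnote 2
```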
### Systematics
A key feature of non-abelian gauge theories coupled to matter is that fixed point couplings \(\alpha_{i}^{*}\) (2) can be systematically expanded as a power series in the small parameter \(\epsilon\)[8]. For our setting, this implies the "conformal expansion" in powers of \(\epsilon\),
\[\alpha_{i}^{*}=\sum_{n=1}^{\infty}\alpha_{i}^{(n)}\,\epsilon^{n}\,,\quad(i=g, y,u,v)\,. \tag{7}\]
The expansion coefficients \(\alpha_{i}^{(n)}\) are determined using perturbation theory. In order to obtain all fixed point couplings (7) accurately up to and including the order \(\epsilon^{n}\), the perturbative loop expansion must be performed up to the loop order \(n+1\) in the gauge, and up to order \(n\) in the Yukawa and quartic \(\beta\) functions, which we refer to as the (n+1)nn approximation [8].3 Ultimately, the reason why the systematics of the perturbatively controlled expansion requires one more loop order in the gauge sector is that the one-loop gauge coefficient is parametrically as large as the gauge two-loop coefficient. This result establishes a link between the perturbative loop expansion and the conformal expansion in \(\epsilon\). The leading order \(\epsilon^{0}\) (LO) relates to the loop order 100, where the running gauge coupling is parametrically slowed down but a fixed point cannot (yet) arise. The next-to-leading order \(\epsilon^{1}\) (NLO), corresponding to 211, offers the first non-trivial order where a fixed point materialises [3], and the next-to-next-to-leading order \(\epsilon^{2}\) (2NLO), corresponding to 322, is the first non-trivial order where bounds on the conformal window arise [8; 13]. In this work, we provide the order \(\epsilon^{3}\) (3NLO) corresponding to the 433 approximation.
Footnote 3: For want of terminology, we denote settings which retain the gauge/Yukawa/quartic \(\beta\) functions up to the \(l/m/n\) loop order as the “lmn approximation.”
### Fixed Points
We briefly recall the weakly interacting fixed points of the theory (1). For \(\epsilon<0\), the theory is asymptotically free [1; 2] and one finds the seminal Caswell-Banks-Zaks IR fixed point [44; 45] with \(\alpha_{g}^{*}>0\) and \(\alpha_{y,u,v}^{*}=0\). The IR fixed point is known to exist within a conformal window \(\epsilon_{\rm min}<\epsilon<0\), analogous to the conformal window in QCD with extra fermions. The upper end is determined by the loss of asymptotic freedom. The fixed point becomes strongly coupled at the lower end \(\epsilon=\epsilon_{\rm min}\). The exact value for \(\epsilon_{\rm min}>-\frac{11}{2}\), however, is not established with high accuracy, see for instance [38; 39] and references therein. Also, in the regime with asymptotic freedom, the theory does not exhibit a perturbatively controlled fixed point with non-trivial Yukawa interactions \(\alpha_{y}^{*}\neq 0\)[4]. These main characteristics are illustrated in Fig. 2.
For \(\epsilon>0\), on the other hand, asymptotic freedom is absent. Then, a UV completion requires the appearance of an interacting UV fixed point. Most importantly, such a phenomenon necessitates a delicate interplay of non-abelian gauge, Yukawa and scalar interactions, and cannot
arise from gauge interactions alone [4; 5]. It then gives rise to a fully interacting UV fixed point (\(\alpha_{g,y,u,v}^{*}\neq 0\)) [3; 17] and a conformal window \(0<\epsilon<\epsilon_{\rm max}\).
This UV fixed point and its renormalisation group (RG) flow in the \((\alpha_{g},\,\alpha_{y})\) plane is shown in Fig. 1. In the vicinity of the UV fixed point, the RG flow is power-law rather than logarithmic with respect to the renormalisation scale \(\mu\),
\[\alpha_{i}=\alpha_{i}^{*}+\sum_{j}c_{i,j}\left(\frac{\mu}{\mu_{0}}\right)^{ \vartheta_{j}}\,, \tag{8}\]
characterised by universal scaling exponents \(\vartheta\). The sign of the critical exponents \(\vartheta_{j}\) in (8) determines whether an RG trajectory connects to the fixed point in the UV or IR, in which case it is called relevant or irrelevant, respectively. While there are three irrelevant eigendirections, only one RG trajectory reaches the fixed point in the UV. Thus, asymptotic safety is established along a one-dimensional submanifold in parameter space. Emanating from the UV fixed point, the two outgoing trajectories lead either to IR freedom, or towards a strongly coupled regime with either confinement or an interacting conformal fixed point. For \(\epsilon>\epsilon_{\rm max}\), the UV fixed point disappears and the theory is described by an effective field theory in the UV, and a free theory in the IR; see Fig. 2 for an illustration of these features.
## III Computing beta functions
It is the central aim of this work to find and study the renormalisation group flow for the theory (1) at the complete 3NLO order in the conformal expansion, which corresponds to the 433 approximation. It requires four-loop \(\overline{\rm MS}\)\(\beta\) functions for the gauge coupling \(g\), as well as the three-loop ones for the Yukawa coupling \(y\), and the scalar quartic \(u\) and \(v\). Generic \(\beta\) functions for the gauge and Yukawa couplings have been obtained in Refs. [40; 41] using Weyl consistency conditions at 432 [46],4 while the fully general quartic \(\beta\) functions are available at two-loop order [48; 49]. These results are conveniently accessible via software packages such as RGBeta[50] and FoRGEr[51]. Moreover, quartic and Yukawa contributions to the three-loop \(\beta\) functions for the single- and double-trace quartic couplings \(u\) and \(v\) have been determined in Refs. [52; 53]. Therefore, the only missing pieces for a complete 433 analysis are the three-loop contributions to \(\beta_{u}\) and \(\beta_{v}\) containing gauge interactions. Their computation is the main task of this section.
Footnote 4: Note that our model (1) is CP-even and cannot generate an additive \(\beta\) function to its topological angle; thus, the caveat raised in Ref. [47] regarding the Weyl consistency condition does not apply.
### Computational Strategy
We have conducted a complete computation of all scalar, fermion, vector-boson, and ghost two-point functions, gauge and Yukawa vertex three-point functions, and scalar four-point functions up to three-loop order. This allows us to compute the \(\overline{\rm MS}\) counterterms that determine all \(\gamma\) and \(\beta\) functions, including the missing three-loop results for the single- and double-trace quartic scalar couplings \(\beta_{u,v}\). While we are ultimately interested in the Veneziano limit, our computations have been conducted for finite \(N_{f}\) and \(N_{c}\).
The calculation has been achieved using the framework MaRTIn [54], which has been extended to three-loop order for this purpose. All Feynman diagrams are generated using QGRAF [55] and further evaluated in FORM[56]. Overall, almost 33,500 diagrams have been processed. To distinguish UV and IR poles, we employ the technique of infrared rearrangement (IRA) [57; 58]. For convenience, we choose the scalar mass in Eq. (1) to be zero and expand each propagator (with integration momentum \(p\)) recursively with a universal mass parameter \(m_{\rm IRA}\)
\[\frac{1}{(p-q)^{2}}=\frac{1}{p^{2}-m_{\rm IRA}^{2}}+\frac{2\,p\!\cdot\!q-p^{2} }{p^{2}-m_{\rm IRA}^{2}}\frac{1}{(p-q)^{2}}\,.\]
Finite terms with a sufficiently negative degree of divergence are dropped systematically. In order to cancel subdivergences in two- and three-loop diagrams, counterterms for scalar and vector-boson masses proportional to \(m_{\rm IRA}^{2}\) are introduced, while this is not necessary for ghosts or fermions [59]. In the end, all pole terms should be independent of \(m_{\rm IRA}\) and logarithms thereof, which is a non-trivial consistency check of the result. We apply tensor and integration by parts reduction techniques [58] and the program LiteRed[60; 61] is utilised to reduce all remaining three-loop scalar vacuum integrals to a set of masters [62; 63].
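The expansion identity above can be verified in a few lines. The following sympy sketch (an illustration, not the paper's code) treats the Minkowski products as formal scalars, for which the algebra is identical:

```python
# Quick symbolic check (sympy; illustrative, not the paper's code) of the
# infrared-rearrangement identity above. Minkowski products are treated as
# formal scalars: p*q stands for p.q, p**2 and q**2 for the squared momenta.
import sympy as sp

p, q, m = sp.symbols("p q m")
lhs = 1/(p**2 - 2*p*q + q**2)                     # 1/(p-q)^2 written out
rhs = 1/(p**2 - m**2) + (2*p*q - q**2 - m**2)/(p**2 - m**2)*lhs
print(sp.simplify(lhs - rhs))                     # -> 0
```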
### Treatment of \(\gamma_{5}\)
Moreover, we would like to comment on the treatment of \(\gamma_{5}\), as its naive definition
\[\{\gamma_{5},\gamma^{\mu}\}=0,\qquad\gamma_{5}=\frac{i}{4!}\varepsilon_{\mu \nu\rho\sigma}\,\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma} \tag{9}\]
Figure 2: Main characteristics of the theory (1) as a function of the Veneziano parameter \(\epsilon\). In the UV, we indicate whether the theory is asymptotically free, safe, or UV-incomplete and described by an effective field theory. In the IR, we indicate whether the theory achieves confinement, IR freedom, or an interacting conformal fixed point.
with the 4-dimensional Levi-Civita symbol \(\varepsilon\) is in conflict with the dimensional regularisation procedure. In fact this treatment is algebraically inconsistent. In our case, the inconsistencies and ambiguities regarding the \(\gamma_{5}\) treatment can only arise starting at three loops when contracting two different terms \(\propto\operatorname{tr}\left(\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{ \sigma}\gamma_{5}\right)\) or with traces of more \(\gamma\) matrices [64], e.g., from diagrams in Fig. 3. As observed in Ref. [46], such terms are only generated if for each closed fermion line \(\ell\) with \(n_{g}^{(\ell)}\) gauge-vertex insertions and \(n_{y}^{(\ell)}\) Yukawa-vertex insertions
\[2\,n_{g}^{(\ell)}+n_{y}^{(\ell)}\geq 5\,. \tag{10}\]
This constraint cannot be satisfied for scalar two-point functions at three-loop order. There is a single set of scalar four-point diagrams at three loops where Eq. (10) is fulfilled. These are diagrams containing two fermion loops with \(n_{y}^{(1)}=n_{g}^{(1)}=n_{y}^{(2)}=n_{g}^{(2)}=2\), as depicted in Fig. 3. Each fermion loop in these diagrams features its own loop momentum, which can be integrated over independently from the rest of the diagram. External momenta can be set to zero as the \(1/\varepsilon\) UV pole terms relevant for computing the \(\beta\) functions are independent of them. In the end, the \(\gamma\) matrices along each fermion line either carry Lorentz indices from the gauge boson propagators, or are contracted with the third integration momentum exchanged between the loops. In either case, there are insufficient independent Lorentz indices and momenta feeding into each fermion trace to form a tensor \(\propto\operatorname{tr}\left(\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma}\gamma_{5}\right)\)[59]. Hence, three-loop quartic \(\beta\) functions, which represent the main novel result of our computation, cannot depend on the \(\gamma_{5}\) scheme and can be treated without inconsistencies within the semi-naive \(\gamma_{5}\) scheme employed in this work and discussed below. We extract the four-loop gauge and three-loop Yukawa \(\beta\) functions from literature results [40; 41]. In this case it is known that all potential \(\gamma_{5}\) ambiguities are fixed due to Weyl consistency conditions [65; 66; 46].
In order to deal with \(\gamma_{5}\) in our calculation, we employ the semi-naive scheme [67; 59] with
\[\{\gamma_{5},\gamma^{\mu}\}=0,\qquad\gamma_{5}=\frac{i}{4!}\widetilde{ \varepsilon}_{\mu\nu\rho\sigma}\,\gamma^{\mu}\gamma^{\nu}\gamma^{\rho}\gamma^{\sigma} \tag{11}\]
where \(\widetilde{\varepsilon}\) is a \((4-2\varepsilon)\)-dimensional, completely antisymmetric tensor which satisfies the identity
\[\widetilde{\varepsilon}^{\mu_{1}\nu_{1}\rho_{1}\sigma_{1}}\widetilde{ \varepsilon}_{\mu_{2}\nu_{2}\rho_{2}\sigma_{2}}=-\delta^{[\mu_{1}}_{\,\,\,[ \mu_{2}}\,\delta^{\nu_{1}}_{\,\,\,\nu_{2}}\,\delta^{\rho_{1}}_{\,\,\,\rho_{2}} \,\delta^{\sigma_{1}]}_{\,\,\,\sigma_{2}]}+\mathcal{O}(\varepsilon)\,. \tag{12}\]
In exactly four space-time dimensions, \(\widetilde{\varepsilon}\) is the Levi-Civita symbol and the naive definition in Eq. (9) is recovered. Slightly away from the integer dimension, at \(d=4-2\varepsilon\), \(\widetilde{\varepsilon}\) deviates by terms \(\mathcal{O}(\varepsilon)\) from the Levi-Civita case. Hence, the otherwise four-dimensional identity in Eq. (12) picks up corrections \(\mathcal{O}(\varepsilon)\). The exact shape of these \(\mathcal{O}(\varepsilon)\) corrections is irrelevant for the calculation of counterterms as long as Eq. (12) is only applied in terms that are already finite or only contain a single pole \(\frac{1}{\varepsilon}\). We have verified that this is indeed the case in our calculation. Finally, we would like to mention that poles due to non-hermitian field strength renormalisation tensors are absent as the flavour symmetry is unbroken [68; 69; 70; 71].
### Consistency Checks
Overall, the computation agrees at finite \(N_{f,c}\) with generic literature results [40; 41; 48; 49; 72; 73; 74; 75; 76] at 432 as well as previous calculations for 433 in the gaugeless limit [52; 53]. To cross-check the gauge contributions, we have extended the basis of tensor structures for the general scalar \(\gamma\) and quartic \(\beta\) functions [53] to account for gauge interactions among fermions (while keeping the scalars uncharged). Details can be found in App. A. Each tensor structure in the general \(\beta\) functions has a universal coefficient that can be determined by comparing the corresponding renormalisation group equations of suitable literature results. In our case, we have utilised the three-loop data for the Higgs self-interactions in the SM [77; 78; 59; 79] with \(g_{1}=g_{2}=0\), as well as a QED-like gauge-Yukawa theory with a real scalar singlet [80]. All references employ the same semi-naive \(\gamma_{5}\) scheme. The literature models are compatible with the generalised lagrangian (11). All relevant parts of their scalar quartic \(\beta\) functions, mass and field anomalous dimensions can be computed using the prescription (14) and (15), up to a number of model-independent coefficients. Comparing these results with the explicit computations of [77; 78; 59; 80] yields relations among those coefficients. Although not all coefficients can be fixed, the data is sufficient to obtain the complete quartic \(\beta\) functions for the theory (1) by using the formalism of (14) and (15). We find full agreement with our explicit calculation at finite \(N_{f,c}\).
### Higher Orders
In order to advance the conformal expansion to 4NLO (544 approximation), the complete 5-loop gauge as well as 4-loop Yukawa and quartic \(\beta\) functions are required. Partial results are available from QCD-like theories [81; 82; 83; 84; 85; 86], and from purely scalar theories [87; 52]. What is missing, however, are the crucial contributions from Yukawa interactions, the coupling that mediates between the gauge and scalar sectors. It is well-known that Yukawa interactions are key for the primary existence of the fixed point [3; 4; 5; 88], and their contributions are therefore expected to be equally important at higher orders.
As 544 requires the computation of 4-point functions, it is prudent to employ infrared rearrangement by massive propagators as demonstrated in this work. Some tools for this endeavour have already been developed, see for instance [89; 90; 91; 92; 93; 94] and references therein. However, and given the limitations of the semi-naive algorithm, the main new complication will be the consistent treatment of \(\gamma_{5}\). Notice that up until now this has not been an issue in
QCD-like or purely scalar theories. Also, while at 432 all \(\gamma_{5}\)-ambiguities have been removed using Weyl consistency conditions [65; 46], it is far from evident that the same can be achieved at higher orders. For starters, this would require the formulation of a basis for generalised 543 and 654 \(\beta\) functions, which in itself is a massive undertaking. We also point out that a complete basis for the general quartic \(\beta\) function at three loops does not yet exist. These ambitious endeavours are left for future work.
## IV Results
In this section, we summarise our results for \(\beta\) functions and anomalous dimensions, and determine fixed points and universal scaling dimensions up to the third non-trivial order in the Veneziano parameter. We also discuss aspects of unitarity, bounds on the conformal window, and the phase diagram and UV-IR connecting trajectories in comparison with asymptotic freedom.
### Beta Functions
In this section we list the \(\beta\) functions in the loop expansion
\[\beta_{i}\equiv\frac{\mathrm{d}\alpha_{i}}{\mathrm{d}\ln\mu}=\sum_{\ell=1}^{ \infty}\beta_{i}^{(\ell)}, \tag{13}\]
with \(i=g,\,y,\,u,\,v\). The new pieces with respect to the previous analysis [8] are the four-loop contributions to the gauge \(\beta_{g}^{(4)}\), the three-loop contribution to the Yukawa \(\beta_{y}^{(3)}\), and the three-loop contributions to the scalar \(\beta\) functions \(\beta_{u,v}^{(3)}\). Specifically,
\[\begin{split}\beta_{g}^{(1)}&=\tfrac{4}{3}\epsilon\,\alpha_{g}^{2}\,,\\ \beta_{g}^{(2)}&=\left(25+\tfrac{26}{3}\epsilon\right)\alpha_{g}^{3}-\tfrac{1}{2}\left(11+2\epsilon\right)^{2}\alpha_{y}\alpha_{g}^{2}\,,\\ \beta_{g}^{(3)}&=\left(\tfrac{701}{6}+\tfrac{53}{3}\epsilon-\tfrac{112}{27}\epsilon^{2}\right)\alpha_{g}^{4}-\tfrac{27}{8}\left(11+2\epsilon\right)^{2}\alpha_{y}\alpha_{g}^{3}+\tfrac{1}{4}\left(20+3\epsilon\right)\left(11+2\epsilon\right)^{2}\alpha_{y}^{2}\alpha_{g}^{2}\,,\\ \beta_{g}^{(4)}&=-\left[\tfrac{14731}{72}+550\zeta_{3}+\left(\tfrac{123473}{324}+\tfrac{1808}{9}\zeta_{3}\right)\epsilon+\left(\tfrac{21598}{243}+\tfrac{56}{3}\zeta_{3}\right)\epsilon^{2}+\tfrac{260}{243}\epsilon^{3}\right]\alpha_{g}^{5}\\ &\qquad+\tfrac{1}{48}\left(11+2\epsilon\right)^{2}\left[\left(-107+432\zeta_{3}+\tfrac{758}{3}\epsilon\right)\alpha_{y}\alpha_{g}^{4}+3\left(647-48\zeta_{3}+92\epsilon\right)\alpha_{y}^{2}\alpha_{g}^{3}\right]\\ &\qquad+\left(11+2\epsilon\right)^{2}\left[3\,\alpha_{u}^{2}-\left(\tfrac{875}{16}+\tfrac{179}{12}\epsilon+\tfrac{11}{12}\epsilon^{2}\right)\alpha_{y}^{2}\right]\alpha_{y}\alpha_{g}^{2}-\tfrac{5}{4}\left(11+2\epsilon\right)^{3}\alpha_{u}\alpha_{y}^{2}\alpha_{g}^{2}\,.\end{split} \tag{14}\]
We note that irrational coefficients \(\propto\zeta_{3}\) arise for the first time at four loops. Further, the quartic coupling \(\alpha_{u}\) makes its first appearance at four loops, as it must.5 This influence of the scalar sector is channelled through the Yukawa sector, which itself is supplemented by three-loop results
Footnote 5: Had the scalars been charged under the gauge symmetry, contributions would have appeared at three loops.
\[\begin{split}\beta_{y}^{(1)}&=\left(13+2\epsilon\right)\alpha_{y}^{2}-6\,\alpha_{g}\alpha_{y}\,,\\ \beta_{y}^{(2)}&=-\tfrac{1}{8}\left(35+2\epsilon\right)\left(11+2\epsilon\right)\alpha_{y}^{3}+\left(49+8\epsilon\right)\alpha_{g}\alpha_{y}^{2}\\ &\quad-4\left(11+2\epsilon\right)\alpha_{u}\alpha_{y}^{2}-\tfrac{1}{6}\left(93-20\epsilon\right)\alpha_{g}^{2}\alpha_{y}+4\,\alpha_{u}^{2}\alpha_{y}\,,\\ \beta_{y}^{(3)}&=\left(\tfrac{17413}{64}+\tfrac{2595}{32}\epsilon+\tfrac{59}{38}\epsilon^{2}\right)\alpha_{y}^{4}-\tfrac{1}{2}\left(118+19\epsilon\right)\left(11+2\epsilon\right)\alpha_{g}\alpha_{y}^{3}\\ &\quad+6\left(8+\epsilon\right)\left(11+2\epsilon\right)\alpha_{u}\alpha_{y}^{3}-\left[\tfrac{1217}{16}+198\zeta_{3}+\tfrac{1}{8}\epsilon\left(893+288\zeta_{3}+136\epsilon\right)\right]\alpha_{g}^{2}\alpha_{y}^{2}\\ &\quad+2\left(11+2\epsilon\right)\alpha_{g}\alpha_{u}\alpha_{y}^{2}+5\left(\tfrac{5}{2}+\epsilon\right)\alpha_{u}^{2}\alpha_{y}^{2}-8\,\alpha_{u}^{3}\alpha_{y}\\ &\quad+\left[\tfrac{641}{6}+132\zeta_{3}+\tfrac{\epsilon}{27}\left(1947+648\zeta_{3}+70\epsilon\right)\right]\alpha_{g}^{3}\alpha_{y}\,.\end{split} \tag{15}\]
In the quartic sector, the gauge-dependent terms \(\propto\alpha_{g}\) and \(\propto\alpha_{g}^{2}\) are computed here for the first time. These terms must vanish for \(\alpha_{y}=0\), which decouples the scalar sector from the gauge fields. This is indeed manifest in the evolution
Figure 3: Scalar four-point diagrams that fulfil Eq. (10), but can still be treated without inconsistencies within the naïve \(\gamma_{5}\) scheme as argued in Ref. [59] and discussed in the main text.
of both the single- and double-trace quartics. The former reads
\[\begin{split}\beta_{u}^{(1)}&=8\,\alpha_{u}^{2}+4\, \alpha_{y}\alpha_{u}-\left(11+2\epsilon\right)\alpha_{y}^{2}\,,\\ \beta_{u}^{(2)}&=-24\,\alpha_{u}^{3}-16\,\alpha_{y} \alpha_{u}^{2}-3\left(11+2\epsilon\right)\alpha_{y}^{2}\alpha_{u}+10\,\alpha_{ g}\alpha_{y}\alpha_{u}+\left(11+2\epsilon\right)^{2}\alpha_{y}^{3}-2\left(11+2 \epsilon\right)\alpha_{g}\alpha_{y}^{2}\\ \beta_{u}^{(3)}&=104\,\alpha_{u}^{4}+34\,\alpha_{u}^ {3}\alpha_{y}+\left(889+166\epsilon\right)\alpha_{u}^{2}\alpha_{y}^{2}-\tfrac{1 }{8}\left(\tfrac{11}{2}+\epsilon\right)^{2}\left(21-26\epsilon\right)\alpha_{y }^{4}\\ &\qquad-\left(\tfrac{2953}{16}+\tfrac{315}{8}\epsilon\right)\left( 11+2\epsilon\right)\alpha_{u}\alpha_{y}^{3}-\left(102-96\zeta_{3}\right)\alpha_ {u}^{2}\alpha_{y}\alpha_{g}\\ &\qquad+\tfrac{1}{4}\left(11+2\epsilon\right)\left(149-240\zeta_{ 3}\right)\alpha_{u}\alpha_{y}^{2}\alpha_{g}-\tfrac{1}{4}\left(11+2\epsilon \right)^{2}\left(5-24\zeta_{3}\right)\alpha_{y}^{3}\alpha_{g}\\ &\qquad+\left(\tfrac{13}{4}-8\epsilon\right)\alpha_{u}\alpha_{y} \alpha_{g}^{2}+\tfrac{1}{8}\left(11+2\epsilon\right)\left(23+20\epsilon\right) \alpha_{y}^{2}\alpha_{g}^{2}\,.\end{split} \tag{16}\]
Note the absence of a term \(\propto\alpha_{y}\alpha_{g}^{3}\), which can easily be understood diagrammatically. As expected in the planar large-\(N\) limit [95], the double-trace quartic does not enter any \(\beta\) function other than its own, namely
\[\begin{split}\beta_{v}^{(1)}&=12\,\alpha_{u}^{2}+16\,\alpha_{u}\alpha_{v}+4\,\alpha_{v}^{2}+4\,\alpha_{y}\alpha_{v}\,,\\ \beta_{v}^{(2)}&=-96\,\alpha_{u}^{3}-40\,\alpha_{u}^{2}\alpha_{v}-24\,\alpha_{y}\alpha_{u}^{2}-32\,\alpha_{y}\alpha_{u}\alpha_{v}-8\,\alpha_{y}\alpha_{v}^{2}+4\left(11+2\epsilon\right)\alpha_{u}\alpha_{y}^{2}\\ &\qquad-3\left(11+2\epsilon\right)\alpha_{v}\alpha_{y}^{2}+10\,\alpha_{g}\alpha_{y}\alpha_{v}+\left(11+2\epsilon\right)^{2}\alpha_{y}^{3}\,,\\ \beta_{v}^{(3)}&=12\,\alpha_{v}^{2}\alpha_{u}^{2}+480\,\alpha_{v}\alpha_{u}^{3}+(772+384\zeta_{3})\alpha_{u}^{4}+66\,\alpha_{v}\alpha_{u}^{2}\alpha_{y}+192\,\alpha_{u}^{3}\alpha_{y}\\ &\qquad+\left(\tfrac{427}{2}+41\epsilon\right)\alpha_{v}^{2}\alpha_{y}^{2}+\left(788+152\epsilon+96\zeta_{3}\left(\tfrac{11}{2}+\epsilon\right)\right)\alpha_{v}\alpha_{u}\alpha_{y}^{2}\\ &\qquad+\left(\tfrac{1985}{2}+187\epsilon+192\zeta_{3}\left(\tfrac{11}{2}+\epsilon\right)\right)\alpha_{u}^{2}\alpha_{y}^{2}\\ &\qquad-4\left(\tfrac{11}{2}+\epsilon\right)\left(105+22\epsilon+24\zeta_{3}\left(\tfrac{11}{2}+\epsilon\right)\right)\alpha_{u}\alpha_{y}^{3}\\ &\qquad-\tfrac{1}{8}\left(\tfrac{11}{2}+\epsilon\right)\left(1545+374\epsilon\right)\alpha_{v}\alpha_{y}^{3}-\left(\tfrac{11}{2}+\epsilon\right)^{2}\left(73+10\epsilon\right)\alpha_{y}^{4}\\ &\qquad-9\left(17-16\zeta_{3}\right)\alpha_{u}^{2}\alpha_{y}\alpha_{g}-\left(204-192\zeta_{3}\right)\alpha_{v}\alpha_{u}\alpha_{y}\alpha_{g}-\left(51-48\zeta_{3}\right)\alpha_{v}^{2}\alpha_{y}\alpha_{g}\\ &\qquad+8\left(11+2\epsilon\right)\left(7-9\zeta_{3}\right)\alpha_{u}\alpha_{y}^{2}\alpha_{g}+\tfrac{1}{4}\left(11+2\epsilon\right)\left(149-240\zeta_{3}\right)\alpha_{v}\alpha_{y}^{2}\alpha_{g}\\ &\qquad+\tfrac{1}{2}\left(11+2\epsilon\right)^{2}\left(-1+12\zeta_{3}\right)\alpha_{y}^{3}\alpha_{g}+\left(\tfrac{13}{4}-8\epsilon\right)\alpha_{v}\alpha_{y}\alpha_{g}^{2}+6\left(11+2\epsilon\right)^{2}\alpha_{y}^{2}\alpha_{g}^{2}\,,\end{split} \tag{17}\]
where it only appears to order \(\propto\alpha_{v}^{2}\). This leads to pair-wise fixed-point solutions that differ only in \(\alpha_{v}^{*}\), and which may merge at some value of \(\epsilon\) and disappear into the complex plane; see the sketch below. Finite-\(N\) corrections to (14)-(17) are more lengthy and can be found in App. B.
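To make the pair of double-trace solutions explicit, the following minimal sympy sketch (illustrative code, not part of the paper's toolchain) solves the one-loop part of (17) for \(\alpha_{v}\), anticipating the leading-order values of \(\alpha_{y}\) and \(\alpha_{u}\) derived in Sec. IV below:

```python
# Minimal sympy sketch (not the paper's code): the one-loop part of (17) is a
# quadratic in alpha_v, giving the pair of double-trace fixed point solutions.
import sympy as sp

eps = sp.symbols("epsilon", positive=True)
av = sp.symbols("alpha_v")
ay = 4*eps/19                          # alpha_y at leading order, cf. (26)
au = (sp.sqrt(23) - 1)/19*eps          # alpha_u at leading order, cf. (27)

beta_v1 = 12*au**2 + 16*au*av + 4*av**2 + 4*ay*av   # one-loop terms of (17)
roots = sp.solve(sp.Eq(beta_v1, 0), av)
print(sorted(float(r.subs(eps, 1)) for r in roots))
# -> [-0.872..., -0.137...] in units of eps: alpha_{v-}^* and alpha_v^*; only
#    the larger root (cf. (28) and (29)) corresponds to a stable vacuum.
```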
### Anomalous Dimensions
Next, we provide some results for physically meaningful anomalous dimensions for mass and field strength renormalisation. The scalar squared mass \(m^{2}\) in (1) corresponds to the only bilinear field operator that does not violate local or global symmetries. Its gauge-independent anomalous dimension
\[\gamma_{m^{2}}=\frac{\mathrm{d}\ln m^{2}}{\mathrm{d}\ln\mu}=\sum_{\ell=1}^{ \infty}\gamma_{m^{2}}^{(\ell)}\,, \tag{18}\]
cannot be obtained from our loop computation, since we have chosen \(m^{2}=0\) for convenience. Thus, its counterterm is tainted by contributions from the gauge boson IRA mass. Instead, we make use of the general \(\beta\) functions for the quartic interactions, and employ the dummy field trick [96; 49; 76] to obtain mass \(\beta\) functions. At three loops, we rely on the ansatz of tensor structures detailed in App. A, with the incomplete set of coefficients extracted from the literature, see Sec. III. This information is sufficient to obtain
\[\begin{split}\gamma_{m^{2}}^{(1)}&=8\,\alpha_{u}+4\, \alpha_{v}+2\,\alpha_{y}\,,\\ \gamma_{m^{2}}^{(2)}&=-20\,\alpha_{u}^{2}-8\,\alpha_{v} \alpha_{y}-16\,\alpha_{u}\alpha_{y}+5\,\alpha_{y}\alpha_{g}\\ &\quad-\tfrac{3}{2}(11+2\epsilon)\alpha_{y}^{2}\,,\\ \gamma_{m^{2}}^{(3)}&=240\,\alpha_{u}^{3}+12\,\alpha_{v} \alpha_{u}^{2}+33\,\alpha_{u}^{2}\alpha_{y}\\ &\quad+\tfrac{1}{2}(427+82\epsilon)\alpha_{v}\alpha_{y}^{2}\\ &\quad+(394+264\zeta_{3}+76\epsilon+48\zeta_{3}\epsilon)\alpha_{u} \alpha_{y}^{2}\\ &\quad+3(16\zeta_{3}-17)(2\,\alpha_{u}+\alpha_{v})\alpha_{y}\alpha_ {g}\\ &\quad-\tfrac{1}{32}(11+2\epsilon)(1545+374\epsilon)\alpha_{y}^{3} \\ &\quad-\tfrac{1}{8}(11+2\epsilon)(240\zeta_{3}-149)\alpha_{y}^{2} \alpha_{g}\\ &\quad+\tfrac{1}{8}(13-32\epsilon)\alpha_{y}\alpha_{g}^{2}\,, \end{split} \tag{19}\]
while anomalous dimensions of other scalar bilinear operators violating global symmetries cannot be determined.
As for the fermions, a Dirac mass term \(m_{\psi}\,\overline{\psi}\psi\) breaks the global symmetry but leaves the gauge symmetry intact.
Its anomalous dimension can be extracted from the generic Yukawa \(\beta\) function up to three loops [40; 41], again using the dummy field trick. Employing the notation
\[\gamma_{m_{\psi}}=\frac{\mathrm{d}\ln m_{\psi}}{\mathrm{d}\ln\mu}=\sum_{\ell=1} ^{\infty}\gamma_{m_{\psi}}^{(\ell)}\,, \tag{20}\]
the results read
\[\begin{split}\gamma_{m_{\psi}}^{(1)}&=\tfrac{1}{2}( 11+2\epsilon)\alpha_{y}-3\,\alpha_{g}\,,\\ \gamma_{m_{\psi}}^{(2)}&=-\tfrac{1}{16}(11+2 \epsilon)(23+2\epsilon)\alpha_{y}^{2}+2(11+2\epsilon)\alpha_{y}\alpha_{g}\\ &\quad\quad-\tfrac{1}{12}(93-20\epsilon)\alpha_{g}^{2}\,,\\ \gamma_{m_{\psi}}^{(3)}&=-\tfrac{11}{4}(11+2 \epsilon)\alpha_{u}^{2}\alpha_{y}+(11+2\epsilon)^{2}\alpha_{u}\alpha_{y}^{2}\\ &\quad\quad+\big{(}\tfrac{13387}{128}+\tfrac{219}{64}\epsilon+ \tfrac{49}{32}\epsilon^{2}-\tfrac{3}{16}\epsilon^{3}\big{)}\,\alpha_{y}^{3}\\ &\quad\quad-\tfrac{1}{8}(11+2\epsilon)(477-48\zeta_{3}+76 \epsilon)\alpha_{y}^{2}\alpha_{g}\\ &\quad\quad-\tfrac{1}{16}(11+2\epsilon)(113+288\zeta_{3}+136 \epsilon)\alpha_{y}\alpha_{g}^{2}\\ &\quad\quad+\big{(}\tfrac{641}{12}+66\zeta_{3}+\tfrac{649}{18} \epsilon+12\zeta_{3}\epsilon+\tfrac{35}{27}\epsilon^{2}\big{)}\,\alpha_{g}^{3} \,.\end{split} \tag{21}\]
Furthermore, a renormalisation procedure of all fields \(X\) has been conducted via the substitution
\[X_{\text{bare}}=\sqrt{Z_{X}}\,X\,. \tag{22}\]
These field strength renormalisation factors \(Z_{X}\) contain counterterms and imply anomalous dimensions
\[\gamma_{X}=\frac{\mathrm{d}\ln\sqrt{Z_{X}}}{\mathrm{d}\ln\mu}=\sum_{\ell=1}^{ \infty}\gamma_{X}^{(\ell)}\,. \tag{23}\]
Note that all factors \(Z_{X}\) are just multiplicative numbers as the global symmetries remain intact. This excludes any ambiguities stemming from antihermitian parts of anomalous-dimension matrices [68; 69; 70; 71]. However, field strength anomalous dimensions \(\gamma_{X}\) are in general gauge dependent and thus unphysical. The scalar field anomalous dimension \(\gamma_{\phi}\) represents a notable exception, as its fixed point value is part of the CFT data. Unsurprisingly, we find it to be gauge independent up to three loop order
\[\begin{split}\gamma_{\phi}^{(1)}&=\alpha_{y}\,,\\ \gamma_{\phi}^{(2)}&=2\,\alpha_{u}^{2}+\tfrac{5}{2} \,\alpha_{y}\alpha_{g}-\tfrac{3}{4}\,(11+2\epsilon)\,\alpha_{y}^{2}\,,\\ \gamma_{\phi}^{(3)}&=-4\,\alpha_{u}^{3}-\tfrac{15}{ 2}\alpha_{u}^{2}\alpha_{y}+\tfrac{5}{2}(11+2\epsilon)\alpha_{u}\alpha_{y}^{2}\\ &\quad\quad+\tfrac{1}{64}(183+10\epsilon)(11+2\epsilon)\alpha_{y} ^{3}\\ &\quad\quad-\tfrac{1}{16}(48\zeta_{3}-5)(11+2\epsilon)\alpha_{y} ^{2}\alpha_{g}\\ &\quad\quad+\tfrac{1}{16}(13-32\epsilon)\alpha_{y}\alpha_{g}^{2}\,. \end{split} \tag{24}\]
As for the other fields, anomalous dimensions in \(R_{\xi}\) gauge are collected in App. C.
### Fixed Point
With \(\beta\) functions available at the complete 433 order, we are now in a position to determine interacting fixed points accurately up to complete cubic order in the Veneziano parameter \(\epsilon\). Complete sets of coefficients up to quadratic order have previously been found in [8] (see also [3]).
Using the expansion (7), and solving \(\beta_{i}(\alpha_{j}^{*})=0\) systematically as a power series in \(\epsilon\), we find for the gauge coupling coefficients
\[\begin{split}\alpha_{g}^{(1)}&=\tfrac{26}{57}\,,\\ \alpha_{g}^{(2)}&=\tfrac{23\left(75245-13068\sqrt{23}\right)}{370386}\,,\\ \alpha_{g}^{(3)}&=\tfrac{353747709269}{2406768228}-\tfrac{663922754}{22284891}\sqrt{23}+\tfrac{386672}{185193}\zeta_{3}\,.\end{split} \tag{25}\]
Note that \(\zeta_{3}\) arises for the first time in the cubic coefficient. Similarly, for the Yukawa coupling we obtain
\[\begin{split}\alpha_{y}^{(1)}&=\tfrac{4}{19}\,,\\ \alpha_{y}^{(2)}&=\tfrac{43549}{20577}-\tfrac{2300}{6859}\sqrt{23}\,,\\ \alpha_{y}^{(3)}&=\tfrac{2893213181}{44569782}-\tfrac{96807908}{7428297}\sqrt{23}+\tfrac{4576}{6859}\zeta_{3}\,.\end{split} \tag{26}\]
The single- and double-trace quartic scalar couplings give rise to the coefficients
\[\alpha_{u}^{(1)} =\tfrac{\sqrt{23}-1}{19}\,, \tag{27}\] \[\alpha_{u}^{(2)} =\tfrac{365825\sqrt{23}-1476577}{631028}\,,\] \[\alpha_{u}^{(3)} =\tfrac{5173524931447\sqrt{23}-24197965967251}{282928976136}-\tfrac{416(\sqrt{23}-12)}{6859}\zeta_{3}\]
and
\[\begin{split}\alpha_{v}^{(1)}&=\tfrac{\sqrt{20+6\sqrt{23}}-2\sqrt{23}}{19}\,,\\ \alpha_{v}^{(2)}&=\tfrac{-643330\sqrt{23}+2506816}{631028}+\tfrac{452563\sqrt{23}-1542518}{315514\sqrt{20+6\sqrt{23}}}\,,\\ \alpha_{v}^{(3)}&=\tfrac{442525351896048-249223363466258\sqrt{23}}{282928976136(307+60\sqrt{23})}\\ &\quad+\tfrac{(12283416037083-26761631049822\sqrt{23})\sqrt{20+6\sqrt{23}}}{282928976136(307+60\sqrt{23})}\\ &\quad+\tfrac{659988864\,\zeta_{3}\,(942-338\sqrt{23}+39\sqrt{2-(529426+583581\sqrt{23})})}{282928976136(307+60\sqrt{23})}\,,\end{split} \tag{28}\]
respectively. Numerically, the expansions read
\[\begin{split}\alpha_{g}^{*}&=\phantom{-}0.456\,\epsilon+0.781\,\epsilon^{2}+6.610\,\epsilon^{3}+24.137\,\epsilon^{4}\\ \alpha_{y}^{*}&=\phantom{-}0.211\,\epsilon+0.508\,\epsilon^{2}+3.322\,\epsilon^{3}+15.212\,\epsilon^{4}\\ \alpha_{u}^{*}&=\phantom{-}0.200\,\epsilon+0.440\,\epsilon^{2}+2.693\,\epsilon^{3}+12.119\,\epsilon^{4}\\ \alpha_{v}^{*}&=-0.137\,\epsilon-0.632\,\epsilon^{2}-4.313\,\epsilon^{3}-24.147\,\epsilon^{4}\,,\end{split} \tag{29}\]
where we have neglected subleading corrections \(\propto\epsilon^{5}\). All coefficients up to and including \(\propto\epsilon^{3}\) remain unchanged even if higher loops are included. To indicate the trend beyond the strict 433 approximation, we also show the incomplete next-order coefficients \(\propto\epsilon^{4}\) that will receive as-of-yet unknown corrections at order 544. At the preceding loop order 322, for example, the incomplete contributions \(\propto\epsilon^{3}\) accounted for \(60-85\%\) of the complete cubic coefficients at order 433 [8]. We note from (29) that corrections for all couplings at any order arise with the same sign,
and that the cubic coefficients are almost an order of magnitude larger than the quadratic ones.
Finally, we note that since \(\beta_{v}\) is quadratic in \(\alpha_{v}\) to any loop order in perturbation theory [95], there also exists a second fixed point solution in the double-trace sector with \(\alpha_{v-}^{*}\leq\alpha_{v}^{*}\) and the coordinates for \(\alpha_{g,y,u}^{*}\) unchanged [17; 3]. This second solution, however, is unphysical in that it leads to an unstable vacuum [17; 3].
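As a simple cross-check of the expansion (29), the truncated system can also be solved numerically. The following sketch (numpy/scipy; illustrative code, not the paper's toolchain) uses only the 211 terms of (14)-(17) quoted above and recovers the leading coefficients:

```python
# Minimal numerical cross-check (illustrative): solve beta_i = 0 in the 211
# approximation, i.e. the two-loop gauge and one-loop Yukawa/quartic terms of
# (14)-(17), and compare with the leading coefficients of (29).
import numpy as np
from scipy.optimize import fsolve

def betas(c, eps):
    ag, ay, au, av = c
    bg = (4/3)*eps*ag**2 + (25 + 26*eps/3)*ag**3 - 0.5*(11 + 2*eps)**2*ay*ag**2
    by = (13 + 2*eps)*ay**2 - 6*ag*ay
    bu = 8*au**2 + 4*ay*au - (11 + 2*eps)*ay**2
    bv = 12*au**2 + 16*au*av + 4*av**2 + 4*ay*av
    return [bg, by, bu, bv]

eps = 1e-3
guess = eps * np.array([0.456, 0.211, 0.200, -0.137])   # LO of (29)
alpha_star = fsolve(betas, guess, args=(eps,))
print(alpha_star / eps)    # ~ [0.456, 0.211, 0.200, -0.137] up to O(eps)
```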
### Scaling Exponents
Universal critical exponents are obtained as the eigenvalues of the stability matrix
\[M_{ij}=\left.\frac{\partial\beta_{i}}{\partial\alpha_{j}}\right|_{\alpha= \alpha^{*}} \tag{30}\]
and can equally be expanded as a power series in the Veneziano parameter,
\[\vartheta_{i}=\sum_{n=1}^{\infty}\epsilon^{n}\vartheta_{i}^{(n)}. \tag{31}\]
The stability matrix factorises because the double-trace coupling does not couple back into the single-trace couplings in the Veneziano limit. Quantitatively, we then find a single relevant and three irrelevant eigenvalues,
\[\vartheta_{1}<0<\vartheta_{2}<\vartheta_{3}<\vartheta_{4}\,, \tag{32}\]
and the UV critical surface due to canonically marginal interactions is one-dimensional, with \(\vartheta_{3}\) the isolated eigenvalue for the double-trace quartic.
For \(\vartheta_{1}\), the expansion starts out at quadratic order and is accurate up to and including the fourth order,
\[\begin{split}\vartheta_{1}^{(1)}&=0\,,\\ \vartheta_{1}^{(2)}&=-\frac{104}{171}\,,\\ \vartheta_{1}^{(3)}&=\frac{2296}{3249}\,,\\ \vartheta_{1}^{(4)}&=\frac{1405590649319}{15643993482}-\frac{15630102884}{869110749}\sqrt{23}+\frac{1546688}{555579}\zeta_{3}\,.\end{split} \tag{33}\]
The irrelevant directions start out at linear order and are accurate up to the cubic order in \(\epsilon\). We find
\[\begin{split}\vartheta_{2}^{(1)}&=\frac{52}{19}\,, \\ \vartheta_{2}^{(2)}&=\frac{136601719-22783308\sqrt{23}}{4 094823}\,,\\ \vartheta_{2}^{(3)}&=-\frac{11906415214466858}{117078 859819806}+\frac{93098590593718400}{44802295975923}\sqrt{23}\,,\end{split} \tag{34}\]
as well as
\[\begin{split}\vartheta_{3}^{(1)}&=\frac{8}{19}\sqrt{ 20+6\sqrt{23}}\,,\\ \vartheta_{3}^{(2)}&=\frac{4(-1682358+410611\sqrt{23} )}{157757\sqrt{20+6\sqrt{23}}}\,,\\ \vartheta_{3}^{(3)}&=2\frac{96845792758245\sqrt{23} +8579855232(19847+6564\sqrt{23})\zeta_{3}}{35366122107(307+60\sqrt{23})\sqrt{ 20+6\sqrt{23}}}\\ &\quad-2\frac{16512472540856}{35366122017(307+60\sqrt{23})\sqrt{ 20+6\sqrt{23}}}\,,\end{split} \tag{35}\]
and finally
\[\begin{split}\vartheta_{4}^{(1)}&=\frac{16}{19} \sqrt{23}\,,\\ \vartheta_{4}^{(2)}&=-\frac{44492672}{1364941}+\frac{ 727993948}{31393643}\sqrt{23}\,,\\ \vartheta_{4}^{(3)}&=\frac{2(-174067504271892880236+3 7418532792608300581\sqrt{23})}{278706225801048183}\,.\end{split} \tag{36}\]
Numerically, the expansion coefficients read
\[\begin{split}\vartheta_{1}&=-0.608\,\epsilon^{2}+0.707\,\epsilon^{3}+6.947\,\epsilon^{4}+4.825\,\epsilon^{5}\\ \vartheta_{2}&=\phantom{-}2.737\,\epsilon+6.676\,\epsilon^{2}+22.120\,\epsilon^{3}+102.55\,\epsilon^{4}\\ \vartheta_{3}&=\phantom{-}2.941\,\epsilon+1.041\,\epsilon^{2}+5.137\,\epsilon^{3}-62.340\,\epsilon^{4}\\ \vartheta_{4}&=\phantom{-}4.039\,\epsilon+9.107\,\epsilon^{2}+38.646\,\epsilon^{3}+87.016\,\epsilon^{4}\,.\end{split} \tag{37}\]
up to subleading corrections in \(\epsilon\). We recall that all coefficients up to order \(\epsilon^{4}\) for \(\vartheta_{1}\) and up to order \(\epsilon^{3}\) for \(\vartheta_{2,3,4}\) remain unchanged even if higher loops are included, and that the new coefficients from the order \(\mathbf{433}\) are about \(\mathcal{O}(4-9)\) times larger than those from the preceding order \(\mathbf{322}\). Once more, to indicate the trend beyond \(\mathbf{433}\), we also show the incomplete next-order coefficient.
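The ordering (32) and the leading coefficients in (37) can also be obtained numerically from the stability matrix (30). The sketch below (scipy; same illustrative 211 truncation and caveats as the previous sketch) builds the Jacobian by finite differences:

```python
# Numerical illustration (211 truncation, illustrative): eigenvalues of the
# stability matrix (30), reproducing the ordering (32) and the leading
# coefficients of (37).
import numpy as np
from scipy.optimize import fsolve

def betas(c, eps):
    ag, ay, au, av = c
    return np.array([
        (4/3)*eps*ag**2 + (25 + 26*eps/3)*ag**3 - 0.5*(11 + 2*eps)**2*ay*ag**2,
        (13 + 2*eps)*ay**2 - 6*ag*ay,
        8*au**2 + 4*ay*au - (11 + 2*eps)*ay**2,
        12*au**2 + 16*au*av + 4*av**2 + 4*ay*av,
    ])

def stability_matrix(c, eps, h=1e-7):
    M = np.zeros((4, 4))
    for j in range(4):
        dc = np.zeros(4); dc[j] = h
        M[:, j] = (betas(c + dc, eps) - betas(c - dc, eps)) / (2*h)
    return M

eps = 1e-2
cstar = fsolve(betas, eps*np.array([0.456, 0.211, 0.200, -0.137]), args=(eps,))
theta = np.sort(np.linalg.eigvals(stability_matrix(cstar, eps)).real)
print(theta / eps)
# -> one small negative eigenvalue (theta_1 ~ -0.608 eps^2) and three positive
#    ones near 2.737, 2.941 and 4.039, the linear coefficients in (37)
```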
### Bounds from Series Expansions
Next, we exploit the expansions of fixed point couplings and exponents to estimate the size of the conformal window \(\epsilon\leq\epsilon_{\text{max}}\), focusing on the coefficients which are unambiguously determined up to order \(\mathbf{433}\).
In order to satisfy vacuum stability, the quartic couplings must obey the conditions \(0\leq\alpha_{u}\) and \(0\leq\alpha_{w}\equiv\alpha_{u}+\alpha_{v}\)[17; 97]. The former is always satisfied, as can be seen from (29). Using the exact fixed point couplings for the latter, we find the series expansion
\[\alpha_{w}^{*}=0.063\,\epsilon-0.192\,\epsilon^{2}-1.620\,\epsilon^{3}+ \mathcal{O}(\epsilon^{4})\,. \tag{38}\]
Corrections arise with a sign opposite to the leading term. At order \(\epsilon^{2}\), this implies \(\epsilon\leq\epsilon_{\text{max}}\), with \(\epsilon_{\text{max}}\approx 0.327\)[13; 8]. At order \(\epsilon^{3}\), the bound tightens by more than a factor of two, \(\epsilon_{\text{max}}\approx 0.147\). As an estimate for higher order corrections in \(\epsilon\), we also employ a Padé resummation by writing (38) as \(\alpha_{w}^{*}=\frac{A\epsilon+B\epsilon^{2}}{1+C\epsilon}\) (and similarly for other couplings).6 Using the Padé approximant suggests that higher order effects tighten the constraint even further, \(\epsilon_{\text{max}}\approx 0.087\). Overall, we conclude that the series expansion (38) indicates a loss of vacuum stability in the range
Footnote 6: Notice that the loop order \(\mathbf{433}\) is the first perturbative order where resummation techniques can be applied.
\[\epsilon_{\text{max}}\approx 0.087-0.146\,. \tag{39}\]
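For concreteness, the matching behind these numbers can be reproduced in a few lines (plain Python; an illustrative sketch). The ansatz for \(\alpha_{w}^{*}\) is the one quoted above; applying the analogous construction to \(\vartheta_{1}/\epsilon^{2}\) below is our reading of the "as before" step in the next paragraph:

```python
# Compact sketch (illustrative) of the Pade step: matching
# alpha_w^* = (A*eps + B*eps^2)/(1 + C*eps) to the series (38);
# the bound eps_max follows from the zero of the numerator.
a1, a2, a3 = 0.063, -0.192, -1.620    # series coefficients from (38)
C = -a3 / a2                           # from matching the eps^3 term
A, B = a1, a2 + a1*C                   # from the eps and eps^2 terms
print(-A / B)                          # -> ~0.0871, cf. (39)

# The same construction applied to theta_1/eps^2 with coefficients from (37):
t0, t1, t2 = -0.608, 0.707, 6.947
Ct = -t2 / t1
Bt = t1 + t0*Ct
print(-t0 / Bt)                        # -> ~0.0910, cf. (40)
```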
We now turn to the expansion of scaling exponents, (37). From the explicit expressions up to order \(\mathbf{433}\), we
notice that the series expansions for the exponents \(\vartheta_{2}\), \(\vartheta_{3}\) and \(\vartheta_{4}\) are monotonic, with same-sign corrections to the leading order at every order. However, the relevant scaling exponent \(\vartheta_{1}\) has all higher-order contributions with a sign opposite to the leading one. An overall change of sign is indicative of a collision of the UV fixed point with an IR fixed point. We estimate \(\epsilon_{\rm max}\) from solving \(\vartheta_{1}=0\) and reproduce the result \(\epsilon_{\rm max}\approx 0.860\) at order \(\epsilon^{3}\)[8; 13]. The newly established coefficient at order \(\epsilon^{4}\) now tightens the constraint by roughly a factor of three, \(\epsilon_{\rm max}\approx 0.249\). Using a Padé approximant as before, we find an even tighter estimate \(\epsilon_{\rm max}\approx 0.091\). Overall, the series expansion indicates that the conformal window terminates due to a fixed point merger in the range
\[\epsilon_{\rm max}\approx 0.091-0.249\,. \tag{40}\]
Next, we ask whether the fixed point can disappear due to a merger in the double-trace sector, \(\alpha^{*}_{v-}\to\alpha^{*}_{v}\)[98]. If so, this implies a double zero of \(\beta_{v}\), and the corresponding scaling exponent must vanish, \(\vartheta_{3}=0\). However, the first three universal expansion coefficients all have the same sign, (37), giving no hints for a zero at 433. Also, computing the difference between the double-trace quartic couplings \(\Delta\alpha_{v}\equiv\alpha^{*}_{v}-\alpha^{*}_{v-}\) we find
\[\Delta\alpha^{*}_{v}=0.735\,\epsilon+0.570\,\epsilon^{2}+0.326\,\epsilon^{3}+ \mathcal{O}(\epsilon^{4})\,. \tag{41}\]
The first three expansion coefficients in (41) all have the same sign, offering no hints for a zero at any \(\epsilon>0\). We conclude that a merger in the double-trace sector is not supported by the 433 data.
Finally, we provide a rough estimate for the range in \(\epsilon\) with perturbative control. Based on naive dimensional analysis with couplings scaled in units of natural loop factors [99], as done here, we take the view that this regime is characterised by \(0<|\alpha^{*}|\lesssim 1\).7 We note that the expansion coefficients (29) of the single-trace (double-trace) couplings receive only positive (negative) contributions, implying \(\alpha^{*}_{g,y,u}>0\) and \(\alpha^{*}_{v}<0\), and that the tightest bound \(\epsilon<\epsilon_{\rm strong}\) arises from the gauge coupling. We find \(\epsilon_{\rm strong}\approx 0.877\) at order \(\epsilon^{2}\), and \(\epsilon_{\rm strong}\approx 0.457\) at order \(\epsilon^{3}\). To estimate higher order effects in \(\epsilon\), we once more use a Padé approximant for the gauge coupling fixed point and find the tighter bound \(\epsilon_{\rm strong}\approx 0.117\), suggesting an onset of strong coupling in the range
Footnote 7: We stress that this criterion is not rigorous, and must be confirmed with higher loops or non-perturbatively.
\[\epsilon_{\rm strong}\approx 0.117-0.457\,. \tag{42}\]
We notice that regimes with vacuum instability or a fixed point merger are reached before the theory becomes strongly coupled. Also, in all cases (39), (40), (42), the tightest parameter bound arises from the Padé resummations, giving bounds of the same size as obtained from the 322 \(\beta\) functions [8].
In summary, the constraints on the conformal window as derived from series expansions of couplings have become tighter, owing to the corrections established at order 433 over those at order 322. The overall picture shows that
\[\epsilon_{\rm max}<\epsilon_{\rm strong} \tag{43}\]
for each of the successive approximation orders 322, 433, and for a Padé approximant of the latter. Results also indicate that the conformal window is primarily limited by the onset of vacuum instability and a nearby fixed point merger, rather than a merger in the double-trace sector or the onset of strong coupling. Our results are further illustrated in Fig. 4, including an extrapolation to finite field multiplicities \((N_{c},N_{f})\). In particular, the smallest set of integer multiplicities compatible with an interacting UV fixed point increases from \((N_{c},N_{f})|_{\rm min}=(3,17)\) at order 322 to \((N_{c},N_{f})|_{\rm min}=(5,28)\) at order 433, and to \((N_{c},N_{f})|_{\rm min}=(7,39)\) if we were to consider the forecast from Padé approximants. We defer a more detailed investigation of the conformal window to a forthcoming publication [100].
### Unitarity
In our setting, scale invariance at the weakly interacting UV fixed point entails full conformal invariance [101], and the critical theory can be described by a conformal field theory. Accordingly, the bound on unitarity for a spin-0
Figure 4: The size of the UV conformal window (yellow band) from series expansions, comparing the new upper bound on \(\epsilon\) at order 433, and the Padé approximant bound (dashed line), see (39), with the previous upper bounds at order 322[8]. Also shown are regimes with asymptotic freedom (green) and effective theories (grey). Dots indicate integer values for \((N_{c},N_{f})\) in the \((\epsilon,N_{c})\) plane.
operator \(\mathcal{O}\)
\[\Delta_{\mathcal{O}}=\dim\,\mathcal{O}+\gamma^{*}_{\mathcal{O}}\geq 1 \tag{44}\]
must be observed [102]. These CFT constraints can be addressed by exploiting our results for anomalous dimensions (19) - (24) and fixed points (29). We find
\[\gamma^{*}_{\phi}= \phantom{-}0.2105\,\epsilon+0.4625\,\epsilon^{2}+2.471\,\,\, \epsilon^{3}+\mathcal{O}(\epsilon^{4})\,,\] \[\gamma^{*}_{m^{2}}= \phantom{-}1.470\,\,\,\,\epsilon+0.5207\,\epsilon^{2}+2.568\,\, \,\,\epsilon^{3}+\mathcal{O}(\epsilon^{4})\,, \tag{45}\] \[\gamma^{*}_{m_{\psi}}= -0.2105\,\epsilon+0.4628\,\epsilon^{2}+0.3669\,\epsilon^{3}+ \mathcal{O}(\epsilon^{4})\,,\]
retaining all terms determined unambiguously in the \(\epsilon\)-expansion at 433. Subleading terms starting at order \(\epsilon^{4}\) necessitate the full 544 approximation.
We observe from (45) that the scalar field and mass anomalous dimensions \(\gamma^{*}_{\phi}\) and \(\gamma^{*}_{m^{2}}\) are manifestly positive and satisfy (44) without further ado. On the other hand, the fermion mass anomalous dimension \(\gamma^{*}_{m_{\psi}}\) comes out negative to the leading order. Still, the subleading positive contributions up to cubic order in \(\epsilon\) ensure that the anomalous dimension remains strictly bounded from below, \(\gamma^{*}_{m_{\psi}}\gtrsim-0.02\). In consequence, it cannot become sufficiently negative for \(\Delta_{\bar{\psi}\psi}\) to fall below the unitarity bound (44). Altogether, we conclude that the unitarity constraints (44) are satisfied non-marginally in perturbation theory. Moreover, unitarity does not offer bounds on \(\epsilon\) within the conformal window.
### Scales and Phase Diagram
We are now in a position to revisit the phase diagram of the theory. Fig. 1 illustrates the phase diagram in the 433 approximation. Trajectories are shown in the \((\alpha_{g},\alpha_{y})\) plane, with arrows pointing from the UV to the IR. Evidently, asymptotic freedom is absent and the Gaussian fixed point is an IR attractive fixed point for all couplings. Nevertheless, the theory is UV-complete and remains predictive up to highest energies, courtesy of the interacting UV fixed point. It displays a single relevant direction amongst the classically marginal interactions. Without loss of generality, we take
\[\delta\alpha_{g}=\alpha_{g}-\alpha^{*}_{g} \tag{46}\]
as the fundamentally free parameter at the high scale \(\mu_{0}\). The running of the Yukawa and quartic couplings \(\alpha_{i}(\mu)\) with \(i=y,u,v\) is entirely dictated by the running of \(\alpha_{g}(\mu)\), and they can be expressed in terms of the gauge coupling as \(\alpha_{i}(\mu)=F_{i}[\alpha_{g}(\mu)]\) for suitable functions \(F_{i}\). The IR fate of trajectories emanating from the fixed point is determined by whether \(\delta\alpha_{g}<0\) or \(\delta\alpha_{g}>0\) at the high scale. In the former case, the theory becomes free in the infrared. In the latter case, the theory becomes strongly coupled and displays either confinement (such as in QCD), or IR conformality such as at an interacting IR fixed point. Our results are illustrated in Fig. 5, where sample trajectories connecting the UV fixed point with the IR are shown, also contrasting settings for initial conditions \(\delta\alpha_{g}(\mu_{0})<0\) (left panel) leading to IR freedom, with initial conditions \(\delta\alpha_{g}(\mu_{0})>0\) (right panel).
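The qualitative behaviour of these trajectories can be reproduced numerically. The sketch below (scipy; the same illustrative 211 truncation as in the earlier sketches; the kick of size \(10^{-3}\) and the strong-coupling threshold \(\alpha_{g}=1\) are ad hoc choices) integrates the flow towards the IR for both signs of \(\delta\alpha_{g}\):

```python
# Rough numerical illustration (211 truncation, illustrative) of the two
# separatrices of Fig. 5: small kicks delta alpha_g of either sign at t = 0,
# flowing towards the IR (t = ln mu/mu_0 < 0).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

def betas(t, c, eps):
    ag, ay, au, av = c
    return [(4/3)*eps*ag**2 + (25 + 26*eps/3)*ag**3
            - 0.5*(11 + 2*eps)**2*ay*ag**2,
            (13 + 2*eps)*ay**2 - 6*ag*ay,
            8*au**2 + 4*ay*au - (11 + 2*eps)*ay**2,
            12*au**2 + 16*au*av + 4*av**2 + 4*ay*av]

def strong(t, c, eps):            # stop once alpha_g reaches order one
    return c[0] - 1.0
strong.terminal = True

eps = 0.06                         # as in Fig. 5
cstar = fsolve(lambda c: betas(0.0, c, eps),
               eps*np.array([0.456, 0.211, 0.200, -0.137]))
for sign in (-1, +1):
    sol = solve_ivp(betas, (0.0, -4000.0), cstar*(1 + sign*1e-3),
                    args=(eps,), events=strong, rtol=1e-9, atol=1e-12)
    print(sign, sol.y[0, -1])      # -> IR freedom vs. onset of strong coupling
```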
The transition from the UV to the IR is characterised by an RG-invariant scale \(\Lambda_{c}\), analogous to \(\Lambda_{\rm QCD}\) in QCD. It arises through dimensional transmutation from the dimensionless fundamental parameter \(\delta\alpha_{g}\ll|\alpha^{*}_{g}|\) at the high scale \(\mu\), and reads
\[\Lambda_{c}\propto\mu\cdot\left|\delta\alpha(\mu)\right|^{\nu}, \tag{47}\]
where \(\nu=-1/\vartheta_{1}\) with \(\vartheta_{1}\) the relevant scaling exponent (37). One readily confirms that \(\mathrm{d}\Lambda_{c}/\mathrm{d}\ln\mu=0\). The proportionality constant \(c\) can be determined from a cross-over condition. For \(\delta\alpha_{g}>0\), strong coupling sets in as soon as \(\delta\alpha_{g}\) is of order unity, hence \(c\approx 1\). For negative \(\delta\alpha_{g}\), the Gaussian fixed point takes over as soon as \(\delta\alpha_{g}\approx-\alpha^{*}_{g}/3\)[17], giving \(c=(3/\alpha_{*})^{\nu}\) instead.
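The RG invariance of \(\Lambda_{c}\) follows in one line from the linearised flow (8), which gives \(\mathrm{d}\,\delta\alpha/\mathrm{d}\ln\mu=\vartheta_{1}\,\delta\alpha\) near the fixed point. A short symbolic check (sympy; illustrative, taking \(\delta\alpha>0\) for simplicity):

```python
# One-line symbolic check (illustrative) that Lambda_c in (47) is RG
# invariant: with d(delta alpha)/d ln(mu) = theta_1 * delta alpha near the
# fixed point, nu = -1/theta_1 makes d(Lambda_c)/d ln(mu) vanish.
import sympy as sp

t, theta1 = sp.symbols("t theta_1")          # t = ln(mu/mu_0)
da = sp.Function("delta_alpha")(t)           # delta alpha_g(mu), taken > 0
nu = -1/theta1
Lam = sp.exp(t) * da**nu                     # Lambda_c up to a constant
dLam = sp.diff(Lam, t).subs(sp.Derivative(da, t), theta1*da)
print(sp.simplify(dLam))                     # -> 0
```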
We briefly compare this with asymptotically free theories by taking \(\epsilon<0\). In this case, the UV critical surface is now two-dimensional with both the gauge and the Yukawa coupling being marginally relevant. We then find a range of asymptotically free trajectories emanating from the Gaussian fixed point and characterised by
\[\delta\alpha_{g} =\alpha_{g}-\alpha^{*}_{g} \tag{48}\] \[\delta\alpha_{y} =\alpha_{y}-\alpha^{*}_{y}\,.\]
For sufficiently small \(0<-\epsilon\ll 1\) and \(\alpha_{y}=0\), the theory also displays a Banks-Zaks fixed point \(\alpha^{*}_{g}\) of order \(|\epsilon|\) and \(\alpha^{*}_{i}=0\,\,\,(i=y,u,v)\). In either case, we find the transition scale \(\Lambda_{c}\) as
\[\Lambda_{c}\propto\mu\cdot\exp\left(\frac{1}{\beta^{(1)}_{g}\,\delta\alpha(\mu )}\right)\,, \tag{49}\]
characteristic for asymptotic freedom, with a negative one-loop gauge coefficient \(\beta^{(1)}_{g}<0\). All trajectories run towards strong coupling where either confinement or conformality take over, except for the trajectory which terminates at the Banks-Zaks fixed point.
Even though the UV critical surface is two-dimensional, it is interesting to note that the Yukawa nullcline is an IR attractor and all outgoing trajectories collapse onto it. Hence, as soon as the gauge coupling becomes of the order of the one-loop gauge coefficient, \(\alpha(\mu)\gtrsim|\epsilon|\), outgoing trajectories along the nullcline of the asymptotically free theory (with \(0<-\epsilon\ll 1\)) become indistinguishable from the outgoing trajectory of the asymptotically safe theory (with \(0<\epsilon\ll 1\)).
## V Discussion and Outlook
The quantum field theory (1) provides an important template for an asymptotically safe 4d particle theory with an interacting and perturbatively-controlled fixed point
at highest energies. We have extended the investigation of the UV theory up to four loops in perturbation theory, a prerequisite to achieve the complete cubic order in the underlying conformal expansion in terms of a small Veneziano parameter \(\epsilon\). The central inputs for this are the four-loop gauge, the three-loop Yukawa and quartic \(\beta\) functions, and the three-loop anomalous dimensions. We have computed the previously missing pieces, which are the three-loop contributions to the scalar \(\beta\) functions containing gauge interactions (Sec. III).
With these results at hand, we have determined all fixed point couplings, critical exponents, and anomalous dimensions up to the third non-trivial order in \(\epsilon\), also investigating the phase diagram and UV-IR connecting trajectories (Sec. IV). Findings are in accord with unitarity, as they must. Most notably, bounds on the conformal window (39), (40) have become tighter in comparison with the preceding order, and strengthen the view that the upper boundary remains under perturbative control. Our work further substantiates the existence of the fixed point at finite values of the Veneziano parameter and at finite \(N\). Ultimately, conformality is lost due to the onset of vacuum instability and a nearby fixed point merger, (43), rather than through a merger in the double-trace sector or strong coupling phenomena.
While our results have been achieved specifically for Dirac fermions coupled to \(SU(N_{c})\) gauge fields and complex scalars \(\phi_{ij}\) (see Tab. 1), they equally hold true for theories with Majorana fermions coupled to either \(SO(N_{c})\) gauge fields with symmetric complex scalars \(\phi_{(ij)}\), or to \(Sp(N_{c})\) gauge fields with antisymmetric scalars \(\phi_{[ij]}\)[11]. The reason for this is that these three types of matter-gauge theories are mutually equivalent to each other in the Veneziano limit, even though field content, gauge symmetries, and global symmetries are different [11]. In particular, and modulo the normalisation of couplings, their \(\beta\) functions are identical to any loop order, and the results of this work are equally valid for the partner theories.
We close with a few comments from the viewpoints of lattice Monte Carlo simulations, conformal field theory, and model building. It would be valuable to explore the UV conformal window using complementary tools such as the lattice, taking advantage of the vast body of work on IR fixed points in 4d matter-gauge theories [103]. In a related vein, in our QFT setting, scale invariance at the UV fixed point entails full conformal invariance [101]. Hence, our renormalisation group results offer direct access to conformal data characterising an interacting 4d CFT [104; 105]. It would then be equally important to investigate the 4d UV critical theory using first-principle CFT methods such as the bootstrap [106], or others. Finally, we emphasise that our setting provides a blueprint for concrete 4d non-supersymmetric CFTs with standard model-like field content in the UV, which invites further model building.
###### Acknowledgements.
We are thankful to Anders Eller Thomsen for comments about parameterising \(\gamma_{5}\) diagrams. Some of our results have been presented by one of us (TS) at _Asymptotic Safety meets Particle Physics IX_, Dec 2022, DESY, Hamburg, and at _Loopfest XXI_, June 2023, SLAC National Accelerator Laboratory. This work is supported by the Science and Technology Facilities Council (STFC) under the Consolidated Grant ST/T00102X/1 (DL).
Figure 5: Running couplings along UV-IR connecting trajectories emanating out of the interacting UV fixed point, corresponding to the (orange) separatrices highlighted in Fig. 1. Trajectories with UV initial condition \(\delta\alpha_{g}(\mu_{0})<0\) approach a free theory in the IR (left panel), while those with \(\delta\alpha_{g}(\mu_{0})>0\) enter a strongly coupled regime with either confinement or conformality in the IR (right panel). Here, \(t_{c}=\ln\Lambda_{c}/\mu_{0}\) with \(\Lambda_{c}\) as in (47) and (49), respectively, and \(\epsilon=0.06\).
## Appendix A Tensor Structures for Three-Loop Quartic RGEs
In this appendix, we detail the general structure for three-loop scalar field anomalous dimensions and quartic \(\beta\) functions of any renormalisable QFT with uncharged scalars. We closely follow the notation of [53] with the Lagrangian
\[\begin{split}\mathcal{L}=&\ \tfrac{1}{2}\partial^{\mu}\phi_{a}\partial_{\mu}\phi_{a}+\tfrac{i}{2}\psi^{j}\bar{\sigma}^{\mu}D_{\mu}\psi_{j}-\tfrac{1}{4}\,g_{AB}^{-2}\,F_{\mu\nu}^{A}F^{B\mu\nu}\\ &-\tfrac{1}{2}y^{ajk}\,\phi_{a}(\psi_{j}\varepsilon\psi_{k})-\tfrac{1}{24}\lambda_{abcd}\,\phi_{a}\phi_{b}\phi_{c}\phi_{d}\\ &-\tfrac{1}{2}\text{m}^{jk}\,(\psi_{j}\varepsilon\psi_{k})-\tfrac{1}{2}m_{ab}^{2}\,\phi_{a}\phi_{b}-\tfrac{1}{6}h_{abc}\,\phi_{a}\phi_{b}\phi_{c}\\ &+\mathcal{L}_{\text{gauge-fix}}+\mathcal{L}_{\text{ghost}}\,,\end{split} \tag{10}\]
featuring real scalar field components \(\phi_{a}\), vectors of fermionic Weyl components and their conjugates \(\psi_{i}=(\psi^{i})^{*}\), whose spinor indices are contracted by the two-dimensional Levi-Civita tensor \(\varepsilon\), as well as gauge fields with field strength tensors \(F_{\mu\nu}^{A}\) and covariant derivative \(D_{\mu}\). Note that the latter do not couple to the scalars, in accord with our model. Moreover, only the squared gauge coupling \(g_{AB}^{2}\), the Yukawa interaction \(y^{ajk}\), and the scalar quartic coupling \(\lambda_{abcd}\) are relevant, while fermionic masses \(\text{m}^{jk}\), scalar masses \(m_{ab}^{2}\), and cubic terms \(h_{abc}\) will be neglected. In the same manner, details of the gauge fixing and Faddeev-Popov ghosts do not play a role. We will make fermionic indices implicit wherever appropriate. Furthermore, \(t_{ij}^{A}\) is introduced as the fermionic generator, and \((C_{2}^{G})^{AB}\) as the Casimir invariant of the gauge interaction. The fermion Casimir and Dynkin index are given by
\[(C_{2}^{F})_{ij}=g_{AB}^{2}\,(t^{A}t^{B})_{ij},\qquad(S_{2}^{F})^{AB}=\text{tr }\,(t^{A}t^{B})\,. \tag{11}\]
The quartic \(\beta\) function is composed of external leg and vertex corrections
\[\beta_{\lambda,3\ell}^{abcd}=\gamma_{\phi,3\ell}^{e(a}\,\lambda^{bcd)e}+\beta_{\phi^{4},3\ell}^{(abcd)}\,. \tag{12}\]
The gaugeless parts of both quantities are given in [53]. In this limit, the leg corrections coincide with the scalar anomalous dimension. For gauge-dependent terms, the bases agree only structurally. The reason is that each coefficient may also contain vertex corrections that can be brought to the shape of leg corrections using gauge transformations of tensor structures; for instance, this is what cancels all gauge dependence of the anomalous dimensions. The gauge interaction terms missing in [53] are of order \(\propto y^{4}g^{2}\) and \(\propto y^{2}g^{4}\) and read
\[\begin{split}\gamma_{\phi,3\ell}^{ab}=&\ \gamma_{\phi,3\ell}^{ab}\big{|}_{g^{2}=0}\\ &+G_{1}\,\text{tr}\,\big{(}y^{a}C_{2}^{F}y^{c}y^{b}y^{c}+y^{b}C_{ 2}^{F}y^{c}y^{a}y^{c}\big{)}+G_{2}\,\text{tr}\,\big{(}y^{a}y^{c}C_{2}^{F}y^{c }y^{b}\big{)}+G_{3}\,\text{tr}\,\big{(}y^{a}y^{b}y^{c}y^{c}C_{2}^{F}+y^{b}y^{a }y^{c}y^{c}C_{2}^{F}\big{)}\\ &+G_{4}\,\text{tr}\,\big{(}y^{a}C_{2}^{F}y^{b}y^{c}y^{c}\big{)}+ G_{5}\,\text{tr}\,\big{(}y^{a}t^{A}y^{b}y^{c}t^{B}y^{c}\big{)}\,g_{AB}^{2}+G_{6}\, \text{tr}\,\big{(}y^{a}C_{2}^{F}y^{b}C_{2}^{F}\big{)}\\ &+G_{7}\,\text{tr}\,\big{(}y^{a}C_{2}^{F}C_{2}^{F}y^{b}\big{)}+G_ {8}\,\text{tr}\,\big{(}y^{a}y^{b}t^{A}t^{B}\big{)}\,\big{(}g^{2}S_{2}^{F}g^{2} \big{)}_{AB}+G_{9}\,\text{tr}\,\big{(}y^{a}y^{b}t^{A}t^{B}\big{)}\,\big{(}g^{2 }C_{2}^{G}g^{2}\big{)}_{AB}\,.\end{split} \tag{13}\]
Here \(G_{1..9}\) are _a priori_ unknown coefficients that can be fixed by an actual loop computation. Note that this parametrisation assumes \(\gamma_{\phi,3\ell}\) to be symmetric, as there is no explicitly broken flavour symmetry in our model.
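The symmetrisations entering (12) and the symmetric ansatz for \(\gamma_{\phi,3\ell}\) are purely combinatorial and easily automated. The following sketch is ours; conventions for the overall normalisation of the round-bracket symmetrisation differ across the literature, so we simply average over all permutations here.

```python
import itertools
import numpy as np

def symmetrise4(T: np.ndarray) -> np.ndarray:
    # Average a rank-4 tensor over all 24 permutations of its free indices.
    perms = itertools.permutations(range(4))
    return sum(np.transpose(T, p) for p in perms) / 24.0

def leg_correction(gamma: np.ndarray, lam: np.ndarray) -> np.ndarray:
    # Leg-correction part of (12), gamma^{e(a} lambda^{bcd)e}:
    # contract one index of lambda with gamma, then symmetrise.
    return symmetrise4(np.einsum('ea,ebcd->abcd', gamma, lam))
```

Since \(\lambda\) is totally symmetric, the permutation average reduces to the usual sum over the four inequivalent attachments of \(\gamma\), up to normalisation.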
As for the vertex corrections, we again refer to [53] for the gaugeless tensor structures, and provide the missing ones:
\[\begin{split}\beta^{abcd}_{\phi^{4},3\ell}=&\ \beta^{abcd}_{\phi^{4},3\ell}\big{|}_{g^{2}=0}+Q_{1}\,\lambda^{abef}\lambda^{cdeg}\,\text{tr}\left(y^{f}y^{g}C_{2}^{F}\right)+Q_{2}\,\lambda^{abef}\,\text{tr}\left(y^{c}y^{e}y^{f}y^{d}C_{2}^{F}\right)\\ &+Q_{3}\,\lambda^{abef}\,\text{tr}\left(y^{c}y^{e}t^{A}y^{d}y^{f}t^{B}\right)g^{2}_{AB}+Q_{4}\,\lambda^{abef}\,\text{tr}\left(y^{c}y^{e}t^{A}y^{f}y^{d}t^{B}\right)g^{2}_{AB}\\ &+Q_{5}\,\lambda^{abef}\,\text{tr}\left(y^{c}t^{A}y^{e}y^{f}t^{B}y^{d}\right)g^{2}_{AB}+Q_{6}\,\lambda^{abef}\,\text{tr}\left(y^{c}y^{e}C_{2}^{F}y^{f}y^{d}\right)g^{2}_{AB}\\ &+Q_{7}\,\lambda^{abef}\,\text{tr}\left(y^{c}y^{e}C_{2}^{F}y^{d}y^{f}\right)g^{2}_{AB}+Q_{8}\,\lambda^{abef}\,\text{tr}\left(y^{c}y^{e}y^{f}C_{2}^{F}y^{d}\right)g^{2}_{AB}\\ &+Q_{9}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{d}C_{2}^{F}\right)+Q_{10}\,\text{tr}\left(y^{a}y^{b}y^{c}C_{2}^{F}y^{d}C_{2}^{F}\right)\\ &+Q_{11}\,\text{tr}\left(y^{a}y^{b}C_{2}^{F}y^{c}y^{d}C_{2}^{F}\right)+Q_{12}\,\text{tr}\left(y^{a}y^{b}t^{A}y^{c}C_{2}^{F}y^{d}t^{B}\right)g^{2}_{AB}\\ &+Q_{13}\,\text{tr}\left(y^{a}t^{A}C_{2}^{F}y^{b}y^{c}t^{B}y^{d}\right)g^{2}_{AB}+Q_{14}\,\text{tr}\left(y^{a}t^{A}y^{c}t^{C}y^{b}t^{B}y^{d}t^{D}\right)g^{2}_{AB}\,g^{2}_{CD}\\ &+Q_{15}\,\text{tr}\left(y^{a}t^{A}t^{C}y^{b}t^{B}y^{c}t^{D}y^{d}\right)g^{2}_{AB}\,g^{2}_{CD}\\ &+Q_{16}\,\text{tr}\left(y^{a}y^{b}t^{A}t^{C}y^{c}y^{d}t^{B}t^{D}\right)g^{2}_{AB}\,g^{2}_{CD}\\ &+Q_{17}\,\text{tr}\left(y^{a}y^{b}t^{A}t^{C}\right)\,\text{tr}\left(y^{c}y^{d}t^{B}t^{D}\right)g^{2}_{AB}\,g^{2}_{CD}\\ &+Q_{18}\,\text{tr}\left(y^{a}t^{A}y^{b}t^{C}\right)\,\text{tr}\left(y^{c}t^{B}y^{d}t^{D}\right)g^{2}_{AB}\,g^{2}_{CD}\\ &+Q_{19}\,\text{tr}\left(y^{a}t^{A}y^{b}t^{C}\right)\,\text{tr}\left(y^{c}y^{d}t^{B}t^{D}\right)g^{2}_{AB}\,g^{2}_{CD}\\ &+Q_{20}\,\text{tr}\left(y^{a}y^{b}t^{A}y^{c}y^{d}t^{B}\right)\left(g^{2}S_{2}^{F}g^{2}\right)_{AB}+Q_{21}\,\text{tr}\left(y^{a}y^{b}t^{A}y^{c}y^{d}t^{B}\right)\left(g^{2}C_{2}^{G}g^{2}\right)_{AB}\\ &+Q_{22}\,\text{tr}\left(y^{a}y^{b}y^{c}t^{A}y^{d}t^{B}\right)\left(g^{2}S_{2}^{F}g^{2}\right)_{AB}+Q_{23}\,\text{tr}\left(y^{a}y^{b}y^{c}t^{A}y^{d}t^{B}\right)\left(g^{2}C_{2}^{G}g^{2}\right)_{AB}\\ &+Q_{24}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{d}t^{A}t^{B}\right)\left(g^{2}S_{2}^{F}g^{2}\right)_{AB}+Q_{25}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{d}t^{A}t^{B}\right)\left(g^{2}C_{2}^{G}g^{2}\right)_{AB}\\ &+Q_{26}\,\text{tr}\left(y^{a}t^{A}y^{b}y^{c}y^{c}t^{B}y^{d}y^{d}y^{e}\right)g^{2}_{AB}+Q_{27}\,\text{tr}\left(y^{a}y^{b}C_{2}^{F}y^{e}y^{e}y^{d}y^{e}\right)\\ &+Q_{28}\,\text{tr}\left(y^{a}y^{b}y^{e}t^{A}y^{e}y^{e}y^{d}t^{B}\right)\,g^{2}_{AB}+Q_{29}\,\text{tr}\left(y^{a}y^{b}t^{A}y^{e}y^{e}t^{B}y^{d}y^{e}\right)\,g^{2}_{AB}\\ &+Q_{30}\,\text{tr}\left(y^{a}y^{b}y^{e}y^{e}y^{e}y^{d}C_{2}^{F}\right)+Q_{31}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{e}y^{d}C_{2}^{F}\right)\\ &+Q_{32}\,\text{tr}\left(y^{a}t^{A}y^{b}y^{b}t^{B}y^{c}y^{d}y^{e}\right)g^{2}_{AB}+Q_{33}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{e}y^{d}y^{e}C_{2}^{F}\right)\\ &+Q_{34}\,\text{tr}\left(y^{a}y^{b}y^{e}y^{c}y^{d}y^{e}C_{2}^{F}\right)+Q_{35}\,\text{tr}\left(y^{a}y^{b}y^{e}y^{c}C_{2}^{F}y^{d}y^{e}\right)\\ &+Q_{36}\,\text{tr}\left(y^{a}y^{b}t^{A}y^{c}y^{e}y^{e}y^{d}t^{B}\right)\,g^{2}_{AB}+Q_{37}\,\text{tr}\left(y^{a}t^{A}y^{e}y^{b}y^{c}t^{B}y^{e}y^{d}\right)\,g^{2}_{AB}\\ &+Q_{38}\,\text{tr}\left(y^{a}t^{A}y^{b}y^{c}y^{e}t^{B}y^{e}y^{d}\right)\,g^{2}_{AB}+Q_{39}\,\text{tr}\left(y^{a}t^{A}y^{e}y^{e}y^{b}y^{e}t^{B}y^{d}\right)\,g^{2}_{AB}\\ &+Q_{40}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{c}C_{2}^{F}y^{d}y^{e}\right)+Q_{41}\,\text{tr}\left(y^{a}y^{b}y^{c}t^{A}y^{d}y^{e}t^{B}y^{e}\right)\,g^{2}_{AB}\\ &+Q_{42}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{d}C_{2}^{F}y^{e}y^{e}\right)+Q_{43}\,\text{tr}\left(y^{a}y^{b}y^{c}y^{d}y^{e}y^{c}C_{2}^{F}y^{e}\right)\,,\end{split} \tag{10}\]
where \(Q_{1..43}\) are again open coefficients. Note that we do not need to account for any non-naive influence of \(\gamma_{5}\) at this loop order as discussed in Sec. III.
## Appendix B Finite-\(N\) Beta Functions
Here we present the finite-\(N\) corrections to the \(\beta\) functions (14)-(17). Apart from the Veneziano parameter \(\epsilon\), these also retain an explicit dependence on inverse powers of the parameter \(N_{c}\). An extensive analysis of the finite-\(N\) conformal window at 2NLO was conducted in [13]; see there for explicit expressions up to order 322. In the following, we make use of the abbreviations \(r_{c}\equiv N_{c}^{-2}\) and \(r_{f}\equiv\left[(\frac{11}{2}+\epsilon)N_{c}\right]^{-2}\), and provide the four-loop gauge \(\beta\) function
\[\begin{split}\alpha_{g}^{-2}\beta_{g}^{(4)}=\big{\{}&\left[-\frac{260}{243}\epsilon^{3}+\left(-\frac{56\zeta_{3}}{3}-\frac{21598}{243}\right)\epsilon^{2}+\left(-\frac{180\zeta_{3}}{9}-\frac{123473}{324}\right)\epsilon-550\zeta_{3}-\frac{14731}{72}\right]\\ &+r_{c}\left[\left(\frac{128\zeta_{3}}{9}+\frac{7495}{243}\right)\epsilon^{2}+\left(\frac{2504\zeta_{3}}{9}+\frac{71765}{324}\right)\epsilon+396\zeta_{3}+\frac{154\epsilon^{3}}{243}+\frac{30047}{72}\right]\\ &+r_{c}^{2}\left[\left(\frac{623}{27}-\frac{488\zeta_{3}}{9}\right)\epsilon^{2}+\left(\frac{29753}{108}-\frac{5456\zeta_{3}}{9}\right)\epsilon-1694\zeta_{3}+\frac{19613}{24}\right]\\ &+r_{c}^{3}\left[\frac{234}{4}+\frac{523}{8}\right]\big{\}}\,\alpha_{g}^{3}+\\ \big{\{}&\left[\left(36\zeta_{3}+\frac{8017}{36}\right)\epsilon^{2}+\left(396\zeta_{3}+\frac{38797}{72}\right)\epsilon+1089\zeta_{3}+\frac{379\epsilon^{3}}{18}-\frac{12947}{48}\right]\\ &+r_{c}\left[\left(-54\zeta_{3}-\frac{1184}{9}\right)\epsilon^{2}+\left(-594\zeta_{3}-\frac{45749}{72}\right)\epsilon-\frac{3267\zeta_{3}}{2}-\frac{161\epsilon^{3}}{18}-\frac{24079}{24}\right]\\ &+r_{c}^{2}\left[\left(18\zeta_{3}-\frac{3}{4}\right)\epsilon^{2}+\left(198\zeta_{3}-\frac{33}{4}\right)\epsilon+\frac{1089\zeta_{3}}{2}-\frac{363}{16}\right]\big{\}}\,\alpha_{g}^{2}\alpha_{y}\,+\\ \big{\{}&\left[\left(\frac{1659}{4}-12\zeta_{3}\right)\epsilon^{2}+\left(2475-132\zeta_{3}\right)\epsilon-363\zeta_{3}+23\epsilon^{3}+\frac{78287}{16}\right]\\ &+r_{c}\left[\left(12\zeta_{3}+\frac{89}{4}\right)\epsilon^{2}+\left(132\zeta_{3}+154\right)\epsilon+363\zeta_{3}+\epsilon^{3}+\frac{5445}{16}\right]\big{\}}\,\alpha_{g}\alpha_{y}^{2}\,+\\ \big{\{}&\left[-\frac{11\epsilon^{4}}{3}-100\epsilon^{3}-986\epsilon^{2}-\frac{25267\epsilon}{6}-\frac{105875}{16}\right]\\ &+r_{c}\left[\left(\frac{7}{3}-6\zeta_{3}\right)\epsilon^{2}+\left(\frac{73}{3}-66\zeta_{3}\right)\epsilon-\frac{363\zeta_{3}}{2}+\frac{847}{12}\right]\big{\}}\,\alpha_{y}^{3}\,+\\ \big{\{}&\left[-10\epsilon^{3}-165\epsilon^{2}-\frac{1815\epsilon}{2}-\frac{6655}{4}\right]-r_{c}\left[55+10\epsilon\right]\big{\}}\,\alpha_{y}^{2}\alpha_{u}-r_{c}\left[20\epsilon+110\right]\alpha_{y}^{2}\alpha_{v}\,+\\ \big{\{}&\left[12\epsilon^{2}+132\epsilon+363\right]+12r_{c}\big{\}}\alpha_{y}\alpha_{u}^{2}+48r_{c}\alpha_{y}\alpha_{u}\alpha_{v}+12r_{c}\left[1+r_{f}\right]\alpha_{y}\alpha_{v}^{2}\,.\end{split} \tag{10}\]
Due to subleading corrections absent in the large-\(N\) limit, the double-trace quartic \(\alpha_{v}\) makes a direct appearance in the four-loop gauge \(\beta\) function. The same happens for the three-loop expressions of the Yukawa
\[\begin{split}\alpha_{y}^{-1}\,\beta_{y}^{(3)}=\big{\{}& \left[-\frac{3\epsilon^{3}}{8}+\frac{59\epsilon^{2}}{16}+\frac{2595 \epsilon}{32}+\frac{17413}{64}\right]+r_{c}\left[\left(6\zeta_{3}-28\right) \epsilon+39\zeta_{3}-162\right]\big{\}}\,\alpha_{y}^{3}\,+\\ &\big{\{}&\left[-19\epsilon^{2}-\frac{445\epsilon}{2}-6 49\right]+r_{c}\left[19\epsilon^{2}+\frac{445\epsilon}{2}+633\right]+16r_{c}^{2 }\big{\}}\,\alpha_{g}\alpha_{y}^{2}\,+\\ &\big{\{}&\left[\left(-36\zeta_{3}-\frac{893}{8}\right) \epsilon-198\zeta_{3}-17\epsilon^{2}-\frac{1217}{16}\right]+r_{c}\left[\left(54 \zeta_{3}+92\right)\epsilon+279\zeta_{3}+17\epsilon^{2}+31\right]\\ &+r_{c}^{2}\left[\left(\frac{157}{8}-18\zeta_{3}\right)\epsilon-81 \zeta_{3}+\frac{721}{16}\right]\big{\}}\,\alpha_{g}^{2}\alpha_{y}\,+\,\big{\{} \left[\left(24\zeta_{3}+\frac{649}{9}\right)\epsilon+132\zeta_{3}+\frac{70 \epsilon^{2}}{27}+\frac{641}{6}\right]\\ &+r_{c}\left[-\frac{70\epsilon^{2}}{27}-\frac{856\epsilon}{9}- \frac{2413}{12}\right]+r_{c}^{2}\left[\left(23-24\zeta_{3}\right)\epsilon-132 \zeta_{3}+62\right]+\frac{129}{4}r_{c}^{3}\big{\}}\,\alpha_{g}^{3}\,+\\ &\big{\{}&\left[12\epsilon^{2}+162\epsilon+528\right]+60r _{c}+30(\frac{11}{2}+\epsilon)r_{f}\big{\}}\,\alpha_{y}^{2}\alpha_{u}+\big{\{} 48r_{c}+60(\frac{11}{2}+\epsilon)r_{f}+24r_{c}r_{f}\big{\}}\,\alpha_{y}^{2} \alpha_{v}\,+\\ &\big{\{}&\left[5\epsilon+\frac{25}{2}\right]+r_{f} \left[58\epsilon+\frac{95}{2}\right]\big{\}}\,\alpha_{y}\alpha_{u}^{2}+\big{\{} r_{f}\left[100\epsilon+490\right]+r_{f}^{2}\left[80\epsilon+440\right]\big{\}}\, \alpha_{y}\alpha_{u}\alpha_{v}\,+\\ &\big{\{}&\left[r_{f}\left[5\epsilon+\frac{25}{2}\right]+r_ {f}^{2}\left[85\epsilon+\frac{95}{2}\right]\right\}\,\alpha_{y}\alpha_{v}^{2}- \big{\{}8+32r_{f}\big{\}}\alpha_{u}^{3}-\big{\{}84r_{f}+36r_{f}^{2}\big{\}} \alpha_{u}^{2}\alpha_{v}\,-\\ &\big{\{}& 24r_{f}+96r_{f}^{2}\big{\}}\alpha_{u}\alpha_{v}^{2}-\big{\{}4r_{f}+20r _{f}^{2}+16r_{f}^{3}\big{\}}\alpha_{v}^{3}+\big{\{}4\left[\frac{11}{2}+ \epsilon\right]\left[1-r_{c}\right]\left[1+r_{f}\right]\big{\}}\,\alpha_{g} \alpha_{y}\alpha_{u}\,+\\ &\big{\{}&\left[\frac{11}{2}+\epsilon\right]\left[r_{f}-r_{c}r_{f} \right]\big{\}}\,\alpha_{g}\alpha_{y}\alpha_{v}\,,\end{split} \tag{11}\]
as well as the single-trace coupling
\[\begin{split}\beta_{u}^{(3)}=&\left\{104+r_{f}\left[1152\zeta_{3}+2360\right]\right\}\alpha_{u}^{4}+\left\{r_{f}\left[1536\zeta_{3}+2912\right]+r_{f}^{2}\left[6144\zeta_{3}+6752\right]\right\}\alpha_{u}^{3}\alpha_{v}\\ &-\left\{280r_{f}-r_{f}^{2}\left[9216\zeta_{3}+12728\right]\right\}\alpha_{u}^{2}\alpha_{v}^{2}-\left\{104r_{f}-r_{f}^{2}\left[768\zeta_{3}+1472\right]-r_{f}^{3}\left[5376\zeta_{3}+6568\right]\right\}\alpha_{u}\alpha_{v}^{3}\\ &+\left\{34+226r_{f}\right\}\alpha_{y}\alpha_{u}^{3}+648r_{f}\,\alpha_{y}\alpha_{u}^{2}\alpha_{v}+\left\{66r_{f}+642r_{f}^{2}\right\}\alpha_{y}\alpha_{u}\alpha_{v}^{2}\\ &+\left\{\left[166\epsilon+889\right]+r_{f}\left[\left(216\zeta_{3}+156\right)\epsilon+1188\zeta_{3}+858\right]\right\}\alpha_{y}^{2}\alpha_{u}^{2}\\ &+\left\{r_{f}\left[\left(192\zeta_{3}+734\right)\epsilon+1056\zeta_{3}+3965\right]\right\}\alpha_{y}^{2}\alpha_{u}\alpha_{v}\\ &+\left\{r_{f}\left[64\epsilon+352\right]+r_{f}^{2}\left[\left(216\zeta_{3}+136\right)\epsilon+1188\zeta_{3}+748\right]\right\}\alpha_{y}^{2}\alpha_{v}^{2}\\ &+\left\{\left[-\frac{315\epsilon^{2}}{4}-\frac{3209\epsilon}{4}-\frac{32483}{16}\right]+r_{c}\left[12\zeta_{3}-168\right]\right\}\alpha_{y}^{3}\alpha_{u}-\left\{r_{c}\left[152+96\zeta_{3}\right]-64\left[\frac{11}{2}+\epsilon\right]r_{f}\right\}\alpha_{y}^{3}\alpha_{v}\\ &+\left\{\left[\frac{13\epsilon^{3}}{4}+\frac{265\epsilon^{2}}{8}+\frac{1111\epsilon}{16}-\frac{2541}{32}\right]+r_{c}\left[\left(20-24\zeta_{3}\right)\epsilon-132\zeta_{3}+110\right]\right\}\alpha_{y}^{4}\\ &+\left\{\left[\left(24\zeta_{3}-5\right)\epsilon^{2}+\left(264\zeta_{3}-55\right)\epsilon+726\zeta_{3}-\frac{605}{4}\right]+r_{c}\left[\left(5-24\zeta_{3}\right)\epsilon^{2}+\left(55-264\zeta_{3}\right)\epsilon-726\zeta_{3}+\frac{605}{4}\right]\right\}\alpha_{g}\alpha_{y}^{3}\\ &+\left\{\left[\left(\frac{149}{2}-120\zeta_{3}\right)\epsilon-660\zeta_{3}+\frac{1639}{4}\right]+r_{c}\left[\left(120\zeta_{3}-\frac{149}{2}\right)\epsilon+660\zeta_{3}-\frac{1639}{4}\right]\right\}\alpha_{g}\alpha_{y}^{2}\alpha_{u}\\ &+\left\{r_{f}\left[\left(112-144\zeta_{3}\right)\epsilon-792\zeta_{3}+616\right]+r_{c}r_{f}\left[\left(144\zeta_{3}-112\right)\epsilon+792\zeta_{3}-616\right]\right\}\alpha_{g}\alpha_{y}^{2}\alpha_{v}\\ &+\left\{\left[96\zeta_{3}-102\right]\left[1-r_{c}\right]\right\}\alpha_{g}\alpha_{y}\alpha_{u}^{2}\\ &+\left\{r_{f}\left[288\zeta_{3}-306\right]+r_{f}^{2}\left[\left(306-288\zeta_{3}\right)\epsilon^{2}+\left(3366-3168\zeta_{3}\right)\epsilon-8712\zeta_{3}+\frac{18513}{2}\right]\right\}\alpha_{g}\alpha_{y}\alpha_{u}\alpha_{v}\\ &+\left\{\left[5\epsilon^{2}+\frac{133\epsilon}{4}+\frac{253}{8}\right]+r_{c}\left[\left(24\zeta_{3}-66\right)\epsilon+132\zeta_{3}-5\epsilon^{2}-\frac{847}{4}\right]+r_{c}^{2}\left[\left(\frac{131}{4}-24\zeta_{3}\right)\epsilon-132\zeta_{3}+\frac{1441}{8}\right]\right\}\alpha_{g}^{2}\alpha_{y}^{2}\\ &+\left\{\left[\frac{13}{4}-8\epsilon\right]+r_{c}\left[-36\zeta_{3}+8\epsilon+\frac{53}{2}\right]+r_{c}^{2}\left[36\zeta_{3}-\frac{119}{4}\right]\right\}\alpha_{g}^{2}\alpha_{y}\alpha_{u}\end{split} \tag{100}\]
Moreover, the three-loop \(\beta\) function of the double-trace coupling reads
\[\begin{split}\beta_{v}^{(3)}&=\left\{\left[384\zeta_{3 }+772\right]+r_{f}\left[1536\zeta_{3}+1700\right]\right.\right\}\alpha_{u}^{4}+ \left\{480+r_{f}\left[4608\zeta_{3}+9600\right]\right.\right\}\alpha_{u}^{3} \alpha_{v}\left.+\\ &\left.\left\{12+r_{f}\left[1152\zeta_{3}+6680\right]+r_{f}^{2} \left[8064\zeta_{3}+10476\right]\right.\right\}\alpha_{u}^{2}\alpha_{v}^{2}+ \left\{1264r_{f}+r_{f}^{2}\left[6144\zeta_{3}+10544\right]\right.\right\} \alpha_{u}\alpha_{v}^{3}\left.+\right.\\ &\left.\left\{132r_{f}+r_{f}^{2}\left[960\zeta_{3}+1844\right]+r _{f}^{3}\left[2112\zeta_{3}+2960\right]\right.\right\}\alpha_{v}^{4}+192\, \alpha_{y}\alpha_{u}^{3}+\left\{66+642r_{f}\right\}\alpha_{y}\alpha_{u}^{2} \alpha_{v}\left.+\right.\\ &\left.648r_{f}\,\alpha_{y}\alpha_{u}\alpha_{v}^{2}+\left\{130r_{f}+ 322r_{f}^{2}\right\}\alpha_{y}\alpha_{v}^{3}+\left[\left(192\zeta_{3}+187 \right)\epsilon+1056\zeta_{3}+\frac{1985}{2}\right]\right.\right.\right.\\ &\left.\left\{\left.\left[(96\zeta_{3}+152)\epsilon+528\zeta_{3} +788\right]+r_{f}\left[\left(528\zeta_{3}+132\right)\epsilon+2904\zeta_{3}+726 \right]\right.\right\}\alpha_{y}^{2}\alpha_{u}^{2}\left.+\right.\\ &\left.\left\{\left.\left[41\epsilon+\frac{437}{2}\right]+r_{f} \left[\left(192\zeta_{3}+268\right)\epsilon+1056\zeta_{3}+1426\right]\right. \right\}\alpha_{y}^{2}\alpha_{v}^{2}\left.+\right.\\ &\left.\left[\left(-96\zeta_{3}-88\right)\epsilon^{2}+\left(-1056 \zeta_{3}-904\right)\epsilon-2904\zeta_{3}-2310\right]\alpha_{y}^{3} \alpha_{u}\left.+\right.\\ &\left.\left\{\left.\left[-\frac{187\epsilon^{2}}{4}-\frac{1801 \epsilon}{4}-\frac{16995}{16}\right]+r_{c}\left[12\zeta_{3}-136\right] \right.\right\}\alpha_{y}^{3}\alpha_{v}\left.+\left[-10\epsilon^{3}-183 \epsilon^{2}-\frac{2211\epsilon}{2}-\frac{8833}{4}\right]\right.\right\} \alpha_{y}^{4}\left.+\right.\\ &\left.\left\{\left.\left[\left(24\zeta_{3}-2)\epsilon^{2}+\left(2 64\zeta_{3}-22\right)\epsilon+726\zeta_{3}-\frac{121}{2}\right]+r_{c}\left[ \left(2-24\zeta_{3}\right)
independent and provided in (24). The fermionic field anomalous dimension reads
\[\begin{split}\gamma^{(1)}_{\psi}&=\tfrac{1}{2}\xi \alpha_{g}+\tfrac{1}{4}(11+2\epsilon)\alpha_{y}\,,\\ \gamma^{(2)}_{\psi}&=\tfrac{1}{8}(\xi^{2}+8\xi-4 \epsilon)\,\alpha_{g}^{2}-\tfrac{1}{2}(11+2\epsilon)\alpha_{y}\alpha_{g}- \tfrac{1}{32}(23+2\epsilon)(11+2\epsilon)\alpha_{y}^{2}\,,\\ \gamma^{(3)}_{\psi}&=-\tfrac{11}{8}(11+2\epsilon) \alpha_{u}^{2}\alpha_{y}+\tfrac{1}{2}(11+2\epsilon)^{2}\alpha_{u}\alpha_{y}^{2 }+\left(\tfrac{13387}{256}+\tfrac{2119}{128}\epsilon+\tfrac{49}{64}\epsilon^{2 }-\tfrac{3}{32}\epsilon^{3}\right)\alpha_{y}^{3}\\ &\qquad+\tfrac{1}{32}(11+2\epsilon)(137+48\zeta_{3}+24\epsilon) \alpha_{y}^{2}\alpha_{g}+\tfrac{1}{64}(11+2\epsilon)(77-192\zeta_{3}+12 \epsilon)\alpha_{y}\alpha_{g}^{2}\\ &\qquad+\big{[}-\tfrac{331}{32}-\tfrac{21}{16}\zeta_{3}-(\tfrac{ 111}{64}-\tfrac{3}{8}\zeta_{3})\xi+(\tfrac{39}{64}+\tfrac{3}{16}\zeta_{3}) \xi^{2}+\tfrac{5}{32}\xi^{3}-(\tfrac{109}{24}+\tfrac{17}{16}\xi)\epsilon+\tfrac {5}{18}\epsilon^{2}\big{]}\alpha_{g}^{3}\,.\end{split} \tag{108}\]
The gauge field anomalous dimension is
\[\begin{split}\gamma^{(1)}_{A}&=\tfrac{1}{6}(9+3\xi+ 4\epsilon)\alpha_{g}\,,\\ \gamma^{(2)}_{A}&=\tfrac{1}{8}(95+11\xi+2\xi^{2}+28 \epsilon)\,\alpha_{g}^{2}-\tfrac{1}{4}(11+2\epsilon)^{2}\alpha_{y}\alpha_{g}\,, \\ \gamma^{(3)}_{A}&=\tfrac{1}{8}(11+2\epsilon)^{2}(20 +3\epsilon)\alpha_{y}^{2}\alpha_{g}-\tfrac{31}{32}(11+2\epsilon)^{2}\alpha_{g} ^{2}\alpha_{y}\\ &\qquad+\big{[}\tfrac{2039}{96}-\tfrac{255}{16}\zeta_{3}-( \tfrac{9}{32}-\tfrac{3}{4}\zeta_{3})\xi+(\tfrac{33}{32}+\tfrac{3}{16}\zeta_{3} )\xi^{2}+\tfrac{7}{32}\xi^{3}-(\tfrac{347}{72}+3\zeta_{3}+\xi)\epsilon-\tfrac{ 49}{18}\epsilon^{2}\big{]}\alpha_{g}^{3}\,,\end{split} \tag{109}\]
and the corresponding ghost has
\[\begin{split}\gamma^{(1)}_{c}&=-\tfrac{1}{4}(3- \xi)\alpha_{g}\,,\\ \gamma^{(2)}_{c}&=\tfrac{1}{48}(15-3\xi+20\epsilon) \,\alpha_{g}^{2}\,,\\ \gamma^{(3)}_{c}&=-\tfrac{23}{64}(11+2\epsilon)^{2} \alpha_{g}^{2}\alpha_{y}\\ &\qquad+\big{[}\tfrac{3569}{192}+\tfrac{255}{32}\zeta_{3}-\tfrac {1}{8}(15+3\zeta_{3})\xi+\tfrac{3}{32}(1-\zeta_{3})\xi^{2}+\tfrac{3}{64}\xi^{3 }+(\tfrac{983}{144}+\tfrac{3}{2}\zeta_{3}-\tfrac{7}{16}\xi)\epsilon+\tfrac{3 5}{108}\epsilon^{2}\big{]}\alpha_{g}^{3}\,.\end{split} \tag{110}\]
As the overall renormalisation of the gauge-fixing term cancels, the \(\beta\) function of the gauge parameter reads
\[\beta_{\xi}=-2\,\xi\,\gamma_{A}\,. \tag{111}\]
We observe that (111) has two types of fixed points, corresponding either to Landau gauge (\(\xi^{*}=0\)) or to a vanishing gauge field anomalous dimension (\(\gamma_{A}=0\)). The latter happens at
\[\xi^{*}=-3+2.28\,\epsilon+10.19\,\epsilon^{2}+21.92\,\epsilon^{3}+\mathcal{O} (\epsilon^{4})\,. \tag{112}\]
Moreover, we note that the critical exponent of the flow (111) at the fixed point (112)
\[\frac{\partial\beta_{\xi}}{\partial\xi}\Big{|}_{\xi=\xi^{*}}=1.368\,\epsilon+1.146\,\epsilon^{2}+13.83\,\epsilon^{3}+\mathcal{O}(\epsilon^{4}) \tag{113}\]
is manifestly positive. Hence, we conclude that the Landau gauge corresponds to a UV fixed point of the flow (111), whereas a vanishing gauge field anomalous dimension corresponds to an IR attractive fixed point.
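For illustration, the truncated series (112) and (113) can be evaluated at \(\epsilon=0.06\), the value used in Fig. 5; the snippet and its variable names are ours.

```python
eps = 0.06  # epsilon value used in Fig. 5

# Truncated series (112) for the nontrivial fixed point of the gauge parameter:
xi_star = -3 + 2.28 * eps + 10.19 * eps**2 + 21.92 * eps**3
# Truncated series (113) for the critical exponent of the flow (111):
theta_xi = 1.368 * eps + 1.146 * eps**2 + 13.83 * eps**3

print(f"xi*      = {xi_star:.4f}")   # approx -2.8218
print(f"exponent = {theta_xi:.4f}")  # approx  0.0892 > 0: IR attractive
```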
|
2304.11746 | Terminal spaces of monoids | The purpose of this note is a wide generalization of the topological results
of various classes of ideals of rings, semirings, and modules, endowed with
Zariski topologies, to strongly irreducible ideals (endowed with Zariski
topologies) of monoids, called terminal spaces. We show that terminal spaces
are $T_0$, quasi-compact, and every nonempty irreducible closed subset has a
unique generic point. We characterize arithmetic monoids in terms of terminal
spaces. Finally, we provide necessary and sufficient conditions for the
subspaces of maximal and prime ideals to be dense in the corresponding terminal
spaces. | Amartya Goswami | 2023-04-23T20:39:17Z | http://arxiv.org/abs/2304.11746v1 | # Terminal spaces of monoids
###### Abstract.
The purpose of this note is a wide generalization of the topological results of various classes of ideals of rings, semirings, and modules, endowed with Zariski topologies, to strongly irreducible ideals (endowed with Zariski topologies) of monoids, called terminal spaces. We show that terminal spaces are \(T_{0}\), quasi-compact, and every nonempty irreducible closed subset has a unique generic point. We characterize arithmetic monoids in terms of terminal spaces. Finally, we provide necessary and sufficient conditions for the subspaces of maximal and prime ideals to be dense in the corresponding terminal spaces.
Key words and phrases:strongly irreducible ideals; arithmetic monoids; Zariski topology; generic points 2020 Mathematics Subject Classification: 20M12, 20M14, 54F65
## 1. Introduction and Preliminaries
The notion of strongly irreducible ideals was introduced for commutative rings in [10], under the name primitive ideals. In [1, p. 301, Exercise 34], ideals of this kind are called quasi-prime ideals. The term "strongly irreducible" was first used for noncommutative rings in [1]. Since then, several algebraic and topological studies have been done on these types of ideals of rings (see [1, 10, 11]). The notion of strongly irreducible ideals has been generalized to semirings (see [13, 1]) and modules (see [14, 15]).
The aim of this note is to study the topological properties of the space of strongly irreducible ideals of a monoid endowed with a Zariski topology. This is a wide generalization of Zariski spaces. Moreover, strongly irreducible ideals are the "largest" class of ideals on which one can impose a Zariski topology. Therefore, we not only generalize some of the topological results from the above-mentioned works on strongly irreducible ideals of rings, semirings, and semimodules to monoids but also from maximal, prime, minimal prime, and primary ideals of those structures to strongly irreducible ideals of monoids. We highlight the results that have been generalized here. Although our setup is on monoids, many of the results still hold for (commutative) semigroups, which cannot be further generalized to magmas.
By a monoid \(M\), we shall mean a system \((M,\cdot,1)\) such that \(\cdot\) is an associative, commutative (multiplicative) binary operation on \(M\) and \(1\) is an element of \(M\) such that \(x\cdot 1=x\) for all \(x\in M\). We shall write \(xy\) for \(x\cdot y\) and we shall assume all our monoids are commutative. An element \(m\) of \(M\) is called _invertible_ if \(mm^{\prime}=1\), for some \(m^{\prime}\in M\). If \(S\) and \(T\) are subsets of a monoid \(M\), then by the _set product_\(ST\) of \(S\) and \(T\) we shall mean \(ST=\{st\mid s\in S,t\in T\}\). If \(S=\{s\}\) we write \(ST\) as \(sT\), and similarly for \(T=\{t\}\). Thus
\[ST=\cup\{St\mid t\in T\}=\cup\{sT\mid s\in S\}.\]
An _ideal_ of a monoid \(M\) is a subset \(I\) such that \(i\in I\) and \(m\in M\) implies \(im\in I\), and \(I\) is called a _proper_ ideal if \(I\neq M\). By \(\mathcal{I}\left(M\right)\), we shall denote the set of all ideals of \(M\). A monoid \(M\) is called _Noetherian_ if it satisfies the ascending chain condition on ideals. If \(S\) is a nonempty subset of a monoid \(M\), then \(\langle S\rangle\) will denote the smallest ideal generated by \(S\). If \(S=\{s\}\), then we shall write \(\langle s\rangle\) for \(\langle\{s\}\rangle\). A proper ideal \(I\) is called _prime_ if \(ii^{\prime}\in I\) implies \(i\in I\) or \(i^{\prime}\in I\). If \(I\) and \(J\) are ideals of \(M\), then their _product_ is defined by \(IJ=\{ij\mid i\in I,j\in J\}\), which is also an ideal of \(M\). If \(I\) is an ideal of \(M\), the _radical_ of \(I\) is defined by
\[\sqrt{I}=\{m\in M\mid m^{k}\in I\text{ for some }k\in\mathbb{Z}^{+}\}.\]
An ideal \(I\) is said to be a _radical ideal_ (or to be a _semiprime_) if \(\sqrt{I}=I\). A proper ideal \(L\) of \(M\) is called _irreducible_ if \(L=I\cap J\) (for ideals \(I,J\) of \(M\)) implies \(L=I\) or \(L=J\). A proper ideal \(K\) of \(M\) is said to be _strongly irreducible_ if \(I\cap J\subseteq K\) implies \(I\subseteq K\) or \(J\subseteq K\).
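These notions can be seen at work in a small toy computation. The sketch below is a hypothetical example of ours (the multiplicative monoid \(\mathbb{Z}_{6}\) does not appear in the text): it enumerates all proper ideals of \((\mathbb{Z}_{6},\cdot,1)\) by brute force and singles out the strongly irreducible ones.

```python
from itertools import combinations

# Toy example: the multiplicative monoid Z_6 = ({0,...,5}, ., 1), x.y mod 6.
M = range(6)
mul = lambda x, y: (x * y) % 6

def is_ideal(I):
    # I is an ideal iff i in I and m in M imply im in I.
    return all(mul(i, m) in I for i in I for m in M)

ideals = [frozenset(c) for r in range(1, 7) for c in combinations(M, r)
          if is_ideal(frozenset(c))]
proper = [I for I in ideals if I != frozenset(M)]

def strongly_irreducible(K):
    # K is strongly irreducible iff I ∩ J ⊆ K implies I ⊆ K or J ⊆ K.
    return all(I <= K or J <= K
               for I in ideals for J in ideals if I & J <= K)

print(sorted(map(sorted, proper)))
# [[0], [0, 2, 3, 4], [0, 2, 4], [0, 3]]
print(sorted(map(sorted, filter(strongly_irreducible, proper))))
# [[0, 2, 3, 4], [0, 2, 4], [0, 3]]
```

Here \(\{0\}\) fails to be strongly irreducible since \(\langle 3\rangle\cap\langle 2\rangle=\{0\}\) while neither \(\langle 3\rangle=\{0,3\}\) nor \(\langle 2\rangle=\{0,2,4\}\) is contained in \(\{0\}\).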
## 2. Terminal spaces
Let \(M\) be a monoid and let \(\mathcal{S}(M)\) be the set of all strongly irreducible ideals of \(M\). We impose a Zariski topology (in the sense of [60, §1.1.1]) on \(\mathcal{S}(M)\) by defining closed sets by
\[\mathcal{HK}(X)=\begin{cases}\{J\in\mathcal{S}(M)\mid J\supseteq\mathcal{K}(X )\},&X\neq\emptyset;\\ \emptyset,&X=\emptyset,\end{cases} \tag{2.1}\]
where \(X\subseteq\mathcal{S}(M)\) and \(\mathcal{K}(X)=\bigcap_{I\in X}I\). The following theorem shows that \(\mathcal{HK}\) is a Kuratowski closure operator on \(\mathcal{S}(M)\), and hence indeed induces a closed-set topology on \(\mathcal{S}(M)\).
**Theorem 2.1**.: _Let \(M\) be a monoid and let \(\mathcal{HK}\) be defined as in (2.1)._
1. \(\mathcal{HK}(\emptyset)=\emptyset\)_._
2. _For all_ \(X\subseteq\mathcal{S}(M)\)_,_ \(X\subseteq\mathcal{HK}(X)\)_._
3. _For all_ \(X\subseteq\mathcal{S}(M)\)_,_ \(\mathcal{HK}(\mathcal{HK}(X))=\mathcal{HK}(X)\)_._
4. _For all_ \(X,X^{\prime}\subseteq\mathcal{S}(M)\)_,_ \(\mathcal{HK}(X\cup X^{\prime})=\mathcal{HK}(X)\cup\mathcal{HK}(X^{\prime})\)_._
Proof.: (1)-(2) Follow from (2.1).
(3) By (2), \(X\subseteq\mathcal{HK}(X)\) and hence \(\mathcal{HK}(\mathcal{HK}(X))\supseteq\mathcal{HK}(X)\), by the increasing property of \(\mathcal{HK}\). The other inclusion follows from (2.1).
(4) By (2) and by the increasing property of \(\mathcal{HK}\), we have \(\mathcal{HK}(X\cup X^{\prime})\supseteq\mathcal{HK}(X)\cup\mathcal{HK}(X^{ \prime})\). Suppose \(J\in\mathcal{HK}(X\cup X^{\prime})\). Then \(\mathcal{K}(X)\cap\mathcal{K}(X^{\prime})\subseteq J\). Since \(J\) is strongly irreducible, \(\mathcal{K}(X)\subseteq J\) or \(\mathcal{K}(X^{\prime})\subseteq J\), and hence, \(J\in\mathcal{HK}(X)\cup\mathcal{HK}(X^{\prime})\).
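On a finite example one can also confirm Theorem 2.1 directly. The sketch below (again for the hypothetical monoid \(\mathbb{Z}_{6}\) of the previous listing, whose strongly irreducible ideals are \(\{0,3\}\), \(\{0,2,4\}\) and \(\{0,2,3,4\}\)) verifies the four axioms over all subsets of \(\mathcal{S}(\mathbb{Z}_{6})\).

```python
from itertools import combinations

S = [frozenset({0, 3}), frozenset({0, 2, 4}), frozenset({0, 2, 3, 4})]

def K(X):
    # Kernel: intersection of all ideals in X.
    out = set(range(6))
    for I in X:
        out &= I
    return frozenset(out)

def HK(X):
    # Hull of the kernel, as in (2.1); HK(empty set) = empty set by convention.
    return frozenset(J for J in S if K(X) <= J) if X else frozenset()

P = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

assert HK(frozenset()) == frozenset()                         # Theorem 2.1(1)
assert all(X <= HK(X) for X in P)                             # Theorem 2.1(2)
assert all(HK(HK(X)) == HK(X) for X in P)                     # Theorem 2.1(3)
assert all(HK(X | Y) == HK(X) | HK(Y) for X in P for Y in P)  # Theorem 2.1(4)
print("All Kuratowski axioms hold on S(Z_6).")
```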
From Theorem 2.1(4), it is clear that the class of strongly irreducible ideals is the "largest" class of ideals of a monoid that can be endowed with a hull-kernel topology (= Zariski topology). The set \(\mathcal{S}(M)\) endowed with the above-mentioned hull-kernel topology will be called a _terminal space_. The following proposition characterizes strongly irreducible ideals in terms of this closure operator, and it generalizes the ring-theoretic result [33, Section 2.2, p. 11].
**Proposition 2.2**.: _The operation defined in (2.1) is a Kuratowski closure operator on a class \(\mathcal{F}\) of ideals of \(M\) if and only if_
\[J\cap K\subseteq I\quad\text{implies}\quad J\subseteq I\ \text{ or }\ K\subseteq I,\]
_for all \(J,K\in\mathcal{I}(M)\) and for all \(I\in\mathcal{F}.\)_
Before we discuss topological properties of terminal spaces, let us note down a few more elementary results about the closure operator \(\mathcal{HK},\) which will be used in the sequel.
**Lemma 2.3**.: _Let \(M\) be a monoid and let \(X\), \(X^{\prime}\), \(\{X_{\lambda}\}_{\lambda\in\Lambda}\) be nonempty subsets of \(\mathcal{S}(M).\) Then the following hold._
1. \(\mathcal{HK}(M)=\emptyset.\)__
2. \(\mathcal{HK}(X)=\overline{X}.\)__
3. \(\mathcal{HK}(X)\cup\mathcal{HK}(X^{\prime})=\mathcal{H}(\mathcal{K}(X)\cap\mathcal{K}(X^{\prime})).\)__
4. \(\bigcap_{\lambda\in\Lambda}\mathcal{HK}(X_{\lambda})=\mathcal{HK}\left( \bigcap_{\lambda\in\Lambda}X_{\lambda}\right).\)__
5. \(\mathcal{HK}(X)\subseteq\mathcal{HK}(\langle X\rangle)\subseteq\mathcal{HK}(\sqrt{\langle X\rangle}).\)__
Proof.: (1) Follows from the definition of a strongly irreducible ideal of \(M.\)
(2) From Theorem 2.1(2), we have \(\overline{X}\subseteq\overline{\mathcal{HK}(X)}=\mathcal{HK}(X).\) Let \(\mathcal{HK}(Y)\) be an arbitrary closed subset of \(\mathcal{S}(M)\) containing \(X.\) Then
\[\mathcal{HK}(Y)=\mathcal{HK}(\mathcal{HK}(Y))\supseteq\mathcal{HK}(X).\]
Since by Theorem 2.1(2), \(\mathcal{HK}(X)\) is the smallest closed set containing \(X\), we have the claim.
(3)-(5) Straightforward.
The next result generalizes Theorem 4.1 and Theorem 3.1 in [11], Theorem 9 in [156], Theorem 4.1(v)-(vi) in [1], and Proposition 2.4 in [200].
**Theorem 2.4**.: _Every terminal space is quasi-compact and a \(T_{0}\)-space._
Proof.: Let \(\{C_{\lambda}\}_{\lambda\in\Lambda}\) be a family of closed sets of \(\mathcal{S}(M)\) and let \(\bigcap_{\lambda\in\Lambda}C_{\lambda}=\emptyset.\) Then \(C_{\lambda}=\mathcal{HK}(X_{\lambda})\) for some subsets \(X_{\lambda}\) of \(\mathcal{S}(M),\) and by Lemma 2.3(4), we have
\[\bigcap_{\lambda\in\Lambda}\mathcal{HK}(X_{\lambda})=\mathcal{HK}\left( \bigcap_{\lambda\in\Lambda}X_{\lambda}\right)=\emptyset.\]
Let \(K=\langle\bigcup_{\lambda\in\Lambda}\mathcal{K}(X_{\lambda})\rangle.\) We claim that \(K=M.\) If not, then there exists a maximal ideal \(J\) of \(M\) such that
\[\bigcap_{I\in X_{\lambda}}I\subseteq K\subseteq J,\]
for all \(\lambda\in\Lambda.\) Therefore, \(J\in\mathcal{HK}(X_{\lambda})=C_{\lambda}\) for all \(\lambda\in\Lambda\) (note that maximal ideals are strongly irreducible), a contradiction. Since \(1\in K,\) we have \(1\in\bigcup_{i=1}^{n}\mathcal{K}(X_{\lambda_{i}}),\) for a finite subset \(\{\lambda_{1},\ldots,\lambda_{n}\}\) of \(\Lambda.\) Hence, \(\bigcap_{i=1}^{n}C_{\lambda_{i}}=\emptyset,\) and by the finite intersection property, we have the quasi-compactness of \(\mathcal{S}(M).\)
To show the \(T_{0}\) separation property, let \(I,I^{\prime}\in\mathcal{S}(M)\) such that \(\mathcal{HK}(\{I\})=\mathcal{HK}(\{I^{\prime}\}).\) It suffices to show \(I=I^{\prime}.\) Since \(I^{\prime}\in\mathcal{HK}(\{I\}),\) we have \(I\subseteq I^{\prime}.\) Similarly, we obtain \(I^{\prime}\subseteq I.\) Hence \(I=I^{\prime}.\)
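The \(T_{0}\) property can again be made explicit on the hypothetical \(\mathbb{Z}_{6}\) example: by Lemma 2.3(2), the closure of a point \(I\) is \(\mathcal{H}(I)=\{J\in\mathcal{S}(M)\mid I\subseteq J\}\), and the sketch below checks that distinct points have distinct closures.

```python
S = [frozenset({0, 3}), frozenset({0, 2, 4}), frozenset({0, 2, 3, 4})]

# Closure of the point I is H(I) = {J in S(M) : I <= J}, cf. Lemma 2.3(2).
closure = {I: frozenset(J for J in S if I <= J) for I in S}

assert len(set(closure.values())) == len(S)  # pairwise distinct closures: T0
# Not T1: the closure of {0,3} also contains the larger strongly
# irreducible ideal {0,2,3,4}, cf. Theorem 2.5 below.
print({tuple(sorted(I)): sorted(map(sorted, C)) for I, C in closure.items()})
```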
The following result characterizes \(T_{1}\) terminal spaces, and generalizes Theorem 3.2 in [11], Theorem 3.7 in [11], and Theorem 3 in [111].
**Theorem 2.5**.: _Let \(M\) be a monoid. A terminal space \(\mathcal{S}(M)\) is a \(T_{1}\)-space if and only if no strongly irreducible ideal of \(M\) is contained in another strongly irreducible ideal of \(M\)._
Proof.: If \(\mathcal{S}(M)\) is a \(T_{1}\)-space, then for every \(I\in\mathcal{S}(M)\) we have \(\overline{\{I\}}=\{I\}\). By Lemma 2.3(2), \(\overline{\{I\}}=\mathcal{H}\mathcal{K}(\{I\})=\mathcal{H}(I)\), and so, \(\{I\}=\mathcal{H}(I)\), implying that the only strongly irreducible ideal of \(M\) containing \(I\) is \(I\) itself. For the converse, let \(I\in\mathcal{S}(M)\); by assumption, \(I\) is the unique strongly irreducible ideal of \(M\) that contains \(I\). Then by Lemma 2.3(2),
\[\overline{\{I\}}=\mathcal{H}\mathcal{K}(\{I\})=\mathcal{H}(I)=\{I\}.\]
Thus \(\{I\}\) is a closed set, proving that \(\mathcal{S}(M)\) is a \(T_{1}\)-space.
Our next goal is to study generic points of irreducible closed sets of terminal spaces. Recall that a subset \(Y\) of a topological space \(X\) is called _irreducible_ if for any closed subsets \(Y_{1}\) and \(Y_{2}\) in \(X\), \(Y\subseteq Y_{1}\cup Y_{2}\) implies that \(Y\subseteq Y_{1}\) or \(Y\subseteq Y_{2}\). A maximal irreducible subset \(Y\) of \(X\) is called an _irreducible component_. An element \(y\) of a closed subset \(Y\) of \(X\) is called a _generic point of_\(Y\) if \(Y=\overline{\{y\}}\). The following result characterizes irreducible subsets of a terminal space. Moreover, this result generalizes Theorem 3.3 in [11], Proposition 3 in [12], and Theorem 2.6(1) in [13].
**Theorem 2.6**.: _Let \(M\) be a monoid. A nonempty closed subset \(X\) of a terminal space \(\mathcal{S}(M)\) is irreducible if and only if \(\mathcal{K}(X)\) is a strongly irreducible ideal of \(M\)._
Proof.: It is clear that \(\mathcal{K}(X)\) is a proper ideal of \(M\). Let \(I\cap J\subseteq\mathcal{K}(X)\) for some \(I,J\in\mathcal{I}(M)\). Then for any \(L\in X\), we have \(I\subseteq L\) or \(J\subseteq L\) since \(L\in\mathcal{S}(M)\). Hence \(X\subseteq\mathcal{H}(I)\cup\mathcal{H}(J)\). Since \(X\) is irreducible, \(X\subseteq\mathcal{H}(I)\) or \(X\subseteq\mathcal{H}(J)\), which implies that \(I\subseteq\mathcal{K}(X)\) or \(J\subseteq\mathcal{K}(X)\). Therefore, \(\mathcal{K}(X)\) is a strongly irreducible.
For the converse, let \(\mathcal{K}(X)\) be a strongly irreducible ideal of \(M\). Since \(\mathcal{K}(X)\neq M\), \(X\) is nonempty. Let \(X=X_{1}\cup X_{2}\) for some nonempty closed subsets \(X_{1},X_{2}\) of the terminal space \(\mathcal{S}(M)\). Then \(\mathcal{K}(X)\supseteq\mathcal{K}(X_{1})\cap\mathcal{K}(X_{2})\). Since \(\mathcal{K}(X)\) is strongly irreducible, \(\mathcal{K}(X)\in\mathcal{H}(\mathcal{K}(X_{1})\cap\mathcal{K}(X_{2}))\). By Lemma 2.3(3), this implies \(\mathcal{K}(X)\in\mathcal{H}\mathcal{K}(X_{1})\cup\mathcal{H}\mathcal{K}(X_{2})\). If \(\mathcal{K}(X)\in\mathcal{H}\mathcal{K}(X_{1})\), then
\[X\subseteq\overline{X}=\mathcal{H}\mathcal{K}(X)\subseteq\mathcal{H}\mathcal{ K}(X_{1})=\overline{X_{1}}=X_{1},\]
where the first and the second equalities follow from Lemma 2.3(2). Similarly, if \(\mathcal{K}(X)\in\mathcal{H}\mathcal{K}(X_{2})\), then we obtain \(X\subseteq X_{2}\). This proves that \(X\) is irreducible.
The following corollary generalizes Corollary 3.1 in [11].
**Corollary 2.7**.: _Every nonempty irreducible closed subset of a terminal space \(\mathcal{S}(M)\) has a unique generic point._
Proof.: Let \(\mathcal{H}(I)\) be a nonempty irreducible closed subset of \(\mathcal{S}(M)\). Then by Theorem 2.6, \(I\) is strongly irreducible. Hence \(\overline{\{I\}}=\mathcal{H}\mathcal{K}(\{I\})=\mathcal{H}(I)\), where the first equality follows from Lemma 2.3(2). Thus \(I\) is a generic point of \(\mathcal{H}(I)\). The uniqueness of this point follows from the fact that \(\mathcal{S}(M)\) is a \(T_{0}\)-space (see Theorem 2.4).
The following one-to-one correspondence generalizes Theorem 3.4 in [11].
**Theorem 2.8**.: _Let \(M\) be a monoid. Then there is a bijection between the set of irreducible components of the terminal space \(\mathcal{S}(M)\) and the set of minimal strongly irreducible ideals of \(M\)._
Proof.: If \(X\) is an irreducible component of the terminal space \(\mathcal{S}(M)\), then by Theorem 2.6, \(X=\mathcal{H}(I)\) for some \(I\in\mathcal{S}(M)\). If \(J\in\mathcal{S}(M)\) such that \(I\supseteq J\), then \(\mathcal{H}(I)\subseteq\mathcal{H}(J)\), so that \(I=J\) by maximality of \(X\). Conversely, let \(I\) be a minimal strongly irreducible ideal of \(M\) and let \(\mathcal{H}(I)\subseteq\mathcal{H}(J)\) for some \(J\in\mathcal{S}(M)\). Then
\[\overline{\{I\}}=\mathcal{H}(I)\subseteq\mathcal{H}(J)=\overline{\{J\}},\]
implying that \(I=J\). Hence, \(\mathcal{H}(I)\) is an irreducible component of \(\mathcal{S}(M)\).
A characterization of the invertible elements of \(M\) in terms of the terminal space \(\mathcal{S}(M)\) is given in the following proposition, and this result generalizes Theorem 4.1(iii) in [1].
**Proposition 2.9**.: _Let \(M\) be a monoid and \(\mathcal{S}(M)\) be a terminal space. Then \(\mathcal{S}(M)\setminus\mathcal{H}\mathcal{K}(\langle m\rangle)=\mathcal{S}(M)\) if and only if \(m\) is an invertible element of \(M\)._
Proof.: Note that \(\mathcal{S}(M)\setminus\mathcal{H}\mathcal{K}(\langle m\rangle)=\mathcal{S}(M)\) implies \(m\) is not in any maximal ideal of \(M\), and hence \(m\) is invertible. The converse follows immediately from the fact that every strongly irreducible ideal is proper.
It is well-known that the prime spectrum of a Noetherian (commutative) ring endowed with the Zariski topology is a Noetherian space. The following proposition generalizes this to strongly irreducible ideals of monoids, and it also generalizes Proposition 4.2(i) in [1]. The proof is easy, and so will be omitted.
**Proposition 2.10**.: _If \(M\) is a Noetherian monoid, then \(\mathcal{S}(M)\) is a Noetherian terminal space._
Recall that a monoid is called _arithmetic_ if \(\mathcal{I}(M)\) is a distributive lattice. The following theorem characterizes arithmetic monoids in terms of strongly irreducible ideals. This result is a generalization of Theorem 10 in [156]. One half of the proof uses the Zariski topology on \(\mathcal{S}(M)\).
**Theorem 2.11**.: _A monoid \(M\) is arithmetic if and only if each ideal is the intersection of all strongly irreducible ideals containing it._
Proof.: Let \(I\in\mathcal{I}(M)\) and suppose that \(I=\bigcap\{J\in\mathcal{S}(M)\mid I\subseteq J\}\). To show \(\mathcal{I}(M)\) is distributive, it suffices to show that the lattice \(\mathcal{I}(M)\) is isomorphic to the lattice of some closed sets of the terminal space \(\mathcal{S}(M)\). Note that the map \(I\mapsto\{J\in\mathcal{S}(M)\mid J\supseteq I\}=\mathcal{H}(I)\) is a bijection onto its image and, since \(\mathcal{H}(I)\) is a closed set, this map is also a lattice isomorphism.
For the converse, we first observe that by [1], in a distributive lattice, irreducible ideals and strongly irreducible ideals coincide. The rest of the proof now follows from Theorem 6 and Theorem 7 in [156].
Finally, we wish to see relations between a terminal space and its subspaces of maximal ideals \(\operatorname{Max}(M)\) and prime ideals \(\operatorname{Spec}(M)\). To do so, we first talk about radicals induced by maximal, prime, and strongly irreducible ideals of a monoid \(M\). The _\(m\)-radical_ \(\sqrt[m]{M}\) (respectively, _\(p\)-radical_ \(\sqrt[p]{M}\) and _\(s\)-radical_ \(\sqrt[s]{M}\)) of \(M\) is the intersection of all maximal ideals (respectively, prime ideals and strongly irreducible ideals) of \(M\).
**Proposition 2.12**.: _Let \(M\) be a monoid._
1. _The subspace_ \(\operatorname{Max}(M)\) _is dense in the terminal space_ \(\mathcal{S}(M)\) _if and only if_ \(\sqrt[m]{M}=\sqrt[s]{M}\)_._
2. _The subspace_ \(\operatorname{Spec}(M)\) _is dense in the terminal space_ \(\mathcal{S}(M)\) _if and only if_ \(\sqrt[p]{M}=\sqrt[s]{M}\)_._
Proof.: (2) Let \(\overline{\operatorname{Spec}(M)}=\mathcal{S}(M)\). Then \(\{J\in\mathcal{S}(M)\mid\bigcap_{P\in\operatorname{Spec}(M)}P\subseteq J\}=\mathcal{S}(M)\). This implies
\[\sqrt[p]{M}=\bigcap_{P\in\operatorname{Spec}(M)}P\subseteq\bigcap_{J\in\mathcal{S}(M)}J=\sqrt[s]{M}.\]
Furthermore, \(\operatorname{Spec}(M)\subseteq\mathcal{S}(M)\) implies \(\sqrt[s]{M}\subseteq\sqrt[p]{M}\). Hence, we have the desired equality. To obtain the converse, let \(\mathcal{S}(M)\setminus\overline{\operatorname{Spec}(M)}\neq\emptyset\). Then there exists a \(J\in\mathcal{S}(M)\) with \(J\notin\overline{\operatorname{Spec}(M)}\). Therefore, there exists a neighbourhood \(N_{J}\) of \(J\) such that \(N_{J}\cap\operatorname{Spec}(M)=\emptyset\), and since \(J\notin\overline{\operatorname{Spec}(M)}=\mathcal{H}(\sqrt[p]{M})\), we have \(\sqrt[p]{M}\nsubseteq J\), whereas \(\sqrt[s]{M}\subseteq J\). In other words, we have \(\sqrt[p]{M}\neq\sqrt[s]{M}\).
(1) Follows analogously, with \(\operatorname{Spec}(M)\) replaced by \(\operatorname{Max}(M)\) and \(\sqrt[p]{M}\) by \(\sqrt[m]{M}\).
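As a closing illustration of Proposition 2.12, the three radicals can be computed for the hypothetical \(\mathbb{Z}_{6}\) example used above, where every strongly irreducible ideal happens to be prime and \(\{0,2,3,4\}\) is the unique maximal ideal.

```python
# Strongly irreducible ideals of (Z_6, ., 1), as computed earlier;
# here all of them are prime, and {0,2,3,4} is the unique maximal ideal.
strongly_irr = [frozenset({0, 3}), frozenset({0, 2, 4}), frozenset({0, 2, 3, 4})]
primes = strongly_irr
maximals = [frozenset({0, 2, 3, 4})]

inter = lambda fam: frozenset(set.intersection(*map(set, fam)))

s_rad, p_rad, m_rad = inter(strongly_irr), inter(primes), inter(maximals)
print(sorted(s_rad), sorted(p_rad), sorted(m_rad))  # [0] [0] [0, 2, 3, 4]
print("Spec(M) dense in S(M):", p_rad == s_rad)     # True
print("Max(M) dense in S(M):", m_rad == s_rad)      # False
```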
|
2306.13728 | A cross-age study on secondary school students' views of stars | Research in astronomy education has uncovered that many learners possess
limited and fragmented understanding of stars. The corresponding misconceptions
manifest in various areas such as star formation, size, the relationship
between stars and planets, and their position in space and have been shown to
persist across different age groups and educational settings, highlighting the
need for further investigation. This paper presents the findings of an
empirical study that examines secondary students' views of stars and their
evolution throughout their secondary school careers. Therefore, we designed and
evaluated an instrument for assessing students' views of stars in five domains.
The instrument creation process involved several steps, including
literature-based item development, an expert survey with faculty members, and a
quantitative pilot study with a sample of 390 secondary school and college
students. This process led to a final version of the instrument that exhibits
good psychometric properties. We used this new instrument in a cross-age study
to investigate the alignment of secondary students' ideas about stars with
scientific views across different stages of secondary education. The sample of
this main study comprised a total of 366 learners, from lower, middle and upper
secondary school. Our study findings reveal a progressive development of
students' perspectives on star-related topics throughout their school career:
We observed a statistically significant increase in the proportion of responses
aligning with scientific views across all aspects of stars examined in this
study, as students progressed from lower secondary to upper secondary levels.
We further report on widely held views of stars among our study participants
that oppose the scientific views, and discuss the implications of our findings
for both educational research and practice. | Philipp Bitzenbauer, Sarah Navarrete, Fabian Hennig, Malte S. Ubben, Joaquin M. Veith | 2023-06-23T18:31:08Z | http://arxiv.org/abs/2306.13728v1 | # A cross-age study on secondary school students' views of stars
###### Abstract
Research in astronomy education has uncovered that many learners possess limited and fragmented understanding of stars. The corresponding misconceptions manifest in various areas such as star formation, size, the relationship between stars and planets, and their position in space and have been shown to persist across different age groups and educational settings, highlighting the need for further investigation. This paper presents the findings of an empirical study that examines secondary students' views of stars and their evolution throughout their secondary school careers. Therefore, we designed and evaluated an instrument for assessing students' views of stars in five domains (stars and the solar system, formation and evolution of stars, properties of stars, (sub-)stellar objects, and spectral aspects). The instrument creation process involved several steps, including literature-based item development, an expert survey with faculty members, and a quantitative pilot study with a sample of \(N=390\) secondary school and college students. This process led to a final version of the instrument that exhibits good psychometric properties. We used this new instrument in a cross-age study to investigate the alignment of secondary students' ideas about stars with scientific views across different stages of secondary education. The sample of this main study comprised a total of \(N=366\) learners, including 148 lower, 151 middle and 67 upper secondary school students. Our study findings reveal a progressive development of students' perspectives on star-related topics throughout their school career: Using ANOVAs and conducting pairwise post-hoc comparisons, we observed a statistically significant increase in the proportion of responses aligning with scientific views across all aspects of stars examined in this study, as students progressed from lower secondary to upper secondary levels. We further report on widely held views of stars among our study participants that oppose the scientific views, and discuss the implications of our findings for both educational research and practice.
Astronomy; Stars; Cross-age study; Student Views
## I Introduction
Learning about astronomical objects poses a challenge for individuals due to the limited, skewed, or lack of direct experience with these objects [1]. Currently, research on mental models in astronomy education is mostly focused on topics related to celestial bodies such as the Sun, Moon, and Earth: For instance, there exists a large body of research on the development of students' mental models of Earth [2; 3], the origin of seasons [4], the day-and-night cycle [5] or lunar phases [6]. In contrast, research on more elaborate and modern astronomy concepts is still in its infancy [7; 8], although astrophysics research findings are increasingly making their way into the spotlight of news broadcasts and newspaper articles due to recent advancements in the field (e.g., see [9]), as evidenced by the multiple Nobel Prizes awarded for astrophysical research in recent years (e.g., 2002 [10], 2006 [11], 2011 [12], 2015 [13], 2017 [14], 2019 [15] and 2020 [16]). However, there is no need to rely solely on the number of Nobel Prizes to gather unanimous support for the assertion that modern astrophysics is an exceptionally captivating realm. Physics education research has found that space and astronomy topics spark significant interest among both boys and girls [17], and this comes as no surprise as argued in Ref. [18]: From an early age, students are consciously confronted with astronomical questions through the inevitable act of observing the sky [19]. Furthermore, their curiosity about the origins of humanity leads them to seek scientific explanations, highlighting the enduring allure and importance of astronomy and astrophysics in the eyes of students and non-specialists
alike. Consequently, astronomy concepts are on the rise in K-12 physics education [20; 21]. Thereby, the topic of stars and their evolution holds significant potential for physics education from various perspectives: Stars are objects whose structure and life can be described through the interaction of different sub-disciplines of physics (for an overview, see [22]). Through the application of fundamental physics principles, a comprehensive understanding can be gained of the formation and evolution of stars as well as their transition into compact objects such as white dwarfs, neutron stars, or black holes, which occurs when nuclear processes converting lighter elements (e.g., hydrogen and helium) into heavier elements (including carbon, oxygen, and elements across the periodic table up to uranium) cease. The example of stars offers a compelling demonstration of the interconnectedness between observation (such as spectral analysis) and theoretical description (for an overview, see [23]). Hence, stars are not only a fundamental aspect of astrophysics but may also serve as a central theme in astronomy education: They provide an excellent opportunity for students to grasp core aspects of stellar evolutionary processes. Additionally, the theory of star formation remains incomplete, with unresolved questions, offering a platform for learning about the Nature of Science. For example, the mechanisms leading to the collapse of interstellar clouds, and hence to star formation, are still not fully understood [24].
In this paper, we report on a cross-age study investigating secondary students' views of stars and their evolution throughout secondary school careers. Building on previous research on student learning of astronomy concepts in general, and stars in particular (see Section II), we formulate our research questions in Section III, and describe the design and evaluation of the research tool used in this study in Section IV. We present the findings of our study in Section VI and discuss these results against the backdrop of prior research in Section VII. Finally, we provide recommendations for both, future research and practice in astronomy education at the secondary school level (see Section IX).
## II Research background
### The big ideas of astronomy education research
_Topics covered and target groups addressed_
Lelliott and Rollnick [25] conducted a comprehensive review of peer-reviewed astronomy education studies published between 1974 and 2008. Their review highlighted that the majority of these studies focused on teaching and learning of five big ideas, "all involving the Earth in relation to its satellite and the Sun" [25] (p. 1777): (1) Earth [26; 27; 2; 28; 29; 30; 31; 32], (2) gravity [26; 27; 30; 31; 32], (3) the day-and-night cycle [33; 34; 5], (4) the seasons [35; 36; 37; 38], and (5) the Earth-Sun-Moon system [39; 28; 40]. However, there is a scarcity of studies exploring student learning in other areas of astronomy, such as the Big Bang [41; 42; 43; 20], black holes [7], stars [44; 45], and aspects related to sizes and distances of astronomical objects [1]. A recently published review article by Salimpour [46] further highlights the development of "a fertile landscape for research into Cosmology Education" (p. 819). It is noteworthy that the research in the field of astronomy education sketched above has addressed the teaching and learning of astronomy concepts in various target groups: These range from school-aged students [1; 47; 48; 34] and college and university students [45; 49; 50; 51; 52] to pre- and in-service teachers [53; 54; 55; 56].
_Tools to assess students' conceptions of astronomy topics_
To assess students' conceptual understanding and students' conceptions of the aforementioned astronomy topics, a number of concept inventories have been developed and evaluated, e.g., the Lunar Phases Concept Inventory [57], the Moon Phases Concept Inventory [58], the Light and Spectroscopy Concept Inventory [59; 60], the Astronomy Diagnostic Test [61; 62; 63], the Test of Astronomy Standards [64] or the Star Properties Concept Inventory [65]. However, in Ref. [66], the authors emphasize the necessity of developing further instruments that allow for the valid assessment of students' conceptions of more advanced topics such as black holes [7] or the Big Bang [20] on the one hand, and stars, especially with regard to aspects beyond their basic properties, on the other. The development of such diagnostic tools seems necessary to address the gap in the existing literature concerning students' understanding and students' conceptions of a broad range of facets related to the topic of stars. In the upcoming subsection II.2, we will provide an overview of astronomy education research results on this topic.
### Students' conceptions of stars
Agan's study [67] shed light on students' ideas about stars across various educational levels, revealing a multitude of poorly elaborated ideas: These encompass the twinkling nature of stars or the students' idea that Polaris (the North Star) is the brightest star. Notably, a significant number of students fail to recognize the Sun as a star, harbor the notion that stars are immortal, and mistakenly assume that all stars end in supernovae. Moreover, an erroneous view persists among students that all stars in the celestial sphere are equidistant from Earth [68; 69]. Further research has unveiled that students often describe stars as round objects without edges [68] and perceive them as motionless entities in the night sky while recognizing the Sun's apparent motion during the day [33; 70]. Their comprehension of
daily celestial motion and knowledge of significant celestial objects, such as Polaris and the ecliptic, also display limitations [71; 72]. Additionally, misconceptions regarding the size of stars persist among learners [73].
To (a) address these persistent ideas opposing the scientific view and (b) enhance students' learning experiences of astronomy concepts, researchers have explored the impact of out-of-school learning, particularly through planetarium visits. Lelliott [74] and Dunlop [75] conducted investigations into the effects of planetarium visits on secondary school students' understanding of astronomy topics. Lelliott's study [74] hints at different cognitive levels of students' knowledge, with planetarium visits leading to changes in their initial ideas and a shift towards more scientifically accurate notions of celestial motion.
### Research desiderata
To succinctly summarize, research in astronomy education has revealed that many learners possess limited and fragmented ideas about stars. Prevailing students' ideas opposing the scientific views encompass various aspects such as star formation, size, their relationship to planets, and their position in space. These challenges persist across diverse age groups and educational settings, underscoring the need for further investigation. Specifically, two key research desiderata emerge:
1. _Comprehensive analysis of students' views of stars:_ The studies presented in subsection II.2 have predominantly focused on isolated aspects of stars, leaving some areas unexplored. Consequently, there is a dearth of instruments capable of capturing students' views comprehensively, encompassing the multifaceted nature of the topic [66]. A comprehensive analysis and identification of students' views of stars, spanning the various sub-aspects (e.g., from star formation to spectral aspects), remains elusive. Recognizing the prevalent ideas among learners that contradict current scientific views, however, is vital, as it enables the development of targeted instructional interventions. Thus, a pressing research need exists to develop an instrument that can holistically assess learners' views on different aspects related to stars.
2. _Cross-age exploration of students' learning about stars:_ Although existing cross-age studies in astronomy have examined students' learning during limited periods of their school careers [76; 77], a comprehensive overview of students' learning about stars across their entire secondary school careers is lacking. Understanding the evolution of students' views of stars over time is crucial for designing instructional strategies that align with their cognitive development and evolving needs. By gaining insights into the progression of students' views throughout their secondary education, educators and curriculum developers can tailor interventions that effectively address conceptual challenges at different stages.
In this article, we tackle both of these research desiderata: On the one hand, we aim at developing an instrument to economically asses learners' views of different aspects regarding stars. On the other hand, we use this new instrument to explore the development of secondary school students' views of stars throughout their secondary school careers, and hence, aim at gaining a comprehensive overview of widespread students' ideas of stars opposing the current scientific views.
## III Research questions
The first research objective of this paper is to develop an instrument that can economically assess learners' views on various aspects of the stars and that performs well psychometrically on a sample of secondary school students. The second research objective is to gain insights into secondary school students' views of stars by examining their continuous development throughout secondary education and identifying widespread ideas that contradict the current scientific views. Hence, we aim at clarifying the following research questions:
1. How do the proportions of students' ideas about stars aligning with the scientific views compare among lower, middle and upper secondary school students?
2. What ideas opposing the scientific view on various aspects of stars are widespread among secondary school students?
## IV Design and evaluation of the instrument
In this section, we provide an overview of the development and evaluation of an instrument suitable for the assessment of learners' views of various aspects of stars. This endeavor aligns with the research objective outlined in the previous section (see Section III) and, hence, contributes to the validity and reliability of the research findings put forth in this paper.
### Design of the instrument
#### Determination of target group and question format
Research question 1 addresses the evolution of learners' views of stars throughout their secondary school careers. Hence, the primary target group is secondary school students. To allow for the economic identification of widespread student ideas of stars that oppose the scientific views on a large scale (see Research question 2),
the use of rating scale items seems a sensible approach which has been used in prior research that aimed at the economic assessment of students' views in science education research [78, 79]. Consequently, in our instrument we decided to include statements about different aspects of stars alongside a four-point rating scale (1 corresponds to "I do not agree", 2 to "I rather do not agree", 3 to "I rather agree" and 4 to "I agree"). We decided to include a response option for abstaining to ensure that participants were not compelled to choose either agreement or disagreement when uncertain. Additionally, this reduces the likelihood of participants guessing their responses (for similar arguments see [80]).
It is crucial to emphasize that the ratings provided by the participants for the statements in the instrument were not assigned as either 'true' or 'false.' Instead, we employed a categorization approach to classify the students' ratings into the following distinct categories: (a) "in line with the scientific view", (b) "opposing the scientific view", and (c) "abstained from voting". This categorization decision is well-justified given the dynamic nature of research on stars and their formation, which continues to grapple with unresolved questions in the field, as thoroughly discussed in the introduction of this article (cf. Section I). However, we refrain from a more fine-grained categorization that takes into account the degree of (dis)agreement among students, since the meaning assigned to the rating options is highly subjective and may therefore vary from person to person. For example, if a questionnaire item contained a statement that deviated from the currently accepted scientific view of stars, and a student selected the response option 'I rather agree,' we categorized the student's rating as 'opposing the scientific view.'
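To make this categorization concrete, the following minimal Python sketch (our own illustration; the function name and the per-item polarity flag are assumptions, not part of the published instrument) implements the three-way coding:

```python
def categorize(rating, item_aligns_with_science):
    """Map a 4-point rating (or None for an abstention) to one of the three
    categories described above.

    rating: 1 ("I do not agree") ... 4 ("I agree"), or None if abstained.
    item_aligns_with_science: True if agreeing with the statement matches
        the current scientific view, False if the statement opposes it.
    """
    if rating is None:
        return "abstained from voting"
    agrees = rating >= 3  # "I rather agree" or "I agree"
    if agrees == item_aligns_with_science:
        return "in line with the scientific view"
    return "opposing the scientific view"

# Example from the text: rating "I rather agree" (3) on a statement that
# deviates from the scientific view is coded as opposing that view.
assert categorize(3, item_aligns_with_science=False) == "opposing the scientific view"
```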
Lastly, we highlight that we deliberately ask for students' views or students' ideas instead of students' conceptions in the research questions underlying this study. The term conception would refer to representations and notions that people give to phenomena or their underlying patterns [81]. However, we assume that in this study we also collect ad hoc triggered mental connections of the learners, which are provoked by the items of the research instrument developed in this study, and which would not have been expressed without the input. These are then not internalized conceptions; we refer to them as students' views or ideas, which are potentially more superficial in nature. The distinction of the terms students' views or ideas from the term students' conceptions is in line with diSessa's knowledge-in-pieces perspective on learning (e.g., see [82, 83]), in which students' views or ideas are regarded as the "product of occasional mismatches between p-prims and contexts" ([84], p. 10).
#### Description of the content domain
With our instrument, it should be possible to capture students' understandings of all aspects of stars relevant for astronomy education at the secondary level. Therefore, the aspects to be included were initially identified based on (German) secondary school curricula. Additionally, we used typical undergraduate textbooks (e.g., [85, 18, 86]) as well as scientific articles on astronomy education research that cover student learning (e.g., see [43, 67, 75, 87, 88]) for the description of the relevant content domains. This procedure led to the identification of five thematic domains covering the relevant aspects of stars:
* Domain 1: Stars and solar system. This domain covers the Earth-Sun-Moon relationship as well as their classification as celestial objects.
* Domain 2: Formation and Evolution of Stars. This domain covers topics such as stars' origin, age, life and death.
* Domain 3: Properties of Stars. This domain includes key properties of stars, such as size or apparent motion.
* Domain 4: (Sub-)Stellar objects. This domain comprises main aspects of binary stars, brown dwarfs, white dwarfs and pulsars.
* Domain 5: Spectral aspects. This domain covers questions on stars' specific colors and their brightness.
In the initial iteration, we meticulously devised a comprehensive set of 82 rating scale items, encompassing the five thematic domains (i.e., sub-scales) mentioned above, to be subjected to thorough evaluation (see the next Subsection IV.2). These 82 items were partly newly developed; additionally, we also drew upon earlier instruments and empirical findings gained from prior research.
### Evaluation of the instrument
#### Expert survey
To ensure the content validity of the instrument we conducted an expert survey involving a panel of three esteemed faculty members with extensive expertise in astronomy research and teaching. The expert survey encompassed two key aspects: content evaluation and linguistic refinement.
Regarding content evaluation, the experts provided valuable insights on the scientific accuracy of the items and their alignment with the five content domains covered in Section IV.1. Their comments helped identify any necessary adjustments or reallocations. Simultaneously, the experts scrutinized the language employed in the items, offering recommendations for rephrasing where deemed essential.
Based on the invaluable feedback obtained from the expert survey, we refined the item set, resulting in a total of 65 revised items. These revised items were then subjected to further piloting.
#### Psychometric characterization
Finally, we conducted a psychometric evaluation of the remaining items. To this end, the items were administered to a sample of \(N=390\) secondary school and college students. The psychometric characterization of the items based on classical test theory was carried out to select the final item set for the five sub-scales. This involved examining the items' difficulties, with an accepted range of 0.2 to 0.8 according to [89], as well as their discriminatory powers, with accepted values \(\geq 0.2\) according to [90]. Additionally, Cronbach's Alpha was calculated as an estimator of the internal consistency of all five sub-scales [91].
During the assessment, a total of 10 items were identified and subsequently excluded due to their inadequate psychometric characteristics. Consequently, the final instrument comprises 55 items, distributed among the five sub-scales. A comprehensive overview of the item distribution per sub-scale, along with the internal consistencies of each sub-scale, the average item difficulties and discriminatory powers, is given in Table 1. Furthermore, to enhance clarity, we have included a sample item for each sub-scale. The final version of the instrument, with items arranged by sub-scales and alongside potential references, can be found in the appendix of this paper.
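For illustration, the sketch below computes the classical-test-theory statistics named above from a person-by-item score matrix; the exact operationalizations (rescaled mean score as difficulty for rating items, corrected item-total correlation as discriminatory power) are common conventions we assume, since the paper does not spell them out.

```python
import numpy as np

def cronbach_alpha(X):
    """Cronbach's Alpha for an (n_persons, n_items) score matrix X."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def item_difficulty(X, min_score=1, max_score=4):
    """Mean item score rescaled to [0, 1]; accepted range here: 0.2 to 0.8."""
    return (X.mean(axis=0) - min_score) / (max_score - min_score)

def discriminatory_power(X):
    """Corrected item-total correlation (each item vs. the sum of the
    remaining items); accepted values here: >= 0.2."""
    total = X.sum(axis=1)
    return np.array([np.corrcoef(X[:, j], total - X[:, j])[0, 1]
                     for j in range(X.shape[1])])
```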
Taken together, the rigorous process of development and evaluation culminated in a final version of the instrument that exhibits robust psychometric properties and enables a reliable assessment of secondary school students' views of various aspects of stars. Subsequently, this refined instrument was employed in our main study, which focuses on addressing the research questions outlined in Section III. In the following section, we provide a detailed account of the methodology employed in our cross-age study, followed by the presentation of our findings in Section VI.
## V Methods
### Study design, sample and instrument
A cross-age study design was chosen to approach a clarification of our research questions, as has been done in previous studies concerned with similar research objectives (cf. [92; 93]). The sample comprised \(N=366\) German secondary school students, divided into three different cohorts, enabling a deeper investigation of the temporal progression of students' views of stars throughout secondary education (for limitations of this approach, see Section VIII): We included participants from various grades, namely \(N_{1}=148\) (81 female) students from grades 7-8 (aged 13-14 years), \(N_{2}=151\) (70 female) students from grades 9-10 (aged 15-16 years) and \(N_{3}=67\) (31 female) students from grades 11-12 (aged 17-18 years). In the following, we will refer to these subsamples as Cohort 1 (lower secondary school), Cohort 2 (middle secondary school) and Cohort 3 (upper secondary school), respectively. The participants did not receive any instruction as part of this study prior to test administration beyond their regular physics lessons, and participation was completely voluntary as well as uncompensated. It is noteworthy that current physics curricula in Germany deal with astronomy topics in a rather superficial manner; in particular, many of the topics assessed by our instrument remain fringe topics throughout the entirety of secondary education. An overview of the sample is provided in Table 2.
The data collection took place at the end of the school year 2021/2022, when the instrument presented in Section IV was administered as a paper-pencil test (for all items of the five sub-scales, see the appendix). The Cronbach's Alpha values for the five sub-scales calculated from the main study data (cf. Table 3) were stable compared to the ones obtained in the pilot study (cf. Section IV.2).
### Data analysis
#### Analysis carried out to answer research question 1
Analyses of variance (ANOVAs) were conducted to check for differences between the three cohorts, with corresponding Tukey-Kramer post-hoc tests to check for significant differences between the groups. As a measure of effect size for the overall comparisons we used partial eta squared (\(\eta_{p}^{2}\)) where the commonly used categorization of small (\(\eta_{p}^{2}<0.06\)), medium (\(0.06\leq\eta_{p}^{2}<0.14\)) and large (\(0.14\leq\eta_{p}^{2}\)) effects was applied [94]. As a measure of effect size regarding the pairwise comparisons we used Cohen's \(d\) alongside the established ranges of small
\begin{table}
\begin{tabular}{c c c c} \hline \hline Cohort & Description & Grades & Sample Size \\ \hline
1 & lower secondary school & 7-8 & 148 \\
2 & middle secondary school & 9-10 & 151 \\
3 & upper secondary school & 11-12 & 67 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of the sample for the main study.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & & Cronbach’s Alpha & \\ Sub-scale & Cohort 1 & Cohort 2 & Cohort 3 \\ \hline
1 & 0.67 & 0.70 & 0.74 \\
2 & 0.86 & 0.87 & 0.79 \\
3 & 0.74 & 0.79 & 0.79 \\
4 & 0.86 & 0.80 & 0.84 \\
5 & 0.71 & 0.73 & 0.71 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Internal consistencies of the sub-scales of the instrument based on the data of the main study sample. These compare well with the ones obtained in the pilot study (cf. Section IV.2). For the descriptions of the sub-scales see Section IV.1 and Table 1, respectively.
(\(d<0.5\)), medium (\(0.5\leq d<0.8\)) and large (\(0.8\leq d\)) effect sizes [94]. To check the assumption of homogeneity of variances underlying the ANOVAs, we employed Levene's test [95]. Additionally, the normal distribution of the data was assessed using the Shapiro-Wilk test [96].
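A minimal sketch of this analysis pipeline, assuming per-student percentage scores and cohort labels as NumPy arrays (all variable and function names are ours); note that statsmodels' `pairwise_tukeyhsd` covers the unequal-group-size (Tukey-Kramer) case:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    n1, n2 = len(a), len(b)
    sp = np.sqrt(((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1))
                 / (n1 + n2 - 2))
    return (a.mean() - b.mean()) / sp

def analyse_domain(scores, cohorts):
    """One-way ANOVA with assumption checks, partial eta squared, and a
    Tukey(-Kramer) post-hoc test for one sub-scale."""
    groups = [scores[cohorts == c] for c in np.unique(cohorts)]
    print("Levene:", stats.levene(*groups))                  # homogeneity
    print("Shapiro-Wilk p:", [stats.shapiro(g)[1] for g in groups])
    f, p = stats.f_oneway(*groups)
    grand = scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    eta_p2 = ss_between / (ss_between + ss_within)           # partial eta^2
    print(f"F={f:.1f}, p={p:.3g}, eta_p^2={eta_p2:.2f}")
    print(pairwise_tukeyhsd(scores, cohorts))
```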
#### Analysis carried out to answer research question 2
To analyse the students' views in terms of scientific accuracy, we employed the categorization of responses described in Section IV, namely (a) "in line with the scientific view", (b) "opposing the scientific view" and (c) "abstained from voting". In the next step, the proportion of agreements with statements opposing the current scientific views was analysed and an interpretation of the corresponding students' views was given.
## VI Results
In the following, we report the results of our study, separated by domain. For each of the five investigated domains we will first report ANOVA results to evaluate the development of secondary school students' views of stars. In a second step, we will analyze all responses on the corresponding sub-scale of our instrument to identify widespread ideas opposing scientific views among secondary school learners. We refer to certain items of the instrument with the abbreviation x-y, which stands for item y of sub-scale x (for an overview of all items, see the appendix). To provide a concise overview, the ANOVA results are gathered in Table 4 and will be explored more thoroughly in the following subsections.
### Domain 1: Stars and solar system
Table 5 shows the descriptive statistics for all cohorts regarding items of domain 1. While cohort 1 students averaged 48.4% scientifically accurate answers, the proportion of responses aligning with scientific views for cohort 2 is 64.2% and further increases to 72.6% for cohort 3. A similar observation can be made for the median.
#### ANOVA results for domain 1
The trend observed in Table 5 is statistically substantiated by the ANOVA results (cf. Table 4). The difference between the three cohorts is statistically significant [\(F(2,363)=39.7,p<0.001;\eta_{p}^{2}=0.18\)]. Comparing the three cohorts directly yields a statistically significant difference between cohort 2 and cohort 3 (\(p<0.05\)) with an effect size of \(d=0.42\). Cohort 1, on the other hand, differs highly significantly from both cohort 2 (\(p<0.01\)) and cohort 3 (\(p<0.01\)) with medium to high effect sizes of \(d=0.52\) and \(d=1.07\), respectively. These results are summarized in the form of boxplots in Figure 1 - for the presentation of our boxplots, the whiskers indicate \(1.5\times\mathrm{IQR}\) throughout this article, where IQR is the interquartile range.
#### Secondary school students' views of stars and the solar system
A more in-depth view of the students' responses on the items of domain 1 is provided by Table 10. Here, we report the more fine-grained categorization of answers into answers that are in line with scientific views (\(+\)), opposing scientific views (\(-\)), and the ones abstained from voting (\(\circ\)).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Cohort & Mean & SD & Median & Min & Max \\ \hline
1 & 48.4 & 22.9 & 55.6 & 0.0 & 88.9 \\ \hline
2 & 64.2 & 18.6 & 66.7 & 11.1 & 100 \\ \hline
3 & 72.6 & 18.3 & 77.8 & 22.2 & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Descriptive statistics for the percentage of responses on all items of domain 1 that are in line with the scientific views, separated by cohort.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Sub-scale & N & \(\alpha\) & \(\varnothing\) Item difficulty & \(\varnothing\) Discriminatory power & Anchor example \\ \hline D1: Stars and solar system & 9 & 0.69 & 0.65 & 0.36 & The Sun is the largest star in the universe. \\ \hline D2: Formation and Evolution of Stars & 16 & 0.83 & 0.56 & 0.44 & Stars do not form and die, they only undergo changes over time. \\ \hline D3: Properties of Stars & 15 & 0.78 & 0.66 & 0.39 & All stars have the same mass. \\ \hline D4: (Sub-)Stellar objects & 8 & 0.79 & 0.35 & 0.50 & All stars end as white dwarfs. \\ \hline D5: Spectral aspects & 7 & 0.73 & 0.70 & 0.46 & All stars are white. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overview of the five sub-scales D1 to D5 (including the number \(N\) of items comprised and the Cronbach’s \(\alpha\) values) of the final version of the instrument used in this study to approach a clarification of the research questions alongside the average item difficulties and the average discriminatory powers of the corresponding items. We refrain from reporting the item difficulties and discriminatory powers of the single items due to the large number of items.
Across all items and cohorts, there is generally a high percentage of students providing scientifically sound answers. Nevertheless, the ANOVA results are also reflected in the observation that the share of answers in line with the scientific view increases throughout grades 7 to 12. An item that stands out due to a large share of \(-\) is item 1-4, indicating that 32.4% (lower), 46.1% (middle) and 41.8% (upper) of secondary school students believe that "the planets and the Sun were formed at the time of the Big Bang". An even higher share of answers opposing the scientific view and the most evident view held by all students is revealed by item 1-5 which states that "there are hundreds of stars in our solar system." Here, 75.0% (lower), 70.9% (middle) and 62.7% (upper) of all participants agreed with the statement, indicating trouble with grasping the magnitude of our solar system. Lastly, item 1-6 exhibits the greatest share of abstained votes, with 32.4% (lower), 25.8% (middle) and 28.4% (upper), respectively. In other words, this item stating that "metals have existed in the universe since the Big Bang" was met with the highest uncertainty and, consequently, a low percentage of scientifically sound responses.
### Domain 2: Formation and evolution of stars
Table 6 shows the descriptive statistics for all cohorts regarding items of domain 2. While mean and median are, on average, slightly lower compared to domain 1, we again observe an increase of all metrics with the exception of standard deviation.
#### ANOVA results for domain 2
The trend observed for domain 1 also holds for domain 2 (cf. Table 4): The difference between the cohorts is very highly significant [\(F(2,363)=28.0,p<0.001;\eta_{p}^{2}=0.13\)].
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Cohort & Mean & SD & Median & Min & Max \\ \hline
1 & 42.9 & 21.4 & 43.8 & 0.0 & 75.0 \\ \hline
2 & 53.8 & 20.8 & 50.0 & 6.3 & 100 \\ \hline
3 & 65.4 & 20.9 & 68.8 & 6.3 & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Descriptive statistics for the percentage of responses on all items of domain 2 that are in line with the scientific views, separated by cohort.
\begin{table}
\begin{tabular}{l l c c c c c c c c c} \hline \hline & & Sum of squares & df & Mean square & \(F\) & \(p\) & \(\eta_{p}^{2}\) & \multicolumn{3}{c}{Post-hoc test} \\ \cline{9-11} Domain & & & & & & & & 1-2 & 1-3 & 2-3 \\ \hline D1 & between groups & 3.30 & 2 & 1.65 & 39.7 & \(<0.001\) & 0.18 & \(<0.01\) & \(<0.01\) & \(<0.05\) \\ & within groups & 15.10 & 363 & 0.04 & & & & & & \\ \hline D2 & between groups & 2.48 & 2 & 1.24 & 28.0 & \(<0.001\) & 0.13 & \(<0.01\) & \(<0.01\) & \(<0.01\) \\ & within groups & 16.06 & 363 & 0.04 & & & & & & \\ \hline D3 & between groups & 1.82 & 2 & 0.91 & 24.0 & \(<0.01\) & 0.12 & \(<0.01\) & \(<0.01\) & \(<0.01\) \\ & within groups & 13.78 & 363 & 0.04 & & & & & & \\ \hline D4 & between groups & 0.65 & 2 & 0.32 & 4.46 & \(<0.05\) & 0.02 & 0.94 & \(<0.05\) & \(<0.05\) \\ & within groups & 26.26 & 363 & 0.07 & & & & & & \\ \hline D5 & between groups & 1.89 & 2 & 0.94 & 13.2 & \(<0.001\) & 0.07 & \(<0.05\) & \(<0.001\) & \(<0.01\) \\ & within groups & 25.99 & 363 & 0.07 & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of ANOVAs comparing the percentage of responses aligned with the scientific view on all items across the three cohorts (1 corresponding to lower secondary students, 2 to middle secondary students, and 3 to upper secondary students) and across all sub-scales (domains D1 to D5). The \(p\) values reported in the last three columns belong to a Tukey-Kramer post-hoc test. Cohen’s \(d\) coefficients as measures of effect size for the pairwise comparisons are provided in the corresponding Figures 1-5.
Figure 1: Boxplot for the percentage of responses on all items of domain 1 that are in line with the scientific views. Asterisks indicate the statistical significance of Tukey-Kramer post hoc pairwise comparisons (*: \(p<0.05\), **: \(p<0.01\), ***: \(p<~{}0.001\)), whereas Cohen’s \(d\) is reported as a measure of effect size.
Likewise, all between-group comparisons are highly statistically significant (\(p<0.01\)) with effect sizes ranging from \(d=0.52\) between cohort 1 and 2, \(d=0.55\) between cohort 2 and 3, to \(d=1.07\) between cohort 1 and 3. Hence, not only do the groups differ highly significantly, but medium to high effect sizes can be associated with each of the pairwise comparisons, indicating a steady increase of scientifically accurate views across secondary education. These results are summarized in the form of boxplots in Figure 2.
#### Secondary school students' views of the formation and evolution of stars
A more fine-grained overview of responses is provided in Table 11. Again, we observe a general tendency towards answers in line with the scientific view compared to answers opposing it. Additionally, the pattern of upper secondary school students outperforming their middle and lower peers repeats. An item with a notable percentage of scientifically inaccurate views is item 2-10 which deals with the color of stars. Here, 50.7% of lower, 49.0% of middle and 38.8% of upper secondary school students hold the opinion that stars do not change their color throughout their life, neglecting their evolutionary stages that impact the perceived color. This lack of awareness regarding the evolutionary stages is substantiated by item 2-13 which states that "stars fade and disappear over time". An almost equal share of 63.5% (lower), 58.3% (middle) and 53.7% (upper) disagreed with this statement and, thus, perceive stars as permanent objects. Lastly, item 2-16 reveals another view that falls in line with the responses on item 1-5: Almost half of each cohort responded that "a supernova immediately destroys a large part of the galaxy", indicating yet again a skewed perception of astronomic scales.
### Domain 3: Properties of Stars
The descriptive statistics for all cohorts regarding items of domain 3 are provided in Table 7. The values overall compare very well to the ones from domains 1 and 2 (cf. Table 5 and Table 6).
#### ANOVA results for domain 3
The ANOVA results for domain 3 show almost identical values with those of domain 2 (cf. Table 4). The cohorts differ highly statistically significantly [\(F(2,363)=24.0,p<0.01;\eta_{p}^{2}=0.12\)] and the same is true for all pairwise cohort comparisons (\(p<0.01\) each). The corresponding effect sizes range from small (\(d=0.44\) between Cohort 1 and 2) over medium (\(d=0.57\) between Cohort 2 and 3) to large (\(d=1.01\) between Cohort 1 and 3). Hence, as before, a continuous improvement in scientifically accurate views held by students can be observed from lower to middle and, lastly, upper secondary education. A summary of these results in terms of boxplots is provided in Figure 3.
#### Secondary school students' views of properties of stars
Table 12 provides an overview of the responses on all items of domain 3, exhibiting similar answer patterns as before (cf. Table 10 and Table 11). Here, item 3-4 stands out as it extends an idea observed in domain 2: 43.9% of lower, 29.1% of middle and 37.3% of upper secondary school students held the misguided view that stars are not subject to gravitational pull, viewing stars as stationary celestial objects that do not interact with their surroundings. The similar shares of scientifically accurate responses on this item for cohorts 2 and 3 suggest that this view is somewhat persistent throughout higher secondary education. This is further substantiated by the response pattern of item 3-13, which states that "stars seem to rise and set". Again, with 35.8%, 43% and 34.3%, respectively, a substantial share of participants disagreed with that statement, expressing yet another view of _rigid_ stars. On a different note, item 3-7
\begin{table}
\begin{tabular}{c c c c c c} Cohort & Mean & SD & Median & Min & Max \\ \hline
1 & 55.7 & 22.2 & 60.0 & 0.0 & 93.3 \\ \hline
2 & 64.2 & 17.5 & 66.7 & 13.3 & 100 \\ \hline
3 & 75.3 & 17.2 & 80.0 & 20.0 & 100 \\ \end{tabular}
\end{table}
Table 7: Descriptive statistics for the percentage of responses on all items of domain 3 that are in line with the scientific views, separated by cohort.
Figure 2: Boxplot for the percentage of responses on all items of domain 2 that are in line with the scientific views. Asterisks indicate the statistical significance of Tukey-Kramer post hoc pairwise comparisons (*: \(p<0.05\), **: \(p<0.01\), ***: \(p<~{}0.001\)), whereas Cohen’s \(d\) is reported as a measure of effect size.
reveals that 39.9% of lower, 42.4% of middle and 28.4% of higher secondary school students relate a star's distance to its apparent brightness in the sky, stating that "the brightest stars are closest to Earth".
### Domain 4: (Sub-)Stellar objects
Table 8 provides an overview of descriptive statistics for all items of domain 4. This domain holds the overall lowest descriptive statistics. While cohort 1 students averaged 32.9% scientifically accurate answers, the proportion of responses aligning with scientific views for cohort 2 is 31.8% and 43.1% for cohort 3. The median reflects this performance and, for the first and only time, there are participants with no responses that are in line with the scientific view. Thus, on average secondary school students' views differed the most from the current scientific view regarding (sub-)stellar objects. We will elaborate on this observation in the discussion section (cf. Section VII).
#### ANOVA results for domain 4
With the scores being relatively low in each cohort, the variance in performance did not differ as much compared to the other domains. While the overall group comparison still results in a statistically significant difference \([F(2,363)=4.46,p<0.05;\eta_{p}^{2}=0.02]\), no statistically significant difference was found between cohorts 1 and 2 (\(p=0.94\)). The differences between cohorts 1 and 3 as well as cohorts 2 and 3 were found to be statistically significant by Tukey-Kramer post-hoc tests (\(p<0.05\) each), but the corresponding effect sizes were low, with \(d=0.38\) and \(d=0.42\), respectively. Thus, for the domain of (sub-)stellar objects we record the overall least distinguishable progress throughout secondary education, with only the later grades (11 and 12) meaningfully separating themselves from the rest. The boxplots reflecting this observation are presented in Figure 4.
#### Secondary school students' views of (sub-)stellar objects
A deeper look into this evidently more elusive domain is enabled by the response pattern on all items, provided in Table 13. It becomes clear that students on average did not hold inaccurate views but rather abstained from voting, perhaps due to the (currently) marginal nature of the corresponding contents in secondary curricula. Items 4-3, 4-5 as well as 4-7 all address brown and white dwarfs and have in common that students across all cohorts held similar views. With, on average, one third of all participants expressing views opposing the scientific view across these items, it is suggested that (sub-)stellar objects constitute a fringe topic for which little progress
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Cohort & Mean & SD & Median & Min & Max \\ \hline
1 & 32.9 & 27.9 & 37.5 & 0.0 & 87.5 \\ \hline
2 & 31.8 & 26.4 & 25.0 & 0.0 & 100 \\ \hline
3 & 43.1 & 25.7 & 37.5 & 0.0 & 100 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Descriptive statistics for the percentage of responses on all items of domain 4 that are in line with the scientific views, separated by cohort.
Figure 4: Boxplot for the percentage of responses on all items of domain 4 that are in line with the scientific views. Asterisks indicate the statistical significance of Tukey-Kramer post hoc pairwise comparisons (*: \(p<0.05\), **: \(p<0.01\), ***: \(p<~{}0.001\)), whereas Cohen’s \(d\) is reported as a measure of effect size.
Figure 3: Boxplot for the percentage of responses on all items of domain 3 that are in line with the scientific views. Asterisks indicate the statistical significance of Tukey-Kramer post hoc pairwise comparisons (*: \(p<0.05\), **: \(p<0.01\), ***: \(p<~{}0.001\)), whereas Cohen’s \(d\) is reported as a measure of effect size.
is recorded throughout grades 7 to 12.
### Domain 5: Spectral aspects
Table 9 shows the descriptive statistics for all cohorts regarding items of domain 5. In contrast to domain 4, domain 5 records the overall highest mean percentages of responses in accordance with the scientific views with 58.9% for lower, 67.3% for middle and 78.9% for higher secondary school students. The minimum and maximum in each cohort are 0.0% and 100.0%, respectively.
#### ANOVA results for domain 5
The ANOVA results for domain 5 demonstrate a pronounced improvement along the trajectory of secondary education (cf. Table 4). The overall difference between the three cohorts is highly statistically significant [\(F(2,363)=13.2,p<0.001;\eta_{p}^{2}=0.07\)]. A Tukey-Kramer post-hoc test indicates that the advancement from grades 7-8 to grades 9-10 is statistically significant (\(p<0.05\)) with an effect size of \(d=0.31\), whereas the difference between grades 9-10 and 11-12 is highly statistically significant (\(p<0.01\)) with an effect size of \(d=0.48\). Lastly, the difference between cohorts 1 and 3 is very highly statistically significant (\(p<0.001\)) with a medium effect size of \(d=0.75\). The boxplots reflecting this continuous increase in accurate scientific views are illustrated in Figure 5.
#### Secondary school students' views of Spectral aspects
The fine-grained overview of responses to items of the final sub-scale is presented in Table 14. Here, the item that stands out the most is item 5-7, which states that "it is said that stars twinkle because they change their brightness" - 45.9% of lower, 39.1% of middle and 25.4% of higher secondary school students agreed with this statement. In congruence with the response patterns on items 2-10 and 3-7 singled out earlier, the brightness of stars seems to constitute a recurring theme where students' views are inaccurate to a greater extent than usual. We will discuss this observation more thoroughly in Section VII.
## VII Discussion
In this section, we contextualize our findings against the backdrop of prior astronomy education research. In particular, we shed light on the extent to which our findings are consistent with prior research, identify areas where differences have emerged, and highlight new contributions to the understanding of secondary school students' views of stars. Moreover, we discuss implications of our findings for both astronomy education research and practice.
In general, our cross-age analysis reveals a progressive development of students' perspectives on star-related topics throughout their secondary school education. The statistically significant increase in the percentage of responses aligned with current scientific views from lower to upper secondary school applies to all aspects of stars examined in this study, as demonstrated through ANOVAs and post-hoc pairwise comparisons (see Section VI). Furthermore, our study expands upon existing literature, which primarily focuses on students' views of the nature, apparent motion, and properties of stars [26, 65, 76].
### Discussion of findings regarding domain 1
The percentage of responses aligning with scientific views in domain 1 (stars and solar system) shows an increase from 48.4% among lower, and 64.2% among middle, to 72.6% among upper level secondary school students (see Table 5). An item with a particularly large gain in alignment with scientific views was, for example, item 1-9, where progressively more students disagreed with the statement that the Sun is the largest star in the universe (75.0% of
\begin{table}
\begin{tabular}{c c c c c c} Cohort & Mean & SD & Median & Min & Max \\ \hline
1 & 58.9 & 29.5 & 71.4 & 0.0 & 100 \\ \hline
2 & 67.3 & 24.9 & 71.4 & 0.0 & 100 \\ \hline
3 & 78.9 & 24.3 & 85.7 & 0.0 & 100 \\ \end{tabular}
\end{table}
Table 9: Descriptive statistics for the percentage of responses on all items of domain 5 that are in line with the scientific views, separated by cohort.
Figure 5: Boxplot for the percentage of responses on all items of domain 5 that are in line with the scientific views. Asterisks indicate the statistical significance of Tukey-Kramer post hoc pairwise comparisons (*: \(p<0.05\), **: \(p<0.01\), ***: \(p<~{}0.001\)), whereas Cohen’s \(d\) is reported as a measure of effect size.
the lower, 43.7% of the middle and 34.3% of the upper secondary students). On the single-item level, we also found that a majority of the study participants (75.0% of the lower, 70.9% of the middle and 62.7% of the upper secondary students) agreed that there are hundreds of stars in our solar system (item 1-5, see Table 11). This idea opposes the scientific view and has already been reported in Ref. [97]. This student view can possibly be explained by a confusion of the terms solar system and stellar system, as indicated by Rajpaul et al. [1]. Another widespread perspective among our study participants is that metals have existed since the Big Bang (item 1-6, see Table 11): 46.6% of the lower, 40.4% of the middle, and 29.9% of the upper secondary students were of that opinion, while in addition, around a third of the students at all grade levels were undecided (32.4%, 25.8% and 28.4%, respectively). This finding supports the study by Slater et al. [70], in which around a third of the students were reported to hold the view that heavy atoms have existed since the Big Bang.
In contrast to previous research by Dunlop [75], Philips [98] or Comins [97], in our study we did not find evidence of students misconceiving that the Sun is not a star: Only approximately 10% of lower and middle secondary school students, and merely 3% of upper secondary school students, subscribed to this view (see item 1-8, Table 11). It is noteworthy that it remains unclear which research methods Philips [98] and Comins [97] employed to uncover this confusion to be widespread among learners. Additionally, Comins [97] stated that many students would believe that the Sun is bigger than other stars. In our study, we identified this view to be widespread among the lower secondary school students (60.1%, see Table 11), while only 37.7% of the middle and 29.9% of the upper secondary students agreed with item 1-9 ("The Sun is the largest star in the universe.").
### Discussion of findings regarding domain 2
Similar to domain 1, the percentage of responses aligning with scientific views in domain 2 (formation and evolution of stars) shows an increase from 42.9% among lower secondary school students to 53.8% among middle secondary school students and 65.4% among upper secondary school students (see Table 6). An item with a particularly large gain in alignment with scientific views was, for example, item 2-10, where progressively more students agreed that stars can change in color (16.9% of the lower, 35.8% of the middle and 79.1% of the upper secondary students). But on the single-item level, we also found that - in line with earlier research conducted by Agan [67] - more than half of our participating students disagreed with item 2-13 stating that stars fade and disappear over time (63.5% of the lower, 58.3% of the middle, and 53.7% of the upper secondary students), thus perceiving stars as permanent objects.
### Discussion of the findings regarding domain 3
A study similar to the one presented in this paper, focusing on star-related aspects, was conducted by Plummer [76]. In her cross-age study, Plummer aimed to assess students' views of celestial motion and identify any misconceptions held at different grade levels (1st, 3rd, and 8th grade students). One part of her study specifically focused on students' views of the apparent motion of stars. Among the sample students, two main perspectives emerged: those who could provide a general description of stars moving slowly across the night sky (40% of 8th graders, 50% of 3rd graders, and 35% of 1st graders), and those who believed that stars never move (40%, 40%, and 25%, respectively). The remaining students held various perspectives, including the idea that stars only move at the end of the night. Our study complements Plummer's findings by providing insights into the views of older students regarding the apparent motion of stars. Among our cohort 1 students (grades 7 and 8), only 18.2% believed that stars are stationary and fixed in the sky (item 3-11, see Table 11). This percentage was even lower among cohort 3 students (grades 11 and 12) at 14.9%. In addition, our findings align with the research conducted by Agan [67], revealing a comparable proportion of students (approximately 14%) who share the view that stars are stationary. This stands in contrast to studies published in Refs. [33; 48; 70], which reported a higher prevalence of this view among approximately 40% of students. On a further note, our findings are supported by Plummer's study, according to which more than "one-half of the students in the first, third, and eight grades do not think that we see different stars in the sky during the night (65%, 60%, and 65%), respectively" ([76], p. 1598). In our study, 60.8% of lower secondary school students, 75.5% of middle secondary school students, and 82.1% of upper secondary school students agreed with item 3-12, which states that we see different stars over the course of a night (see Table 11). In summary, our findings suggest a continued evolution of students' views on the apparent motion of stars throughout secondary education.
Another cross-age study on basic astronomy concepts was conducted by Trumper, which included both junior (grades 7-9) [99] and senior (grades 10-12) [100] high school students. In Trumper's study, only 36% of junior high school students were aware that stars are the farthest objects from the Earth ([99], p. 1117), while this percentage increased to 49% among senior high school students ([100], p. 103). In our study, the percentages of students agreeing with the scientific views on this matter were slightly higher in the sample cohorts. Among lower secondary school students, 43.9% agreed that stars are farther away from the Earth than the Sun (item 3-9, see Table 11). This percentage increased to 51.7% among middle secondary school students and 53.7% among upper secondary school students. Therefore, it appears that students' views regarding distances
develop throughout their school careers. This observation is further supported by the decreasing agreement with item 3-10, which states that the distance between stars is about the same as the distance between planets. While half of the lower secondary school students disagreed with this item, the disagreement percentage increased to 62.9% among middle secondary school students and 74.6% among upper secondary school students (see Table 17). Our findings align with earlier research on learners' views of astronomical object distances from the Earth's surface (e.g., see [1, 20, 48]).
### Discussion of findings regarding domains 4 & 5
While the percentage of responses aligning with scientific views in domain 4 remains below 50% for students across all grade levels (see Table 8), we found a majority of students demonstrating views in line with current scientific understandings of the spectral aspects of stars in domain 5 (see Table 9). No statistically significant difference can be observed in the percentage of responses aligning with scientific views for items in domain 4 between lower (32.9%) and middle secondary school students (31.8%), see Table 8. This finding can likely be attributed to the omission of (sub-)stellar objects from the German astronomy curricula at both lower and middle school levels. Insights into current curriculum developments regarding astronomy in German secondary schools can be found in Ref. [20]. The topic of spectral aspects is similarly neglected in the curricula; surprisingly, however, a small but statistically significant difference exists in the percentage of responses aligning with scientific views for items in domain 5 between lower (58.9%) and middle secondary school students (67.3%). This discrepancy can possibly be attributed to either implicit learning, which occurs unconsciously as students engage with different topics, such as atomic physics (for more details on implicit learning, see Reber [101]), or the influence of informal learning environments [102], such as planetariums [75]. There were items like 4-6 (see Table 17) and 5-6 (see Table 19) where a larger shift towards a perspective aligned with scientific views was observed (38.5% to 70.1% and 39.2% to 86.6%, respectively), though for most items, the shifts remained smaller. In future research, it would be valuable to explore (a) which topics in the secondary school curriculum facilitate implicit learning of astronomy and (b) the sources from which students gain insights into astronomy topics in informal settings.
### Implications for Educational Research
The results of this study imply that, in general, students' ideas about stars show a progressive alignment with the current scientific views. This positive development, however, lacks a clear explanation at this point, meaning that further research is advised to clarify the various causes of the developments presented in this study: While teaching in the classroom appears to play a role, it is important to consider the potential influence of informal learning environments such as out-of-school visits [103, 104] or explanatory videos [105]. These supplementary resources may contribute to the positive evolution of students' understanding. Additionally, no data about the traits of the students have been gathered. As previous research has shown, astronomical topics tend to rank highly in studies on interesting topics for students [17]. A thorough investigation into which affective traits facilitate the increase in alignment with scientific views during the secondary school years could give new and more detailed insights into the role of interests in learning processes. Furthermore, the instrument utilized in this study has the potential to be employed in future research to identify the specific factors and properties of learning materials and methodologies that facilitate these favorable learning outcomes. By using this instrument, researchers can gain insights into the effective strategies and approaches that support students' development of ideas in the domain of stars and related astronomical concepts, complementing previous instruments devised for this purpose (e.g., see [57, 106]). Further investigation is warranted to delve deeper into the mechanisms driving this positive learning trajectory. Additionally, exploring the comparative impact of various educational interventions and materials could shed light on the most effective approaches for promoting accurate scientific understanding of stars among students.
### Implications for Educational Practice
From the findings presented in this study, we surmise that current educational practices between the lower and upper secondary school levels likely facilitate a basic development of ideas about stars that, in general, align progressively more with scientific views during these years. However, some persistent ideas have been isolated that have also been found in the corresponding literature (e.g., see [67, 107, 66]). Mainly, static ideas have been found, especially in the lower secondary classes, and confusions of central terms like stellar system and solar system have also been documented. Ideas of change and dynamics (stars changing brightness or stars changing color) might be better facilitated by incorporating learning materials that depict these dynamic properties (such as videos or comics, cf. [105, 108]) or even enable learners to interact with the material (such as simulations, cf. [109]).
## VIII Limitations
While our study on students' views of stars throughout their secondary school careers provides valuable insights, it is important to acknowledge several limitations.
Firstly, it is crucial to note that our research adopted a cross-age study design rather than a conventional longitudinal approach: Instead of longitudinally tracking the views of a specific group of students, we assessed students from different grade levels at a single time point. Our approach offers valuable cross-sectional data but (a) limits our ability to capture individual students' developmental trajectories and the specific changes in their views of stars over time and (b) may be subject to cohort bias [110]. Secondly, it is important to acknowledge the limited emphasis on astronomy within the secondary school curriculum in Germany where this study was conducted (e.g., see [111]). The lack of continued instruction in astronomy raises the question of how students' views of stars might be influenced if they had participated in continued astronomy instruction throughout their secondary school careers. Thirdly, the subsamples of lower and middle secondary school students consisted of approximately 150 participants each, whereas the number of participants in the upper secondary school group was only about half that size. This discrepancy arises from the fact that in Germany, students make a decision after completing grade 10 regarding whether to pursue a physics course in upper secondary school. Consequently, the total cohort of possible study participants becomes significantly smaller at this grade level. Furthermore, teachers are often less inclined to participate in research studies during the twelfth grade due to the impending final exams. These factors imposed limitations on our sampling approach, resulting in an asymmetric distribution of participants among the three subsamples.
Another limitation stems from the question format used in our instrument, which consisted of closed-ended rating-scale items. While this format facilitated efficient data collection on students' views of stars on a large scale, it is important to recognize that the predetermined statements may have influenced participants' responses, potentially leading to the generation of ad hoc conceptions. To mitigate this limitation, future research should incorporate qualitative data collection methods, such as mind mapping or concept mapping [112; 113; 114; 115], to gather in-depth insights that validate and expand upon our findings. Furthermore, it is crucial to acknowledge that our study primarily focused on assessing students' ideas and views of stars, rather than their conceptual understanding of relevant aspects related to stars. While our study provides valuable perspectives, it is essential to complement these findings with investigations into students' conceptual understanding. Developing a concept inventory should therefore be a priority for future research, enabling the assessment of students' conceptual understanding of the topic under investigation (cf. [66]).
## IX Conclusion
From the data collected and analyzed, our findings have shown that, progressing from lower to upper secondary school, students' ideas about stars start to align more with the current scientific views. This development in itself is positive, though the exact factors behind it still need to be determined. From the data gathered, some of the ideas are already being developed rather effectively, such as ideas about the Sun's size compared to other stars (see items 1-9 in Table 14 and 3-2 in Table 15) or that stars can change both color and brightness (see items 2-10 in Table 16 and 5-6 in Table 17), while others seem more robust, such as the idea that there are hundreds of stars in the solar system (item 1-5, see Table 14) or that white dwarfs are suns (item 4-5, see Table 15). In summary, our analysis indicates a positive trend of students' ideas aligning more closely with scientific views regarding stars as they progress through secondary school, although the specific contributing factors remain to be determined.
## Data Availability
Anonymized data from the study is available on request from the authors.
|
2310.19285 | Facilitating Graph Neural Networks with Random Walk on Simplicial
Complexes | Node-level random walk has been widely used to improve Graph Neural Networks.
However, there is limited attention to random walk on edge and, more generally,
on $k$-simplices. This paper systematically analyzes how random walk on
different orders of simplicial complexes (SC) facilitates GNNs in their
theoretical expressivity. First, on $0$-simplices or node level, we establish a
connection between existing positional encoding (PE) and structure encoding
(SE) methods through the bridge of random walk. Second, on $1$-simplices or
edge level, we bridge edge-level random walk and Hodge $1$-Laplacians and
design corresponding edge PE respectively. In the spatial domain, we directly
make use of edge level random walk to construct EdgeRWSE. Based on the spectral
analysis of Hodge $1$-Laplcians, we propose Hodge1Lap, a permutation
equivariant and expressive edge-level positional encoding. Third, we generalize
our theory to random walk on higher-order simplices and propose the general
principle to design PE on simplices based on random walk and Hodge Laplacians.
Inter-level random walk is also introduced to unify a wide range of simplicial
networks. Extensive experiments verify the effectiveness of our random
walk-based methods. | Cai Zhou, Xiyuan Wang, Muhan Zhang | 2023-10-30T06:03:34Z | http://arxiv.org/abs/2310.19285v1 | # Facilitating Graph Neural Networks with Random Walk on Simplicial Complexes
###### Abstract
Node-level random walk has been widely used to improve Graph Neural Networks. However, there is limited attention to random walk on edges and, more generally, on \(k\)-simplices. This paper systematically analyzes how random walk on different orders of simplicial complexes (SC) facilitates GNNs in their theoretical expressivity. First, on \(0\)-simplices or node level, we establish a connection between existing positional encoding (PE) and structure encoding (SE) methods through the bridge of random walk. Second, on \(1\)-simplices or edge level, we bridge edge-level random walk and Hodge \(1\)-Laplacians and design corresponding edge PE respectively. In the spatial domain, we directly make use of edge-level random walk to construct EdgeRWSE. Based on the spectral analysis of Hodge \(1\)-Laplacians, we propose Hodge1Lap, a permutation equivariant and expressive edge-level positional encoding. Third, we generalize our theory to random walk on higher-order simplices and propose the general principle to design PE on simplices based on random walk and Hodge Laplacians. Inter-level random walk is also introduced to unify a wide range of simplicial networks. Extensive experiments verify the effectiveness of our random walk-based methods.
## 1 Introduction
Graph neural networks (GNNs) have recently achieved great success in tasks with graph-structured data, benefiting many application areas, including combinatorial optimization, bioinformatics, social-network analysis, etc. [11; 29; 16]. Two important aspects to evaluate GNN models are their theoretical expressivity in distinguishing non-isomorphic graphs, and their performance on real-world tasks. Positional encoding (PE) and structure encoding (SE) are widely adopted methods to enhance both the theoretical expressivity and the real-world performance of GNNs. Generally, PE encodes information about the nodes' local or global positions, while SE provides information about local or global structures in the graph. For example, Kreuzer et al. [30] use eigenvectors of the graph Laplacian, Dwivedi et al. [18] propose to use diagonal elements of the \(t\)-step random walk matrix, and Bouritsas et al. [9] manually count some predefined structures. There are also some methods based on pair-wise node distances, such as the shortest path distance [31], the heat kernel [20], and the graph geodesic [35]. Although some work theoretically analyzes some of these methods [51], other methods have been left out, and a unified perspective on all these PE and SE designs is still lacking. Moreover, most existing methods focus only on node data, while PE and SE on edge data as well as on higher-order topological structures remain to be studied.
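As a concrete illustration of the eigenvector-based encodings just mentioned, the sketch below (our own; the function name and the choice of the unnormalized Laplacian are assumptions) computes a Laplacian-eigenvector PE in the spirit of [30]:

```python
import numpy as np

def lap_pe(A, k):
    """k-dimensional Laplacian-eigenvector positional encoding per node,
    using the eigenvectors of L_0 = D - A with the smallest eigenvalues.
    For a connected graph the first eigenvector is constant and is skipped;
    note the sign ambiguity of eigenvectors (cf. sign/basis invariance)."""
    L = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    return eigvecs[:, 1:k + 1]      # one k-dimensional vector per node
```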
In addition to PE and SE, geometric deep learning has recently become a central topic. Researchers are inspired by concepts of differential geometry and algebraic topology, which has resulted in many works on simplices and simplicial complexes [8; 7; 47]. Despite their capability to deal with higher-order structures, these simplicial networks must respect orientation symmetry, which brings difficulties to their application to undirected graphs. This work connects these two separate areas via a central
concept: random walk on simplicial complexes. On the one hand, by introducing concepts of higher-order simplicial complexes, we can design more PE and SE methods that are both theoretically and practically powerful. On the other hand, PE and SE greatly facilitate simplicial data and benefit graph learning.
In summary, we first connect a number of existing PE and SE methods through the bridge of node-level random walk on \(0\)-simplices. Then, for \(1\)-simplices or edges, we design two novel sign- and basis-invariant edge-level PE and SE, namely EdgeRWSE and Hodge1Lap. EdgeRWSE uses an edge-level random walk directly to capture structure information, while Hodge1Lap is based on spectral analysis of the Hodge \(1\)-Laplacian, which is closely related to random walk on edges. We further generalize our theory to random walk on higher-order and inter-order simplices to facilitate graph and simplicial learning. Our methods achieve state-of-the-art or highly competitive performance on several datasets and benchmarks. Code is available at [https://github.com/zhouc20/HodgeRandomWalk](https://github.com/zhouc20/HodgeRandomWalk).
## 2 Related work
Theoretical expressivity and Weisfeiler-Lehman test. Weisfeiler-Lehman (WL) tests are a classical family of algorithms to distinguish non-isomorphic graphs. Previous work has built connections between the expressivity of GNNs and the WL hierarchy. Some classical conclusions include that for \(k\geq 2\), \(k+1\)-dimensional WL is more powerful than \(k\)-WL. [46] proves that traditional message-passing neural networks (MPNNs) are not more powerful than \(1\)-WL. There is another variation of the WL test called the Folklore Weisfeiler-Lehman (FWL) test, and \(k\)-FWL is equivalent to \(k+1\)-WL in expressivity for \(k\geq 1\).
Symmetry in graph and simplicial learning. Symmetry is a central topic in graph and simplicial learning. In graph learning, node features and edge features need to be permutation (i.e., relabeling of nodes or edges) equivariant, while graph features should be permutation invariant. In simplicial learning, one needs to further consider orientation symmetry [47] in an oriented simplicial complex (SC). The incidence relations and the simplicial adjacencies in an oriented SC are altered when the orientations are reversed. The \(k\)-form remains invariant to this transformation, while the features of \(k\)-simplices are equivariant in terms of the basis. [32] also state the standard that graph-level functions (and, in the context of SCs, \(k\)-forms) should be invariant to both sign and basis (either of orientation or of space), which is a basic rule for our PE and SE designs.
## 3 Preliminary
Graphs. We denote a graph as \(G(V,E,A)\), where \(V,E\) are the set of nodes and the set of edges, respectively, and \(A\) is the adjacency matrix of the nodes. For convenience, we use \(n=|V|\) and \(m=|E|\) to represent the number of nodes and edges in the graph \(G(V,E,A)\). In an undirected graph, for any \(u,v\in V\), we have \((u,v)\in E\Leftrightarrow(v,u)\in E\). Let \(\mathcal{N}(v,G)=\{u\in V|(u,v)\in E\}\) denote the set of neighbors of node \(v\) in graph \(G\). Let the diagonal matrix \(D=diag(d_{1},...,d_{n})\), where \(d_{i}\) is the degree of node \(v_{i}\).
The transition matrix of a typical random walk at node level is \(P=D^{-1}A\), which indicates that in each step the walk moves from the current node \(v\) to one of its neighboring nodes \(u\in\mathcal{N}(v,G)\) with equal probability. Consequently, \(t\) steps of the aforementioned random walk correspond to the transition matrix \(P^{t}\).
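For concreteness, the following sketch (ours; it assumes a dense NumPy adjacency matrix without isolated nodes) computes \(P\) and the diagonals of \(P^{t}\), i.e., the return probabilities that underlie random walk-based structure encodings such as [18]:

```python
import numpy as np

def transition_matrix(A):
    """Node-level random walk transition matrix P = D^{-1} A."""
    return A / A.sum(axis=1, keepdims=True)

def rw_return_probs(A, T):
    """diag(P^t) for t = 1..T, stacked into an (n, T) array: the t-step
    return probabilities used as node-level structure encodings."""
    P = transition_matrix(A)
    Pt = np.eye(A.shape[0])
    diags = []
    for _ in range(T):
        Pt = Pt @ P
        diags.append(np.diag(Pt).copy())
    return np.stack(diags, axis=1)
```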
Discrete Hodge Laplacian of abstract simplicial complex. An abstract simplicial complex \(\mathcal{K}\) on a finite set \(V\) is a collection of subsets of \(V\) that is closed under inclusion. In our paper, \(V\) will be a vertex set \([n]=\{1,2,...,n\}\) unless otherwise stated. An element of cardinality \(k+1\) is called a \(k\)-face or \(k\)-simplex of \(\mathcal{K}\). For instance, \(0\)-faces are usually called vertices, \(1\)-faces are directed edges, and \(2\)-faces are 3-cliques (triangles) with an orientation. We denote the collection of all \(k\)-faces of \(\mathcal{K}\) as \(S_{k}(\mathcal{K})\). The dimension of a \(k\)-face is \(k\), and the dimension of a complex \(\mathcal{K}\) is defined as the maximum dimension of the faces in \(\mathcal{K}\).
The definition of neighbors of simplices is crucial in this paper. Two \(k+1\)-simplices sharing a common \(k\)-face are called \(k\)-down neighbors, and two \(k\)-simplices sharing a common \(k+1\)-simplex are called \(k+1\)-up neighbors. Generally, a face \(F\) is given an ordering of its vertices and is said
to be oriented, denoted by \([F]\). For any permutation element \(\sigma\in\mathcal{G}_{k+1}\) where \(\mathcal{G}_{k+1}\) is the symmetric group of permutations on \(\{0,...,k\}\), two orders of vertices transformed by \(\sigma\) are said to determine the same orientation if \(\sigma\) is an even permutation and opposite if \(\sigma\) is odd.
In the Hilbert space setting, the matrix representations of the boundary and coboundary operators are incidence matrices between order-\(k\) and order-\(k+1\) simplices. To stay consistent with most existing literature, we write the incidence matrix between \(k\)-simplices and \(k+1\)-simplices as \(\mathbf{B}_{k+1}\in\mathbb{R}^{|S_{k}|\times|S_{k+1}|}\). \(\mathbf{B}_{k+1}[i,j]=1\) if the \(i\)-th \(k\)-simplex and the \(j\)-th \(k+1\)-simplex are adjacent and share the same orientation, \(\mathbf{B}_{k+1}[i,j]=-1\) if they are adjacent with opposite orientations, and \(0\) if they are not adjacent. For example, \(\mathbf{B}_{1}\) is the node-to-edge incidence matrix.
In discrete Hodge-deRham theory, the \(k\)-th order Hodge Laplacian is defined as
\[\mathbf{L}_{k}=\mathbf{B}_{k}^{*}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B}_{k +1}^{*} \tag{1}\]
where \(\mathbf{B}_{k}^{*}\) is the adjoint of \(\mathbf{B}_{k}\), which in the (real) Hilbert space is simply the transpose \(\mathbf{B}_{k}^{T}\). A special case is \(k=0\): \(\mathbf{B}_{0}\) is not defined and \(\mathbf{L}_{0}=\mathbf{B}_{1}\mathbf{B}_{1}^{*}=\mathbf{D}-\mathbf{A}\) is exactly the graph Laplacian. We refer readers to Appendix C.2.2 for an illustrative calculation example of Hodge Laplacians. In the following, we will make use of higher-order Hodge Laplacians such as \(\mathbf{L}_{1}\) rather than only the previously used \(\mathbf{L}_{0}\).
The kernel space of \(\mathbf{L}_{k}\) is called the \(k\)-th cohomology group: \(\tilde{\mathcal{H}}^{k}(\mathcal{K},\mathbb{R}):=\ker(\mathbf{B}_{k+1}^{*})/\mathrm{im}(\mathbf{B}_{k}^{*})\cong\ker(\mathbf{B}_{k+1}^{*})\cap\ker(\mathbf{B}_{k})=\ker(\mathbf{L}_{k})\). We will write \(\tilde{\mathcal{H}}^{k}(\mathcal{K},\mathbb{R})\) simply as \(\tilde{\mathcal{H}}^{k}\) when no confusion arises. The kernel spaces of Hodge Laplacians are closely associated with harmonic functions and will play an important role in our following analysis. In particular, the multiplicity of the zero eigenvalue of \(\mathbf{L}_{k}\), i.e., the dimension of the null space \(\ker(\mathbf{L}_{k})\) of the Hodge \(k\)-Laplacian, is called the \(k\)-th Betti number \(\beta_{k}\)[23]. This is exactly the number of \(k\)-cycles that are not \(k\)-boundaries, or intuitively, the number of \(k\)-dimensional "holes" in the simplicial complex \(\mathcal{K}\). For example, the zero eigenvalues and their eigenvectors of \(\mathbf{L}_{0}\) are associated with the \(0\)-th cohomology group of the graph, corresponding to its connected components. The zero eigenvalues and eigenvectors of \(\mathbf{L}_{1}\) are associated with cycles (in the usual sense), and those of \(\mathbf{L}_{2}\) correspond to cavities. We refer readers to Appendix C.2.2 for detailed explanations and illustrative examples of cohomology groups.
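As an illustration, a minimal NumPy sketch computing \(\mathbf{L}_{0}\), \(\mathbf{L}_{1}\) and the Betti numbers for a small oriented complex (the square complex and the tolerance are our illustrative choices):

```python
import numpy as np

# Oriented square complex (illustrative): 4 nodes, edges e0=(0,1), e1=(1,2),
# e2=(2,3), e3=(0,3), and no triangles, so B2 is empty and L1 = B1^T B1.
B1 = np.array([[-1,  0,  0, -1],
               [ 1, -1,  0,  0],
               [ 0,  1, -1,  0],
               [ 0,  0,  1,  1]], dtype=float)

L0 = B1 @ B1.T            # graph Laplacian D - A (B0 is undefined)
L1 = B1.T @ B1            # Hodge 1-Laplacian, down part only here

def betti(L, tol=1e-8):
    """Betti number = multiplicity of the zero eigenvalue of the Hodge Laplacian."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(L)) < tol))

print(betti(L0))          # 1 connected component
print(betti(L1))          # 1 cycle: the square is an unfilled 1-dimensional hole
```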
## 4 Random walk on 0-simplices
Random walk on \(0\)-simplices, i.e., at the node level, has been studied systematically. Previous work has established a comprehensive analysis of the theoretical properties of node-level random walk, which provides theoretical insights into the design of random walk-based methods. However, there is still limited research on the theoretical expressivity of random walk-based positional encoding (PE) and structure encoding (SE) methods. In this section, we establish connections between several PE/SE methods and node-level random walk, and provide theoretical expressive power bounds for them.
**RWSE.** [52] and Dwivedi et al. [18] propose a structure encoding method based on node-level random walk, which we denote as RWSE. Concretely, RWSE considers \(K\) steps of random walk at the node level of the graph, obtaining \(\mathbf{P},\mathbf{P}^{2},...,\mathbf{P}^{K}\). The method then only takes into account each node's return probability to itself, i.e., the diagonal elements of \(\mathbf{P}^{k},k=1,2,...,K\). For each node \(v_{i}\), the RWSE feature is \(h_{i}^{\mathrm{RWSE}}=[\mathbf{P}_{ii},(\mathbf{P}^{2})_{ii},...,(\mathbf{P}^{K})_{ii}]\). Compared with encoding methods based on graph Laplacian eigenvalues and eigenvectors, this method is sign and basis invariant. It internally captures structure information within \(K\) hops and achieves impressive results in experiments [38]. However, there are limited investigations into the theoretical expressivity of RWSE and its extensions. Here, we provide a theoretical bound for positional and structure encoding methods based on the random walk transition matrix \(\mathbf{P}\).
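A minimal sketch of the RWSE feature computation (the function name and toy graph are ours; in practice the features are concatenated with node features):

```python
import numpy as np

def rwse(A: np.ndarray, K: int) -> np.ndarray:
    """Return-probability features: row i is [P_ii, (P^2)_ii, ..., (P^K)_ii]."""
    P = A / A.sum(axis=1, keepdims=True)      # P = D^{-1} A
    feats, P_k = [], np.eye(len(A))
    for _ in range(K):
        P_k = P_k @ P                          # accumulate P^k
        feats.append(np.diag(P_k))             # keep only return probabilities
    return np.stack(feats, axis=1)             # shape (n, K)

A = np.array([[0, 1, 0, 1], [1, 0, 1, 0],
              [0, 1, 0, 1], [1, 0, 1, 0]], dtype=float)
print(rwse(A, K=4))   # on the 4-cycle, odd-step return probabilities are 0
```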
**Theorem 4.1**.: _RWSE is strictly less powerful than \(2\)-FWL, i.e. \(\text{RWSE}\prec 2\)-FWL._
The above expressivity bound holds because \(2\)-FWL can simulate matrix multiplication and injective transformations of a matrix, including the adjacency matrix \(\mathbf{A}\). Therefore, \(2\)-FWL is capable of obtaining \(\mathbf{P}^{k},k\in\mathbb{N}\). Specifically, a block of PPGN [34] can simulate one matrix multiplication. Moreover, RWSE is strictly less expressive than \(2\)-FWL, since it loses much structure information by taking only the diagonal elements of \(\mathbf{P}^{k}\). In other words, RWSE is a summary of the full random walk transition probabilities (in the spatial domain), which accelerates calculation at the cost of expressivity.
**Resistance distance and random walk.** In addition to RWSE, there are a number of positional encoding methods closely related to node-level random walk. A.K. et al. [2] and Zhang et al. [51] connect commute time in random walks with resistance in electrical networks, which can be used as a PE method called resistance distance (RD). Zhang et al. [51] prove that RD and shortest path distance (SPD) [31] are both upper-bounded by 2-FWL in expressive power.
**Positive definite kernels based on the graph Laplacian spectrum.** The graph Laplacian, or Hodge \(0\)-Laplacian as we refer to it later, is closely connected with random walk on graphs. The graph Laplacian is defined as \(\mathbf{L}_{0}=\mathbf{D}-\mathbf{A}=\delta_{0}^{*}\delta_{0}=\Delta_{0}\). Through the spectrum of \(\mathbf{L}_{0}\), we are able to define a family of positive definite kernels on graphs [42] by applying a regularization function \(r\) to the spectrum of \(\mathbf{L}_{0}\): \(K_{r}=\sum_{i=1}^{n}r(\lambda_{i})\mathbf{u}_{i}\mathbf{u}_{i}^{T}\), where \(\mathbf{L}_{0}=\sum_{i}\lambda_{i}\mathbf{u}_{i}\mathbf{u}_{i}^{T}\) is the eigenvalue decomposition. For example, the heat kernel or diffusion kernel [20] is recovered by \(r(\lambda_{i})=e^{-\beta\lambda_{i}}\). Other methods directly use the eigenvectors as PE [30]. These results imply that spectral analysis of graph Laplacians can also inspire more powerful PE and SE, and in the following sections we will generalize the graph Laplacian \(\mathbf{L}_{0}\) to arbitrary-order Hodge \(k\)-Laplacians to facilitate graph learning.
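For instance, a minimal sketch of the heat-kernel regularization (the function name is ours):

```python
import numpy as np

def heat_kernel(L0: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """K_r = sum_i r(lambda_i) u_i u_i^T with r(lam) = exp(-beta * lam)."""
    lam, U = np.linalg.eigh(L0)             # eigendecomposition of L0
    return (U * np.exp(-beta * lam)) @ U.T  # equivalent to expm(-beta * L0)
```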
## 5 Random walk on 1-simplices
While node-level random walk has been widely studied, work on edge-level random walk is still limited. In this section, we first introduce the Hodge \(1\)-Laplacian \(\mathbf{L}_{1}\), as well as its connection with random walk on \(1\)-simplices (in the lifted space) and thus on the edges of an undirected graph. Analogous to node-level RWSE, we introduce EdgeRWSE, a theoretically more powerful PE for edges. Furthermore, we systematically analyze the spectra of \(\mathbf{L}_{1}\) and propose the novel Hodge1Lap PE, the first sign and basis invariant edge-level positional encoding that makes use of the spectra of \(\mathbf{L}_{1}\) instead of only the previously adopted \(\mathbf{L}_{0}\).
### 5.1 Normalized Hodge-1 Laplacian and edge-level random walk
**Theoretical analysis of edge-level random walk.** The standard Hodge \(k\)-Laplacian is \(\mathbf{L}_{k}=\mathbf{B}_{k}^{*}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^{*}\), and there are a number of normalized Hodge Laplacians because the normalization is rather flexible. Schaub et al. [41] propose a normalized form of the Hodge \(1\)-Laplacian \(\mathbf{L}_{1}\) with a clear interpretation as a random walk in the lifted edge space. Concretely,
\[\mathbf{\tilde{L}_{1}}=\mathbf{D}_{2}\mathbf{B}_{1}^{*}\mathbf{D}_{1}^{-1} \mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{D}_{3}\mathbf{B}_{2}^{*}\mathbf{D}_{2}^{ -1} \tag{2}\]
where \(\mathbf{D}_{2}\) is the diagonal matrix with adjusted degrees of each edge \(\mathbf{D}_{2}=\max(diag(|\mathbf{B}_{2}|\mathbf{1}),I)\), \(\mathbf{D}_{1}\) is the diagonal matrix of weighted degree of nodes \(\mathbf{D}_{1}=2\cdot diag(|\mathbf{B}_{1}|\mathbf{D}_{2}\mathbf{1})\), and \(\mathbf{D}_{3}=\frac{1}{3}\mathbf{I}\).
To interpret this normalized Hodge \(1\)-Laplacian \(\mathbf{\tilde{L}_{1}}\), Schaub et al. [41] introduce a lifted space of edges, where the original \(m=|S_{1}|\) directed edges are lifted to \(2m\) directed edges. For example, if \((i,j)\in S_{1}\), then we add \((j,i)\) to the lifted space. Consequently, the edge flow \(\mathbf{f}\in\mathcal{C}^{1}\) expands to a larger space \(\mathcal{D}^{1}\) in which each edge carries two orientations, with \(|\mathcal{D}^{1}|=2|\mathcal{C}^{1}|\). The matrix representation of this lifting procedure is \(\mathbf{V}=[+\mathbf{I}_{m}\quad-\mathbf{I}_{m}]^{T}\in\mathbb{R}^{2m\times m}\). The probability transition matrix \(\mathbf{\hat{P}}\) of the lifted random walk corresponding to \(\mathbf{\tilde{L}_{1}}\) is then defined through \(-\frac{1}{2}\mathbf{\tilde{L}_{1}}\mathbf{V}^{T}=\mathbf{V}^{T}\mathbf{\hat{P}}\). In practice, we also perform a simpler row-wise normalization of \(\mathbf{L}_{1}\) to obtain another form of probability transition matrix.
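A minimal NumPy sketch of assembling \(\mathbf{\tilde{L}_{1}}\) as in Eq. (2) (the small complex, one triangle plus a pendant edge, is our illustrative choice):

```python
import numpy as np

# Oriented complex (illustrative): triangle (0,1,2) plus a pendant edge (1,3).
B1 = np.array([[-1, -1,  0,  0],
               [ 1,  0, -1, -1],
               [ 0,  1,  1,  0],
               [ 0,  0,  0,  1]], dtype=float)
B2 = np.array([[1], [-1], [1], [0]], dtype=float)   # boundary of triangle [0,1,2]

d2 = np.maximum(np.abs(B2).sum(axis=1), 1.0)        # adjusted edge degrees, D2
D2 = np.diag(d2)
D1 = np.diag(2.0 * (np.abs(B1) @ d2))               # weighted node degrees
D3 = np.eye(B2.shape[1]) / 3.0

L1_tilde = D2 @ B1.T @ np.linalg.inv(D1) @ B1 + B2 @ D3 @ B2.T @ np.linalg.inv(D2)
print(L1_tilde)   # normalized Hodge 1-Laplacian of Eq. (2)
```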
Using \(\mathbf{\hat{P}}\), we can construct an edge-level random walk-based PE method to enrich edge data by encoding structure information, analogous to node-level RWSE. We will also discuss some variations and simplified versions of the aforementioned random walk on \(1\)-simplices and theoretically analyze their expressivity.
**EdgeRWSE.** Similar to node-level random walk, a well-defined edge-level random walk contains structure information and can be used to enrich edge data, yielding an edge-level positional encoding. While node-level positional encodings have been widely studied, edge-level positional encoding remains a nearly blank field.
Inspired by (node-level) RWSE, EdgeRWSE is based on edge-level random walk. The full version of EdgeRWSE is based on the full edge-level random walk stated above and in [41]. For undirected graphs, two edges with opposite directions \((i,j)\) and \((j,i)\) are again merged by summing the two probabilities, that is, the lifted space \(\mathcal{D}^{1}\) is mapped back to \(\mathcal{C}^{1}\). Generally speaking, the PE can be based on any injective function \(\psi\) of \(\hat{\mathbf{P}}\) and its powers.
\[\mathrm{EdgeRWSE}(\hat{\mathbf{P}})_{i}=\psi([\hat{\mathbf{P}}^{k}]),\quad k=1,2,\ldots,K \tag{3}\]
where \(K\) is the maximum number of steps we consider. One possible example is to encode the return probability of each edge, written \(\mathrm{EdgeRWSE}_{\mathrm{ret}}(\hat{\mathbf{P}})_{i}=\psi([\hat{\mathbf{P}}^{k}]_{ii}),k=1,2,...,K\). If \(\psi\) is well defined, the theoretical expressivity of the full EdgeRWSE above is able to break the \(2\)-FWL bottleneck of node-level RWSE. In practice, we can apply neural networks such as MLPs or Transformers to encode \(\hat{\mathbf{P}}^{k}\) and concatenate the result with the original edge features. Any standard GNN is then applicable for downstream tasks. If the GNN is at least as powerful as \(1\)-FWL, then the GNN with EdgeRWSE is strictly more powerful than \(1\)-FWL and can distinguish some non-isomorphic graph pairs on which \(2\)-FWL fails.
In addition to the edge-level random walk in the lifted space of \(1\)-simplices in [41], we further define two simplified versions of the edge-level random walk based only on lower adjacency. We neglect the \(2\)-simplices (triangles) in our simplified random walks, i.e., we only consider the \(1\)-down neighbors that share a \(0\)-simplex (node). In this way, \(\hat{\mathbf{P}}\) becomes \(\mathbf{P}_{down}\). This simplification leads to a theoretically weaker expressivity than using the full \(\hat{\mathbf{P}}\), bounded by \(2\)-FWL. However, it is appropriate and beneficial for real-world data that contain few triangles. For simplicity, we illustrate the two variations on undirected connected graphs without multiple edges or self-loops.
The two variations of edge-level random walk via down-neighbors differ in whether the two lower-adjacent nodes of an edge have the same status. Concretely, the first type of edge-level random walk based on \(\mathbf{P}_{down}\), which we call the _directed \(1\)-down random walk_, follows a two-stage procedure at every step. The walk first selects one of the two lower-adjacent nodes with equal probability \(0.5\) each, then moves to one of the neighboring edges connected to the selected node with equal probability. If there are no other edges connected to the selected node, the walk returns to the original edge. The second type, which we call the _undirected \(1\)-down random walk_, chooses the two nodes \(u,v\) with probabilities proportional to their degrees minus one (to exclude the case of returning to \(e\) itself). Consequently, the walk transits to all \(1\)-down neighbors of the source edge with equal probability.
In the same way as for the full EdgeRWSE, we propose two simplified versions of EdgeRWSE based on the directed and undirected \(1\)-down random walks, both of which can be implemented rather flexibly. As a special case, the return probabilities of each edge after \(k=1,\ldots,K\) steps are encoded, but note again that this is not the only implementation choice.
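A minimal sketch of one possible reading of the directed \(1\)-down random walk described above (the helper name and toy edge list are ours):

```python
import numpy as np

def directed_1down_P(edges, n):
    """Transition matrix of the directed 1-down walk: pick an endpoint with
    probability 1/2, then hop uniformly to another edge at that endpoint;
    if none exists, return to the current edge."""
    m = len(edges)
    inc = [[] for _ in range(n)]                  # edges incident to each node
    for idx, (u, v) in enumerate(edges):
        inc[u].append(idx)
        inc[v].append(idx)
    P = np.zeros((m, m))
    for idx, (u, v) in enumerate(edges):
        for w in (u, v):                          # choose an endpoint w.p. 1/2
            others = [j for j in inc[w] if j != idx]
            if others:
                for j in others:
                    P[idx, j] += 0.5 / len(others)
            else:                                 # dead end: stay on the edge
                P[idx, idx] += 0.5
    return P

P_down = directed_1down_P([(0, 1), (1, 2), (2, 3), (0, 3)], n=4)
print(P_down.sum(axis=1))                         # each row sums to 1
```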
We conclude by summarizing the expressivity of EdgeRWSE.
**Theorem 5.1**.: _Full EdgeRWSE can distinguish some non-isomorphic graphs that are indistinguishable by \(2\)-FWL. EdgeRWSE based on the directed and undirected \(1\)-down random walks is not more powerful than \(2\)-FWL._
### 5.2 Sign and basis invariant edge-level positional encoding
**Theoretical analysis of the Hodge 1-Laplacian spectrum.** Recall that the unnormalized Hodge 1-Laplacian is \(\mathbf{L}_{1}=\mathbf{B}_{1}^{T}\mathbf{B}_{1}+\mathbf{B}_{2}\mathbf{B}_{2}^{T}=\mathbf{L}_{1,down}+\mathbf{L}_{1,up}\). Here, we analyze the theoretical properties of the Hodge 1-Laplacian, including its spectrum, which provides solid insights for our subsequent designs.
Note that previous simplicial networks [12; 47; 8; 7] are orientation equivariant and permutation equivariant; thus, they can only be applied to simplicial complexes where all edges are directed. This is limiting if we want to boost general learning on graphs rather than on simplicial complexes alone. However, the spectral analysis of the Hodge \(1\)-Laplacian is applicable to undirected graphs. An important property of Hodge Laplacians is that their eigenvalues are invariant to permutation and orientation (if the simplices are oriented), so they can be directly applied to analyze undirected graphs. Hence, in this section we temporarily omit the discussion of permutation and orientation invariance since they hold naturally. Instead, we care more about sign and basis invariance in the context of spectral analysis [32].
We can show that the nonzero eigenvalues of \(\mathbf{L}_{1,down}\) are the same as those of \(\mathbf{L}_{0,up}\) and hence of \(\mathbf{L}_{0}\). This implies that if there are no \(2\)-simplices (triangles), the Hodge \(1\)-Laplacian has the same nonzero eigenvalues as the Hodge \(0\)-Laplacian. However, the corresponding eigenvectors still provide different information about the nodes and edges, respectively.
**Theorem 5.2**.: _The number of non-zero eigenvalues of Hodge \(1\)-Laplacian \(L_{1}\) is not less than the number of non-zero eigenvalues of Hodge \(0\)-Laplacian \(L_{0}\)._
One direct conclusion is that graph isomorphism testing based on Hodge 1-Laplacian isospectrality is strictly more powerful than that based on the Hodge 0-Laplacian. Here we draw a conclusion on the theoretical expressivity of \(L_{1}\) isospectrality:
**Theorem 5.3**.: \(L_{1}\) _isospectrality is incomparable with \(1\)-FWL and \(2\)-FWL._
Rattan and Seppelt [39] show that \(L_{0}\) isospectrality is strictly bounded by \(2\)-FWL. \(L_{1}\) isospectrality, through the introduction of \(2\)-simplices (triangles), can distinguish some non-isomorphic graph pairs that are indistinguishable by \(2\)-FWL. See Appendix C for detailed examples.
The zero eigenvalues of \(L_{1}\) have further important properties. Their multiplicity is the first Betti number \(\beta_{1}\), which is exactly the number of independent cycles (excluding triangles) in the graph. We further consider the eigenvectors of \(L_{1}\): each eigenvector \(\mathbf{u}_{i}\) associated with eigenvalue \(\lambda_{i}\) has length \(m\), and each element \(\mathbf{u}_{ij}\) reflects the weight of the corresponding edge \(e_{j}\) at the frequency \(\lambda_{i}\). For the zero-eigenvalue eigenvectors, the absolute values of the elements corresponding to edges in cycles are non-zero, while edges not contained in any cycle have zero weight. In other words, the eigenvectors of the zero eigenvalues efficiently mark the edges that lie on a cycle. A more intuitive illustration and theoretical proof are given in Appendix C.2.2.
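A minimal sketch of this cycle-marking property (the square-plus-pendant-edge graph is our illustrative choice):

```python
import numpy as np

# Square cycle (0,1,2,3) with a pendant edge (0,4); no triangles, so L1 = B1^T B1.
# Edge order: e0=(0,1), e1=(1,2), e2=(2,3), e3=(0,3), e4=(0,4).
B1 = np.array([[-1,  0,  0, -1, -1],
               [ 1, -1,  0,  0,  0],
               [ 0,  1, -1,  0,  0],
               [ 0,  0,  1,  1,  0],
               [ 0,  0,  0,  0,  1]], dtype=float)
L1 = B1.T @ B1

lam, U = np.linalg.eigh(L1)
harmonic = U[:, np.abs(lam) < 1e-8]    # eigenvectors spanning ker(L1)
print(np.abs(harmonic).round(3))       # nonzero on the 4 cycle edges, ~0 on the pendant edge
```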
**Hodge1Lap: sign and basis invariant edge PE.** In this section, we propose Hodge1Lap, a novel edge-level positional encoding method based on the spectral analysis of the Hodge 1-Laplacian. To the best of our knowledge, this is the first sign and basis invariant edge-level PE based on the Hodge \(1\)-Laplacian \(L_{1}\).
Recall the geometric meaning of the Hodge \(1\)-Laplacian spectrum in Section 5.2: zero eigenvalues and their eigenvectors reflect the cycles in the graph. These insights shed light on our design of an edge-level positional encoding. Denote an eigenvalue \(\lambda_{i}\) with multiplicity \(m_{i}\) as \(\lambda_{i(1)},\lambda_{i(2)},\ldots,\lambda_{i(m_{i})}\), with corresponding eigenvectors \(\mathbf{u}_{i(1)},\ldots,\mathbf{u}_{i(m_{i})}\). Note that these eigenvectors are: (i) not sign invariant, since if \(L_{1}\mathbf{u}_{i(j)}=\lambda_{i}\mathbf{u}_{i(j)},j=1,...,m_{i}\), then also \(L_{1}(-\mathbf{u}_{i(j)})=\lambda_{i}(-\mathbf{u}_{i(j)})\); (ii) not basis invariant if \(m_{i}>1\), since any \(m_{i}\) linearly independent vectors of the eigenspace form an equally valid set of eigenvectors, and the subspace they span is identical. This is analogous to the \(L_{0}\) eigenvectors: they are not sign and basis invariant, which makes it difficult to design sign and basis invariant positional encodings. Therefore, we propose a novel projection-based method to build Hodge1Lap, a sign and basis invariant edge-level positional encoding.
Formally, Hodge1Lap processes the eigenvalues \(\lambda_{i}\) with multiplicity \(m_{i}\) and relevant eigenvectors as follows. Recall the projection matrix
\[P_{proj,i}=\mathbf{U}\mathbf{U}^{T}=\sum_{j=1}^{m_{i}}\mathbf{u}_{i(j)} \mathbf{u}_{i(j)}^{T} \tag{4}\]
where the subscript \({}_{proj}\) distinguishes the projection matrix from the probability transition matrix \(P\), and \(\mathbf{U}=[\mathbf{u}_{i(1)},\ldots,\mathbf{u}_{i(m_{i})}]\). For any vector \(\mathbf{v}\in\mathbb{R}^{m}\), \(P_{proj,i}\mathbf{v}\) projects it onto the subspace spanned by the eigenvectors \(\mathbf{u}_{i(j)},j=1,\ldots,m_{i}\). It is straightforward to verify that the projection onto the subspace is independent of the choice of basis \(\mathbf{u}_{i(j)}\), as long as the vectors are linearly independent, and hence it is both sign and basis invariant. As long as the preimage \(\mathbf{v}\) is well defined (e.g., permutation equivariant with respect to edge indices), the projection satisfies permutation equivariance as well as sign and basis invariance. In Hodge1Lap, we propose two different forms of preimage: a unit vector \(\mathbf{e}\in\mathbb{R}^{m}\) with each element \(\mathbf{e}_{j}=\frac{1}{\sqrt{m}}\), and the original edge features \(\mathbf{X}(E)\in\mathbb{R}^{m\times d}\). The first variant considers pure structure information, while the second jointly encodes structure and feature information. Taking the first variant as an example, Hodge1Lap implemented by projection can be formulated as
\[\mathrm{Hodge1Lap_{proj}}(E)=\sum_{i}\phi_{i}(P_{proj,i}\mathbf{e}) \tag{5}\]
where \(\phi_{i}\) are injective functions that can be realized by MLP layers, and the summation is performed over the eigen-subspaces of interest.
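As a concrete illustration, a minimal NumPy sketch of the projection step; the per-subspace \(\phi_{i}\) here is a stand-in scalar weight \(e^{-\lambda_{i}}\) rather than the learned MLPs used in practice:

```python
import numpy as np

def hodge1lap_proj(L1: np.ndarray, tol: float = 1e-8) -> np.ndarray:
    """Project the unit vector e onto each eigen-subspace of L1 and recombine.
    The projections U_i U_i^T e are independent of the chosen basis U_i,
    hence sign and basis invariant."""
    m = L1.shape[0]
    e = np.ones(m) / np.sqrt(m)                   # structure-only preimage
    lam, U = np.linalg.eigh(L1)                   # sorted eigenvalues
    out = np.zeros(m)
    i = 0
    while i < m:                                  # group (near-)equal eigenvalues
        j = i
        while j < m and lam[j] - lam[i] < tol:
            j += 1
        Ui = U[:, i:j]                            # orthonormal basis of the subspace
        out += np.exp(-lam[i]) * (Ui @ (Ui.T @ e))  # stand-in phi_i(P_proj e)
        i = j
    return out
```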
In addition to the projection-based implementation of Hodge1Lap, we also implement other variants (analogous to the implementation of LapPE [30]): (i) we use a shared MLP \(\phi\) to directly embed the \(n_{eigen}\) eigenvectors corresponding to the smallest \(n_{eigen}\) eigenvalues, where \(n_{eigen}\) is a hyper-parameter shared across all graphs; we refer to this implementation as \(\mathrm{Hodge1Lap_{sim}}(E)=\sum_{i=1}^{n_{eigen}}\phi(\mathbf{u}_{i})\). (ii) We take the absolute value of each element of the eigenvectors before passing them to the MLP, which we denote as \(\mathrm{Hodge1Lap_{abs}}(E)=\sum_{i=1}^{n_{eigen}}\phi(|\mathbf{u}_{i}|)\), where \(|\cdot|\) denotes the element-wise absolute value. Notably, while \(\mathrm{Hodge1Lap_{proj}}\) is both sign- and basis-invariant, \(\mathrm{Hodge1Lap_{sim}}\) is invariant to neither sign nor basis, and \(\mathrm{Hodge1Lap_{abs}}\) is sign-invariant yet not basis-invariant. We also allow combinations of the above implementations; see Appendix E for more implementation details.
Our Hodge1Lap has elegant geometric meaning thanks to the spectral properties of \(L_{1}\). For example, the kernel space of \(L_{1}\), associated with the zero eigenvalues, is fully capable of **detecting cycles and rings** in graphs [23], which can play a significant role in many domains. In molecular graphs, for example, cycle structures such as benzene rings have crucial effects on molecular properties. Hodge1Lap is able to extract such rings in a natural way rather than by manually listing them, and \(\mathrm{Hodge1Lap_{abs}}\) is able to differentiate edges from distinct cycles. Intuitively, according to the Hodge decomposition theorem, any vector field defined on edges \(\mathcal{C}^{1}\) can be decomposed into three orthogonal components: a solenoidal component, a gradient component and a harmonic (both divergence-free and curl-free) component; see Appendix A. \(\ker(\mathbf{L}_{1})\) is the harmonic component, and since divergence-free and curl-free edge flows can only appear on cycles, the eigenvectors corresponding to \(\ker(\mathbf{L}_{1})\) mark out the cycles in the graph; see Appendix C.2.2 for more technical details and illustrative examples. Moreover, taking into account subspaces other than the kernel space of \(L_{1}\), Hodge1Lap contains additional structure information, since the eigenvectors are real, continuous vectors. Ideally, besides projections, one can apply any sign and basis invariant function to obtain a universal approximator [32] for functions on \(1\)-faces; see Section 6 for general conclusions.
## 6 Random walk on higher-order and inter-order simplices
In Section 4 and Section 5, we systematically analyzed random walk and Hodge Laplacian-based PE and SE on \(0\)-simplices (node level) and \(1\)-simplices (edge level), respectively. As we have shown, introducing higher-order simplices into random walks benefits theoretical expressivity. In this section, we formally introduce random walks on higher-order simplices and analyze their expressivity. We also investigate the spectral analysis of Hodge \(k\)-Laplacians, whose normalized forms are closely related to random walks on \(k\)-simplices. Besides random walks within simplices of the same order, we define a novel inter-order random walk that is able to transition between simplices of different orders. This random walk scheme incorporates and unifies a wide range of simplicial networks [12; 8; 14].
### 6.1 Higher-order Hodge Laplacians and random walk
The \(k\)-th order Hodge Laplacian is defined as \(\mathbf{L}_{k}=\mathbf{B}_{k}^{*}\mathbf{B}_{k}+\mathbf{B}_{k+1}\mathbf{B}_{k+1}^{*}=\mathbf{L}_{k,down}+\mathbf{L}_{k,up}\). Analogous to \(\mathbf{L}_{1}\), a properly normalized Hodge \(k\)-Laplacian \(\mathbf{\tilde{L}_{k}}\) corresponds to a \(k\)-th order random walk on \(k\)-simplices in the lifted space. The matrix representation of the lifting is \(\mathbf{V}_{k}=\left[+\mathbf{I}_{n_{k}}\quad-\mathbf{I}_{n_{k}}\right]^{T}\in\mathbb{R}^{2n_{k}\times n_{k}}\), where \(n_{k}=|S_{k}|\) is the number of \(k\)-simplices in the simplicial complex \(\mathcal{K}\). For undirected graphs, one only needs to sum over the two orientations to recover the cochain group \(\mathcal{C}^{k}\) from \(\mathcal{D}^{k}\), where \(\mathcal{D}^{k}\), with \(|\mathcal{D}^{k}|=2|\mathcal{C}^{k}|\), is the cochain group in the lifted space. The transition matrix \(\hat{\mathbf{P}_{k}}\) of the \(k\)-th order random walk is defined through \(-\frac{1}{2}\mathbf{\tilde{L}_{k}}\mathbf{V}_{k}^{T}=\mathbf{V}_{k}^{T}\hat{\mathbf{P}_{k}}\).
Similarly to the edge-level random walk in the lifted space, the transition matrix \(\hat{\mathbf{P}_{k}}\) describes a walk in which each step moves towards either \(k\)-down neighbors or \(k\)-up neighbors. When going through an upper-adjacent \(k+1\)-face, the walk uniformly transits to an upper-adjacent \(k\)-simplex with a different orientation relative to the shared \(k+1\)-face, unless it has no upper-adjacent faces. If the step is taken towards a lower-adjacent \(k-1\)-face, the walk transits, along or against the original direction, to one of its \(k\)-down neighbors.
Based on \(\hat{\mathbf{P}_{k}}\), we can design a \(k\)-th order RWSE for \(k\)-simplicial data according to the \(k\)-th order random walk, \(k\)-\(\mathrm{RWSE}=\psi_{k}(\hat{\mathbf{P}_{k}})\), where \(\psi_{k}\) is an injective function that acts on either \(\hat{\mathbf{P}_{k}}\) or its polynomials. If we maintain all \(k\)-RWSE for \(k=0,1,\ldots,K\) in a simplicial complex \(\mathcal{K}\) with dimension larger than \(K\), then we can obtain a more powerful algorithm by additionally attaching \(K+1\)-RWSE to the \(K+1\)-simplices in \(\mathcal{K}\).
In addition to directly making use of the random walk on \(k\)-simplices, spectral analysis of \(\mathbf{L}_{k}\) also sheds light on PE designs for higher-order simplicial data. Based on the eigenvalues and eigenvectors of \(\mathbf{L}_{k}\), we can build permutation equivariant and basis invariant functions defined on \(\mathcal{K}_{k+1}\) that can simulate an arbitrary \(k\)-cochain or \(k\)-form. Concretely, if we use the normalized version \(\Delta_{k}\) of the \(k\)-th Hodge Laplacian as in [24], the eigenvalues of \(\Delta_{k}\) lie in the compact interval \(0\leq\lambda\leq k+2\). Then, applying a permutation equivariant and basis-invariant function such as _Unconstrained BasisNet_ [32] to the eigenvalues and eigenvectors, we are able to approximate any \(k\)-form, which is basis-invariant. We refer interested readers to Appendix C.3 for more details.
### 6.2 Inter-order random walk
The concept of random walk can even be generalized to a more universal version, which we call the inter-order random walk. In each step, the inter-order random walk at a \(k\)-simplex can transit not only to the \(k\)-down and \(k\)-up neighbors (which are all \(k\)-simplices), but also to lower-adjacent \(k-1\)-simplices and upper-adjacent \(k+1\)-simplices. We denote the (unnormalized) adjacency matrix of the inter-order random walk on a \(K\)-order simplicial complex \(\mathcal{K}\) as \(\mathcal{A}_{K}(\mathcal{K})\), defined as
\[\mathcal{A}_{K}(\mathcal{K})=\begin{bmatrix}\mathbf{L}_{0}&\mathbf{B}_{1}&&&\\ \mathbf{B}_{1}^{T}&\mathbf{L}_{1}&\mathbf{B}_{2}&&\\ &\ddots&\ddots&\ddots&\\ &&\mathbf{B}_{K-1}^{T}&\mathbf{L}_{K-1}&\mathbf{B}_{K}\\ &&&\mathbf{B}_{K}^{T}&\mathbf{L}_{K}\end{bmatrix} \tag{6}\]
which is a block tridiagonal matrix with \(\mathbf{L}_{k}\) in the \(k\)-th diagonal block, \(\mathbf{B}_{k}^{T}\) and \(\mathbf{B}_{k+1}\) in the offset \(\pm 1\) diagonal blocks, and zeros in all other blocks. Although Chen et al. [14] also mention a similar block matrix, they do not propose a concrete form for the off-diagonal blocks. The inter-order adjacency matrix we define has a clear physical interpretation: a walker can only move between simplices of adjacent orders that are boundaries or co-boundaries of the current simplex. A properly normalized version \(\tilde{\mathcal{A}}_{K}\) describes the inter-order random walk under a certain rule. Here, we give a property of the powers of \(\mathcal{A}_{K}\) which also holds for the normalized versions.
\[\mathcal{A}_{K}^{r}=\begin{bmatrix}p_{r}(\mathbf{L}_{0})&q_{r-1}(\mathbf{L}_{0,up})\mathbf{B}_{1}&&\\ q_{r-1}(\mathbf{L}_{1,down})\mathbf{B}_{1}^{T}&p_{r}(\mathbf{L}_{1})&q_{r-1}(\mathbf{L}_{1,up})\mathbf{B}_{2}&\\ &\ddots&\ddots&\ddots\\ &&q_{r-1}(\mathbf{L}_{K,down})\mathbf{B}_{K}^{T}&p_{r}(\mathbf{L}_{K})\end{bmatrix} \tag{7}\]
where \(p_{r}(\cdot)\) and \(q_{r}(\cdot)\) are polynomials of maximum order \(r\). The above equation states that simplices whose orders differ by more than one can never exchange information directly, even after arbitrarily many rounds, but they affect each other through the coefficients of \(p_{r}\) and \(q_{r-1}\) in the offset \(\pm 1\)-diagonal blocks.
Several previous works, such as [8], can be unified by \(\mathcal{A}_{K}\). Additionally, we can make use of \(\mathcal{A}_{K}^{r}\) to build a random walk-based positional encoding, rich in information, for all simplices in the \(K\)-dimensional simplicial complex.
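A minimal sketch assembling the unnormalized \(\mathcal{A}_{K}\) of Eq. (6) from the incidence matrices (the function name and toy complex are ours):

```python
import numpy as np

def inter_order_adjacency(Bs):
    """Assemble the block matrix A_K of Eq. (6) from Bs = [B_1, ..., B_K];
    diagonal blocks are the Hodge Laplacians L_k."""
    K = len(Bs)
    sizes = [Bs[0].shape[0]] + [B.shape[1] for B in Bs]      # n_0, ..., n_K
    L = []
    for k in range(K + 1):
        Lk = np.zeros((sizes[k], sizes[k]))
        if k > 0:
            Lk += Bs[k - 1].T @ Bs[k - 1]                    # down part B_k^T B_k
        if k < K:
            Lk += Bs[k] @ Bs[k].T                            # up part B_{k+1} B_{k+1}^T
        L.append(Lk)
    A = np.zeros((sum(sizes), sum(sizes)))
    off = np.cumsum([0] + sizes)
    for k in range(K + 1):
        A[off[k]:off[k+1], off[k]:off[k+1]] = L[k]
        if k < K:
            A[off[k]:off[k+1], off[k+1]:off[k+2]] = Bs[k]    # B_{k+1} block
            A[off[k+1]:off[k+2], off[k]:off[k+1]] = Bs[k].T  # B_{k+1}^T block
    return A

B1 = np.array([[-1, -1, 0], [1, 0, -1], [0, 1, 1]], float)   # triangle graph
B2 = np.array([[1], [-1], [1]], float)                       # filled triangle
print(inter_order_adjacency([B1, B2]).shape)                 # (7, 7): 3 nodes + 3 edges + 1 triangle
```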
## 7 Experiments
In this section, we present a comprehensive ablation study on Zinc-12k to investigate the effectiveness of our proposed methods. We also verify performance on graph-level OGB benchmarks. Due to limited space, experiments on synthetic datasets and additional real-world datasets, as well as experimental details, are presented in Appendix E.
**Ablation study on Zinc-12k.** Zinc-12k [17] is a popular real-world dataset containing 12k molecules. The task is graph-level molecular property (constrained solubility) regression. In our ablation study, we use GINE [25], GAT [45], PNA [15], SSWL+ [50], GPS [38] and GRIT [33] as base models, where the first three are message-passing GNNs, SSWL+ is an instance of subgraph GNN, and GPS and GRIT are recent SOTA graph transformers. Four different factors are studied: (1) the node-level PE or SE, where RWSE refers to [18], LapPE refers to [30], and "-" indicates no node-level PE/SE; (2) EdgeRWSE, the edge-level SE based on the spatial domain of the \(1\)-down random walk, where "directed" and "undirected" distinguish the two simplified versions of the \(1\)-down random walk; (3) Hodge1Lap, the edge-level PE based on the spectra of \(\mathbf{L}_{1}\), where "abs" refers to the sign-invariant method (summing over absolute values of eigenvectors, i.e., \(\mathrm{Hodge1Lap_{abs}}\)), and "project" refers to the sign and basis invariant method (projecting the unit vector into the subspaces of interest, i.e., \(\mathrm{Hodge1Lap_{proj}}\)); (4) RWMP, a novel Random Walk Message Passing scheme we propose, which performs message passing based on probabilities calculated from a distance metric; see Appendix D for details of RWMP.
\begin{table}
\begin{tabular}{l|c c c c|c} \hline \hline model & Node PE/SE & EdgeRWSE & Hodge1Lap & RWMP & Test MAE \\ \hline GIN [46] & - & - & - & - & \(0.526\pm 0.051\) \\ GSN [9] & - & - & - & - & \(0.101\pm 0.010\) \\ Graphormer [48] & - & - & - & - & \(0.122\pm 0.006\) \\ SAN [30] & - & - & - & - & \(0.139\pm 0.006\) \\ GIN-AK+ [53] & - & - & - & - & \(0.080\pm 0.001\) \\ CIN [7] & - & - & - & - & \(0.079\pm 0.006\) \\ Specformer [6] & - & - & - & - & \(0.066\pm 0.003\) \\ \hline GINE [25] & - & - & - & - & \(0.133\pm 0.002\) \\ GINE & - & directed & - & - & \(0.110\pm 0.003\) \\ GINE & - & undirected & - & - & \(0.104\pm 0.008\) \\ GINE & - & - & abs & - & \(0.102\pm 0.004\) \\ GINE & - & - & project & - & \(0.091\pm 0.004\) \\ GINE & LapPE & - & - & - & \(0.120\pm 0.005\) \\ GINE & RWSE & - & - & - & \(0.074\pm 0.003\) \\ GINE & RWSE & directed & - & - & \(0.070\pm 0.003\) \\ GINE & RWSE & undirected & - & - & \(0.069\pm 0.002\) \\ GINE & RWSE & - & abs & - & \(0.068\pm 0.003\) \\ GINE & RWSE & - & project & - & \(0.068\pm 0.004\) \\ GINE & RWSE & - & - & True & \(0.068\pm 0.003\) \\ GINE & RWSE & - & project & True & \(0.066\pm 0.003\) \\ \hline GINE & RWSE & Full-EdgeRWSE & - & - & \(0.069\pm 0.003\) \\ GINE & Inter-RWSE & Inter-RWSE & - & - & \(0.083\pm 0.006\) \\ GINE & RWSE & Cellular & - & - & \(0.068\pm 0.003\) \\ \hline GAT [45] & - & - & - & - & \(0.384\pm 0.007\) \\ GAT & - & undirected & - & - & \(0.163\pm 0.008\) \\ GAT & - & - & project & - & \(0.130\pm 0.005\) \\ \hline PNA [15] & - & - & - & - & \(0.188\pm 0.004\) \\ PNA & - & undirected & - & - & \(0.104\pm 0.004\) \\ PNA & - & - & project & - & \(0.074\pm 0.005\) \\ \hline SSWL+ [50] & - & - & - & - & \(0.070\pm 0.005\) \\ SSWL+ & - & undirected & - & - & \(0.067\pm 0.005\) \\ SSWL+ & - & - & project & - & \(0.066\pm 0.003\) \\ \hline GPS [38] & - & - & - & - & \(0.113\pm 0.005\) \\ GPS & RWSE & - & - & - & \(0.070\pm 0.004\) \\ GPS & RWSE & undirected & - & - & \(0.068\pm 0.004\) \\ GPS & RWSE & - & project & - & \(0.064\pm 0.003\) \\ \hline GRIT [33] & - & - & - & - & \(0.149\pm 0.008\) \\ GRIT & RWSE & - & - & - & \(0.081\pm 0.010\) \\ GRIT & SPDPE & - & - & - & \(0.067\pm 0.002\) \\ GRIT & RDPE & - & - & - & \(0.059\pm 0.003\) \\ GRIT & RRWP & - & - & - & \(0.059\pm 0.002\) \\ GRIT & - & undirected & - & - & \(0.103\pm 0.006\) \\ GRIT & - & - & project & - & \(0.086\pm 0.005\) \\ GRIT & RRWP & undirected & - & - & \(0.058\pm 0.002\) \\ GRIT & RRWP & - & project & - & \(0.057\pm 0.003\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Ablation on the Zinc-12k dataset [17] (MAE \(\downarrow\)). Highlighted are the first and second best results.
The full results on the Zinc dataset are reported in Table 1. Note that all base models improve when augmented with our EdgeRWSE or Hodge1Lap: both GAT and PNA see their MAE reduced by over \(50\%\). In particular, a simple GINE, without any transformer or subgraph GNN variations, is able to surpass GPS when equipped with our PE/SE, verifying the impressive effectiveness of our proposed methods. Applying EdgeRWSE and Hodge1Lap to GRIT results in new **state-of-the-art** performance. Regarding the ablation, all variants of EdgeRWSE and Hodge1Lap improve the performance of the base models; see Appendix E for more implementation details of these variants. One may observe that RWSE is significantly beneficial in this task, and that combining node-level RWSE with our edge-level PE/SE methods leads to a further performance gain. In general, Hodge1Lap shows better performance than EdgeRWSE, indicating the effectiveness of embedding structures such as rings through spectral analysis. The choice between the directed and undirected EdgeRWSE, and between the Hodge1Lap implementations, has a rather small effect. We also observe that Full-EdgeRWSE, Inter-RWSE, and CellularRWSE are beneficial; see Appendix E for more details. Additionally, the RWMP mechanism is also capable of improving performance, which we analyze in Appendix D.
**Experiments on OGB benchmarks.** We also verify the performance of EdgeRWSE and Hodge1Lap on graph-level OGB benchmarks, including the ogbg-molhiv and ogbg-molpcba datasets. The results are shown in Table 2. We apply Hodge1Lap and EdgeRWSE to both GatedGCN and GPS (which consists of GatedGCN and a Transformer) and show that our methods improve both architectures. In general, both edge-level PE/SE methods achieve performance comparable to the SOTA models, though EdgeRWSE suffers from overfitting on ogbg-molhiv. It should be noted that SOTA results on ogbg-molhiv typically involve manually crafted structures, as in GSN [9] and CIN [7]. Natural methods and complex models usually suffer from overfitting and cannot generalize well to the test set.
## 8 Conclusions
In this paper, we propose to facilitate graph neural networks through the lens of random walks on simplicial complexes. The random walk on \(k\)-th order simplices is closely related to the Hodge \(k\)-Laplacian \(\mathbf{L}_{k}\), and we emphasize that both the spatial analysis of random walks and the spectra of \(\mathbf{L}_{k}\) can improve the theoretical expressive power and performance of GNNs. For \(0\)-simplices, we connect a number of existing PE and SE methods (such as RWSE) via node-level random walk, and further provide a theoretical expressivity bound. For \(1\)-simplices, we propose two novel edge-level PE and SE methods, namely EdgeRWSE and Hodge1Lap. EdgeRWSE directly encodes information based on edge-level random walk, while Hodge1Lap is the first sign and basis invariant edge-level PE based on the Hodge \(1\)-Laplacian spectra. We also generalize our theory to arbitrary-order simplices, showing how \(k\)-order and inter-order random walks as well as spectral analysis of Hodge Laplacians can facilitate graph and simplicial learning. Besides analyzing the theoretical expressive power and physical meaning of these random walk-based methods, we verify their effectiveness, achieving SOTA or highly competitive performance on several datasets.
\begin{table}
\begin{tabular}{l|c c} \hline \hline model & ogbg-molhiv (AUROC \(\uparrow\)) & ogbg-molpcba (Avg. Precision \(\uparrow\)) \\ \hline GIN+virtual node & \(0.7707\pm 0.0149\) & \(0.2703\pm 0.0023\) \\ GSN (directional) & \(0.8039\pm 0.0090\) & - \\ PNA & \(0.7905\pm 0.0132\) & \(0.2838\pm 0.0035\) \\ SAN & \(0.7785\pm 0.2470\) & \(0.2765\pm 0.0042\) \\ GIN-AK+ & \(0.7961\pm 0.0110\) & \(0.2930\pm 0.0044\) \\ CIN & \(0.8094\pm 0.0057\) & - \\ GPS & \(0.7880\pm 0.0101\) & \(0.2907\pm 0.0028\) \\ Specformer & \(0.7889\pm 0.0124\) & \(0.2972\pm 0.0023\) \\ \hline GPS+EdgeRWSE & \(0.7891\pm 0.0118\) & \(\mathbf{0.2934\pm 0.0025}\) \\ GPS+Hodge1Lap & \(\mathbf{0.8021\pm 0.0154}\) & \(0.2937\pm 0.0023\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Experiments on graph-level OGB benchmarks [26]. Highlighted are the first, second, and **third** best test results.
## Acknowledgments and Disclosure of Funding
Muhan Zhang is partially supported by the National Natural Science Foundation of China (62276003) and Alibaba Innovative Research Program.
|
2302.09287 | Thermodynamically consistent diffuse-interface mixture models of
incompressible multicomponent fluids | In this paper we derive a class of thermodynamically consistent
diffuse-interface mixture models of incompressible multicomponent fluids. The
class of mixture models is fully compatible with the continuum theory of
mixtures. The resulting mixture models may be formulated either in constituent
or in mixture quantities. This permits a direct comparison with the
Navier-Stokes Cahn-Hilliard model with non-matching densities, which reveals
the key modeling simplifications of the latter. | M. ten Eikelder, K. van der Zee, D. Schillinger | 2023-02-18T11:21:00Z | http://arxiv.org/abs/2302.09287v1 | Thermodynamically consistent diffuse-interface mixture models of incompressible multicomponent fluids
###### Abstract
In this paper we derive a class of thermodynamically consistent diffuse-interface mixture models of incompressible multicomponent fluids. The class of mixture models is fully compatible with the continuum theory of mixtures. The resulting mixture models may be formulated either in constituent or in mixture quantities. This permits a direct comparison with the Navier-Stokes Cahn-Hilliard model with non-matching densities, which reveals the key modeling simplifications of the latter.
**Key words**. Multi-constituent flow, Incompressible flow, Mixture theory, Navier-Stokes Cahn-Hilliard equations.
**AMS Subject Classification**: Primary: 76T99, Secondary: 35Q30, 35Q35, 35R35, 76D05, 76D45, 80A99
## 1 Introduction
### 1.1 Background
The description of diffuse-interface multi-constituent flows in which the interface has a positive thickness may be traced back to Rayleigh [25] and van der Waals [33]. Building on these works, the pioneering work of Korteweg [18], and others, diffuse-interface models governing the motion of multiple constituents (fluids) or phases have been developed [3, 23] and applied in computations [35, 13, 10]. In the scenario of multi-phase flow, the prototypical model is the Navier-Stokes-Korteweg model. On the other hand, mixture theory of rational mechanics provides the theoretical framework for the dynamics of multi-constituent mixtures. The first contributions on simple mixtures are the works of Fick [12] and Darcy [8]. Since then, the topic has become more mature with the important
contributions of Truesdell [29, 30] and Truesdell and Toupin [32]. More complete overviews of rational mixture theory are provided by Green and Naghdi [14], Muller [21], Muller and Ruggeri [22], Bowen [4, 5], Truesdell [31], Morro [20], and others.
The study of incompressible diffuse-interface multi-fluid models seems only weakly connected with continuum mixture theory. Indeed, the study of diffuse-interface multi-fluid models was initiated in 1970 independently of the continuum theory of mixtures. In that year Hohenberg and Halperin proposed a model, known as _model H_, for the coupling of incompressible viscous fluid flow and spinodal decomposition [17]. This diffuse-interface model is now recognized as the first _Navier-Stokes Cahn-Hilliard_ (NSCH) model. As the name suggests, the model is presented as the coupling between the incompressible (isothermal) Navier-Stokes equations and (an extension of) the Cahn-Hilliard equation. The capillary forces are modeled through the introduction of an additional Korteweg-type contribution to the stress tensor. Model H was initially established via phenomenological arguments, and a continuum mechanics derivation was presented by Gurtin [16]. This derivation, and the resulting model, are not compatible with the continuum theory of mixtures.
The major assumption in model H is the constant density of the mixture as well as of the individual constituents (making it not applicable to problems with large density ratios). This limitation initiated the generalization of model H to NSCH models with non-matching densities. Noteworthy contributions include the models of Lowengrub and Truskinovsky [19], Boyer [6], Ding et al. [9], Abels et al. [1], Shen et al. [27], Aki et al. [2] and Shokrpour Roudbari et al. [28]. These models all aim to describe the same physical phenomena (the evolution of isothermal incompressible mixtures), yet they are (seemingly) distinct from one another.
In a recent article we have proposed a unified framework of all existing Navier-Stokes Cahn-Hilliard models with non-matching densities and non-zero mass fluxes [11]. In this work we have established one NSCH system of balance laws and have shown that many alternate forms of the same model are connected via variable transformations. As such, in this paper we no longer think of a wide variety of NSCH models, but instead of _the NSCH model_ (variations only occur in constitutive modeling). A particular formulation of the NSCH model reads:
\[\partial_{t}(\rho\mathbf{v})+\operatorname{div}\left(\rho \mathbf{v}\otimes\mathbf{v}\right)+\nabla p+\operatorname{div}\left(\nabla \phi\otimes\frac{\partial\bar{\Psi}}{\partial\nabla\phi}+(\bar{\mu}\phi-\bar{ \Psi})\mathbf{I}\right)\] \[-\operatorname{div}\left(\nu(2\mathbf{D}+\lambda(\operatorname{ div}\mathbf{v})\mathbf{I})\right)-\rho\mathbf{b} = 0, \tag{1a}\] \[\partial_{t}\rho+\operatorname{div}(\rho\mathbf{v}) = 0,\] (1b) \[\partial_{t}\phi+\operatorname{div}(\phi\mathbf{v})-\operatorname{ div}\left(\bar{\mathbf{M}}\nabla(\bar{\mu}+\omega p)\right)+\zeta\bar{m}(\bar{\mu}+ \omega p) = 0,\] (1c) \[\bar{\mu}-\frac{\partial\bar{\Psi}}{\partial\phi}+\operatorname{ div}\left(\frac{\partial\bar{\Psi}}{\partial\nabla\phi}\right) = 0. \tag{1d}\]
Here \(\rho\) is the mixture density, \(\mathbf{v}\) the mixture velocity, \(p\) the pressure, \(\phi\) an order parameter and \(\bar{\mu}\) a chemical potential quantity. Furthermore, \(\bar{\mathbf{M}}=\bar{\mathbf{M}}(\phi,\nabla\phi,\bar{\mu},\nabla\bar{\mu},p)\) and \(\bar{m}=\bar{m}(\phi,\bar{\mu},p)\) are degenerate mobilities, \(\nu\) the dynamic viscosity of the mixture,
\(\mathbf{b}\) the gravitational body force, \(\rho_{1}\) and \(\rho_{2}\) the constant specific densities of the constituents, \(\omega=(\rho_{2}-\rho_{1})/(\rho_{1}+\rho_{2})\), and \(\zeta=(\rho_{1}+\rho_{2})/(2\rho_{1}\rho_{2})\). We provide precise definitions in Section 5.
### 1.2 Objective and main results
The unified framework presented in ten Eikelder et al. [11] completes the fundamental exploration of alternate non-matching density NSCH models. However, the NSCH model is not compatible with the mixture theory of rational mechanics. Namely, in the construction of the NSCH model, the evolution equation of the diffusive flux that results from mixture theory is replaced by a constitutive model. Therefore, the NSCH model may be classified as a _reduced mixture model_. This observation brings us to the main objective of this article: _to derive a thermodynamically-consistent diffuse-interface incompressible mixture model compatible with continuum mixture theory_. We restrict ourselves to isothermal constituents. The thermodynamic-consistency property of the mixture model refers to compatibility with the second law of thermodynamics. In particular, we derive the following mixture model:
\[\partial_{t}\tilde{\rho}_{\alpha}+\operatorname{div}(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha})-\hat{\gamma}_{\alpha}=0, \tag{2a}\] \[\partial_{t}(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha})+\operatorname{div}\left(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha}\otimes\mathbf{v}_{\alpha}\right)+\phi_{\alpha}\nabla\left(p+\mu_{\alpha}\right)\] \[-\operatorname{div}\left(\tilde{\nu}_{\alpha}\left(2\mathbf{D}_{\alpha}+\lambda_{\alpha}\mathrm{div}\mathbf{v}_{\alpha}\right)\right)-\tilde{\rho}_{\alpha}\mathbf{b}\] \[-\sum_{\beta}\frac{p\phi_{\alpha}\phi_{\beta}}{D_{\alpha\beta}}(\mathbf{v}_{\beta}-\mathbf{v}_{\alpha})-\boldsymbol{\beta}_{\alpha}=0, \tag{2b}\]
for \(\alpha=1,...,N\). Here \(\tilde{\rho}_{\alpha}\) is the partial mass density of constituent \(\alpha\), \(\mathbf{v}_{\alpha}\) the constituent velocity, \(\phi_{\alpha}\) the constituent volume fraction, and \(\mu_{\alpha}\) a constituent chemical potential. Furthermore, the model contains two distinct pressure quantities: \(\pi_{\alpha}\) is the thermodynamical pressure of constituent \(\alpha\), and \(p\) is the mechanical pressure of the mixture. Finally, \(\tilde{\nu}_{\alpha}\) is the constituent dynamic viscosity, \(\mathbf{D}_{\alpha}\) the constituent symmetric velocity gradient, \(D_{\alpha\beta}\) a diffusion coefficient associated with constituents \(\alpha\) and \(\beta\), and \(\hat{\gamma}_{\alpha}\) and \(\boldsymbol{\beta}_{\alpha}\) mass transfer (related) terms. We provide precise definitions in Sections 3 and 4.
The distinguishing feature of the model lies in the occurrence of both a mass and a momentum balance equation per constituent. Reduced models (e.g. NSCH and Navier-Stokes Allen-Cahn) typically contain a phase equation per constituent but a single momentum equation for the mixture. This decrease in complexity comes at the cost of violating mixture theory of rational mechanics. Another interesting aspect is that the model has no Cahn-Hilliard type equation. Furthermore we note the presence of the multiple pressure quantities. The single mechanical pressure variable \(p\) acts as a Lagrange multiplier of the mixture incompressibility constraint. On the other hand, the thermodynamical pressure \(\pi_{\alpha}\) is solely associated with constituent \(\alpha\). The last line in the constituent momentum equations models the momentum transfer between the constituents. As such, we observe that constituent momentum interaction is absent in the Stefan-Maxwell equilibrium balance.
Another important feature of the model is that the equilibrium profile coincides with that of the NSCH model (for the standard Ginzburg-Landau free energy).
### 1.3 Plan of the paper
The remainder of the paper is structured as follows. In Section 2 we present the general continuum theory of incompressible fluid mixtures. Here we present identities that relate constituent and mixture quantities. We exclude thermal effects. Next, in Section 3 we perform constitutive modeling via the Coleman-Noll procedure. Then, in Section 4 we present particular diffuse-interface models. We compare the resulting models with the NSCH model in Section 5. Finally, in Section 6 we conclude and outline avenues for future research.
## 2 Continuum theory of mixtures
The purpose of this section is to lay down the continuum theory of mixtures composed of incompressible isothermal constituents. The theory is based on three metaphysical principles proposed in the groundbreaking works of Truesdell and Toupin [32]:
1. _All properties of the mixture must be mathematical consequences of properties of the constituents._
2. _So as to describe the motion of a constituent, we may in imagination isolate it from the rest of the mixture, provided we allow properly for the actions of the other constituents upon it._
3. _The motion of the mixture is governed by the same equations as is a single body._
The first principle states that the mixture is composed of its constituent parts. The second principle asserts that the constituents are bound together via interaction fluxes, forces or energies. Finally, the third principle ensures that the motion of a mixture is indistinguishable from that of a single fluid.
In Section 2.1 we introduce the fundamentals of the continuum theory of mixtures and the necessary kinematics. Then, in Section 2.2 we provide balance laws of individual constituents and associated mixtures.
### 2.1 Preliminaries and kinematics
The core idea of the continuum theory of mixtures is that the material body \(\mathscr{B}\) is composed of \(N\) constituent bodies \(\mathscr{B}_{\alpha}\), with \(\alpha=1,\ldots,N\). The bodies \(\mathscr{B}_{\alpha}\) are allowed to occupy, simultaneously, a common region in space. Denote with \(\mathbf{X}_{\alpha}\) the spatial position of a particle of \(\mathscr{B}_{\alpha}\) in the Lagrangian (reference) configuration. The spatial position of a particle is given by the (invertible) deformation map
\[\mathbf{x}:=\mathbf{\chi}_{\alpha}(\mathbf{X}_{\alpha},t). \tag{3}\]
Consider from now on positions \(\mathbf{x}\) that are taken by one particle from each of the \(N\) constituent bodies \(\mathscr{B}_{\alpha}\). Around this spatial position \(\mathbf{x}\) we consider an arbitrary mixture control volume \(V\subset\Omega\) with measure \(|V|\). Furthermore, we introduce the volume \(V_{\alpha}\subset V\), with measure \(|V_{\alpha}|\), as the control volume of constituent \(\alpha\). The constituent masses are denoted \(M_{\alpha}=M_{\alpha}(V)\), and the total mass in \(V\) is \(M=M(V)=\sum_{\alpha}M_{\alpha}(V)\). The constituent partial mass density \(\tilde{\rho}_{\alpha}\) and specific mass density \(\rho_{\alpha}>0\) are respectively defined as
\[\tilde{\rho}_{\alpha}(\mathbf{x},t) :=\ \lim_{|V|\to 0}\frac{M_{\alpha}(V)}{|V|}, \tag{4a}\] \[\rho_{\alpha}(\mathbf{x},t) :=\ \lim_{|V_{\alpha}|\to 0}\frac{M_{\alpha}(V)}{|V_{\alpha}|}. \tag{4b}\]
These quantities represent the mass of constituent \(\alpha\) per unit volume of the mixture \(V\) and per unit volume of the constituent \(V_{\alpha}\), respectively. In this paper we work with incompressible isothermal constituents, for which the specific mass densities \(\rho_{\alpha}\) are constants. The density of the mixture is the sum of the partial mass densities of the constituents:
\[\rho(\mathbf{x},t):=\sum_{\alpha}\tilde{\rho}_{\alpha}(\mathbf{x},t). \tag{5}\]
The volume fraction of constituent \(\alpha\) is defined as:
\[\phi_{\alpha}(\mathbf{x},t):=\ \lim_{|V|\to 0}\frac{|V_{\alpha}|}{|V|}. \tag{6}\]
We preclude the existence of void spaces by assuming:
\[\sum_{\alpha}\phi_{\alpha}=1. \tag{7}\]
The above definitions (4), (5) and (6) imply the relation:
\[\tilde{\rho}_{\alpha}(\mathbf{x},t)=\rho_{\alpha}\phi_{\alpha}(\mathbf{x},t). \tag{8}\]
The constituent velocity is given by
\[\mathbf{v}_{\alpha}(\mathbf{x},t)=\partial_{t}\mathbf{\chi}_{\alpha}(\mathbf{X}_{ \alpha},t)|_{\mathbf{X}_{\alpha}}=\mathbf{\dot{x}}_{\alpha}(\mathbf{x},t), \tag{9}\]
where \(\dot{\psi}\) denotes the time derivative of any differentiable function \(\psi\) (of position and time) holding the position \(\mathbf{X}_{\alpha}\) fixed. Next, we denote the momentum of constituent \(\alpha\) as:
\[\mathbf{m}_{\alpha}(\mathbf{x},t)=\tilde{\rho}_{\alpha}(\mathbf{x},t)\mathbf{ v}_{\alpha}(\mathbf{x},t). \tag{10}\]
By taking the sum of the momenta of the constituents we obtain the momentum of the mixture:
\[\mathbf{m}(\mathbf{x},t):=\sum_{\alpha}\mathbf{m}_{\alpha}(\mathbf{x},t). \tag{11}\]
From the momentum of the mixture, we identify the _mixture velocity_ \(\mathbf{v}\) (also called the mass-averaged or barycentric velocity):
\[\mathbf{m}(\mathbf{x},t)=\rho(\mathbf{x},t)\mathbf{v}(\mathbf{x},t). \tag{12}\]
Another important velocity is the peculiar velocity (also known as diffusion velocity) of constituent \(\alpha\):
\[\mathbf{w}_{\alpha}(\mathbf{x},t):=\mathbf{v}_{\alpha}(\mathbf{x},t)-\mathbf{ v}(\mathbf{x},t), \tag{13}\]
which describes the constituent velocity relative to the gross motion of the mixture. The peculiar velocity satisfies the property:
\[\sum_{\alpha}\mathbf{J}_{\alpha}=\sum_{\alpha}\rho_{\alpha}\mathbf{h}_{\alpha}=0, \tag{14}\]
where the so-called _diffusive fluxes_ are defined as:
\[\mathbf{h}_{\alpha} :=\phi_{\alpha}\mathbf{w}_{\alpha}, \tag{15a}\] \[\mathbf{J}_{\alpha} :=\tilde{\rho}_{\alpha}\mathbf{w}_{\alpha}. \tag{15b}\]
Alongside the time derivative \(\dot{\psi}\) of a differentiable function \(\psi\) of \(\mathbf{x}\) and \(t\), we introduce a time derivative of \(\psi\) that follows the mean motion, which we denote \(\mathring{\psi}\). In the Eulerian frame these material derivatives are given by:
\[\dot{\psi}=\partial_{t}\psi+\mathbf{v}_{\alpha}\cdot\nabla\psi, \tag{16a}\] \[\mathring{\psi}=\partial_{t}\psi+\mathbf{v}\cdot\nabla\psi. \tag{16b}\]
### 2.2 Balance laws
According to the second metaphysical principle of the continuum theory of mixtures, the motion of each of the constituents is governed by an individual set of balance laws. These laws contain interaction terms that model the interplay of the different constituents. Following e.g. [31], each constituent \(\alpha=1,\ldots,N\) must satisfy the following set of local balance laws for all \(\mathbf{x}\in\Omega\) and \(t\in(0,T)\):
\[\partial_{t}\tilde{\rho}_{\alpha}+\mathrm{div}(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha})= \gamma_{\alpha}, \tag{17a}\] \[\partial_{t}\mathbf{m}_{\alpha}+\mathrm{div}\left(\mathbf{m}_{\alpha}\otimes\mathbf{v}_{\alpha}\right)-\mathrm{div}\mathbf{T}_{\alpha}-\tilde{\rho}_{\alpha}\mathbf{b}_{\alpha}= \boldsymbol{\pi}_{\alpha}, \tag{17b}\] \[\mathbf{T}_{\alpha}-\mathbf{T}_{\alpha}^{T}= \mathbf{N}_{\alpha}, \tag{17c}\] \[\partial_{t}\left(\tilde{\rho}_{\alpha}\left(\epsilon_{\alpha}+\|\mathbf{v}_{\alpha}\|^{2}/2\right)\right)+\mathrm{div}\left(\tilde{\rho}_{\alpha}\left(\epsilon_{\alpha}+\|\mathbf{v}_{\alpha}\|^{2}/2\right)\mathbf{v}_{\alpha}\right)-\mathrm{div}\left(\mathbf{T}_{\alpha}\mathbf{v}_{\alpha}\right)-\tilde{\rho}_{\alpha}\mathbf{b}_{\alpha}\cdot\mathbf{v}_{\alpha}+\mathrm{div}\,\mathbf{q}_{\alpha}-\tilde{\rho}_{\alpha}r_{\alpha}= e_{\alpha}. \tag{17d}\]
The equation (17a) represents the local constituent mass balance law, where the interaction term \(\gamma_{\alpha}\) is the mass supply of constituent \(\alpha\) due to chemical reactions with the other
constituents. Next, (17b) is the local constituent linear momentum balance law. Here \(\mathbf{T}_{\alpha}\) is the Cauchy stress tensor of constituent \(\alpha\), \(\mathbf{b}_{\alpha}\) the constituent external body force, and \(\boldsymbol{\pi}_{\alpha}\) is the momentum exchange rate of constituent \(\alpha\) with the other constituents. In the remainder of the article we assume equal body forces (\(\mathbf{b}_{\alpha}=\mathbf{b}\) for \(\alpha=1,\ldots,N\)). Moreover, we restrict to body forces of gravitational type: \(\mathbf{b}=-b\boldsymbol{\jmath}=-b\nabla y\), with \(y\) the vertical coordinate, \(\boldsymbol{\jmath}\) the vertical unit vector and \(b\) a constant. Next, (17c) is the local constituent angular momentum balance, with \(\mathbf{N}_{\alpha}\) the intrinsic moment of momentum. Finally, equation (17d) is the local constituent energy balance. Here \(\epsilon_{\alpha}\) is the specific internal energy of constituent \(\alpha\), \(\|\mathbf{v}_{\alpha}\|=\sqrt{\mathbf{v}_{\alpha}\cdot\mathbf{v}_{\alpha}}\) is the Euclidean norm of the velocity \(\mathbf{v}_{\alpha}\), \(\mathbf{q}_{\alpha}\) is the heat flux, \(r_{\alpha}\) is the external heat supply, and \(e_{\alpha}\) represents the energy exchange with the other constituents.
We denote the kinetic and gravitational energies of constituent \(\alpha\), respectively, as:
\[\mathscr{K}_{\alpha} =\tilde{\rho}_{\alpha}\|\mathbf{v}_{\alpha}\|^{2}/2, \tag{18a}\] \[\mathscr{G}_{\alpha} =\tilde{\rho}_{\alpha}by. \tag{18b}\]
On the account of the mass balance (17a) and the linear momentum balance (17b), we deduce the evolution of the constituent kinetic energy:
\[\partial_{t}\mathscr{K}_{\alpha}+\mathrm{div}\left(\mathscr{K}_{ \alpha}\mathbf{v}_{\alpha}\right)-\mathbf{v}_{\alpha}\cdot\mathrm{div} \mathbf{T}_{\alpha}-\tilde{\rho}_{\alpha}\mathbf{b}_{\alpha}\cdot\mathbf{v}_{ \alpha}=\boldsymbol{\pi}_{\alpha}\cdot\mathbf{v}_{\alpha}-\frac{1}{2}\| \mathbf{v}_{\alpha}\|^{2}\gamma_{\alpha}. \tag{19}\]
Next, the evolution of the gravitational energy follows from the constituent mass equation (17a):
\[\partial_{t}\mathscr{G}_{\alpha}+\mathrm{div}\left(\mathscr{G}_{ \alpha}\mathbf{v}_{\alpha}\right)+\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha} \cdot\mathbf{b}-\gamma_{\alpha}by=0. \tag{20}\]
Taking the difference of (17d) and (19) we obtain the evolution of the constituent internal energy:
\[\partial_{t}\left(\tilde{\rho}_{\alpha}\epsilon_{\alpha}\right) +\mathrm{div}\left(\tilde{\rho}_{\alpha}\epsilon_{\alpha}\mathbf{v}_{\alpha} \right)-\mathbf{T}_{\alpha}:\nabla\mathbf{v}_{\alpha}+\mathrm{div}\mathbf{q} _{\alpha}-\tilde{\rho}_{\alpha}r_{\alpha}=\] \[-\boldsymbol{\pi}_{\alpha}\cdot\mathbf{v}_{\alpha}+\frac{1}{2}\| \mathbf{v}_{\alpha}\|^{2}\gamma_{\alpha}+e_{\alpha}. \tag{21}\]
The convective forms of the constituent evolution equations read:
\[\grave{\tilde{\rho}}_{\alpha}+\tilde{\rho}_{\alpha}\mathrm{div} \mathbf{v}_{\alpha}= \gamma_{\alpha}, \tag{22a}\] \[\tilde{\rho}_{\alpha}\grave{\mathbf{v}}_{\alpha}-\mathrm{div} \mathbf{T}_{\alpha}-\tilde{\rho}_{\alpha}\mathbf{b}_{\alpha}= \mathbf{p}_{\alpha},\] (22b) \[\tilde{\rho}_{\alpha}\grave{\epsilon}_{\alpha}-\mathbf{T}_{\alpha} :\nabla\mathbf{v}_{\alpha}+\mathrm{div}\mathbf{q}_{\alpha}-\tilde{\rho}_{ \alpha}r_{\alpha}= \bar{e}_{\alpha}, \tag{22c}\]
where the interaction terms are:
\[\mathbf{p}_{\alpha}= \boldsymbol{\pi}_{\alpha}-\gamma_{\alpha}\mathbf{v}_{\alpha}, \tag{23a}\] \[\bar{e}_{\alpha}= e_{\alpha}-\boldsymbol{\pi}_{\alpha}\cdot\mathbf{v}_{\alpha}- \gamma_{\alpha}(\epsilon_{\alpha}-\|\mathbf{v}_{\alpha}\|^{2}/2). \tag{23b}\]
By invoking the constant specific densities \(\rho_{\alpha}\), we obtain the evolution equation of the volume fraction:
\[\partial_{t}\phi_{\alpha}+\mathrm{div}(\phi_{\alpha}\mathbf{v}_{ \alpha})=\frac{\gamma_{\alpha}}{\rho_{\alpha}}. \tag{24}\]
Next, we turn to the continuum balance laws of the mixtures. Summing the balance laws (17) over the constituents gives:
\[\partial_{t}\rho+\mathrm{div}(\rho\mathbf{v})= 0, \tag{25a}\] \[\partial_{t}\mathbf{m}+\mathrm{div}\left(\mathbf{m}\otimes \mathbf{v}\right)-\mathrm{div}\mathbf{T}-\rho\mathbf{b}= 0,\] (25b) \[\mathbf{T}-\mathbf{T}^{T}= 0,\] (25c) \[\partial_{t}\left(\rho\left(\epsilon+\|\mathbf{v}\|^{2}/2\right) \right)+\mathrm{div}\left(\rho\left(\epsilon+\|\mathbf{v}\|^{2}/2\right) \mathbf{v}\right)\] \[-\mathrm{div}\left(\mathbf{T}\mathbf{v}\right)-\rho\mathbf{b} \cdot\mathbf{v}+\mathrm{div}\mathbf{q}-\rho r= 0. \tag{25d}\]
where
\[\epsilon:=\frac{1}{\rho}\sum_{\alpha}\tilde{\rho}_{\alpha}\left( \epsilon_{\alpha}+\frac{1}{2}\|\mathbf{w}_{\alpha}\|^{2}\right), \tag{26a}\] \[\mathbf{T}:= \sum_{\alpha}\mathbf{T}_{\alpha}-\tilde{\rho}_{\alpha}\mathbf{ w}_{\alpha}\otimes\mathbf{w}_{\alpha},\] (26b) \[\mathbf{b}:= \frac{1}{\rho}\sum_{\alpha}\tilde{\rho}_{\alpha}\mathbf{b}_{ \alpha},\] (26c) \[\mathbf{q}:= \sum_{\alpha}\mathbf{q}_{\alpha}-\mathbf{T}_{\alpha}\mathbf{w}_{ \alpha}+\tilde{\rho}_{\alpha}\left(\epsilon_{\alpha}+\frac{1}{2}\|\mathbf{w}_ {\alpha}\|^{2}\right)\mathbf{w}_{\alpha},\] (26d) \[r:= \frac{1}{\rho}\sum_{\alpha}\tilde{\rho}_{\alpha}r_{\alpha}, \tag{26e}\]
and where we have postulated the following balance conditions to hold:
\[\sum_{\alpha}\gamma_{\alpha}=0, \tag{27a}\] \[\sum_{\alpha}\boldsymbol{\pi}_{\alpha}=0,\] (27b) \[\sum_{\alpha}\mathbf{N}_{\alpha}=0,\] (27c) \[\sum_{\alpha}e_{\alpha}=0. \tag{27d}\]
In establishing the mixture laws (25) use has been made of the identities (14) and
\[\sum_{\alpha}\tilde{\rho}_{\alpha}\frac{1}{2}\|\mathbf{w}_{\alpha}\|^{2} \mathbf{w}_{\alpha}=\sum_{\alpha}\left(\tilde{\rho}_{\alpha}\frac{1}{2}\| \mathbf{v}_{\alpha}\|^{2}\mathbf{w}_{\alpha}-\tilde{\rho}_{\alpha}\mathbf{w}_ {\alpha}(\mathbf{w}_{\alpha}\cdot\mathbf{v})\right). \tag{28}\]
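The identity (28) may be spot-checked in the same spirit; the sketch below (random data, purely illustrative) evaluates both sides for an arbitrary mixture state:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 4, 3
rho_t = rng.uniform(0.5, 2.0, size=N)            # partial densities rho_tilde_alpha
v = rng.normal(size=(N, d))                      # constituent velocities
v_mix = (rho_t[:, None]*v).sum(0)/rho_t.sum()    # mass-averaged mixture velocity
w = v - v_mix                                    # peculiar velocities

lhs = sum(0.5*rho_t[a]*w[a].dot(w[a])*w[a] for a in range(N))
rhs = sum(0.5*rho_t[a]*v[a].dot(v[a])*w[a] - rho_t[a]*w[a].dot(v_mix)*w[a]
          for a in range(N))
assert np.allclose(lhs, rhs)                     # identity (28)
```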
In agreement with the first metaphysical principle of mixture theory, the kinetic, gravitational and internal energy of the mixture are the superposition of the constituent energies:
\[\mathscr{K} = \sum_{\alpha}\mathscr{K}_{\alpha}, \tag{29a}\] \[\mathscr{G} = \sum_{\alpha}\mathscr{G}_{\alpha},\] (29b) \[\mathscr{S} = \sum_{\alpha}\tilde{\rho}_{\alpha}\epsilon_{\alpha}. \tag{29c}\]
The kinetic energy of the mixture can be decomposed as:
\[\mathscr{K} =\bar{\mathscr{K}}+\sum_{\alpha}\frac{1}{2}\tilde{\rho}_{\alpha} \|\mathbf{w}_{\alpha}\|^{2}, \tag{30a}\] \[\bar{\mathscr{K}} = \frac{1}{2}\rho\|\mathbf{v}\|^{2}, \tag{30b}\]
where \(\bar{\mathscr{K}}\) is a kinetic energy of the mixture variables, and where the second term represents the kinetic energy of the constituents relative to the gross motion of the mixture. As a consequence, summing (17d) over the constituents and adding the gravitational energy evolution (20) shows that the total mixture energy evolves as
\[\partial_{t}\mathscr{E}+\operatorname{div}\left(\mathscr{E}\mathbf{v}\right)-\operatorname{div}\left(\mathbf{T}\mathbf{v}\right)+\operatorname{div}\mathbf{q}-\rho r=\ 0, \tag{31}\]
with \(\mathscr{E}=\mathscr{K}+\mathscr{G}+\mathscr{S}\), given the standing assumption of equal body forces. Finally, we remark that the system of mixture balance laws (25) may be augmented with evolution equations of the order parameters (mass and energy) and diffusive fluxes [11] to arrive at a system equivalent with (17).
## 3 Constitutive modeling
In this section we perform the constitutive modeling. We choose to employ the well-known Coleman-Noll procedure [7] to construct constitutive models that satisfy the second law of thermodynamics. First, in Section 3.1 we introduce the second law of thermodynamics in the context of rational mechanics. Next, in Section 3.2 we establish the constitutive modeling restriction resulting from the second law. Then, in Section 3.3 we select specific constitutive models compatible with the modeling restriction.
### Second law in mixture theory
In agreement with the second metaphysical principle, the entropy of each of the constituents \(\alpha\) is governed by the balance law:
\[\partial_{t}(\tilde{\rho}_{\alpha}\eta_{\alpha})+\operatorname{div}\left( \tilde{\rho}_{\alpha}\eta_{\alpha}\mathbf{v}_{\alpha}\right)+\operatorname{ div}\left(\mathbf{\Phi}_{\alpha}\right)-\tilde{\rho}_{\alpha}s_{\alpha}= \mathscr{P}_{\alpha}, \tag{32}\]
where the constituent quantities are the specific entropy density \(\eta_{\alpha}\), the entropy flux \(\mathbf{\Phi}_{\alpha}\), the specific entropy supply \(s_{\alpha}\), and the entropy production \(\mathscr{P}_{\alpha}\). The second law of thermodynamics dictates positive entropy production of the entire mixture:
\[\sum_{\alpha}\mathscr{P}_{\alpha}\geq 0. \tag{33}\]
The second law (33) is compatible with the first metaphysical principle of mixture theory.
In the following we derive the modeling restriction that results from the second law (33). To this purpose, we introduce the _Helmholtz mass-measure free energy_ of constituent \(\alpha\):
\[\psi_{\alpha}:=\epsilon_{\alpha}-\theta\eta_{\alpha}, \tag{34}\]
where \(\theta\) is the temperature. We restrict to isothermal mixtures and thus all constituents have the same constant temperature \(\theta=\theta_{\alpha}\), \(\alpha=1,\ldots,N\). We now substitute (32) and (34) into (33) and arrive at:
\[\sum_{\alpha}\partial_{t}(\tilde{\rho}_{\alpha}\left(\epsilon_{\alpha}-\psi_ {\alpha}\right))+\operatorname{div}\left(\tilde{\rho}_{\alpha}\left(\epsilon_{ \alpha}-\psi_{\alpha}\right)\mathbf{v}_{\alpha}\right)+\operatorname{div} \left(\theta\mathbf{\Phi}_{\alpha}\right)-\tilde{\rho}_{\alpha}s_{\alpha} \theta\ \geq 0. \tag{35}\]
We insert the balance of energy (21) into (35) to arrive at:
\[\sum_{\alpha}-\partial_{t}\left(\tilde{\rho}_{\alpha}\psi_{ \alpha}\right)-\operatorname{div}\left(\tilde{\rho}_{\alpha}\psi_{\alpha} \mathbf{v}_{\alpha}\right)+\mathbf{T}_{\alpha}:\nabla\mathbf{v}_{\alpha}+ \operatorname{div}\left(\theta\mathbf{\Phi}_{\alpha}-\mathbf{q}_{\alpha}\right)\] \[+\tilde{\rho}_{\alpha}\left(r_{\alpha}-\theta s_{\alpha}\right)- \boldsymbol{\pi}_{\alpha}\cdot\mathbf{v}_{\alpha}+\gamma_{\alpha}\|\mathbf{v} _{\alpha}\|^{2}/2\ \geq 0, \tag{36}\]
where the energy interaction term cancels because of (27d). In the final step we invoke the mass balance equation (17a) to find:
\[\sum_{\alpha}\tilde{\rho}_{\alpha}\grave{\psi}_{\alpha}-\mathbf{T}_ {\alpha}:\nabla\mathbf{v}_{\alpha}+\operatorname{div}\left(\mathbf{q}_{\alpha }-\theta\mathbf{\Phi}_{\alpha}\right)\] \[+\tilde{\rho}_{\alpha}\left(\theta s_{\alpha}-r_{\alpha}\right)+ \boldsymbol{\pi}_{\alpha}\cdot\mathbf{v}_{\alpha}-\gamma_{\alpha}\|\mathbf{v }_{\alpha}\|^{2}/2+\gamma_{\alpha}\psi_{\alpha}\ \leq 0. \tag{37}\]
This form of the second law provides the basis for the constitutive modeling.
Lastly, we remark that the second law may be written in an energy-dissipative form (given \(r_{\alpha}=\theta s_{\alpha}\)).
**Proposition 3.1** (Energy-dissipation).: _The second law may be written as the energy-dissipation statement:_
\[\sum_{\alpha}\left(\partial_{t}\mathscr{E}_{\alpha}+\operatorname{div}\left( \mathscr{E}_{\alpha}\mathbf{v}_{\alpha}\right)-\operatorname{div}\left( \mathbf{T}_{\alpha}\mathbf{v}_{\alpha}-\mathbf{q}_{\alpha}+\theta\mathbf{\Phi }_{\alpha}\right)\right)\leq 0, \tag{38}\]
_with \(\mathscr{E}_{\alpha}=\mathscr{K}_{\alpha}+\mathscr{G}_{\alpha}+\tilde{\rho}_ {\alpha}\psi_{\alpha}\), and where we have set \(r_{\alpha}=\theta s_{\alpha}\)._
Proof.: Using the constituent mass equation (17a), the second law (37) may be written as:
\[\sum_{\alpha}\left[\partial_{t}(\tilde{\rho}_{\alpha}\psi_{\alpha})+ \operatorname{div}(\tilde{\rho}_{\alpha}\psi_{\alpha}\mathbf{v}_{\alpha})- \mathbf{T}_{\alpha}:\nabla\mathbf{v}_{\alpha}+\operatorname{div}\left( \mathbf{q}_{\alpha}-\theta\mathbf{\Phi}_{\alpha}\right)\right.\]
\[\left.+\boldsymbol{\pi}_{\alpha}\cdot\mathbf{v}_{\alpha}-e_{\alpha}-\gamma_{ \alpha}\|\mathbf{v}_{\alpha}\|^{2}/2\right]\leq 0. \tag{39}\]
Adding (19) and (20) to the condition (39) provides the result.
### Constitutive modeling restriction
We specify the modeling restriction (37) to a particular set of constitutive constituent classes for the stress \(\mathbf{T}_{\alpha}\), free energy \(\psi_{\alpha}\), entropy flux \(\mathbf{\Phi}_{\alpha}\), momentum supply \(\boldsymbol{\pi}_{\alpha}\), and mass supply \(\gamma_{\alpha}\). We introduce the constitutive free energy class:
\[\hat{\psi}_{\alpha}=\hat{\psi}_{\alpha}(\phi_{\alpha},\nabla\phi_{\alpha}, \mathbf{D}_{\alpha}), \tag{40}\]
and postpone the specification of the other constitutive classes. Here \(\mathbf{D}_{\alpha}\) is the symmetric velocity gradient of constituent \(\alpha\).
In the following we examine the constitutive modeling restriction (37) for this specific set of constitutive classes. Substitution of the constitutive classes (40) into (37) and expanding the peculiar derivative of the free energy provides:
\[\sum_{\alpha}\tilde{\rho}_{\alpha}\left(\frac{\partial\hat{\psi} _{\alpha}}{\partial\phi_{\alpha}}\grave{\phi}_{\alpha}+\frac{\partial\hat{\psi} _{\alpha}}{\partial\nabla\phi_{\alpha}}\cdot\grave{(\nabla\phi_{\alpha})}+ \frac{\partial\hat{\psi}_{\alpha}}{\partial\mathbf{D}_{\alpha}}:\grave{\mathbf{D}}_{\alpha} \right)-\hat{\mathbf{T}}_{\alpha}:\nabla\mathbf{v}_{\alpha}\] \[+\mathrm{div}\left(\mathbf{q}_{\alpha}-\theta\hat{\mathbf{\Phi}}_ {\alpha}\right)+\tilde{\rho}_{\alpha}\left(\theta s_{\alpha}-r_{\alpha}\right)\] \[+\boldsymbol{\pi}_{\alpha}\cdot\mathbf{v}_{\alpha}-\gamma_{ \alpha}\|\mathbf{v}_{\alpha}\|^{2}/2+\gamma_{\alpha}\psi_{\alpha}\ \leq 0. \tag{41}\]
The arbitrariness of the peculiar time derivative \(\grave{\mathbf{D}}_{\alpha}\) precludes dependence of \(\psi_{\alpha}\) on \(\mathbf{D}_{\alpha}\). Thus, the free energy class reduces to:
\[\hat{\psi}_{\alpha}=\hat{\psi}_{\alpha}(\phi_{\alpha},\nabla\phi_{\alpha}), \tag{42}\]
and the last member in the first brackets is eliminated.
Next we focus on the first term in the sum in (41) and introduce the constituent quantity:
\[\chi_{\alpha}=\phi_{\alpha}\frac{\partial\hat{\psi}_{\alpha}}{\partial\phi_{ \alpha}}-\mathrm{div}\left(\phi_{\alpha}\frac{\partial\hat{\psi}_{\alpha}}{ \partial\nabla\phi_{\alpha}}\right). \tag{43}\]
**Lemma 3.2** (Identity peculiar derivative free energy).: _We have the identity:_
\[\tilde{\rho}_{\alpha}\left(\frac{\partial\hat{\psi}_{\alpha}}{ \partial\phi_{\alpha}}\grave{\phi}_{\alpha}+\frac{\partial\hat{\psi}_{\alpha}}{ \partial\nabla\phi_{\alpha}}\cdot\grave{(\nabla\phi_{\alpha})}\right)= -\tilde{\rho}_{\alpha}\left(\chi_{\alpha}\mathrm{div}\mathbf{v} _{\alpha}+\left(\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{\alpha}}{ \partial\nabla\phi_{\alpha}}\right):\nabla\mathbf{v}_{\alpha}\right)\] \[-\mathrm{div}\left(\tilde{\rho}_{\alpha}\frac{\partial\hat{\psi} _{\alpha}}{\partial\nabla\phi_{\alpha}}\left(\phi_{\alpha}\mathrm{div} \mathbf{v}_{\alpha}\right)\right)\] \[+\gamma_{\alpha}\chi_{\alpha}+\mathrm{div}\left(\gamma_{\alpha} \phi_{\alpha}\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}} \right). \tag{44}\]
Proof.: Noting the identity
\[\grave{(\nabla\phi_{\alpha})}=\nabla\grave{\phi}_{\alpha}-( \nabla\phi_{\alpha})^{T}\nabla\mathbf{v}_{\alpha}, \tag{45}\]
we can deduce:
\[\tilde{\rho}_{\alpha}\frac{\partial\hat{\psi}_{\alpha}}{\partial \nabla\phi_{\alpha}}\cdot\grave{(\nabla\phi_{\alpha})} =\operatorname{div}\left(\tilde{\rho}_{\alpha}\frac{\partial\hat{ \psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\grave{\phi}_{\alpha}\right)-\grave{ \phi}_{\alpha}\text{div}\left(\tilde{\rho}_{\alpha}\frac{\partial\hat{\psi}_{ \alpha}}{\partial\nabla\phi_{\alpha}}\right)\] \[\quad-\tilde{\rho}_{\alpha}\left(\nabla\phi_{\alpha}\otimes\frac{ \partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\right):\nabla\mathbf{v}_ {\alpha}. \tag{46}\]
By substituting the mass balance equation (17a) into (46) we deduce:
\[\tilde{\rho}_{\alpha}\frac{\partial\hat{\psi}_{\alpha}}{\partial \nabla\phi_{\alpha}}\cdot\grave{(\nabla\phi_{\alpha})} = -\operatorname{div}\left(\tilde{\rho}_{\alpha}\frac{\partial\hat{ \psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\left(\phi_{\alpha}\text{div} \mathbf{v}_{\alpha}-\rho_{\alpha}^{-1}\gamma_{\alpha}\right)\right) \tag{47}\] \[+\left(\phi_{\alpha}\text{div}\mathbf{v}_{\alpha}-\rho_{\alpha}^{ -1}\gamma_{\alpha}\right)\operatorname{div}\left(\tilde{\rho}_{\alpha}\frac{ \partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\right)\] \[-\left(\tilde{\rho}_{\alpha}\nabla\phi_{\alpha}\otimes\frac{ \partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\right):\nabla \mathbf{v}_{\alpha}.\]
As a result the first term in (41) may be written as:
\[\tilde{\rho}_{\alpha}\left(\frac{\partial\hat{\psi}_{\alpha}}{ \partial\phi_{\alpha}}\grave{\phi}_{\alpha}+\frac{\partial\hat{\psi}_{\alpha}}{ \partial\nabla\phi_{\alpha}}\cdot\grave{(\nabla\phi_{\alpha})}\right)=\] \[-\tilde{\rho}_{\alpha}\left(\frac{\partial\hat{\psi}_{\alpha}}{ \partial\phi_{\alpha}}\left(\phi_{\alpha}\text{div}\mathbf{v}_{\alpha}-\rho_{ \alpha}^{-1}\gamma_{\alpha}\right)\right)\] \[-\operatorname{div}\left(\tilde{\rho}_{\alpha}\frac{\partial\hat {\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\left(\phi_{\alpha}\text{div} \mathbf{v}_{\alpha}-\rho_{\alpha}^{-1}\gamma_{\alpha}\right)\right)\] \[+\left(\tilde{\rho}_{\alpha}\text{div}\mathbf{v}_{\alpha}-\gamma_ {\alpha}\right)\operatorname{div}\left(\phi_{\alpha}\frac{\partial\hat{\psi}_ {\alpha}}{\partial\nabla\phi_{\alpha}}\right)-\left(\tilde{\rho}_{\alpha} \nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla \phi_{\alpha}}\right):\nabla\mathbf{v}_{\alpha}. \tag{48}\]
Substituting (43) into (48) completes the proof.
Substitution of Lemma 3.2 into the second law (41) provides:
\[\sum_{\alpha}-\left(\pi_{\alpha}\mathbf{I}+\tilde{\rho}_{\alpha} \nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla \phi_{\alpha}}+\hat{\mathbf{T}}_{\alpha}\right):\nabla\mathbf{v}_{\alpha}\] \[+\text{div}\left(\mathbf{q}_{\alpha}-\theta\hat{\mathbf{\Phi}}_{ \alpha}-\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\phi_{ \alpha}\left(\tilde{\rho}_{\alpha}\text{div}\mathbf{v}_{\alpha}-\gamma_{ \alpha}\right)\right)\] \[+\tilde{\rho}_{\alpha}\left(\theta s_{\alpha}-r_{\alpha}\right)+ \left(\boldsymbol{\pi}_{\alpha}-\gamma_{\alpha}\mathbf{v}_{\alpha}/2\right) \cdot\mathbf{v}_{\alpha}+\gamma_{\alpha}\left(\psi_{\alpha}+\chi_{\alpha} \right)\ \leq 0, \tag{49}\]
where we have introduced \(\pi_{\alpha}:=\tilde{\rho}_{\alpha}\chi_{\alpha}\).
At this point we remark that (49) is degenerate because of the dependency of the various members in the superposition. Namely, the terms containing \(\nabla\mathbf{v}_{\alpha}\) and \(\mathbf{v}_{\alpha}\) are connected via the mass balance (17a). To exploit the degeneracy, we introduce a scalar Lagrange multiplier \(p\geq 0\) representing the _mixture mechanical pressure_. Summation of (17a) over the constituents provides:
\[0 =p\sum_{\alpha}\left(\grave{\phi}_{\alpha}+\phi_{\alpha}\mathrm{div} \mathbf{v}_{\alpha}-\rho_{\alpha}^{-1}\gamma_{\alpha}\right)\] \[=p\sum_{\alpha}\left(\mathbf{v}_{\alpha}\cdot\nabla\phi_{\alpha}+\phi_ {\alpha}\mathrm{div}\mathbf{v}_{\alpha}-\rho_{\alpha}^{-1}\gamma_{\alpha}\right), \tag{50}\]
where we recall the postulate of no excess volume (7). Inserting relation (50) into (49) provides the requirement:
\[\sum_{\alpha}-\left((\pi_{\alpha}+p\phi_{\alpha})\mathbf{I}+\tilde {\rho}_{\alpha}\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{\alpha}}{ \partial\nabla\phi_{\alpha}}+\hat{\mathbf{T}}_{\alpha}\right):\nabla\mathbf{v} _{\alpha}\] \[+\mathrm{div}\left(\mathbf{q}_{\alpha}-\theta\hat{\mathbf{\Phi}}_ {\alpha}-\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\phi _{\alpha}\left(\tilde{\rho}_{\alpha}\mathrm{div}\mathbf{v}_{\alpha}-\gamma_{ \alpha}\right)\right)+\tilde{\rho}_{\alpha}\left(\theta s_{\alpha}-r_{\alpha}\right)\] \[\qquad\qquad+(\boldsymbol{\pi}_{\alpha}-\gamma_{\alpha}\mathbf{v} _{\alpha}/2-p\nabla\phi_{\alpha})\cdot\mathbf{v}_{\alpha}+\gamma_{\alpha} \left(\hat{\psi}_{\alpha}+\chi_{\alpha}+\rho_{\alpha}^{-1}p\right)\ \leq 0. \tag{51}\]
The term \(\mathfrak{p}_{\alpha}:=\pi_{\alpha}+p\phi_{\alpha}\) represents a generalized form of the constituent pressure in the incompressible mixture. It consists of the constituent mechanical pressure \(p\phi_{\alpha}\) and the constituent thermodynamical pressure \(\pi_{\alpha}\). The latter may be written in a form closely related to the classical thermodynamical pressure:
\[\pi_{\alpha} =\tilde{\rho}_{\alpha}^{2}\upsilon_{\alpha}, \tag{52a}\] \[\upsilon_{\alpha} :=\frac{\partial\hat{\psi}_{\alpha}}{\partial\tilde{\rho}_{ \alpha}}-\frac{1}{\tilde{\rho}_{\alpha}}\mathrm{div}\left(\tilde{\rho}_{ \alpha}\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\tilde{\rho}_{\alpha} }\right). \tag{52b}\]
Thus \(\pi_{\alpha}\) represents the thermodynamical pressure for the free energy constituent class (42), where \(\upsilon_{\alpha}\) is a generalized derivative of the free energy.
We now introduce the volumetric Helmholtz free energy \(\hat{\Psi}_{\alpha}:=\tilde{\rho}_{\alpha}\hat{\psi}_{\alpha}\). Given the constituent class of \(\hat{\psi}_{\alpha}\) (equation (42)), we identify the volumetric Helmholtz free energy class:
\[\hat{\Psi}_{\alpha}=\hat{\Psi}_{\alpha}(\phi_{\alpha},\nabla\phi_{\alpha})= \tilde{\rho}_{\alpha}\hat{\psi}_{\alpha}(\phi_{\alpha},\nabla\phi_{\alpha})= \rho_{\alpha}\phi_{\alpha}\hat{\psi}_{\alpha}(\phi_{\alpha},\nabla\phi_{ \alpha}). \tag{53}\]
The constituent thermodynamical pressure \(\pi_{\alpha}\) may be written in terms of the volume-measure free energy \(\hat{\Psi}_{\alpha}\):
\[\pi_{\alpha} =\phi_{\alpha}\mu_{\alpha}-\hat{\Psi}_{\alpha} \tag{54a}\] \[\mu_{\alpha} :=\frac{\partial\hat{\Psi}_{\alpha}}{\partial\phi_{\alpha}}- \mathrm{div}\left(\frac{\partial\hat{\Psi}_{\alpha}}{\partial\nabla\phi_{ \alpha}}\right), \tag{54b}\]
where \(\mu_{\alpha}\) is the chemical potential variable associated with the volume-measure free energy \(\hat{\Psi}_{\alpha}\). The volume-measure based chemical potential \(\mu_{\alpha}\) may be expressed in terms of the mass-measure based chemical potential \(\tau_{\alpha}\) via:
\[\mu_{\alpha} =\rho_{\alpha}\left(\phi_{\alpha}\tau_{\alpha}+\hat{\psi}_{\alpha} -\nabla\phi_{\alpha}\cdot\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi _{\alpha}}\right), \tag{55a}\] \[\tau_{\alpha} =\frac{\partial\hat{\psi}_{\alpha}}{\partial\phi_{\alpha}}- \operatorname{div}\left(\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi _{\alpha}}\right). \tag{55b}\]
**Remark 3.3** (Dalton's law).: _The mechanical pressure obeys Dalton's law. Namely, the constituent mechanical pressure \(p\phi_{\alpha}\) is the product of the mixture mechanical pressure \(p\) and the constituent volume fraction \(\phi_{\alpha}\). Additionally, according to the axiom (7), the sum of the constituent mechanical pressures is the mixture mechanical pressure \(p\)._
**Remark 3.4** (Incompressibility constraint).: _The introduction of the mixture mechanical pressure is connected with an incompressibility constraint in absence of mass transfer (i.e. \(\gamma_{\alpha}=0\)). Namely, by introducing the mean velocity_
\[\mathbf{u}:=\sum_{\alpha}\phi_{\alpha}\mathbf{v}_{\alpha}, \tag{56}\]
(50) _takes the form:_
\[p\mathrm{div}\mathbf{u}=p\sum_{\alpha}\mathrm{div}(\phi_{\alpha}\mathbf{v}_{ \alpha})=p\sum_{\alpha}\mathbf{v}_{\alpha}\cdot\nabla\phi_{\alpha}+\phi_{ \alpha}\mathrm{div}\mathbf{v}_{\alpha}=0, \tag{57}\]
_provided \(\gamma_{\alpha}=0\). The mean velocity \(\mathbf{u}\) is known as the volume-averaged velocity, which is a divergence-free field in absence of mass transfer. This observation has been employed in the formulation of reduced (approximate) quasi-incompressible Navier-Stokes Cahn-Hilliard models [6, 9, 1, 11] with an incompressible velocity field._
Based on the condition (51), we restrict to the following constitutive constituent classes for the stress \(\mathbf{T}_{\alpha}\), entropy flux \(\mathbf{\Phi}_{\alpha}\), entropy supply \(s_{\alpha}\), mass supply \(\gamma_{\alpha}\), and momentum supply \(\boldsymbol{\pi}_{\alpha}\):
\[\hat{\mathbf{\Phi}}_{\alpha} =\hat{\mathbf{\Phi}}_{\alpha}\left(\phi_{\alpha},\nabla\phi_{ \alpha},\mathrm{div}\mathbf{v}_{\alpha},\mathbf{q}_{\alpha},\gamma_{\alpha} \right), \tag{58a}\] \[\hat{s}_{\alpha} =\hat{s}_{\alpha}\left(r_{\alpha}\right),\] (58b) \[\hat{\mathbf{T}}_{\alpha} =\hat{\mathbf{T}}_{\alpha}(\phi_{\alpha},\nabla\phi_{\alpha}, \mathbf{D}_{\alpha},\pi_{\alpha},p),\] (58c) \[\hat{\gamma}_{\alpha} =\hat{\gamma}_{\alpha}\left(\phi_{\alpha},\nabla\phi_{\alpha},p, \left\{\psi_{\beta}\right\}_{\beta=1,\ldots,N},\left\{\mu_{\beta}\right\}_{ \beta=1,\ldots,N}\right),\] (58d) \[\hat{\boldsymbol{\pi}}_{\alpha} =\hat{\boldsymbol{\pi}}_{\alpha}\left(\phi_{\alpha},\nabla\phi_{ \alpha},\left\{\mathbf{v}_{\beta}\right\}_{\beta=1,\ldots,N},\left\{\gamma_{ \beta}\right\}_{\beta=1,\ldots,N}\right), \tag{58e}\]
where in (58d) and (58e) the dependence on the sets over all constituents is a consequence of the axioms (27a) and (27b).
### Selection of constitutive models
We are now in the position to pose thermodynamically consistent relations for the constitutive classes (58).
_Entropy flux_. By demanding that the divergence term vanish, we identify the entropy flux of constituent \(\alpha\) as:
\[\hat{\mathbf{\Phi}}_{\alpha}\equiv\frac{\mathbf{q}_{\alpha}}{\theta}-\frac{1}{ \theta}\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\phi_{ \alpha}\left(\tilde{\rho}_{\alpha}\mathrm{div}\mathbf{v}_{\alpha}-\hat{ \gamma}_{\alpha}\right). \tag{59}\]
The first member in the entropy flux is the constituent version of the classical term that appears in single constituent models. On the other hand, the second member in the entropy flux is the incompressible counterpart, augmented with mass transfer, of the so-called _extra entropy flux_.
_Entropy supply_. By requiring the member \(\tilde{\rho}_{\alpha}\left(\theta s_{\alpha}-r_{\alpha}\right)\) in (51) to vanish, we identify the constituent entropy supply density as:
\[s_{\alpha}\equiv\frac{r_{\alpha}}{\theta}. \tag{60}\]
_Stress tensor_. To preclude that variations of the velocity gradient \(\nabla\mathbf{v}_{\alpha}\) cause a violation of the second law (51) we insist:
\[-\left((\tilde{\rho}_{\alpha}\chi_{\alpha}+p\phi_{\alpha})\mathbf{I}+\tilde{ \rho}_{\alpha}\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{\alpha}}{ \partial\nabla\phi_{\alpha}}+\hat{\mathbf{T}}_{\alpha}\right):\nabla\mathbf{v} _{\alpha}\leq 0. \tag{61}\]
We select the following constitutive model for the stress tensor that is compatible with (61):
\[\hat{\mathbf{T}}_{\alpha}=\tilde{\nu}_{\alpha}\left(2\mathbf{D}_{\alpha}+ \lambda_{\alpha}(\mathrm{div}\mathbf{v}_{\alpha})\mathbf{I}\right)-(\pi_{ \alpha}+p\phi_{\alpha})\mathbf{I}-\tilde{\rho}_{\alpha}\nabla\phi_{\alpha} \otimes\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}, \tag{62}\]
where \(\tilde{\nu}_{\alpha}=\nu_{\alpha}\phi_{\alpha}\geq 0\) is a dynamic viscosity, and \(\lambda_{\alpha}\geq-2/d\).
**Lemma 3.5** (Compatibility stress tensor).: _The choice (62) is compatible with the thermodynamical restriction (61)._
Proof.: This is a standard result. In this particular case (61) takes the form:
\[-2\tilde{\nu}_{\alpha}\left(\mathbf{D}_{\alpha}-\frac{1}{d}(\mathrm{div} \mathbf{v}_{\alpha})\mathbf{I}\right):\left(\mathbf{D}_{\alpha}-\frac{1}{d}( \mathrm{div}\mathbf{v}_{\alpha})\mathbf{I}\right)-\tilde{\nu}_{\alpha}\left( \lambda_{\alpha}+\frac{2}{d}\right)(\mathrm{div}\mathbf{v}_{\alpha})^{2}\leq 0. \tag{63}\]
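The algebraic step behind (63) — splitting \(\mathbf{D}_{\alpha}\) into its deviatoric and spherical parts — can be verified with a few lines of Python (random symmetric tensor, arbitrary admissible parameter values):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 3
A = rng.normal(size=(d, d))
D = 0.5*(A + A.T)                        # symmetric velocity gradient D_alpha
nu_t, lam = 0.7, -0.1                    # nu_tilde_alpha >= 0, lambda_alpha >= -2/d

divv = np.trace(D)
dev = D - (divv/d)*np.eye(d)             # deviatoric part of D_alpha

lhs = 2*nu_t*np.sum(D*D) + nu_t*lam*divv**2              # viscous part of T_alpha : grad v_alpha
rhs = 2*nu_t*np.sum(dev*dev) + nu_t*(lam + 2/d)*divv**2  # right-hand side of (63), up to sign
assert np.isclose(lhs, rhs)
assert rhs >= 0.0                        # the viscous dissipation is non-negative
```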
**Remark 3.6** (General form stress tensor).: _The requirement (61) implies the general form:_
\[\hat{\mathbf{T}}_{\alpha}=2\mathbf{K}_{\alpha}\mathbf{D}_{\alpha}-(\pi_{\alpha} +p\phi_{\alpha})\mathbf{I}-\tilde{\rho}_{\alpha}\nabla\phi_{\alpha}\otimes \frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}, \tag{64}\]
_where \(\mathbf{K}_{\alpha}=\mathbf{K}_{\alpha}(\phi_{\alpha},\nabla\phi_{\alpha}, \mathbf{D}_{\alpha})\) is a quantity that satisfies:_
\[\mathbf{D}_{\alpha}:\mathbf{K}_{\alpha}\mathbf{D}_{\alpha}\geq 0. \tag{65}\]
_This implication follows from a result concerning thermodynamical inequalities proved by Gurtin [15]._
_Mass transfer._ To rule out violations of (51) caused by the last term on the left-hand side, we impose the following requirement on the mass interaction terms:
\[\sum_{\alpha}\hat{\gamma}_{\alpha}\left(\psi_{\alpha}+\chi_{\alpha}+\rho_{ \alpha}^{-1}p\right)\ \leq 0. \tag{66}\]
This requirement differs from its compressible counterpart through the occurrence of the hydrodynamic pressure \(p\), see e.g. Morro [20]. We take the following model for the mass transfer:
\[\hat{\gamma}_{\alpha}= \ -\hat{m}_{\alpha}\left((\psi_{\alpha}-\psi_{N})+(\chi_{\alpha}- \chi_{N})+(\rho_{\alpha}^{-1}-\rho_{N}^{-1})p\right),\] \[\text{ for }\alpha=1,\ldots,N-1, \tag{67a}\] \[\hat{\gamma}_{N}= \ -\sum_{\alpha=1,\ldots,N-1}\hat{\gamma}_{\alpha}, \tag{67b}\]
for some non-negative constituent quantity \(\hat{m}_{\alpha}\geq 0\) that vanishes when \(\phi_{\alpha}=0,1\).
**Lemma 3.7** (Compatibility mass transfer).: _The choice (67) is compatible with the balance of mass supply (27a), and the thermodynamical restriction (66)._
Proof.: Invoking the identity (27a) written as (67b), the condition (66) is equivalent to:
\[\sum_{\alpha=1,\ldots,N-1}\hat{\gamma}_{\alpha}\left((\psi_{\alpha}-\psi_{N}) +(\chi_{\alpha}-\chi_{N})+(\rho_{\alpha}^{-1}-\rho_{N}^{-1})p\right)\leq 0. \tag{68}\]
The choice (67) causes each of the terms in the sum in (68) to be non-positive. Compatibility with (27a) follows from (67b).
On account of the identity:
\[\rho_{\alpha}\left(\psi_{\alpha}+\chi_{\alpha}\right)=\mu_{\alpha}, \tag{69}\]
the mass flux may be expressed in terms of the chemical potential \(\mu_{\alpha}\):
\[\hat{\gamma}_{\alpha}= \ -\hat{m}_{\alpha}\left(\frac{1}{\rho_{\alpha}}\left(\mu_{\alpha} +p\right)-\frac{1}{\rho_{N}}\left(\mu_{N}+p\right)\right),\quad\text{ for }\alpha=1,\ldots,N-1. \tag{70}\]
Furthermore, the mass flux may be written as:
\[\hat{\gamma}_{\alpha}= -\hat{m}_{\alpha}(g_{\alpha}-g_{N}),\qquad\text{ for }\alpha=1,\ldots,N-1. \tag{71}\]
where \(g_{\alpha}\) represents the Gibbs free energy of constituent \(\alpha\):
\[g_{\alpha}=\psi_{\alpha}+\frac{\mathfrak{p}_{\alpha}}{\tilde{ \rho}_{\alpha}}=\psi_{\alpha}+\chi_{\alpha}+\frac{p}{\rho_{\alpha}}, \tag{72}\]
and where we recall the total constituent pressure \(\mathfrak{p}_{\alpha}=\pi_{\alpha}+\phi_{\alpha}p\).
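The structure of the mass transfer model is easily exercised numerically. The sketch below (arbitrary Gibbs energies and mobilities, purely illustrative) confirms that the supplies (71) together with the closure (67b) sum to zero and satisfy the dissipation requirement (66):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 3
g = rng.normal(size=N)                      # constituent Gibbs energies g_alpha (arbitrary)
m_hat = rng.uniform(0.1, 1.0, size=N - 1)   # non-negative quantities m_hat_alpha

gamma = np.empty(N)
gamma[:-1] = -m_hat*(g[:-1] - g[-1])        # mass transfer model (71)
gamma[-1] = -gamma[:-1].sum()               # closure (67b)

assert np.isclose(gamma.sum(), 0.0)         # balance condition (27a)
assert (gamma*g).sum() <= 1e-12             # dissipation requirement (66)/(68)
```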
_Momentum transfer._ To avoid a violation of (51) resulting from momentum transfer, we demand:
\[\sum_{\alpha}\mathbf{v}_{\alpha}\cdot\left(\mathbf{\pi}_{\alpha}- \hat{\gamma}_{\alpha}\mathbf{v}_{\alpha}/2-p\nabla\phi_{\alpha}\right)\ \leq 0. \tag{73}\]
We select the momentum transfer model:
\[\mathbf{\pi}_{\alpha}=p\nabla\phi_{\alpha}+\sum_{\beta}R_{\alpha\beta }(\mathbf{w}_{\beta}-\mathbf{w}_{\alpha})+\mathbf{\beta}_{\alpha}, \tag{74}\]
where
\[\mathbf{\beta}_{\alpha} = \frac{1}{2}\hat{\gamma}_{\alpha}\left(\mathbf{w}_{\alpha}+ \mathbf{w}_{N}+2\mathbf{v}\right),\quad\text{ for }\alpha=1,\ldots,N-1, \tag{75a}\] \[\mathbf{\beta}_{N} = -\sum_{\alpha=1,\ldots,N-1}\mathbf{\beta}_{\alpha}. \tag{75b}\]
Furthermore, \(R_{\alpha\beta}\) is a symmetric non-negative matrix of the form:
\[R_{\alpha\beta}=\frac{p\phi_{\alpha}\phi_{\beta}}{D_{\alpha\beta }}\geq 0, \tag{76}\]
with \(D_{\alpha\beta}\geq 0\) a symmetric diffusion coefficient.
**Lemma 3.8** (Compatibility momentum transfer).: _The momentum transfer model (74) is compatible with the balance of momentum supply (27b), and the thermodynamical restriction (73)._
Proof.: Compatibility with (27b) is a consequence of (74), the symmetry of \(R_{\alpha\beta}\), and the definition (75). Next, recalling the axiom of constant volume (7), the axioms of balance of mixture mass and momentum (27a)-(27b), the condition (73) is equivalent to:
\[\sum_{\alpha}\mathbf{w}_{\alpha}\cdot\left(\mathbf{\pi}_{\alpha}-p \nabla\phi_{\alpha}-\hat{\gamma}_{\alpha}\left(\frac{1}{2}\mathbf{w}_{\alpha }+\mathbf{v}\right)\right)\leq 0. \tag{77}\]
Substitution of (74) into (77) provides the requirement:
\[\sum_{\alpha,\beta}R_{\alpha\beta}\mathbf{w}_{\alpha}\cdot(\mathbf{ w}_{\beta}-\mathbf{w}_{\alpha})+\sum_{\alpha}\mathbf{w}_{\alpha}\cdot\left( \boldsymbol{\beta}_{\alpha}-\hat{\gamma}_{\alpha}\left(\frac{1}{2}\mathbf{w}_{ \alpha}+\mathbf{v}\right)\right)\leq 0. \tag{78}\]
The first term is non-positive as a consequence of the identity:
\[\sum_{\alpha,\beta}R_{\alpha\beta}(\mathbf{w}_{\beta}-\mathbf{w} _{\alpha})\cdot\mathbf{w}_{\alpha}=-\frac{1}{2}\sum_{\alpha,\beta}R_{\alpha \beta}\|\mathbf{w}_{\alpha}-\mathbf{w}_{\beta}\|^{2}. \tag{79}\]
Taking the second term in isolation, splitting the summation provides:
\[\sum_{\alpha}\mathbf{w}_{\alpha}\cdot\left(\boldsymbol{\beta}_{ \alpha}-\hat{\gamma}_{\alpha}\left(\frac{1}{2}\mathbf{w}_{\alpha}+\mathbf{v} \right)\right)=\] \[\qquad\qquad\qquad\sum_{\alpha=1,\ldots N-1}\mathbf{w}_{\alpha} \cdot\left(\boldsymbol{\beta}_{\alpha}-\hat{\gamma}_{\alpha}\left(\frac{1}{2} \mathbf{w}_{\alpha}+\mathbf{v}\right)\right)\] \[\qquad\qquad+\mathbf{w}_{N}\cdot\left(\boldsymbol{\beta}_{N}- \hat{\gamma}_{N}\left(\frac{1}{2}\mathbf{w}_{N}+\mathbf{v}\right)\right). \tag{80}\]
We substitute the identities (67b) and (75b) arrive at:
\[\sum_{\alpha}\mathbf{w}_{\alpha}\cdot\left(\boldsymbol{\beta}_{ \alpha}-\hat{\gamma}_{\alpha}\left(\frac{1}{2}\mathbf{w}_{\alpha}+\mathbf{v} \right)\right)=\] \[\qquad\qquad\sum_{\alpha=1,\ldots,N-1}(\mathbf{w}_{\alpha}- \mathbf{w}_{N})\cdot\left(\boldsymbol{\beta}_{\alpha}-\frac{1}{2}\hat{\gamma} _{\alpha}\left(\mathbf{w}_{\alpha}+\mathbf{w}_{N}\right)-\hat{\gamma}_{\alpha }\mathbf{v}\right). \tag{81}\]
Inserting the definition (75a) causes the term to vanish.
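The key identity (79) used in this proof holds for any symmetric matrix \(R_{\alpha\beta}\); the short check below uses random data:

```python
import numpy as np

rng = np.random.default_rng(4)
N, d = 4, 3
R = rng.uniform(0.0, 1.0, size=(N, N))
R = 0.5*(R + R.T)                           # symmetric matrix with non-negative entries
w = rng.normal(size=(N, d))                 # peculiar velocities

lhs = sum(R[a, b]*(w[b] - w[a]).dot(w[a]) for a in range(N) for b in range(N))
rhs = -0.5*sum(R[a, b]*np.sum((w[a] - w[b])**2) for a in range(N) for b in range(N))
assert np.isclose(lhs, rhs)                 # identity (79): the drag term dissipates
```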
**Remark 3.9** (Stefan-Maxwell model).: _The second member in (74) represents an isothermal Stefan-Maxwell model [34]. The term \(p\phi_{\alpha}\phi_{\beta}\) is proportional to the frequency of collisions between \(\alpha\) and \(\beta\). This makes intuitive sense in the way that the force that is exerted by constituent \(\beta\) on constituent \(\alpha\) scales with the frequency of collisions between the two constituents. Provided mass transfer is absent (\(\hat{\gamma}_{\alpha}=0\)), the momentum transfer vanishes if and only if:_
\[\nabla\phi_{\alpha}+\sum_{\beta}\frac{\phi_{\alpha}\phi_{\beta}}{ D_{\alpha\beta}}(\mathbf{v}_{\beta}-\mathbf{v}_{\alpha})=0. \tag{82}\]
_The equations (82) represent the well-known Stefan-Maxwell equations that describe an equilibrium situation. The first term in (82) represents the diffusion driving force for constituent \(\alpha\), whereas the second term is the drag force on constituent \(\alpha\) that resists the diffusion. As such \(D_{\alpha\beta}\) can be interpreted as an inverse drag coefficient, and is referred to as the Stefan-Maxwell diffusivity._
This concludes the Coleman-Noll procedure. We have now obtained the _incompressible multi-constituent model_ that is consistent with the second law of mixture theory:
\[\partial_{t}\tilde{\rho}_{\alpha}+\mathrm{div}(\tilde{\rho}_{ \alpha}\mathbf{v}_{\alpha})-\hat{\gamma}_{\alpha}=\ 0, \tag{83a}\] \[\partial_{t}(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha})+\mathrm{ div}\left(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha}\otimes\mathbf{v}_{\alpha} \right)+\phi_{\alpha}\nabla p\] \[-\mathrm{div}\left(\tilde{\nu}_{\alpha}\left(2\mathbf{D}_{\alpha }+\lambda_{\alpha}(\mathrm{div}\mathbf{v}_{\alpha})\mathbf{I}\right)\right)\] \[+\nabla\pi_{\alpha}+\mathrm{div}\left(\tilde{\rho}_{\alpha} \nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla \phi_{\alpha}}\right)-\tilde{\rho}_{\alpha}\mathbf{b}\] \[-\sum_{\beta}\frac{p\phi_{\alpha}\phi_{\beta}}{D_{\alpha\beta}}( \mathbf{v}_{\beta}-\mathbf{v}_{\alpha})-\boldsymbol{\beta}_{\alpha}=\ 0, \tag{83b}\]
for \(\alpha=1,...,N\) where \(\hat{\gamma}_{\alpha}\) and \(\boldsymbol{\beta}_{\alpha}\) are given in (67) and (75), respectively.
We now discuss some properties of the model. First we explicitly state the compatibility with the second law.
**Theorem 3.10** (Compatibility second law).: _The model (83) is compatible with the second law of thermodynamics (33)._
Proof.: This follows from the form of the second law (51) and Lemma 3.5, Lemma 3.7, and Lemma 3.8. In particular, inserting (59), (60), (62), (67) and (74) into (51) reveals that the second law is satisfied with
\[\theta\sum_{\alpha}\mathscr{P}_{\alpha}= \ \sum_{\alpha}2\tilde{\nu}_{\alpha}\left(\mathbf{D}_{\alpha}-\frac{1}{d}( \mathrm{div}\mathbf{v}_{\alpha})\mathbf{I}\right):\left(\mathbf{D}_{\alpha}-\frac{1}{d }(\mathrm{div}\mathbf{v}_{\alpha})\mathbf{I}\right)\] \[+\sum_{\alpha}\tilde{\nu}_{\alpha}\left(\lambda_{\alpha}+\frac{2 }{d}\right)\left(\mathrm{div}\mathbf{v}_{\alpha}\right)^{2}+\frac{1}{2}\sum_{ \alpha,\beta}R_{\alpha\beta}\|\mathbf{w}_{\alpha}-\mathbf{w}_{\beta}\|^{2}\] \[+\sum_{\alpha=1,...,N-1}\hat{m}_{\alpha}\left(g_{\alpha}-g_{N} \right)^{2}\geq 0. \tag{84}\]
We now note the reduction to the standard Navier-Stokes equations in the single fluid regime.
**Proposition 3.11** (Reduction to Navier-Stokes).: _The multi-constituent system (83) reduces to the standard incompressible Navier-Stokes equations in the single-constituent regime (\(\phi_{\alpha}=1\)):_
\[\partial_{t}(\rho_{\alpha}\mathbf{v}_{\alpha})+\mathrm{div}\left( \rho_{\alpha}\mathbf{v}_{\alpha}\otimes\mathbf{v}_{\alpha}\right)+\nabla p\] \[-\mathrm{div}\left(\nu_{\alpha}\left(2\mathbf{D}_{\alpha}+\lambda _{\alpha}(\mathrm{div}\mathbf{v}_{\alpha})\mathbf{I}\right)\right)-\rho_{\alpha}\mathbf{b} =\ 0, \tag{85a}\] \[\mathrm{div}\mathbf{v}_{\alpha} =\ 0, \tag{85b}\]
_with \(\rho_{\alpha}=\rho,\mathbf{v}_{\alpha}=\mathbf{v}\), and \(\mathbf{D}_{\alpha}=\mathbf{D}:=(\nabla\mathbf{v}+(\nabla\mathbf{v})^{T})/2\)._
We finalize this section with a more compact form of the mixture model.
**Lemma 3.12** (Compact form free energy contributions).: _The free energy contributions in the momentum equation may be expressed in the compact form:_
\[\phi_{\alpha}\nabla\mu_{\alpha}=\nabla\pi_{\alpha}+\operatorname{div}\left( \tilde{\rho}_{\alpha}\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{ \alpha}}{\partial\nabla\phi_{\alpha}}\right). \tag{86}\]
Proof.: Substituting (54) and subsequently expanding the derivatives yields:
\[\nabla\pi_{\alpha}+\operatorname{div}\left(\tilde{\rho}_{\alpha} \nabla\phi_{\alpha}\otimes\frac{\partial\hat{\psi}_{\alpha}}{\partial\nabla \phi_{\alpha}}\right)=\] \[\nabla\left(\phi_{\alpha}\mu_{\alpha}-\hat{\Psi}_{\alpha}\right) +\operatorname{div}\left(\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\Psi}_ {\alpha}}{\partial\nabla\phi_{\alpha}}\right)=\] \[\phi_{\alpha}\nabla\mu_{\alpha}+\nabla\phi_{\alpha}\frac{ \partial\hat{\Psi}_{\alpha}}{\partial\phi_{\alpha}}-\nabla\phi_{\alpha} \operatorname{div}\left(\frac{\partial\hat{\Psi}_{\alpha}}{\partial\nabla\phi_ {\alpha}}\right)-\nabla\hat{\Psi}_{\alpha}\] \[+\nabla\phi_{\alpha}\operatorname{div}\left(\frac{\partial\hat{ \Psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}\right)+\left(\mathbf{H}\phi_{ \alpha}\right)\frac{\partial\hat{\Psi}_{\alpha}}{\partial\nabla\phi_{\alpha}}=\] \[\phi_{\alpha}\nabla\mu_{\alpha}-\nabla\hat{\Psi}_{\alpha}+\nabla \phi_{\alpha}\frac{\partial\hat{\Psi}_{\alpha}}{\partial\phi_{\alpha}}+\left( \mathbf{H}\phi_{\alpha}\right)\frac{\partial\hat{\Psi}_{\alpha}}{\partial\nabla \phi_{\alpha}}, \tag{87}\]
where \(\mathbf{H}\phi_{\alpha}\) is the hessian of \(\phi_{\alpha}\). As a consequence of the volumetric Helmholtz free energy class (53), the latter three terms in the final expression in (87) vanish.
On account of Lemma 3.12, the multi-constituent model (83) takes the more compact form:
\[\partial_{t}\tilde{\rho}_{\alpha}+\operatorname{div}(\tilde{\rho }_{\alpha}\mathbf{v}_{\alpha})-\hat{\gamma}_{\alpha}= \ 0, \tag{88a}\] \[\partial_{t}(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha})+ \operatorname{div}\left(\tilde{\rho}_{\alpha}\mathbf{v}_{\alpha}\otimes \mathbf{v}_{\alpha}\right)+\phi_{\alpha}\nabla\left(p+\mu_{\alpha}\right)\] \[-\operatorname{div}\left(\tilde{\nu}_{\alpha}\left(2\mathbf{D}_{ \alpha}+\lambda_{\alpha}(\operatorname{div}\mathbf{v}_{\alpha})\mathbf{I}\right)\right)- \tilde{\rho}_{\alpha}\mathbf{b}\] \[-\sum_{\beta}\frac{p\phi_{\alpha}\phi_{\beta}}{D_{\alpha\beta}}( \mathbf{v}_{\beta}-\mathbf{v}_{\alpha})-\boldsymbol{\beta}_{\alpha}= \ 0, \tag{88b}\]
for \(\alpha=1,...,N\).
## 4 Diffuse-interface models
In this section we present diffuse-interface models. First, in Section 4.1 we introduce the Ginzburg-Landau free energy. Next, in Section 4.2 we provide the dimensionless form of the model. Finally, in Section 4.3 we discuss the equilibrium profile of the mixture model.
### Ginzburg-Landau free energy
Important classes of fluid mixture models arise when selecting the constituent Helmholtz free energy to be of Ginzburg-Landau type. We consider two different options: (I) a Ginzburg-Landau type volume-measure-based free energy, and (II) a Ginzburg-Landau type mass-measure-based free energy.
_Model I_. The Helmholtz volume-measure free energy is given by:
\[\hat{\Psi}_{\alpha}^{\rm I} = \frac{\sigma_{\alpha}}{\varepsilon_{\alpha}}W(\phi_{\alpha})+ \sigma_{\alpha}\varepsilon_{\alpha}\|\nabla\phi_{\alpha}\|^{2} \tag{89a}\] \[W(\phi_{\alpha}) = 2\phi_{\alpha}^{2}(1-\phi_{\alpha})^{2}, \tag{89b}\]
where \(W=W(\phi_{\alpha})\) represents a double-well potential, \(\varepsilon_{\alpha}\) are interface thickness variables, and \(\sigma_{\alpha}\) are quantities related to the surface energy density. We assume that \(\varepsilon_{\alpha}\) and \(\sigma_{\alpha}\) are constants. The chemical potential takes the form:
\[\mu_{\alpha}^{\rm I}=\frac{\sigma_{\alpha}}{\varepsilon_{\alpha}}W^{\prime}( \phi_{\alpha})-2\sigma_{\alpha}\varepsilon_{\alpha}\Delta\phi_{\alpha}, \tag{90}\]
Furthermore, the mass flux takes the form:
\[\hat{\gamma}_{\alpha}^{\rm I}= -\hat{m}_{\alpha}\left(\frac{\sigma_{\alpha}}{\rho_{\alpha} \varepsilon_{\alpha}}W^{\prime}(\phi_{\alpha})-\frac{\sigma_{N}}{\rho_{N} \varepsilon_{N}}W^{\prime}(\phi_{N})\right. \tag{91}\] \[\qquad\qquad\left.-2\frac{\sigma_{\alpha}}{\rho_{\alpha}} \varepsilon_{\alpha}\Delta\phi_{\alpha}+2\frac{\sigma_{N}}{\rho_{N}} \varepsilon_{N}\Delta\phi_{N}+\left(\frac{1}{\rho_{\alpha}}-\frac{1}{\rho_{N }}\right)p\right),\]
for \(\alpha=1,\ldots,N-1\) and (67b) for \(\alpha=N\).
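For illustration, a minimal finite-difference evaluation of the chemical potential (90) on a one-dimensional periodic grid might read as follows; the function name, grid and discretization are our own choices and not part of the model:

```python
import numpy as np

def mu_I(phi, dx, sigma, eps):
    """Chemical potential (90) for W(phi) = 2 phi^2 (1-phi)^2 on a periodic 1-D grid."""
    Wp = 4.0*phi*(1.0 - phi)*(1.0 - 2.0*phi)                    # W'(phi)
    lap = (np.roll(phi, -1) - 2.0*phi + np.roll(phi, 1))/dx**2  # discrete Laplacian
    return (sigma/eps)*Wp - 2.0*sigma*eps*lap
```

Applied to a well-resolved tanh-type profile such as (109) (with matching parameters), this returns values close to zero in the interior, consistent with the equilibrium analysis of Section 4.3.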
_Model II_. The Helmholtz mass-measure free energy reads:
\[\hat{\psi}_{\alpha}^{\rm II}=2\frac{\kappa_{\alpha}}{\varepsilon_{\alpha}}W( \phi_{\alpha})+2\kappa_{\alpha}\varepsilon_{\alpha}\|\nabla\phi_{\alpha}\|^{2}, \tag{92}\]
where \(W=W(\phi_{\alpha})\) is given in (89b). Also in this second model, the interface thickness variables \(\varepsilon_{\alpha}\) and surface energy density quantities \(\kappa_{\alpha}\) are assumed constant. The associated chemical potential takes the form:
\[\tau_{\alpha}^{\rm II}= 2\frac{\kappa_{\alpha}}{\varepsilon_{\alpha}}W^{\prime}(\phi_{ \alpha})-4\kappa_{\alpha}\varepsilon_{\alpha}\Delta\phi_{\alpha}, \tag{93}\]
The corresponding mass flux reads:
\[\hat{\gamma}_{\alpha}^{\rm II}= -\hat{m}_{\alpha}\left(2\phi_{\alpha}\frac{\kappa_{\alpha}}{ \varepsilon_{\alpha}}W^{\prime}(\phi_{\alpha})-2\phi_{N}\frac{\kappa_{N}}{ \varepsilon_{N}}W^{\prime}(\phi_{N})\right.\] \[\qquad\qquad\left.-4\kappa_{\alpha}\varepsilon_{\alpha}\phi_{ \alpha}\Delta\phi_{\alpha}+4\kappa_{N}\varepsilon_{N}\phi_{N}\Delta\phi_{N}\right.\] \[\qquad\qquad\left.+2\frac{\kappa_{\alpha}}{\varepsilon_{\alpha}}W (\phi_{\alpha})-2\frac{\kappa_{N}}{\varepsilon_{N}}W(\phi_{N})\right.\]
\[-2\kappa_{\alpha}\varepsilon_{\alpha}\|\nabla\phi_{\alpha}\|^{2}+2 \kappa_{N}\varepsilon_{N}\|\nabla\phi_{N}\|^{2}+\left(\frac{1}{\rho_{\alpha}}- \frac{1}{\rho_{N}}\right)p\right), \tag{94}\]
for \(\alpha=1,\ldots,N-1\) and (67b) for \(\alpha=N\).
Invoking relation (55), the corresponding volumetric free energy and associated chemical potential take the form:
\[\hat{\Psi}_{\alpha}^{\rm II} = 2\frac{\rho_{\alpha}\kappa_{\alpha}}{\varepsilon_{\alpha}}K( \phi_{\alpha})+2\rho_{\alpha}\kappa_{\alpha}\varepsilon_{\alpha}\phi_{\alpha} \|\nabla\phi_{\alpha}\|^{2}, \tag{95a}\] \[K(\phi_{\alpha}) = 2\phi_{\alpha}^{3}(1-\phi_{\alpha})^{2},\] (95b) \[\mu_{\alpha}^{\rm II} = \phi_{\alpha}\rho_{\alpha}\tau_{\alpha}^{\rm II}+\rho_{\alpha} \left(2\frac{\kappa_{\alpha}}{\varepsilon_{\alpha}}W(\phi_{\alpha})-2\kappa_ {\alpha}\varepsilon_{\alpha}\|\nabla\phi_{\alpha}\|^{2}\right). \tag{95c}\]
We visualize the potentials \(W=W(\phi_{\alpha})\) and \(K=K(\phi_{\alpha})\) in Figure 1. The potential \(W=W(\phi_{\alpha})\) admits the well-known symmetrical double-well shape, whereas \(K=K(\phi_{\alpha})\) is a non-symmetric double-well.
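A short symbolic check (using sympy; illustrative only, not part of the paper's derivation) confirms the relation \(K(\phi_{\alpha})=\phi_{\alpha}W(\phi_{\alpha})\) connecting (89b) and (95b), as well as the double-well structure of \(W\):

```python
import sympy as sp

phi = sp.symbols('phi')
W = 2*phi**2*(1 - phi)**2       # double-well potential (89b)
K = 2*phi**3*(1 - phi)**2       # non-symmetric double-well (95b)

assert sp.simplify(K - phi*W) == 0                               # K(phi) = phi W(phi)
assert W.subs(phi, 0) == 0 and W.subs(phi, 1) == 0               # wells at phi = 0, 1
assert sp.diff(W, phi).subs(phi, 0) == 0 and sp.diff(W, phi).subs(phi, 1) == 0
```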
### Dimensionless form
We perform non-dimensionalization based on the dimensionless variables:
\[{\bf x}^{*} := \frac{{\bf x}}{L_{0}},\quad{\bf v}_{\alpha}^{*}:=\frac{{\bf v}_{ \alpha}}{V_{0}},\quad t^{*}:=t\frac{V_{0}}{L_{0}},\quad\tilde{\nu}_{\alpha}^{* }:=\frac{\tilde{\nu}_{\alpha}}{\nu_{\alpha}},\quad p_{\alpha}^{*}:=\frac{pL_{ 0}}{a_{\alpha}},\] \[\mu_{\alpha}^{*} := \frac{\mu_{\alpha}L_{0}}{a_{\alpha}},\quad D_{\alpha\beta}^{*}:= \frac{D_{\alpha\beta}}{L_{0}V_{0}},\quad\hat{m}_{\alpha}^{*}:=\frac{a_{\alpha }}{V_{0}\rho_{\alpha}^{2}}\hat{m}_{\alpha}, \tag{96}\]
where \(L_{0}\), \(V_{0}\) and \(T_{0}\) denote a characteristic length, velocity and time, \(\nu_{\alpha}\) a characteristic constituent dynamic viscosity, and \(a_{\alpha}=\sigma_{\alpha}\) and \(a_{\alpha}=\rho_{\alpha}\kappa_{\alpha}\) for models I and II
respectively. The re-scaled system takes the form:
\[\partial_{t^{*}}\phi_{\alpha}+\text{div}^{*}(\phi_{\alpha}\mathbf{v }_{\alpha}^{*})-\hat{\gamma}_{\alpha}^{*}= \ 0, \tag{97a}\] \[\partial_{t^{*}}(\phi_{\alpha}\mathbf{v}_{\alpha}^{*})+\text{div}^ {*}\left(\phi_{\alpha}\mathbf{v}_{\alpha}^{*}\otimes\mathbf{v}_{\alpha}^{*}\right)\] \[-\frac{1}{\mathbb{R}\mathrm{e}_{\alpha}}\text{div}^{*}\left(\tilde{\nu}_{ \alpha}^{*}\left(2\mathbf{D}_{\alpha}^{*}+\lambda_{\alpha}(\text{div}^{*} \mathbf{v}_{\alpha}^{*})\mathbf{I}\right)\right)\] \[+\frac{1}{\mathbb{W}\mathrm{e}_{\alpha}}\phi_{\alpha}\nabla^{*} \left(p_{\alpha}^{*}+\mu_{\alpha}^{*}\right)+\frac{1}{\mathbb{F}\mathrm{r}^{2}}\phi_{ \alpha}\boldsymbol{\jmath}\] \[-\frac{1}{\mathbb{W}\mathrm{e}_{\alpha}}p_{\alpha}^{*}\sum_{ \beta}\frac{\phi_{\alpha}\phi_{\beta}}{D_{\alpha\beta}^{*}}(\mathbf{v}_{ \beta}^{*}-\mathbf{v}_{\alpha}^{*})-\boldsymbol{\beta}_{\alpha}^{*}= \ 0, \tag{97b}\]
for \(\alpha=1,...,N\). Here \(\nabla^{*}\), \(\Delta^{*}\) and \(\text{div}^{*}\) denote the dimensionless spatial derivatives. The dimensionless numbers are the constituent Reynolds number (\(\mathbb{R}\mathrm{e}_{\alpha}\)), the Froude number (\(\mathbb{F}\mathrm{r}\)), the constituent Cahn number (\(\mathbb{C}\mathrm{n}_{\alpha}\)) and the constituent Weber number (\(\mathbb{W}\mathrm{e}_{\alpha}\)):
\[\mathbb{R}\mathbf{e}_{\alpha} =\frac{\rho_{\alpha}V_{0}L_{0}}{\nu_{\alpha}}, \tag{98a}\] \[\mathbb{F}\text{r} =\frac{V_{0}}{\sqrt{bL_{0}}},\] (98b) \[\mathbb{C}\text{n}_{\alpha} =\frac{\varepsilon_{\alpha}}{L_{0}},\] (98c) \[\mathbb{W}\mathbf{e}_{\alpha} =\frac{\rho_{\alpha}V_{0}^{2}L_{0}}{a_{\alpha}}. \tag{98d}\]
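For concreteness, the dimensionless numbers (98) can be evaluated as in the sketch below; the parameter values are assumed for illustration only and are not specific to this paper:

```python
# illustrative (assumed) parameter values
rho, nu, a = 1000.0, 1.0e-3, 0.07        # density, dynamic viscosity, surface-energy scale
L0, V0, b, eps = 1.0e-3, 0.1, 9.81, 1.0e-5

Re = rho*V0*L0/nu          # (98a)
Fr = V0/(b*L0)**0.5        # (98b)
Cn = eps/L0                # (98c)
We = rho*V0**2*L0/a        # (98d)
print(f"Re = {Re:.4g}, Fr = {Fr:.4g}, Cn = {Cn:.4g}, We = {We:.4g}")
```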
The dimensionless mass transfer terms read:
\[\hat{\gamma}_{\alpha}^{*}=\leavevmode\nobreak\ -\leavevmode\nobreak\ \hat{m}_{\alpha}^{*}\left(\mu_{\alpha}^{*}+p_{\alpha}^{*}-\frac{\mathbb{W} \mathbf{e}_{\alpha}}{\mathbb{W}\mathbf{e}_{N}}\left(\mu_{N}^{*}+p_{N}^{*} \right)\right),\quad\text{ for }\alpha=1,\ldots,N-1, \tag{99}\]
and where
\[\boldsymbol{\beta}_{\alpha}^{*}=\leavevmode\nobreak\ \frac{1}{2}\hat{\gamma}_{\alpha}^{*}\left( \mathbf{v}_{\alpha}^{*}+\mathbf{v}_{N}^{*}\right),\quad\text{ for }\alpha=1,\ldots,N-1. \tag{100}\]
The free energies take the form:
\[\hat{\Psi}_{\alpha}^{I,*} =\hat{\psi}_{\alpha}^{II,*}=\frac{1}{\mathbb{C}\text{n}_{\alpha }\mathbb{W}\mathbf{e}_{\alpha}}W(\phi_{\alpha})+\frac{\mathbb{C}\text{n}_{ \alpha}}{\mathbb{W}\mathbf{e}_{\alpha}}\|\nabla\phi_{\alpha}\|^{2}, \tag{101a}\] \[\hat{\Psi}_{\alpha}^{II,*} =\frac{2}{\mathbb{C}\text{n}_{\alpha}\mathbb{W}\mathbf{e}_{ \alpha}}K(\phi_{\alpha})+\frac{2\mathbb{C}\text{n}_{\alpha}}{\mathbb{W} \mathbf{e}_{\alpha}}\phi_{\alpha}\|\nabla\phi_{\alpha}\|^{2}, \tag{101b}\]
and the chemical potentials are:
\[\mu_{\alpha}^{\text{I},*}-\left(\frac{1}{\mathbb{C}\text{n}_{ \alpha}}W^{\prime}(\phi_{\alpha})-2\mathbb{C}\text{n}_{\alpha}\Delta^{*}\phi _{\alpha}\right)=\leavevmode\nobreak\ 0, \tag{102a}\]
\[\mu_{\alpha}^{\Pi,*}-2\phi_{\alpha}\left(\frac{1}{\mathbb{C}\mathrm{n} _{\alpha}}W^{\prime}(\phi_{\alpha})-2\mathbb{C}\mathrm{n}_{\alpha}\Delta^{*} \phi_{\alpha}\right)\] \[-2\left(\frac{1}{\mathbb{C}\mathrm{n}_{\alpha}}W(\phi_{\alpha})- \mathbb{C}\mathrm{n}_{\alpha}\|\nabla^{*}\phi_{\alpha}\|^{2}\right)=\ 0. \tag{102b}\]
We suppress the star symbols in the remainder of this paper.
### Equilibrium profile
The static equilibrium profile of the model (97) is characterized by zero entropy production:
\[\sum_{\alpha}\mathscr{P}_{\alpha}=0. \tag{103}\]
From the equivalent form (84) we find:
\[\tilde{\nu}_{\alpha}\left(\mathbf{D}_{\alpha}-\frac{1}{d}(\mathrm{div} \mathbf{v}_{\alpha})\mathbf{I}\right):\left(\mathbf{D}_{\alpha}-\frac{1}{d}(\mathrm{ div}\mathbf{v}_{\alpha})\mathbf{I}\right) =0, \tag{104a}\] \[\tilde{\nu}_{\alpha}\left(\lambda_{\alpha}+\frac{2}{d}\right)( \mathrm{div}\mathbf{v}_{\alpha})^{2} =0,\] (104b) \[R_{\alpha\beta}\|\mathbf{w}_{\alpha}-\mathbf{w}_{\beta}\|^{2} =0,\] (104c) \[\hat{m}_{\alpha}\left(g_{\alpha}-g_{N}\right)^{2} =0, \tag{104d}\]
for \(\alpha=1,\ldots,N\) in (104a)-(104b), for \(\alpha,\beta=1,\ldots,N\) in (104c), and \(\alpha=1,\ldots,N-1\) in (104d). Consider now the non-trivial case \(0<\phi_{\alpha}<1\) and \(\nu_{\alpha}>0\). Since \(\tilde{\nu}_{\alpha}>0\) we obtain from (104a)-(104b) that \(\mathbf{v}_{\alpha}=\mathrm{const}\) for all \(\alpha=1,\ldots N\). Next, since \(R_{\alpha\beta}>0\) we get from (104c) that \(\mathbf{v}_{\alpha}=\mathbf{v}=\mathrm{const}\) for all \(\alpha=1,\ldots N\). From (104d) we obtain \(g_{1}=\cdots=g_{N}\) and \(\hat{\gamma}_{\alpha}=0\) for all \(\alpha=1,\ldots,N\). As a consequence, from the mass balance equation (97a) we get \(\grave{\phi}_{\alpha}=0\). The viscous term and the last term in the momentum balance (97b) vanish due to \(\mathbf{v}_{\alpha}=\mathrm{const}\). Finally, the inertia terms in momentum balance (97b) vanish since:
\[\partial_{t}(\phi_{\alpha}\mathbf{v}_{\alpha})+\mathrm{div}\left(\phi_{\alpha }\mathbf{v}_{\alpha}\otimes\mathbf{v}_{\alpha}\right)=\mathbf{v}_{\alpha} \grave{\phi}_{\alpha}=0. \tag{105}\]
The static equilibrium solution is now identified by the following relations:
\[\mu_{\alpha}+p_{\alpha}-\frac{\mathbb{W}\mathrm{e}_{\alpha}}{ \mathbb{W}\mathrm{e}_{N}}\left(\mu_{N}+p_{N}\right)\ =0,\quad\text{ for }\alpha=1,\ldots,N-1, \tag{106a}\] \[\phi_{\alpha}\nabla\left(p_{\alpha}+\mu_{\alpha}+\frac{\mathbb{W }\mathrm{e}_{\alpha}}{\mathbb{F}\mathrm{r}^{2}}y\right)=\ 0,\quad\text{ for }\alpha=1,\ldots,N. \tag{106b}\]
**Remark 4.1** (Constituent body force).: _The equilibrium relations (106) are compatible due to the standing assumption of equal body forces (\(\mathbf{b}_{\alpha}=\mathbf{b}\) for \(\alpha=1,\ldots,N\))._
In the scenario of a pure fluid (\(\phi_{\alpha}\equiv 1\)), the chemical potential \(\mu_{\alpha}\) vanishes and we obtain \(p_{\alpha}=p_{\infty,\alpha}-y\mathbb{W}\mathrm{e}_{\alpha}/\mathbb{F}\mathrm{r}^{2}\), where \(p_{\infty,\alpha}\) is a constant equilibrium pressure. Consider
now the non-trivial case (\(0<\phi_{\alpha}<1\)) in absence of gravitational forces (\(\mathbb{F}\mathrm{r}^{-2}=0\)). The condition (106a) implies that the quantity:
\[\frac{1}{\mathbb{W}\mathrm{e}_{\alpha}}\left(\mu_{\alpha}+p_{\alpha}\right)=C, \tag{107}\]
where \(C\) is a constant independent of the constituent number. A solution is obtained by requiring \(\mu_{\alpha}=p_{\alpha}=0\). The zero pressure \(p_{\alpha}\) implies that momentum transfer is absent in equilibrium. The interface profiles \(\phi_{\alpha}=\phi_{\alpha}^{\mathrm{eq}}(\xi)\) are determined by the differential equations:
\[0 =\mu_{\alpha}^{\mathrm{I}} =\frac{1}{\mathbb{C}\mathrm{n}_{\alpha}}W^{\prime}(\phi_{\alpha}^{\mathrm{eq}})-2\mathbb{C}\mathrm{n}_{\alpha}\Delta\phi_{\alpha}^{\mathrm{eq}}, \text{for }\alpha=1,\ldots,N \tag{108a}\] \[0 =\mu_{\alpha}^{\mathrm{II}} =2\phi_{\alpha}^{\mathrm{eq}}\left(\frac{1}{\mathbb{C}\mathrm{n }_{\alpha}}W^{\prime}(\phi_{\alpha}^{\mathrm{eq}})-2\mathbb{C}\mathrm{n}_{ \alpha}\Delta\phi_{\alpha}^{\mathrm{eq}}\right)\] \[\quad+\frac{2}{\mathbb{C}\mathrm{n}_{\alpha}}W(\phi_{\alpha}^{ \mathrm{eq}})-2\mathbb{C}\mathrm{n}_{\alpha}\|\nabla\phi_{\alpha}^{\mathrm{ eq}}\|^{2}, \text{for }\alpha=1,\ldots,N,\] (108b) \[1 =\sum_{\alpha}\phi_{\alpha}. \tag{108c}\]
We determine the explicit interface profiles in the one-dimensional situation. Denote with \(\xi\) a spatial coordinate centered at the interface.
**Theorem 4.2** (Equilibrium profile).: _In absence of gravitational forces, the system (97) obeys in one-dimension the classical interface profile:_
\[\phi_{\alpha}=\phi_{\alpha}^{\mathrm{eq}}(\xi)=\frac{1}{2}\left(1+\tanh\left( \frac{\pm\xi}{\mathbb{C}\mathrm{n}\sqrt{2}}\right)\right), \tag{109}\]
_with \(\mathbb{C}\mathrm{n}_{\alpha}=\mathbb{C}\mathrm{n}\) for \(\alpha=1,\ldots,N\)._
Proof.: One may verify via substitution that the interface profile (109) satisfies the identities:
\[\frac{1}{\mathbb{C}\mathrm{n}_{\alpha}}W^{\prime}(\phi_{\alpha}^{ \mathrm{eq}})-2\mathbb{C}\mathrm{n}_{\alpha}\frac{\mathrm{d}^{2}\phi_{\alpha} ^{\mathrm{eq}}}{\mathrm{d}\xi^{2}} =0, \tag{110a}\] \[\frac{1}{\mathbb{C}\mathrm{n}_{\alpha}}W(\phi_{\alpha}^{\mathrm{ eq}})-\mathbb{C}\mathrm{n}_{\alpha}\left(\frac{\mathrm{d}\phi_{\alpha}^{ \mathrm{eq}}}{\mathrm{d}\xi}\right)^{2} =0, \tag{110b}\]
for \(\mathbb{C}\mathrm{n}_{\alpha}=\mathbb{C}\mathrm{n},\alpha=1,\ldots,N\).
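The identities (110) may also be confirmed numerically; the sketch below (arbitrary Cahn number, second-order finite differences on a fine grid) evaluates both residuals for the profile (109):

```python
import numpy as np

Cn = 0.05
xi = np.linspace(-0.5, 0.5, 20001)
dxi = xi[1] - xi[0]
phi = 0.5*(1.0 + np.tanh(xi/(Cn*np.sqrt(2.0))))   # equilibrium profile (109)

W = 2.0*phi**2*(1.0 - phi)**2
Wp = 4.0*phi*(1.0 - phi)*(1.0 - 2.0*phi)          # W'(phi)
dphi = np.gradient(phi, dxi)
d2phi = np.gradient(dphi, dxi)

assert np.max(np.abs(Wp/Cn - 2.0*Cn*d2phi)) < 1e-3   # identity (110a)
assert np.max(np.abs(W/Cn - Cn*dphi**2)) < 1e-3      # identity (110b)
```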
Theorem 4.2 conveys the shape of the interface profile, and moreover, it communicates that the interface width parameters need to be equal (\(\mathbb{C}\mathrm{n}_{\alpha}=\mathbb{C}\mathrm{n},\alpha=1,\ldots,N\)). In the remainder of the paper we restrict to equal interface width parameters. As a consequence of the above identities we have
\[\hat{\Psi}_{\alpha}^{\mathrm{I}}\left(\phi_{\alpha}^{\mathrm{eq}}(\xi)\right) =\hat{\psi}_{\alpha}^{\mathrm{II}}\left(\phi_{\alpha}^{\mathrm{eq}}(\xi) \right)=\frac{2}{\mathbb{C}\mathrm{n}\mathbb{W}\mathrm{e}_{\alpha}}W(\phi_{ \alpha}^{\mathrm{eq}})\]
\[= \frac{1}{4\mathbb{C}\mathrm{n}\mathbb{W}\mathrm{e}_{\alpha}}\left(1- \tanh^{2}\left(\frac{\pm\xi}{\mathbb{C}\mathrm{n}\sqrt{2}}\right)\right)^{2}, \tag{111a}\] \[\hat{\Psi}_{\alpha}^{\mathrm{II}}\left(\phi_{\alpha}^{\mathrm{eq} }(\xi)\right) = \frac{4}{\mathbb{C}\mathrm{n}\mathbb{W}\mathrm{e}_{\alpha}}K(\phi_ {\alpha}^{\mathrm{eq}})\] (111b) \[= \frac{1}{4\mathbb{C}\mathrm{n}\mathbb{W}\mathrm{e}_{\alpha}}\left( 1+\tanh\left(\frac{\pm\xi}{\mathbb{C}\mathrm{n}\sqrt{2}}\right)\right)\times\] \[\left(1-\tanh^{2}\left(\frac{\pm\xi}{\mathbb{C}\mathrm{n}\sqrt{2 }}\right)\right)^{2}.\]
We visualize the free energies in Figure 2. The free energy of model I is symmetric around \(0\), whereas the free energy of model II is non-symmetric. Both free energies collapse onto the interface for \(\mathbb{C}\mathrm{n}\to 0\).
Finally, we introduce the (dimensionless) constituent surface tension coefficient as:
\[\hat{\Theta}_{\alpha}=\int_{\mathbb{R}}\hat{\Psi}_{\alpha}\left(\phi_{\alpha} ^{\mathrm{eq}}(\xi)\right)\mathrm{d}\xi. \tag{112}\]
One may verify that the integral is the same for each of the two models:
\[\int_{\mathbb{R}}\hat{\Psi}_{\alpha}^{\mathrm{I}}\left(\phi_{\alpha}^{\mathrm{ eq}}(\xi)\right)\mathrm{d}\xi=\int_{\mathbb{R}}\hat{\Psi}_{\alpha}^{\mathrm{II}} \left(\phi_{\alpha}^{\mathrm{eq}}(\xi)\right)\mathrm{d}\xi=\frac{\sqrt{2}}{3 \mathbb{W}\mathrm{e}_{\alpha}}. \tag{113}\]
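The value (113) is readily reproduced by direct quadrature of (111); the following sketch (arbitrary Weber number, manual trapezoidal rule for portability) recovers \(\sqrt{2}/(3\mathbb{W}\mathrm{e}_{\alpha})\) for both models:

```python
import numpy as np

Cn, We = 0.05, 10.0
xi = np.linspace(-1.0, 1.0, 200001)
t = np.tanh(xi/(Cn*np.sqrt(2.0)))

Psi_I = (1.0 - t**2)**2/(4.0*Cn*We)               # equilibrium free energy (111a)
Psi_II = (1.0 + t)*(1.0 - t**2)**2/(4.0*Cn*We)    # equilibrium free energy (111b)

for Psi in (Psi_I, Psi_II):
    Theta = np.sum(0.5*(Psi[1:] + Psi[:-1])*np.diff(xi))   # trapezoidal rule
    assert abs(Theta - np.sqrt(2.0)/(3.0*We)) < 1e-8       # surface tension (113)
```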
## 5 Connection with the Navier-Stokes Cahn-Hilliard model
In this section we explore the connection of the mixture model (88) and the Navier-Stokes Cahn-Hilliard model. We restrict ourselves to binary mixtures for the sake of clarity, and note that the extension to multi-constituent mixtures is straightforward. We discuss the connection for the diffuse-interface models outlined in Section 4. First, in Section 5.1
we lay down two particular forms of the NSCH model. Then, in Section 5.2 we analyze the connection of the components of the mixture model with the NSCH model. Finally, we discuss the connection of the complete models in Section 5.3.
### The Navier-Stokes Cahn-Hilliard model
Restricting to two constituents, the volume fractions now constitute a single order parameter. We define this order parameter in the classical way as the difference of the volume fractions of the individual constituents: \(\phi=\phi_{1}-\phi_{2}\in[-1,1]\). Invoking (5) and (7) provides:
\[\phi_{1} =\frac{1+\phi}{2},\qquad\phi_{2}=\frac{1-\phi}{2}, \tag{114a}\] \[\rho(\phi) =\frac{\rho_{1}(1+\phi)}{2}+\frac{\rho_{2}(1-\phi)}{2}. \tag{114b}\]
We note that the NSCH model (1) is written in a form that directly allows the specification of a volume-measure-based Helmholtz free energy belonging to the constitutive class:
\[\bar{\Psi}=\bar{\Psi}(\phi,\nabla\phi). \tag{115}\]
On the other hand, it is also common to work with a Helmholtz free energy that is mass-measure-based:
\[\bar{\psi}=\bar{\psi}(\phi,\nabla\phi). \tag{116}\]
We now present (equivalent) compact forms of the NSCH model, one suited for each of the two choices.
To establish the connection between the two Helmholtz free energy classes we select the following natural identification:
\[\bar{\Psi}(\phi,\nabla\phi)\equiv\rho(\phi)\bar{\psi}(\phi,\nabla\phi). \tag{117}\]
Furthermore, we introduce chemical potentials associated with each of the constitutive classes:
\[\bar{\mu} =\frac{\partial\bar{\Psi}}{\partial\phi}-\mathrm{div}\left(\frac {\partial\bar{\Psi}}{\partial\nabla\phi}\right), \tag{118a}\] \[\bar{v} =\frac{\partial\bar{\psi}}{\partial\phi}-\frac{1}{\rho}\mathrm{ div}\left(\rho\frac{\partial\bar{\psi}}{\partial\nabla\phi}\right). \tag{118b}\]
With the aim of introducing the first compact form, we present a lemma analogous to Lemma 3.12.
**Lemma 5.1** (Compact form free energy contributions).: _The following identity holds:_
\[\phi\nabla\bar{\mu}=\nabla(\bar{\mu}\phi-\bar{\Psi})+\mathrm{div}\left(\nabla \phi\otimes\frac{\partial\bar{\Psi}}{\partial\nabla\phi}\right). \tag{119}\]
Proof.: The proof is similar to that of Theorem 3.12.
**Remark 5.2**.: _The identity (119) is often employed in the particular scenario of the Ginzburg-Landau free energy. Here we note that it holds for the general constitutive class of the Helmholtz free energy._
Applying Theorem 5.1, we arrive at the first form of the NSCH model:
\[\partial_{t}(\rho\mathbf{v})+\operatorname{div}\left(\rho\mathbf{ v}\otimes\mathbf{v}\right)+\nabla p+\phi\nabla\bar{\mu}\] \[-\operatorname{div}\left(\nu(2\mathbf{D}+\lambda(\operatorname{ div}\mathbf{v})\mathbf{I})\right)-\rho\mathbf{b}= \ 0, \tag{120a}\] \[\partial_{t}\rho+\operatorname{div}(\rho\mathbf{v})= \ 0,\] (120b) \[\partial_{t}\phi+\operatorname{div}(\phi\mathbf{v})-\operatorname{ div}\left(\bar{\mathbf{M}}\nabla(\bar{\mu}+\omega p)\right)+\zeta\bar{m}(\bar{\mu}+ \omega p)= \ 0, \tag{120c}\]
Next, the second form of the NSCH model follows when switching to the mass-measure-based Helmholtz free energy in (120). To this purpose we introduce the relation between the chemical potentials (118).
**Lemma 5.3** (Relation chemical potentials).: _The chemical potentials (118) are related as:_
\[\bar{\mu}=\rho\bar{v}+\bar{\psi}\frac{\rho_{1}-\rho_{2}}{2}. \tag{121a}\]
Proof.: This follows from a straightforward substitution. For details we refer to [11].
Applying Theorem 5.3, we arrive at the second form of the NSCH model:
\[\partial_{t}(\rho\mathbf{v})+\operatorname{div}\left(\rho\mathbf{v}\otimes\mathbf{v}\right)+\nabla p+\phi\nabla\left(\rho\bar{v}+\bar{\psi}\frac{\rho_{1}-\rho_{2}}{2}\right)\] \[-\operatorname{div}\left(\nu(2\mathbf{D}+\lambda(\operatorname{div}\mathbf{v})\mathbf{I})\right)-\rho\mathbf{b}= \ 0, \tag{122a}\] \[\partial_{t}\rho+\operatorname{div}(\rho\mathbf{v})= \ 0,\] (122b) \[\partial_{t}\phi+\operatorname{div}(\phi\mathbf{v})-\operatorname{div}\left(\bar{\mathbf{M}}\nabla\left(\rho\bar{v}+\bar{\psi}\frac{\rho_{1}-\rho_{2}}{2}+\omega p\right)\right)+\zeta\bar{m}\left(\rho\bar{v}+\bar{\psi}\frac{\rho_{1}-\rho_{2}}{2}+\omega p\right)= \ 0. \tag{122c}\]
**Remark 5.4** (Variable transformation).: _One can apply a variable transformation in (122) to absorb the term \(\bar{\psi}(\rho_{1}-\rho_{2})/2\) into the pressure \(p\). For details we refer to [11]._
Analogous to the diffuse-interface models in Section 4, we distinguish between a Ginzburg-Landau free energy that is either volume-measure-based or mass-measure-based. It is our purpose to compare the associated models with the diffuse-interface models of Section 4 (model I and model II). We also refer to the NSCH free energy models as model I and model II to emphasize this intent.
_Model I_. The volume-measure-based Ginzburg-Landau free energy is given by:
\[\bar{\Psi}^{\mathrm{I}} =\frac{\sigma}{\varepsilon}F(\phi)+\frac{\sigma\varepsilon}{2}\| \nabla\phi\|^{2}, \tag{123a}\] \[F(\phi) :=\frac{1}{4}(1-\phi^{2})^{2}. \tag{123b}\]
where \(F=F(\phi)\) represents a double-well potential, \(\varepsilon\) is a (constant) interface-thickness parameter, and \(\sigma\) is a (constant) parameter related to the surface energy density. The chemical potential and mass transfer take the form:
\[\bar{\mu}^{\mathrm{I}} =\frac{\sigma}{\varepsilon}F^{\prime}(\phi)-\sigma\varepsilon \Delta\phi, \tag{124a}\] \[\bar{\gamma}^{\mathrm{I}} =\ -m\left(\bar{\mu}^{\mathrm{I}}+\omega p\right). \tag{124b}\]
_Model II_. The mass-measure-based Ginzburg-Landau free energy reads:
\[\bar{\psi}^{\mathrm{II}} =\frac{\kappa}{\varepsilon}F(\phi)+\frac{\kappa\varepsilon}{2} \|\nabla\phi\|^{2}, \tag{125}\]
where \(F=F(\phi)\) is given in (123b). Also in this second model, the interface-thickness parameter \(\varepsilon\) and the surface energy density parameter \(\kappa\) are assumed constant. The associated chemical potentials and mass transfer take the form:
\[\bar{v}^{\mathrm{II}} =\bar{\tau}^{\mathrm{II}}-\frac{\kappa\varepsilon(\rho_{1}-\rho_ {2})}{2\rho}\|\nabla\phi\|^{2}, \tag{126a}\] \[\bar{\tau}^{\mathrm{II}} :=\frac{\kappa}{\varepsilon}F^{\prime}(\phi)-\kappa\varepsilon \Delta\phi,\] (126b) \[\bar{\gamma}^{\mathrm{II}} =\ -m\left(\rho\bar{\tau}^{\mathrm{II}}+\frac{\rho_{1}-\rho_{2}}{2} \left(\frac{\kappa}{\varepsilon}F(\phi)-\frac{\kappa\varepsilon}{2}\|\nabla \phi\|^{2}\right)+\omega p\right). \tag{126c}\]
We now present the energy-dissipation property of the NSCH model. Introduce the global energy as the superposition of the Helmholtz free energy, kinetic energy and gravitational energy:
\[\bar{\mathscr{E}}(\Omega):=\int_{\Omega}\bar{\Psi}+\bar{\mathscr{K}}+\bar{\mathscr{G}}\ \mathrm{d}\Omega, \tag{127}\] where the Helmholtz free energy (115) is specified in (123) and (125), the kinetic energy is given in (30b), and the gravitational energy is: \[\bar{\mathscr{G}}:=\rho gy. \tag{128}\]
**Theorem 5.5** (Energy dissipation NSCH).: _Suppose that the NSCH model is equipped with the natural boundary conditions on \(\Omega\):_
\[(-p\mathbf{I}+\nu\left(2\mathbf{D}+\lambda(\mathrm{div}\mathbf{v })\mathbf{I}\right))\,\mathbf{n} =\,0, \tag{129a}\] \[\nabla\phi\cdot\mathbf{n} =\,0, \tag{129b}\]
\[\left(\bar{\mathbf{M}}\nabla\left(\bar{\mu}+\omega p\right)\right)\mathbf{n}=0, \tag{129c}\]
_where \(\mathbf{n}\) denotes the outward unit normal, then the associated total energy satisfies the dissipation relation:_
\[\frac{\mathrm{d}}{\mathrm{d}t}\bar{\mathscr{E}}(\Omega)= -\int_{\Omega}\left(2\nu\left(\mathbf{D}-\frac{1}{d}(\mathrm{div}\mathbf{v})\mathbf{I}\right):\left(\mathbf{D}-\frac{1}{d}(\mathrm{div}\mathbf{v})\mathbf{I}\right)\right)\ \mathrm{d}\Omega\] \[-\int_{\Omega}\nu\left(\lambda+\frac{2}{d}\right)(\mathrm{div}\mathbf{v})^{2}\ \mathrm{d}\Omega\] \[-\int_{\Omega}\nabla(\bar{\mu}+\omega p)\cdot\left(\overline{\mathbf{M}}\nabla(\bar{\mu}+\omega p)\right)\ \mathrm{d}\Omega\] \[-\int_{\Omega}\bar{m}\zeta(\bar{\mu}+\omega p)^{2}\ \mathrm{d}\Omega\leq 0. \tag{130}\]
The equilibrium profile of the model is characterized by zero energy evolution:
\[\frac{\mathrm{d}}{\mathrm{d}t}\bar{\mathscr{E}}(\Omega)=0. \tag{131}\]
Following an argument similar to that in Section 4.3, in the absence of gravitational forces one can deduce the equilibrium profile:
\[\phi=\phi^{\mathrm{eq}}(\xi)=\tanh\left(\frac{\pm\xi}{\varepsilon\sqrt{2}} \right), \tag{132}\]
where again \(\xi\) is a coordinate centered at the interface (\(\phi=0\)).
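The profile (132) can be verified symbolically. Below is a minimal sympy sketch for model I in one dimension, assuming constant \(\sigma\) and \(\varepsilon\):

```python
# Check that the tanh profile (132) annihilates the chemical potential (124a)
# in 1D. A sketch, assuming sigma and epsilon are positive constants.
import sympy as sp

xi, eps, sigma, p = sp.symbols('xi epsilon sigma p', positive=True)

F = (1 - p**2)**2 / 4                    # double-well potential (123b)
Fp = sp.diff(F, p)                       # F'(p) = -p*(1 - p**2)

phi = sp.tanh(xi / (eps * sp.sqrt(2)))   # candidate equilibrium profile (132)
mu = sigma / eps * Fp.subs(p, phi) - sigma * eps * sp.diff(phi, xi, 2)
print(sp.simplify(mu))                   # 0: (132) is an equilibrium of (124a)
```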
Lastly, consider the determination of the surface tension coefficient. Similar to (112) we set:
\[\bar{\Theta}^{\mathrm{I}}= \int_{\mathbb{R}}\bar{\Psi}^{\mathrm{I}}\left(\phi^{\mathrm{eq}}( \xi)\right)\mathrm{d}\xi, \tag{133a}\] \[\bar{\Theta}^{\mathrm{II}}= \int_{\mathbb{R}}\bar{\Psi}^{\mathrm{II}}\left(\phi^{\mathrm{eq}}( \xi)\right)\mathrm{d}\xi, \tag{133b}\]
and note that the integrals are equal to:
\[\bar{\Theta}^{\mathrm{I}}= \ \sigma\frac{2\sqrt{2}}{3}, \tag{134a}\] \[\bar{\Theta}^{\mathrm{II}}= \ (\rho_{1}+\rho_{2})\kappa\frac{\sqrt{2}}{3}. \tag{134b}\]
### Connection of the components of the mixture model
To study the connection between the mixture model (88) and the NSCH model (120), (122), it is useful to formulate the mixture model in terms of pure mixture quantities. The mixture quantities are the mixture velocity \(\mathbf{v}\) (defined in (12)), the order parameter \(\phi\) (defined in (114)), and lastly a diffusive flux quantity defined as:
\[\mathbf{J}:=\tilde{\rho}_{1}\mathbf{w}_{1}-\tilde{\rho}_{2}\mathbf{ w}_{2}. \tag{135}\]
To formulate the mixture model (88) in mixture quantities we introduce the variable transformations:
\[\mathbf{v}_{1} =\mathbf{v}+\frac{\mathbf{J}}{2\tilde{\rho}_{1}} \tag{136a}\] \[\mathbf{v}_{2} =\mathbf{v}-\frac{\mathbf{J}}{2\tilde{\rho}_{2}}, \tag{136b}\]
which follow from (12) and (135).
In the remainder of this subsection we formulate the various energies and components of the mixture model (88) in mixture quantities, and establish the connection with their counterparts in the NSCH model. We compare the quantities associated with the Ginzburg-Landau free energy models of Section 4.1 with the quantities of the corresponding free energy models of Section 5.1.
_Kinetic energy_. We recall from (30) that the kinetic energy of the mixture (29a) may be decomposed as:
\[\mathscr{K}=\bar{\mathscr{K}}+\sum_{\alpha}\frac{1}{2}\tilde{ \rho}_{\alpha}\|\mathbf{w}_{\alpha}\|^{2}. \tag{137}\]
The kinetic energy corresponding to the peculiar velocity is neglected in the NSCH model. The next lemma reformulates this kinetic energy in mixture quantities.
**Lemma 5.6** (Kinetic energy peculiar velocity).: _The kinetic energy associated with the peculiar velocity takes the form:_
\[\sum_{\alpha=1,2}\tilde{\rho}_{\alpha}\|\mathbf{w}_{\alpha}\|^{2 }=\frac{\rho\|\mathbf{J}\|^{2}}{2\rho_{1}\rho_{2}(1-\phi^{2})}. \tag{138}\]
Proof.: On account of (14) we add a suitable partition of zero to the left-hand side and find:
\[\sum_{\alpha=1,2}\tilde{\rho}_{\alpha}\|\mathbf{w}_{\alpha}\|^{2} =\mathbf{w}_{1}\cdot(\tilde{\rho}_{1}\mathbf{w}_{1}+\tilde{\rho}_ {2}\mathbf{w}_{2})+\mathbf{w}_{2}\cdot(\tilde{\rho}_{1}\mathbf{w}_{1}+\tilde{ \rho}_{2}\mathbf{w}_{2})\] \[\quad-\mathbf{w}_{1}\cdot\tilde{\rho}_{2}\mathbf{w}_{2}-\mathbf{ w}_{2}\cdot\tilde{\rho}_{1}\mathbf{w}_{1}\] \[=\,-\mathbf{w}_{1}\cdot\tilde{\rho}_{2}\mathbf{w}_{2}-\mathbf{ w}_{2}\cdot\tilde{\rho}_{1}\mathbf{w}_{1}\] \[=\,-\rho\mathbf{w}_{1}\cdot\mathbf{w}_{2}. \tag{139}\]
Next, by recognizing the constituent diffusive fluxes \(\mathbf{J}_{\alpha}=\tilde{\rho}_{\alpha}\mathbf{w}_{\alpha}\) we arrive at the result:
\[\sum_{\alpha=1,2}\tilde{\rho}_{\alpha}\|\mathbf{w}_{\alpha}\|^{2}=\,-\frac{\rho\mathbf{J}_{1}\cdot\mathbf{J}_{2}}{\tilde{\rho}_{1}\tilde{\rho}_{2}}=\frac{\rho\,\mathbf{J}\cdot\mathbf{J}}{4\tilde{\rho}_{1}\tilde{\rho}_{2}}=\frac{\rho\|\mathbf{J}\|^{2}}{2\rho_{1}\rho_{2}(1-\phi^{2})}. \tag{140}\]
_Gravitational energy_. The gravitational energy of the mixture \(\mathscr{G}\) coincides with the NSCH gravitational energy:
\[\mathscr{G}_{1} =\rho_{1}\frac{1+\phi}{2}gy, \tag{141a}\] \[\mathscr{G}_{2} =\rho_{2}\frac{1-\phi}{2}gy,\] (141b) \[\mathscr{G} =\mathscr{G}_{1}+\mathscr{G}_{2}=\bar{\mathscr{G}}=\rho gy. \tag{141c}\]
_Free energy_. We define the mixture free energies as:
\[\hat{\Psi}(\phi,\nabla\phi) =\hat{\Psi}_{1}(\phi_{1},\nabla\phi_{1})+\hat{\Psi}_{2}(\phi_{2}, \nabla\phi_{2}), \tag{142a}\] \[\rho\hat{\psi}(\phi,\nabla\phi) =\tilde{\rho}_{1}\hat{\psi}_{1}(\phi_{1},\nabla\phi_{1})+\tilde{ \rho}_{2}\hat{\psi}_{2}(\phi_{2},\nabla\phi_{2}). \tag{142b}\]
We distinguish between the two models specified in Section 4.1.
_Model I_. The constituent free energies (89) take the form:
\[\hat{\Psi}_{1}^{\mathrm{I}} =\frac{\sigma_{1}}{2\varepsilon}F(\phi)+\frac{\sigma_{1} \varepsilon}{4}\|\nabla\phi\|^{2}, \tag{143a}\] \[\hat{\Psi}_{2}^{\mathrm{I}} =\frac{\sigma_{2}}{2\varepsilon}F(\phi)+\frac{\sigma_{2} \varepsilon}{4}\|\nabla\phi\|^{2}, \tag{143b}\]
where \(F=F(\phi)\) is defined in (123b). Inserting the Ginzburg-Landau free energy (143) into (142) we obtain:
\[\hat{\Psi}^{\mathrm{I}}=\,\left(\frac{\sigma_{1}}{2\varepsilon}+ \frac{\sigma_{2}}{2\varepsilon}\right)F(\phi)+\frac{\sigma_{1}\varepsilon+ \sigma_{2}\varepsilon}{4}\|\nabla\phi\|^{2}. \tag{144}\]
This form coincides with the standard Ginzburg-Landau form (123) for the scenario \(\sigma=\sigma_{1}=\sigma_{2}\):
\[\hat{\Psi}^{\mathrm{I}}=\,\bar{\Psi}^{\mathrm{I}}=\frac{\sigma} {\varepsilon}F(\phi)+\frac{\sigma\varepsilon}{2}\|\nabla\phi\|^{2}. \tag{145}\]
_Model II_. The constituent free energies (92) read:
\[\hat{\psi}_{1}^{\mathrm{II}} =\,\frac{\kappa_{1}}{\varepsilon}F(\phi)+\frac{\kappa_{1} \varepsilon}{2}\|\nabla\phi\|^{2}, \tag{146a}\] \[\hat{\psi}_{2}^{\mathrm{II}} =\,\frac{\kappa_{2}}{\varepsilon}F(\phi)+\frac{\kappa_{2} \varepsilon}{2}\|\nabla\phi\|^{2}. \tag{146b}\]
Inserting the Ginzburg-Landau free energy (146) into (142) yields:
\[\rho\hat{\psi}^{\mathrm{II}}=\,\left(\frac{\rho_{1}\kappa_{1}}{2 \varepsilon}+\frac{\rho_{2}\kappa_{2}}{2\varepsilon}\right)F(\phi)+\frac{\rho _{1}\kappa_{1}\varepsilon+\rho_{2}\kappa_{2}\varepsilon}{4}\|\nabla\phi\|^{2}\] \[\quad\quad+\left(\frac{\rho_{1}\kappa_{1}}{2\varepsilon}-\frac{ \rho_{2}\kappa_{2}}{2\varepsilon}\right)\phi F(\phi)+\frac{\rho_{1}\kappa_{1} \varepsilon-\rho_{2}\kappa_{2}\varepsilon}{4}\phi\|\nabla\phi\|^{2}. \tag{147}\]
In the special case \(\kappa=\kappa_{1}=\kappa_{2}\) we retrieve the NSCH free energy:
\[\hat{\psi}^{\rm II}=\bar{\psi}^{\rm II}=\frac{\kappa}{\varepsilon}F(\phi)+\frac{ \kappa\varepsilon}{2}\|\nabla\phi\|^{2}. \tag{148}\]
_Korteweg tensor_. We differentiate between the two models specified in Section 4.1.
_Model I_. The constituent Korteweg tensors read in mixture quantities:
\[\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\Psi}^{\rm I}_{1}}{ \partial\nabla\phi_{\alpha}} = \frac{\sigma_{1}\varepsilon}{2}\nabla\phi\otimes\nabla\phi, \tag{149a}\] \[\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\Psi}^{\rm I}_{2}}{ \partial\nabla\phi_{\alpha}} = \frac{\sigma_{2}\varepsilon}{2}\nabla\phi\otimes\nabla\phi. \tag{149b}\]
The superposition of the constituent Korteweg tensors yields:
\[\sum_{\alpha=1,2}\nabla\phi_{\alpha}\otimes\frac{\partial\hat{ \Psi}^{\rm I}_{\alpha}}{\partial\nabla\phi_{\alpha}}=\nabla\phi\otimes\frac{ \partial\hat{\Psi}^{\rm I}}{\partial\nabla\phi}=\left(\frac{\sigma_{1} \varepsilon}{2}+\frac{\sigma_{2}\varepsilon}{2}\right)\nabla\phi\otimes\nabla\phi. \tag{150}\]
The first equality holds for the general constitutive class \(\hat{\Psi}^{\mathrm{I}}_{\alpha}=\hat{\Psi}^{\mathrm{I}}_{\alpha}(\phi,\nabla\phi)\), whereas the second follows from (149). For the special case \(\sigma=\sigma_{1}=\sigma_{2}\) we find the standard mixture Korteweg tensor:
\[\sum_{\alpha=1,2}\nabla\phi_{\alpha}\otimes\frac{\partial\hat{ \Psi}^{\rm I}_{\alpha}}{\partial\nabla\phi_{\alpha}}=\sigma\varepsilon\nabla \phi\otimes\nabla\phi. \tag{151}\]
_Model II_. The constituent Korteweg tensors read in mixture quantities:
\[\nabla\phi_{1}\otimes\frac{\partial\hat{\psi}^{\rm II}_{1}}{ \partial\nabla\phi_{1}} = \kappa_{1}\varepsilon\nabla\phi\otimes\nabla\phi, \tag{152a}\] \[\nabla\phi_{2}\otimes\frac{\partial\hat{\psi}^{\rm II}_{2}}{ \partial\nabla\phi_{2}} = \kappa_{2}\varepsilon\nabla\phi\otimes\nabla\phi. \tag{152b}\]
The superposition of the constituent Korteweg tensors yields:
\[\sum_{\alpha=1,2}\nabla\phi_{\alpha}\otimes\frac{\partial\hat{ \Psi}^{\rm II}_{\alpha}}{\partial\nabla\phi_{\alpha}} = \nabla\phi\otimes\frac{\partial\hat{\Psi}^{\rm II}}{\partial \nabla\phi} \tag{153}\] \[= \left(\frac{\rho_{1}\kappa_{1}\varepsilon}{2}+\frac{\rho_{2} \kappa_{2}\varepsilon}{2}\right.\] \[\left.+\phi\frac{\rho_{1}\kappa_{1}\varepsilon}{2}-\phi\frac{ \rho_{2}\kappa_{2}\varepsilon}{2}\right)\nabla\phi\otimes\nabla\phi.\]
In the scenario \(\kappa=\kappa_{1}=\kappa_{2}\) the mixture Korteweg tensor reduces to:
\[\sum_{\alpha=1,2}\nabla\phi_{\alpha}\otimes\frac{\partial\hat{\Psi}^{\rm II}_{\alpha}}{\partial\nabla\phi_{\alpha}}=\rho\kappa\varepsilon\nabla\phi\otimes\nabla\phi. \tag{154}\]
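The coefficients in (153) are simply the partial-density weighting \(\tilde{\rho}_{\alpha}=\rho_{\alpha}\phi_{\alpha}\) of the constituent coefficients in (152), cf. (142b). A one-line sympy check (a sketch):

```python
# Check the coefficient in (153): the partial-density weighting of (152).
import sympy as sp

phi, r1, r2, k1, k2, eps = sp.symbols('phi rho1 rho2 kappa1 kappa2 epsilon')
lhs = r1 * (1 + phi) / 2 * k1 * eps + r2 * (1 - phi) / 2 * k2 * eps
rhs = (r1 * k1 * eps / 2 + r2 * k2 * eps / 2) \
      + phi * (r1 * k1 * eps / 2 - r2 * k2 * eps / 2)
print(sp.simplify(lhs - rhs))  # 0
```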
_Chemical potential_. As with the other terms involving the free energy, we separate the two modeling choices specified in Section 4.1.
_Model I_. The chemical potentials take the form:
\[\mu_{1}^{\mathrm{I}} =\frac{\sigma_{1}}{\varepsilon}F^{\prime}(\phi)-\sigma_{1} \varepsilon\Delta\phi \tag{155a}\] \[\mu_{2}^{\mathrm{I}} =\;-\frac{\sigma_{2}}{\varepsilon}F^{\prime}(\phi)+\sigma_{2} \varepsilon\Delta\phi. \tag{155b}\]
In the case \(\sigma=\sigma_{1}=\sigma_{2}\) we arrive at:
\[\mu_{1}^{\mathrm{I}}=-\mu_{2}^{\mathrm{I}}=\bar{\mu}^{\mathrm{I}}=\frac{ \sigma}{\varepsilon}F^{\prime}(\phi)-\sigma\varepsilon\Delta\phi. \tag{156}\]
_Model II_. The associated chemical potentials take the form:
\[\mu_{1}^{\mathrm{II}} =\frac{1+\phi}{2}\rho_{1}\tau_{1}^{\mathrm{II}}+\rho_{1}\left(\frac{\kappa_{1}}{\varepsilon}F(\phi)-\frac{\kappa_{1}\varepsilon}{2}\|\nabla\phi\|^{2}\right), \tag{157a}\] \[\mu_{2}^{\mathrm{II}} =\frac{1-\phi}{2}\rho_{2}\tau_{2}^{\mathrm{II}}+\rho_{2}\left(\frac{\kappa_{2}}{\varepsilon}F(\phi)-\frac{\kappa_{2}\varepsilon}{2}\|\nabla\phi\|^{2}\right),\] (157b) \[\tau_{1}^{\mathrm{II}} =\frac{2\kappa_{1}}{\varepsilon}F^{\prime}(\phi)-2\kappa_{1}\varepsilon\Delta\phi,\] (157c) \[\tau_{2}^{\mathrm{II}} =\;-\frac{2\kappa_{2}}{\varepsilon}F^{\prime}(\phi)+2\kappa_{2}\varepsilon\Delta\phi. \tag{157d}\]
In the case \(\kappa=\kappa_{1}=\kappa_{2}\) we arrive at:
\[\mu_{1}^{\mathrm{II}} =\rho_{1}(1+\phi)\bar{\tau}^{\mathrm{II}}+\rho_{1}\left(\frac{ \kappa}{\varepsilon}F(\phi)-\frac{\kappa\varepsilon}{2}\|\nabla\phi\|^{2} \right), \tag{158a}\] \[\mu_{2}^{\mathrm{II}} =\;-\rho_{2}(1-\phi)\bar{\tau}^{\mathrm{II}}+\rho_{2}\left(\frac {\kappa}{\varepsilon}F(\phi)-\frac{\kappa\varepsilon}{2}\|\nabla\phi\|^{2} \right). \tag{158b}\]
The free energy contributions take the form:
\[\sum_{\alpha=1,2}\phi_{\alpha}\nabla\mu_{\alpha}^{\mathrm{I}}=\frac{\phi}{2} \nabla\left(\mu_{1}^{\mathrm{I}}-\mu_{2}^{\mathrm{I}}\right)+\frac{1}{2} \nabla\left(\mu_{1}^{\mathrm{I}}+\mu_{2}^{\mathrm{I}}\right). \tag{159}\]
**Lemma 5.7** (Reduction free energy contribution).: _In case of equal parameters \(\sigma=\sigma_{1}=\sigma_{2}\) (model I), and \(\kappa=\kappa_{1}=\kappa_{2}\) (model II), the surface tension contributions reduce to:_
\[\sum_{\alpha=1,2}\phi_{\alpha}\nabla\mu_{\alpha}^{\mathrm{I}}= \;\phi\nabla\bar{\mu}^{\mathrm{I}}, \tag{160a}\] \[\sum_{\alpha=1,2}\phi_{\alpha}\nabla\mu_{\alpha}^{\mathrm{II}}= \;\phi\nabla\left(\rho\bar{v}^{\mathrm{II}}+\bar{\psi}^{\mathrm{ II}}\frac{\rho_{1}-\rho_{2}}{2}\right)+\mathbf{c},\] (160b) \[\mathbf{c}= \;\nabla\left((\tilde{\rho}_{1}-\tilde{\rho}_{2})\bar{\tau}^{ \mathrm{II}}+\frac{\rho_{1}+\rho_{2}}{2}\left(\frac{\kappa}{\varepsilon}F(\phi )-\frac{\kappa\varepsilon}{2}\|\nabla\phi\|^{2}\right)\right). \tag{160c}\]
Proof.: This is a straightforward consequence of the variable transformation (114) and the form of the chemical potentials (155) and (157).
Theorem 5.7 conveys that for free energy model I the surface tension contribution coincides with that of the NSCH model. On the other hand, for model II it does not match with the NSCH model due to the presence of \(\mathbf{c}\) in (160b) (which is in general not zero).
_Mass transfer._ On account of the balance (27a), we introduce a single mass transfer quantity \(\hat{\gamma}\) that is related to the constituent mass transfer quantities via:
\[\hat{\gamma}=\hat{\gamma}_{1}-\hat{\gamma}_{2},\qquad\hat{\gamma}_{1}=\frac{1 }{2}\hat{\gamma},\qquad\hat{\gamma}_{2}=-\frac{1}{2}\hat{\gamma}. \tag{161}\]
We distinguish the two free energy models specified in Section 4.1.
_Model I._ Substitution of the order parameter into (91) provides:
\[\hat{\gamma}^{\rm I}= \;-\hat{m}\left(\left(\frac{\sigma_{1}}{\rho_{1}\varepsilon}+ \frac{\sigma_{2}}{\rho_{2}\varepsilon}\right)F^{\prime}(\phi)-\left(\frac{ \sigma_{1}\varepsilon}{\rho_{1}}+\frac{\sigma_{2}\varepsilon}{\rho_{2}} \right)\Delta\phi\right. \tag{162}\] \[\qquad\qquad\left.+\left(\frac{1}{\rho_{1}}-\frac{1}{\rho_{2}} \right)p\right),\]
where \(\hat{m}=2\hat{m}_{1}=2\hat{m}_{2}\). In the scenario \(\sigma=\sigma_{1}=\sigma_{2}\) the mass transfer reduces to the NSCH mass transfer:
\[\hat{\gamma}^{\rm I}=\bar{\gamma}^{\rm I}=-\bar{m}\left(\bar{\mu}^{\rm I}+ \omega p\right), \tag{163}\]
with \(\bar{m}=\hat{m}(\rho_{1}^{-1}+\rho_{2}^{-1})\).
_Model II._ Substitution of the order parameter into (94) provides:
\[\hat{\gamma}^{\rm II}= \;-\hat{m}\left(\left(\frac{\kappa_{1}}{\varepsilon}+\frac{ \kappa_{2}}{\varepsilon}\right)F^{\prime}(\phi)+\left(\frac{\kappa_{1}}{ \varepsilon}-\frac{\kappa_{2}}{\varepsilon}\right)\phi F^{\prime}(\phi)\right. \tag{164}\] \[\qquad\qquad-\left(\kappa_{1}\varepsilon+\kappa_{2}\varepsilon \right)\Delta\phi-\left(\kappa_{1}\varepsilon-\kappa_{2}\varepsilon\right) \phi\Delta\phi\] \[\qquad\qquad+\left(\frac{\kappa_{1}}{\varepsilon}-\frac{\kappa_{ 2}}{\varepsilon}\right)F(\phi)-\left(\frac{\kappa_{1}\varepsilon}{2}-\frac{ \kappa_{2}\varepsilon}{2}\right)\|\nabla\phi\|^{2}\] \[\qquad\qquad\left.+\left(\frac{1}{\rho_{1}}-\frac{1}{\rho_{2}} \right)p\right),\]
where \(\hat{m}=2\hat{m}_{1}=2\hat{m}_{2}\). In the scenario \(\kappa=\kappa_{1}=\kappa_{2}\) the mass flux reduces to:
\[\hat{\gamma}^{\rm II}= \;-\check{m}\left(\frac{2\rho_{1}\rho_{2}}{\rho_{1}+\rho_{2}} \bar{\tau}^{\rm II}+\omega p\right), \tag{165}\]
with \(\check{m}=m(\rho_{1}+\rho_{2})/(\rho_{1}\rho_{2})\). In general this does not match the NSCH mass transfer. However, in the density-matched case \(\rho_{1}=\rho_{2}=\rho\) it reduces to the NSCH mass transfer: \(\hat{\gamma}^{\mathrm{II}}=\bar{\gamma}^{\mathrm{II}}\).
_Momentum transfer_. Based on the balance (27b), we introduce the momentum transfer \(\hat{\mathbf{\pi}}\) related to the constituent momentum transfer quantities via:
\[\hat{\mathbf{\pi}}=\hat{\mathbf{\pi}}_{1}-\hat{\mathbf{\pi}}_{2},\qquad\hat{\mathbf{\pi}}_{1}= \frac{1}{2}\hat{\mathbf{\pi}},\qquad\hat{\mathbf{\pi}}_{2}=-\frac{1}{2}\hat{\mathbf{\pi}}. \tag{166}\]
Inserting the order parameter and denoting \(D=D_{12}=D_{21}\), we obtain:
\[\hat{\mathbf{\pi}}=p\nabla\phi-\frac{\rho p}{2D\rho_{1}\rho_{2}}\mathbf{J}+\frac{1}{2}\hat{\gamma}\mathbf{v}+\frac{\hat{\gamma}}{2}\left(\frac{1}{\rho_{1}(1+\phi)}-\frac{1}{\rho_{2}(1-\phi)}\right)\mathbf{J}, \tag{167}\]
where the last member vanishes when \(\phi=\pm 1\).
_Viscous stress tensor_. Invoking the variable transformation (136), the superposition of the viscous components of the stress tensors admits the form:
\[\sum_{\alpha=1,2}\tilde{\nu}_{\alpha}\left(2\mathbf{D}_{\alpha}+\lambda_{\alpha}(\mathrm{div}\mathbf{v}_{\alpha})\mathbf{I}\right)=\nu\left(2\mathbf{D}+\lambda(\mathrm{div}\mathbf{v})\mathbf{I}\right)+\hat{\nu}\left(2\mathbf{A}+\lambda\left(\mathrm{div}\mathbf{J}\right)\mathbf{I}\right)+\tilde{\nu}\left(2\mathbf{B}+\lambda\left(\mathbf{J}\cdot\nabla\phi\right)\mathbf{I}\right), \tag{168}\]
where we have introduced the viscosity quantities:
\[\nu=\nu_{1}\frac{1+\phi}{2}+\nu_{2}\frac{1-\phi}{2}, \tag{169a}\] \[\hat{\nu}=\frac{\nu_{1}}{2\rho_{1}}-\frac{\nu_{2}}{2\rho_{2}},\] (169b) \[\tilde{\nu}=-\frac{\nu_{1}}{2\rho_{1}(1+\phi)}+\frac{\nu_{2}}{2\rho_{2}(1-\phi)}, \tag{169c}\]
the symmetric tensors:
\[\mathbf{D}=\frac{1}{2}\left(\nabla\mathbf{v}+(\nabla\mathbf{v})^{T}\right), \tag{170a}\] \[\mathbf{A}=\frac{1}{2}\left(\nabla\mathbf{J}+(\nabla\mathbf{J})^{T}\right),\] (170b) \[\mathbf{B}=\frac{1}{2}\left(\mathbf{J}\otimes\nabla\phi+\nabla\phi\otimes\mathbf{J}\right), \tag{170c}\]
and we have set \(\lambda=\lambda_{1}=\lambda_{2}\). In establishing the above form we have made use of the identities:
\[\nabla\mathbf{v}_{1}=\nabla\mathbf{v}+\frac{1}{\rho_{1}(1+\phi)}\nabla\mathbf{J}-\frac{1}{\rho_{1}(1+\phi)^{2}}\mathbf{J}\otimes\nabla\phi, \tag{171a}\] \[\nabla\mathbf{v}_{2}=\nabla\mathbf{v}-\frac{1}{\rho_{2}(1-\phi)}\nabla\mathbf{J}+\frac{1}{\rho_{2}(1-\phi)^{2}}\mathbf{J}\otimes\nabla\phi. \tag{171b}\]
Each of the three members of the viscous stress tensor (168) appears in the classical form of a symmetric tensor plus \(\lambda\mathbf{I}\) times its trace. The form (168) conveys that the mixture viscous stress is composed of a contribution solely associated with the mixture velocity \(\mathbf{v}\), and a part in terms of the diffusive flux \(\mathbf{J}\). The first contribution is precisely the viscous stress tensor in the Navier-Stokes Cahn-Hilliard model. In contrast, the second contribution represents diffusion with respect to the peculiar velocity. This contribution is absent in the Navier-Stokes Cahn-Hilliard model.
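Identity (171a), which underlies this decomposition, can be checked symbolically. Below is a minimal sympy sketch in two space dimensions, with \(\rho_{1}\) constant and the dyadic convention \((\mathbf{J}\otimes\nabla\phi)_{ij}=J_{i}\,\partial_{j}\phi\):

```python
# Symbolic check of (171a): grad(v1) for v1 = v + J/(rho1*(1+phi)), cf. (136a).
import sympy as sp

x, y, rho1 = sp.symbols('x y rho1', positive=True)
coords = sp.Matrix([x, y])
phi = sp.Function('phi')(x, y)
v = sp.Matrix([sp.Function('vx')(x, y), sp.Function('vy')(x, y)])
J = sp.Matrix([sp.Function('Jx')(x, y), sp.Function('Jy')(x, y)])

grad_phi = sp.Matrix([[sp.diff(phi, c) for c in coords]])   # row vector

v1 = v + J / (rho1 * (1 + phi))
lhs = v1.jacobian(coords)                    # (grad v1)_{ij} = d_j (v1)_i
rhs = v.jacobian(coords) + J.jacobian(coords) / (rho1 * (1 + phi)) \
      - (J * grad_phi) / (rho1 * (1 + phi)**2)   # J (outer) grad(phi)
print(sp.simplify(lhs - rhs))                # zero 2x2 matrix
```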
_Peculiar velocity stress component_. With the aim of expressing the peculiar velocity component of the stress in mixture variables, we introduce the following lemma.
**Lemma 5.8** (Symmetry dyadic product peculiar velocity).: _The peculiar velocity dyadic product is symmetric:_
\[\mathbf{w}_{1}\otimes\mathbf{w}_{2}=\mathbf{w}_{2}\otimes\mathbf{w}_{1}. \tag{172}\]
Proof.: This follows from the sequences of identities:
\[\mathbf{w}_{1}\otimes\mathbf{w}_{2} =\,(\mathbf{v}_{1}-\mathbf{v})\otimes(\mathbf{v}_{2}-\mathbf{v})\] \[=\,\mathbf{v}_{1}\otimes\mathbf{v}_{2}-\frac{1}{\rho}\mathbf{v}_ {1}\otimes(\tilde{\rho}_{1}\mathbf{v}_{1}+\tilde{\rho}_{2}\mathbf{v}_{2})- \frac{1}{\rho}\left(\tilde{\rho}_{1}\mathbf{v}_{1}+\tilde{\rho}_{2}\mathbf{v} _{2}\right)\otimes\mathbf{v}_{2}+\mathbf{v}\otimes\mathbf{v}\] \[=\,-\frac{\tilde{\rho}_{1}}{\rho}\mathbf{v}_{1}\otimes\mathbf{v} _{1}-\frac{\tilde{\rho}_{2}}{\rho}\mathbf{v}_{2}\otimes\mathbf{v}_{2}+\mathbf{ v}\otimes\mathbf{v}. \tag{173}\]
We may now write the peculiar velocity component in mixture quantities.
**Lemma 5.9** (Peculiar velocity component stress).: _The peculiar velocity component of the stress takes the form:_
\[\sum_{\alpha=1,2}\tilde{\rho}_{\alpha}\mathbf{w}_{\alpha}\otimes\mathbf{w}_{ \alpha}=\frac{\rho\mathbf{J}\otimes\mathbf{J}}{2\rho_{1}\rho_{2}(1-\phi^{2})}. \tag{174}\]
Proof.: The proof goes similar as that of Theorem 5.6 and relies on Theorem 5.8.
This contribution represents the inertia of the diffusive flux. It is not present in the NSCH model.
### Connection of the complete models
We start with the mass balance laws. The mixture mass balance law
\[\partial_{t}\rho+\operatorname{div}\left(\rho\mathbf{v}\right)=0, \tag{175}\]
as presented in (25a), is identical in the mixture model (88) and the NSCH models (120) and (122). Next, the phase equation formulated in mixture quantities follows from (88a):
\[\partial_{t}\phi+\mathrm{div}(\phi\mathbf{v})+\mathrm{div}\,\mathbf{h}-\zeta\hat{\gamma}=0, \tag{176}\]
where we have introduced the diffusive flux quantity:
\[\mathbf{h}=\phi_{1}\mathbf{w}_{1}-\phi_{2}\mathbf{w}_{2}. \tag{177}\]
This equation is _not_ of Cahn-Hilliard type. The phase equation (176) does not contain a chemical potential or pressure variable. This sets it apart from its NSCH counterpart, in which the diffusive flux \(\mathbf{h}\) is replaced by the constitutive model:
\[\bar{\mathbf{h}}^{\mathrm{I}}= \ -\bar{\mathbf{M}}\nabla(\bar{\mu}+\omega p),\] (Model I) (178a) \[\bar{\mathbf{h}}^{\mathrm{II}}= \ -\bar{\mathbf{M}}\nabla\left(\rho\bar{v}+\bar{\psi}\frac{\rho_{1}- \rho_{2}}{2}+\omega p\right).\] (Model II) (178b)
The diffusive flux (177) and the constitutive model (178) both vanish in equilibrium. On the other hand, the mass transfer term of the mixture model and the NSCH model is of similar type. In the scenario of model I with equal modeling parameters (\(\sigma_{1}=\sigma_{2}\)) it coincides with the NSCH mass transfer (see Section 5.2).
**Remark 5.10** (Diffusive fluxes).: _The diffusive fluxes \(\mathbf{J}\) and \(\mathbf{h}\) constitute a single unknown in the system, since they are related as \(\mathbf{J}=2\rho_{1}\rho_{2}\mathbf{h}/(\rho_{1}+\rho_{2})\). For a proof we refer to [11]._
Next, we focus on the mixture momentum equation which follows from the superposition of the constituent momentum balance equations (88b):
\[\partial_{t}\mathbf{m}+\mathrm{div}\left(\mathbf{m}\otimes\mathbf{v}\right)+\nabla p-\mathrm{div}\left(\nu\left(2\mathbf{D}+\lambda(\mathrm{div}\mathbf{v})\mathbf{I}\right)\right)-\rho\mathbf{b}\] \[+\frac{\phi}{2}\nabla\left(\mu_{1}^{\mathrm{I}}-\mu_{2}^{\mathrm{I}}\right)+\frac{1}{2}\nabla\left(\mu_{1}^{\mathrm{I}}+\mu_{2}^{\mathrm{I}}\right)\] \[-\mathrm{div}\left(\hat{\nu}\left(2\mathbf{A}+\lambda\left(\mathrm{div}\mathbf{J}\right)\mathbf{I}\right)+\tilde{\nu}\left(2\mathbf{B}+\lambda\left(\mathbf{J}\cdot\nabla\phi\right)\mathbf{I}\right)\right)\] \[+\mathrm{div}\left(\frac{\rho\mathbf{J}\otimes\mathbf{J}}{2\rho_{1}\rho_{2}(1-\phi^{2})}\right)=\ 0, \tag{179}\]
where we have substituted the expressions for the viscous and peculiar-velocity contributions. The first line matches with the NSCH model. The second line consists of free energy terms. In case of equal modeling parameters, it reduces for model I to the free energy contribution in the NSCH model. This does not apply to the second model. The members of the last two lines are absent in the NSCH linear momentum equation. These terms are all linked to the diffusive flux. The diffusive flux in the mixture model is described by an evolution equation, whereas in the NSCH model it is determined by the constitutive model (178). This is related to the usage of the energy-dissipation modeling restriction in the NSCH model, instead of the second law of thermodynamics adopted for the mixture model, and it precludes the need for a constitutive model for the momentum transfer. The system described by
the mixture mass balance (175), the phase equation (176), the linear momentum equation (179), augmented with the evolution equation of the diffusive flux (see [11]) is equivalent to the mixture model (88) (for the diffuse-interface models of Section 4).
The mixture model and the NSCH model share the same one-dimensional equilibrium profile:
\[\phi=\phi^{\text{eq}}(\xi)=\tanh\left(\frac{\pm\xi}{\varepsilon\sqrt{2}}\right). \tag{180}\]
We consider the surface tension coefficient and define for both models:
\[\hat{\Theta}:=\hat{\Theta}_{1}+\hat{\Theta}_{2}. \tag{181}\]
This results in:
\[\hat{\Theta}^{\text{I}} =(\sigma_{1}+\sigma_{2})\frac{\sqrt{2}}{3}, \tag{182a}\] \[\hat{\Theta}^{\text{II}} =(\rho_{1}\kappa_{1}+\rho_{2}\kappa_{2})\frac{\sqrt{2}}{3}. \tag{182b}\]
For equal parameters \(\sigma_{1}=\sigma_{2}=\sigma\) and \(\kappa_{1}=\kappa_{2}=\kappa\) these integrals match with the NSCH surface tension coefficients:
\[\hat{\Theta}^{\text{I}} =\bar{\Theta}^{\text{I}}=\sigma\frac{2\sqrt{2}}{3}, \tag{183a}\] \[\hat{\Theta}^{\text{II}} =\bar{\Theta}^{\text{II}}=(\rho_{1}+\rho_{2})\kappa\frac{\sqrt{2} }{3}. \tag{183b}\]
Lastly, we summarize the comparison of the mixture model and the NSCH model in Table 1.
## 6 Conclusion
In this paper, we presented a thermodynamically consistent diffuse-interface incompressible mixture model. Starting from the continuum theory of mixtures we derived a constitutive modeling restriction that is compatible with the second law of thermodynamics. Subsequently, we selected constitutive models that satisfy this modeling restriction. To close the mixture model, we presented two diffuse-interface models, each associated with a particular Helmholtz free energy. Finally, we studied in detail the connection with the Navier-Stokes Cahn-Hilliard model (see Table 1 for an overview).
While the diffuse-interface mixture models we have set out are helpful in the study of the evolution of incompressible mixtures, we certainly do not claim that these are sufficient. We outline two main avenues of potential future research. The first avenue is the rigorous mathematical analysis of the models, and the study of the sharp-interface asymptotics. This sharp-interface analysis is of a different type than that of the Navier-Stokes Cahn-Hilliard model. Indeed, the proposed mixture models are not of Cahn-Hilliard type and do not contain a mobility parameter. The second avenue is the development of suitable numerical algorithms to assess the behavior of solutions of the mixture model. In particular, it is worthwhile to compare numerical solutions of the mixture model with those of the Navier-Stokes Cahn-Hilliard model.
## Acknowledgments
MtE acknowledges support from the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) via the Walter Benjamin project EI 1210/1-1. The research by KvdZ was supported by the Engineering and Physical Sciences Research Council (EPSRC), UK, under Grants EP/T005157/1 and EP/W010011/1. DS gratefully acknowledges support from the German Research Foundation (Deutsche Forschungsgemeinschaft DFG) via the Emmy Noether Award SCH 1249/2-1.
|
2308.13284 | Integrability of a Family of Lotka--Volterra Three Species Biological
System | The aim of this study is to analyze the integrability problem of the
Lotka--Volterra three species biological system. The system considered in
this work is a biologically plausible or chemical model. The system has
complex dynamical behavior because it is chaotic. We first show that
the system is completely integrable when two of the involved parameters in the
system are zero. Second, through invariant algebraic surfaces and exponential
factors, the nonintegrability problems are investigated. In particular, we
show the non-existence of polynomial, rational, formal series, and Darboux
first integrals when the parameters are strictly positive. | Aween Karim, Azad Amen, Waleed Aziz | 2023-08-25T10:11:01Z | http://arxiv.org/abs/2308.13284v1 | # Integrability of a family of Lotka-Volterra three species biological system
###### Abstract.
The aim of this study is to analyze the integrability problem of the Lotka-Volterra three species biological system. The system considered in this work is a biologically plausible or chemical model. The system has complex dynamical behavior because it is chaotic. We first show that the system is completely integrable when two of the involved parameters in the system are zero. Second, through invariant algebraic surfaces and exponential factors, the nonintegrability problems are investigated. In particular, we show the non-existence of polynomial, rational, formal series, and Darboux first integrals when the parameters are strictly positive.
Key words and phrases: Integrability, Invariant algebraic surface, Darboux first integral, Formal first integral
## 1. Introduction
During the last fifty years there has been increasing interest in studying autonomous differential systems, mainly due to their many applications in natural science. The conservation of ecological and biological systems is a primary concern for scientists and researchers, and controlling and analyzing the complex dynamical behavior of these systems is a challenge. Predation and competition are the most common interactions between species, and nonlinear polynomials are typically involved in representing such interactions. The Lotka-Volterra system is one of the most prominent among existing models of this kind. The authors in [1] demonstrated that three species can possess chaotic behavior in an ecosystem. Samardzija and Greller [15] derived a three species Lotka-Volterra biological system, which can be described by a three-dimensional system of differential equations
\[\begin{split}\dot{x}&=x(1-y+cx-axz),\\ \dot{y}&=y(-1+x),\\ \dot{z}&=z(-b+ax^{2}).\end{split} \tag{1}\]
In (1), \(x\) is the prey population and \(y\), \(z\) are predator populations; by the biological meaning of the system, the parameters are assumed to be non-negative real numbers. For the parameter values \(a=2.9851,b=3\), and \(c=2\), the authors in [15] showed that the three-species biological system (1) is chaotic and has very complicated dynamical behavior. In 1999, Costello [13] studied chaos synchronization in the integer-order system (1) for certain parameter values. Elsadany et al. [3] studied the dynamical behaviors of the system (1). They investigated the boundedness, existence, and uniqueness of the solutions of the system (1) and determined the stability and bifurcation of its equilibrium points.
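The chaotic regime reported in [15] is straightforward to reproduce numerically. The following is a minimal sketch; the initial condition is an illustrative choice, not taken from [15]:

```python
# Numerical integration of system (1) in the chaotic regime of [15]:
# a = 2.9851, b = 3, c = 2. The initial condition is illustrative.
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 2.9851, 3.0, 2.0

def rhs(t, u):
    x, y, z = u
    return [x * (1 - y + c * x - a * x * z),
            y * (-1 + x),
            z * (-b + a * x**2)]

sol = solve_ivp(rhs, (0.0, 100.0), [0.5, 1.0, 0.5], rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])   # state at t = 100; the trajectory is bounded but irregular
```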
One of the main difficulties in studying these differential systems consists of establishing the existence or nonexistence of first integrals and complete integrability. Despite these earlier studies, the integrability of the system (1) has not yet been investigated. The study of integrability, i.e., the existence of first integrals, is a main problem in the theory of differential equations. The existence of first integrals of a system is crucial in understanding its dynamics. Therefore, it is important to understand whether a system has first integrals and whether they are analytic, smooth, etc. We study the existence of first integrals of system (1) that are described either by a formal power series (see [14], chapter one), by a rational function, or by a Darboux function using the Darboux theory of integrability (see [7], [5] and [19]). Therefore, we are interested in studying the integrability of system (1) completely and showing that it is non-integrable for most values of the parameters \(a,b\) and \(c\).
First, we look at formal first integrals, as they are a classical tool in studying differential equations. They have also been used to describe solutions around singularities [2] and to establish the existence of first integrals given by formal series [17]; see also Moussu [16]. The most notable success in using formal series to study differential equations has been achieved by Ecalle [8], who used them to prove Dulac's conjecture. Mattei and Moussu [12] proved that formal integrability implies analytic integrability in dimension two.
The associated vector field of system (1) is
\[\mathcal{X}=x(1-y+cx-axz)\frac{\partial}{\partial x}+y(-1+x)\frac{\partial}{ \partial y}+z(-b+ax^{2})\frac{\partial}{\partial z}. \tag{2}\]
Let \(U\subseteq\mathbb{C}^{3}\) be an open set. A non-constant function \(H:U\rightarrow\mathbb{C}\) is a first integral of the polynomial vector field \(\mathcal{X}\) on \(U\) if it stays constant along all solution curves \((x(t),y(t),z(t))\) of (1). Clearly \(H\) is a first integral of \(\mathcal{X}\) on \(U\) if and only if \(\mathcal{X}H=0\) on \(U\). When \(U=\mathbb{C}^{3}\), the first integral \(H\) is called a global first integral. When a first integral \(H\) is a rational (polynomial or analytic) function, we say that \(H\) is a rational (polynomial or analytic) first integral. When \(H\) is a formal power series in the variables \(x,y\) and \(z\), we say that \(H\) is a formal first integral. Finally, a first integral is of Darboux type if it is of the form
\[f_{1}^{\lambda_{1}}\ldots f_{p}^{\lambda_{p}}E_{1}^{\mu_{1}}\ldots E_{q}^{ \mu_{q}}, \tag{3}\]
where \(f_{1},\ldots,f_{p}\) are Darboux polynomials and \(E_{1},\ldots,E_{q}\) are exponential factors (see Section 2 for definitions), and \(\lambda_{i},\mu_{j}\in\mathbb{C}\) for \(i=1,\ldots,p\), and \(j=1,\ldots,q.\) The functions of the form (3) are called Darboux functions, and they are the base of the Darboux theory of integrability. The Darboux theory of integrability is an algebraic theory of integrability which is based on the existence of an adequate number of invariant algebraic surfaces and exponential factors associated to polynomial differential systems. In fact, to every Darboux polynomial there is associated some invariant algebraic surface, and the exponential factors appear when an invariant algebraic surface has multiplicity larger than \(1\); see [19, 6, 11] for more details. Several well-known findings on the Darboux theory of integrability and analytical first integrals can be found in [19, 18, 4]. The following is our first main result.
**Theorem 1.1**.: _The unique irreducible Darboux polynomials of system (1) with non-zero cofactors are \(x\) and \(y\) and \(z\)._
This result is the basis of the Darboux theory of integrability, and its proof is given in Section 4.
**Theorem 1.2**.: _The following statements hold for system (1)._
1. _The unique exponential factors of system (_1_) are_ \(\exp(x+z)\) _and_ \(\exp(y)\) _with cofactors_ \(cx^{2}-xy-bz+x\) _and_ \(y(x-1)\)_, respectively, if_ \(a>0\) _and_ \(c>0\)_; if_ \(a>0\) _and_ \(c=0\)_, an extra exponential factor_ \(\exp((x+y+z)^{2})\) _appears with cofactor_ \(-2(x+y+z)(bz-x-y)\)_._
2. _If_ \(a=0,b>0\) _and_ \(c>0\)_, then system (_1_) has an exponential factor_ \(\exp(z)\) _with cofactor_ \(-bz\)_; if_ \(c=0\)_, an extra exponential factor_ \(\exp(x+y)\) _can be derived with cofactor_ \(x-y\)_. Moreover, if_ \(a=b=0\) _and_ \(c>0\)_, then it has no exponential factors._
The proof of Theorem 1.2 is given in Section 5. The next two results state the existence of Darboux first integrals for a set of values of the parameters and the absence of formal first integrals for system (1).
**Theorem 1.3**.: _If \(a=c=0\), then for system (1) the following statements hold._
1. _If_ \(b=0,\) _then it is Darboux integrable with the first integrals_ \[H_{1}=xy\exp(-x-y)\quad\text{and}\quad H_{2}=z.\]
2. _If_ \(b>0,\) _then it is integrable with the following first integrals_ \[H_{1}=xy\exp(-x-y)\quad\text{and}\quad H_{2}=zh(x,y),\] _where_ \(h(x,y)=\exp\left(\int^{x}\frac{b}{s\left(\mathrm{LambertW}\left(-\frac{xy\,e^{-x-y}}{s}\right)+1\right)}\,ds\right),\) _and_ \(\mathrm{LambertW}\) _denotes the principal branch of the Lambert W function._
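Statement (i) of Theorem 1.3 can be checked directly with a computer algebra system. A minimal sympy sketch, separate from the proof given in Section 6:

```python
# Check Theorem 1.3(i): for a = b = c = 0, the vector field of system (1)
# annihilates H1 = x*y*exp(-x-y) and H2 = z.
import sympy as sp

x, y, z = sp.symbols('x y z')
a = b = c = 0
P = x * (1 - y + c * x - a * x * z)
Q = y * (-1 + x)
R = z * (-b + a * x**2)
X = lambda H: P * sp.diff(H, x) + Q * sp.diff(H, y) + R * sp.diff(H, z)

H1 = x * y * sp.exp(-x - y)
H2 = z
print(sp.simplify(X(H1)), sp.simplify(X(H2)))   # 0 0
```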
**Theorem 1.4**.: _System (1) with \(a>0\) and \(c>0\) has no first integrals of Darboux type._
**Theorem 1.5**.: _If \(a>0\) and \(c>0\), then system (1) has no rational first integrals._
Darboux first integrals are not necessarily formal series first integrals, and vice versa. This demonstrates the independence of these topics.
**Theorem 1.6**.: _When \(a>0\) and \(c>0\), system (1) does not admit any formal first integral._
To prove Theorem 1.6, when \(b>0,\) we rewrite system (1) as a four-dimensional system in the variables \(x,y,z,\) and \(b\):
\[\dot{x}=x(1-y+cx-axz),\quad\dot{y}=y(-1+x),\quad\dot{z}=z(-b+ax^{2}),\quad \dot{b}=0. \tag{4}\]
**Theorem 1.7**.: _Suppose that \(a>0,\)\(c>0,\) and \(b>0.\) If \(f=f(x,y,z,b)\) is a formal first integral of system (4), then \(f\) is an arbitrary formal power series in the variable \(b.\)_
From Theorem 1.6, the following result for system (1) is obtained directly.
**Corollary 1**.: _Suppose that \(a>0,\) and \(c>0.\) Then, the system (1) has no global analytic first integrals, no polynomial first integrals and no local analytic first integrals at the origin._
The proofs of Theorems 1.3, 1.6, 1.7, 1.4 and 1.5 are given in Section 6.
Since the Darboux theory of integrability of a polynomial differential system is based on the existence of Darboux polynomials and their multiplicity, the study of the existence or non-existence of Darboux first integrals requires seeking the Darboux polynomials. We shall recall the main concepts of the Darboux theory of integrability.
## 2. Preliminaries
A polynomial \(f(x,y,z)\in\mathbb{C}[x,y,z]\) satisfying the equation
\[x(1-y+cx-axz)\frac{\partial f}{\partial x}+y(-1+x)\frac{\partial f}{\partial y }+z(-b+ax^{2})\frac{\partial f}{\partial z}=Kf, \tag{5}\]
is called a Darboux polynomial of system (1), where \(K=K(x,y,z)\in\mathbb{C}[x,y,z]\) is a cofactor of \(f(x,y,z)\) of degree at most two. Therefore, without loss of generality, the cofactor can be written in the form
\[K=\alpha_{0}+\alpha_{1}x+\alpha_{2}y+\alpha_{3}z+\alpha_{4}x^{2}+\alpha_{5}xy +\alpha_{6}xz+\alpha_{7}y^{2}+\alpha_{8}yz+\alpha_{9}z^{2}, \tag{6}\]
where \(\alpha_{i}\in\mathbb{C}\) for \(i=0,\ldots,9\). If \(f(x,y,z)\) is a Darboux polynomial, then the surface \(f(x,y,z)=0\) is an invariant algebraic surface.
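For illustration, the coordinate functions are Darboux polynomials of system (1); a short sympy sketch computes their cofactors from (5):

```python
# Cofactors of the Darboux polynomials x, y, z of system (1), read off from (5):
# for f in {x, y, z} the quotient (X f)/f is a polynomial of degree <= 2.
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c')
P = x * (1 - y + c * x - a * x * z)
Q = y * (-1 + x)
R = z * (-b + a * x**2)

for f in (x, y, z):
    K = sp.cancel((P * sp.diff(f, x) + Q * sp.diff(f, y) + R * sp.diff(f, z)) / f)
    print(f, '->', sp.expand(K))
# x -> -a*x*z + c*x - y + 1,  y -> x - 1,  z -> a*x**2 - b
```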
We recall the following result in [6].
**Lemma 2.1**.: _Let \(f\) be a polynomial and \(f=\prod_{j=1}^{s}f_{j}^{\alpha_{j}}\) be its decomposition into irreducible factors in \(\mathbb{C}[x,y,z].\) Then \(f\) is a Darboux polynomial of system (1) if and only if all the \(f_{j}\) are Darboux polynomials of system (1). Moreover if \(K\) and \(K_{j}\) are the cofactors of \(f\) and \(f_{j},\) then \(K=\sum_{j=1}^{s}\alpha_{j}K_{j}.\)_
An exponential factor \(E\) of system (1) is a function of the form \(E=\exp(\frac{h}{f})\) where \(h,f\in\mathbb{C}[x,y,z]\) satisfying \((h,f)=1\) and
\[\mathcal{X}E=LE, \tag{7}\]
for some polynomial \(L=L(x,y,z)\) of degree at most two. Such a polynomial is called the cofactor of \(E.\)
**Proposition 1**.: _[10, 9]. The following statements hold._
1. _If_ \(\exp(\frac{g}{h})\) _is an exponential factor for the polynomial differential system (_1_) and_ \(h\) _is not a constant polynomial, then_ \(h=0\) _is an invariant algebraic surface._
2. _Eventually_ \(\exp(g)\) _can be an exponential factor, coming from the multiplicity of the invariant plane at infinity._
The following is a well-known result of the Darboux theory of integrability; see for instance Chapter 3 of [19].
**Theorem 2.2** (Darboux Theory of Integrability).: _Suppose that a polynomial vector field \(\mathcal{X}\) defined in \(\mathbb{R}^{n}\) of degree \(m\) admits \(p\) Darboux polynomials \(f_{i}\) with cofactors \(K_{i}\) for \(i=1,...,p,\) and \(q\) exponential factors \(E_{j}=\exp(g_{j}/h_{j})\) with cofactors \(L_{j}\) for \(j=1,...,q.\) If there exist \(\lambda_{i},\mu_{j}\in\mathbb{C}\) not all zero such that_
\[\sum_{i=1}^{p}\lambda_{i}K_{i}+\sum_{j=1}^{q}\mu_{j}L_{j}=0,\]
_then the (multi-valued) function \(f_{1}^{\lambda_{1}}\cdots f_{p}^{\lambda_{p}}E_{1}^{\mu_{1}}\cdots E_{q}^{\mu_{q}}\) is a first integral of \(\mathcal{X}\)._
**Proof.** It can be easily verified that \(x\) and \(z\) are Darboux polynomials with respective cofactors \(1+cx-axz\) and \(-b+ax^{2}\). First, suppose \(a>0\); we show that there is no other Darboux polynomial. Suppose that \(f\) is an irreducible Darboux polynomial of system (10) of degree greater than or equal to 2 with a non-zero cofactor \(k_{1}=\beta_{0}+\beta_{1}x+\beta_{2}z+\beta_{3}x^{2}+\beta_{4}xz+\beta_{5}z^{2}\). Then \(f\) must satisfy
\[x(1+cx-axz)\frac{\partial f}{\partial x}+z(-b+ax^{2})\frac{\partial f}{\partial z }=k_{1}f. \tag{11}\]
By restricting (11) to the invariant plane \(x=0\) and denoting \(f|_{x=0}\) by \(\bar{g}\), we obtain
\[-bz\frac{d\bar{g}}{dz}=(\beta_{0}+\beta_{2}z+\beta_{5}z^{2})\bar{g}. \tag{12}\]
If \(b=0\), we see that \((\beta_{0}+\beta_{2}z+\beta_{5}z^{2})\bar{g}=0\) which implies \(\beta_{0}=\beta_{2}=\beta_{5}=0\). When \(b\neq 0\), the solution of (12) is
\[\bar{g}=\tilde{d_{1}}\,z^{-\frac{\beta_{0}}{b}}\exp\left(\frac{-z\,(\beta_{5}\,z+2\,\beta_{2})}{2b}\right),\quad\tilde{d_{1}}\in\mathbb{C}.\]
Given that \(\bar{g}\) is a polynomial, we must have \(\beta_{2}=\beta_{5}=0\) and \(\beta_{0}=-bm_{0}\), where \(m_{0}\in\mathbb{N}\cup\{0\}\). Hence \(k_{1}=-m_{0}b+\beta_{1}x+\beta_{3}x^{2}+\beta_{4}xz\) and \(f=\tilde{d_{1}}z^{m_{0}}+xh(x,z)\), where \(h(x,z)\) is a polynomial in the variables \(x\) and \(z\). It is obvious that \(\tilde{d_{1}}\neq 0\) because \(f\) is irreducible.
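The displayed solution of (12) can be reproduced with sympy's ODE solver; a minimal sketch:

```python
# Solve (12): -b*z*g'(z) = (beta0 + beta2*z + beta5*z**2)*g(z), with b != 0.
import sympy as sp

zv, b, b0, b2, b5 = sp.symbols('z b beta0 beta2 beta5', nonzero=True)
g = sp.Function('g')
ode = sp.Eq(-b * zv * g(zv).diff(zv), (b0 + b2 * zv + b5 * zv**2) * g(zv))
print(sp.dsolve(ode, g(zv)))
# Equivalent, up to the constant, to g(z) = d1 * z**(-beta0/b)
#   * exp(-z*(beta5*z + 2*beta2)/(2*b)), matching the text.
```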
Now, restricting (11) to the invariant plane \(z=0\) and denoting \(f|_{z=0}\) by \(\tilde{g}\), we obtain
\[x(1+cx)\frac{d\tilde{g}}{dx}=(-m_{0}b+\beta_{1}x+\beta_{3}x^{2})\tilde{g}. \tag{13}\]
Here we consider two distinct cases.
**Case 1**: Suppose that \(c>0\), then
\[\tilde{g}=\,\tilde{d_{2}}\,\left(1+cx\right)^{m_{0}\,b+\frac{\beta_{1}}{c}- \frac{\beta_{3}}{c^{2}}}x^{-m_{0}\,b}\mathrm{exp}(\frac{\beta_{3}\,x}{c}), \quad\tilde{d_{2}}\in\mathbb{C}.\]
Since \(\tilde{g}\) is a polynomial, there must be \(\beta_{3}=0\) and \(\beta_{1}=c(m_{1}-m_{0}b)\), with \(m_{1}\in\mathbb{N}\cup\{0\}\). Therefore,
\[k_{1}=-m_{0}b+c(m_{1}-m_{0}b)x+\beta_{4}xz, \tag{14}\]
and \(f=\tilde{d_{2}}\,\left(cx+1\right)^{m_{1}}x^{-m_{0}\,b}+zT(x,z)\), where \(T(x,z)\) is a polynomial, and since \(f\) is irreducible, we must have \(\tilde{d_{2}}\neq 0\).
We write \(f=\sum\limits_{j=0}^{n}f_{j}(x,z)\), where each \(f_{j}=f_{j}(x,z)\) denotes a homogeneous polynomial of degree \(j\) in \(x\) and \(z\). Obviously, \(f_{n}\neq 0\). We can deduce from (11) and (14) that the terms of degree \(n+2\) satisfy
\[-ax^{2}z\frac{\partial f_{n}}{\partial x}+ax^{2}z\frac{\partial f_{n}}{ \partial z}=\beta_{4}xzf_{n}.\]
Solving this linear partial differential equation, we obtain \(f_{n}=G_{n}(x+z)x^{-\frac{\beta_{4}}{a}}\), where \(G_{n}(x+z)\) is an arbitrary function in terms of \(x+z\). Since \(f_{n}\) must be a homogeneous
polynomial, we get that \(\beta_{4}=-am_{2}\) where \(m_{2}\in\mathbb{N}\cup\{0\}\) and \(G_{n}(x+z)\in\mathbb{C}[x,z]\backslash\{0\}\). Note that \(f_{n}\neq 0\) implies that \(G_{n}(x+z)\neq 0\). Furthermore, because \(f_{n}\) has degree \(n\), we can write
\[f_{n}=\tilde{d}_{3}(x+z)^{n-m_{2}}x^{m_{2}},\quad\tilde{d}_{3}\in\mathbb{C} \backslash\{0\}, \tag{15}\]
and the cofactor becomes \(k_{1}=-m_{0}b+c(m_{1}-m_{0}b)x-am_{2}xz\). Computing the terms of degree \(n+1\) in (11), we obtain
\[cx^{2}\frac{\partial f_{n}}{\partial x}-ax^{2}z\frac{\partial f_{n-1}}{ \partial x}+ax^{2}z\frac{\partial f_{n-1}}{\partial z}=c(m_{1}-m_{0}b)xf_{n}- am_{2}xzf_{n-1}\]
Hence
\[f_{n-1} = \frac{\tilde{d}_{3}c}{a}x^{m_{2}}(x+z)^{n-m_{2}-1}((-m_{0}\,b+m_{1}-n)\ln(-z)+\ln(x)(m_{0}\,b-m_{1}+m_{2}))\] \[+x^{m_{2}}G_{n-1}(x+z),\]
where \(G_{n-1}(x+z)\) is an arbitrary function in terms of \(x+z\). Since \(f_{n-1}\) must be a homogeneous polynomial, we obtain \(G_{n-1}(x+z)\in\mathbb{C}[x,z]\), \(-m_{0}b+m_{1}-n=0\), and \(m_{0}b-m_{1}+m_{2}=0\). As a result, \(n=m_{2}\), and therefore
\[f_{n-1}=x^{n}G_{n-1}(x+z),\]
which implies that \(G_{n-1}(x+z)=0\) and hence \(f_{n-1}=0\). Since \(\beta_{4}=-am_{2}=-a(m_{1}-m_{0}b)\), the terms of degree \(n\) in (11) satisfy
\[x\frac{\partial f_{n}}{\partial x}-ax^{2}z\frac{\partial f_{n-2}}{\partial x} +ax^{2}z\frac{\partial f_{n-2}}{\partial z}-bz\frac{\partial f_{n}}{\partial z }=-a(m_{1}-m_{0}b)xzf_{n-2}-bm_{0}f_{n}. \tag{16}\]
The function
\[f_{n-2}\left(x,z\right)=\frac{x^{-m_{0}\,\,b+m_{1}}\tilde{d}_{3}m_{1}\,\left( \ln\left(x\right)x-x\ln\left(-z\right)-x-z\right)}{a\left(x+z\right)^{2}x}+x^{ -m_{0}\,\,b+m_{1}}\,G_{n-2}\left(x+z\right),\]
satisfies (16), where \(G_{n-2}(x+z)\) is an arbitrary function in terms of \(x+z\). It is obvious that \(\tilde{d}_{3}\neq 0\); otherwise, \(f\) becomes reducible. Since \(f_{n-2}\) must be a homogeneous polynomial, we must have \(m_{1}=0\). Hence, \(m_{2}=-m_{0}b\), and equation (15) becomes
\[f_{n}=\tilde{d}_{3}x^{-m_{0}b},\quad\tilde{d}_{3}\in\mathbb{C}\backslash\{0\}.\]
Since \(f_{n}\) is a homogeneous polynomial and \(m_{0}b\geq 0\) with \(b\neq 0\), it must be that \(m_{0}=0\), and hence \(m_{2}=0\). As a result, \(\beta_{0}=\beta_{1}=\beta_{4}=0\), which implies that the cofactor \(k_{1}=0\).
**Case 2**: When \(c=0\). In this case, equation (13) becomes
\[x\frac{\partial\tilde{g}}{\partial x}=(-m_{0}b+\beta_{1}x+\beta_{3}x^{2}) \tilde{g}. \tag{17}\]
By solving it, we get \(\tilde{g}=\tilde{d_{1}}\,x^{-m_{0}\,b}\exp(\frac{1}{2}x\,(\beta_{3}\,x+2\,\beta_{1}))\), where \(\tilde{d_{1}}\in\mathbb{C}\backslash\{0\}\). We must have \(\beta_{1}=\beta_{3}=0\), so
\[k_{1}=-m_{0}b+\beta_{4}xz. \tag{18}\]
When \(c=0\), from (18) and (11), one can see
\[x(1-axz)\frac{\partial f}{\partial x}+z(-b+ax^{2})\frac{\partial f}{\partial z }=(-m_{0}b+\beta_{4}xz)f. \tag{19}\]
Writing \(f=\sum_{j=0}^{n}f_{j}(x,z)\) in homogeneous components and repeating the same argument used in Case 1, we obtain
\[f_{n}=\tilde{d_{2}}(x+z)^{n-m_{2}}x^{m_{2}},\qquad\tilde{d_{2}}\in\mathbb{C} \backslash\{0\}, \tag{20}\]
and
\[f_{n-1}=x^{m_{2}}G_{n-1}\left(x+z\right),\]
where \(G_{n-1}(x+z)\) is a homogeneous polynomial in the terms of \((x+z)\). Then we can write
\[f_{n-1}=\tilde{d_{3}}(x+z)^{n-m_{2}-1}x^{m_{2}},\qquad\tilde{d_{3}}\in\mathbb{ C}\backslash\{0\}.\]
Finally, we calculate the terms of degree \(n\) in equation (19), and we find that
\[f_{n-2} = \frac{\tilde{d_{2}}}{a}x^{-1+m_{2}}(x+z)^{-2+n-m_{2}}(-x(m_{0}\,b +n)\ln(-z)+x(m_{0}\,b+n)\ln(x)\] \[-(x+z)(m_{0}\,b+m_{2}))+x^{m_{2}}G_{n-2}(x+z).\]
Since \(f_{n-2}\) must be a homogeneous polynomial and since \(\tilde{d_{2}}\neq 0\), it must be that \(m_{0}b+n=0\). We have \(m_{0}b\geq 0\), \(b\neq 0\) and \(n\) nonnegative, so \(m_{0}=0\) and \(n=0\). If \(n=0\), from (20), we get \(f_{0}=\tilde{d_{2}}(x+z)^{-m_{2}}x^{m_{2}}\). Since the degree of \(f_{0}\) is zero, it must also be that \(m_{2}=0\). Based on these values of the parameters, we get \(\beta_{0}=\beta_{4}=0\), and so \(k_{1}=0\). This concludes the proof of statement (i).
Now considering that system (10) with \(a=0\) has the rational first integral \(H=\frac{zx^{b}}{(1+cx)^{b}}\), Proposition 2.3 implies that all invariant algebraic curves are contained in the set \(\{(x,z)\in\mathbb{R}^{2}\,|\,Czx^{b}-(1+cx)^{b}=0,\,C\in\mathbb{R}\}\backslash\{x=-1/c\}\). This concludes the proof of the Lemma.
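The rational first integral invoked here is readily confirmed; a sympy sketch for system (10) with \(a=0\):

```python
# For a = 0, H = z*x**b/(1+c*x)**b is a first integral of the restricted
# system (10): xdot = x*(1 + c*x), zdot = -b*z.
import sympy as sp

x, z, b, c = sp.symbols('x z b c', positive=True)
xdot = x * (1 + c * x)
zdot = -b * z

H = z * x**b / (1 + c * x)**b
print(sp.simplify(xdot * sp.diff(H, x) + zdot * sp.diff(H, z)))   # 0
```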
### The system restricted to z=0
If system (1) is restricted to \(z=0\), then it becomes
\[\begin{split}\dot{x}&=x(1-y+cx),\\ \dot{y}&=y(-1+x),\end{split} \tag{21}\]
The associated vector field of (21) is
\[\mathcal{Z}=\mathcal{X}|_{z=0}=x(1-y+cx)\frac{\partial}{\partial x}+y(-1+x) \frac{\partial}{\partial y}.\]
To prove the next lemma, we need the following definition. A polynomial \(F(x,y,z)\) is said to be weight homogeneous of degree \(r\in\mathbb{N}\) with respect to the weight exponent \(s=(s_{1},s_{2},s_{3})\) if, for all \(\mu\in\mathbb{R}\backslash\{0\}\), it satisfies
\[F(\mu^{s_{1}}x,\mu^{s_{2}}y,\mu^{s_{3}}z)=\mu^{r}F(x,y,z).\]
**Lemma 3.2**.: _The unique irreducible Darboux polynomials of system (21) with non-zero cofactors are \(x\) and \(y\)._
**Proof.** Assume that \(f\) is a Darboux polynomial of system (21) with the cofactor \(k_{2}\). Then it must satisfies
\[x(1-y+cx)\frac{\partial f}{\partial x}+y(-1+x)\frac{\partial f}{\partial y}=k_ {2}f. \tag{22}\]
It is clear that \(k_{2}\) is a polynomial of degree at most one. Without loss of generality, we set \(k_{2}=\alpha_{0}+\alpha_{1}x+\alpha_{2}y\). It is also obvious that \(x\) and \(y\) are Darboux polynomials with respective cofactors \(1-y+cx\) and \(-1+x\). We now show that system (21) has no other Darboux polynomials. Suppose that \(f\) is an irreducible Darboux polynomial of system (21) of degree at least two; we claim that \(\alpha_{0}=\alpha_{1}=\alpha_{2}=0\).
On the invariant plane \(x=0\), the equation (22) reduces to
\[-y\frac{\partial\bar{h}}{\partial y}=(\alpha_{0}+\alpha_{2}y)\bar{h},\]
where \(\bar{h}=f|_{x=0}\), and its solution is \(\bar{h}=d_{1}\exp(-\alpha_{2}\,y)y^{-\alpha_{0}}\), where \(d_{1}\in\mathbb{C}\). Since \(\bar{h}\) must be a polynomial, \(\alpha_{2}\) must be zero, and \(\alpha_{0}=-m_{0}\) where \(m_{0}\in\mathbb{N}\cup\{0\}\). It follows that \(k_{2}=-m_{0}+\alpha_{1}x\) and \(f=d_{1}y^{m_{0}}+xg_{1}(x,y)\), where \(g_{1}(x,y)\) is a polynomial in the variables \(x\) and \(y\). Note that \(d_{1}\neq 0\), otherwise \(f\) would be reducible, which is a contradiction.
Now, if we set \(\tilde{h}=f|_{y=0}\), the restriction (22) to \(y=0\) is
\[x(1+cx)\frac{\partial\tilde{h}}{\partial x}=(-m_{0}+\alpha_{1}x)\tilde{h}.\]
Let \(c\neq 0\), then \(\tilde{h}=d_{2}\,(cx+1)^{\frac{\alpha_{1}}{c}+m_{0}}\,x^{-m_{0}}\), where \(d_{2}\in\mathbb{C}\). Since \(\tilde{h}\) must be a polynomial, we obtain \(\alpha_{1}=c(m_{1}-m_{0})\), where \(m_{1}\in\mathbb{N}\cup\{0\}\). So \(k_{2}=-m_{0}+c(m_{1}-m_{0})x\) and \(f=d_{2}x^{-m_{0}}(cx+1)^{\frac{\alpha_{1}}{c}+m_{0}}+yg_{2}(x,y)\), where \(g_{2}(x,y)\) is a polynomial in the variables \(x\) and \(y\). Since \(f\) is irreducible, then \(d_{2}\neq 0\).
We consider now the change of variables \(x=X,y=\mu Y\) with \(\mu\in\mathbb{C}\backslash\{0\}\). Then system (21) becomes
\[\begin{split}&\dot{X}=X(1-\mu Y+cX),\\ &\dot{Y}=Y(-1+X),\end{split} \tag{23}\]
where the dot denotes the derivative with respect to the variable \(T\). We apply the transformation above and put \(F(X,Y)=\mu^{n}f(X,\mu Y)\), where \(n\) is the highest weight degree in the weight homogeneous components of \(f\) in \(x\) and \(y\) with the weight \((0,1)\) and \(k_{2}=-m_{0}+c(m_{1}-m_{0})X\). Then \(F\) satisfies
\[\frac{dF}{dT}=\frac{d\mu^{n}f}{dt}=\mu^{n}\frac{df}{dt}=\mu^{n}k_{2}f=k_{2}F.\]
Suppose \(F=F_{0}+\mu F_{1}+\mu^{2}F_{2}+\cdots+\mu^{l}F_{l}\), where \(F_{i}\) is a weight homogeneous polynomial in \(X\) and \(Y\) with the weight degree \(n-i\) for \(i=0,1,\ldots,l\), and \(n\geq l\). Clearly \(f=F|_{\mu=1}\). By definition of a Darboux polynomial, we have
\[X(1-\mu Y+cX)\sum\nolimits_{i=0}^{n}\!\mu^{i}\frac{\partial F_{i}}{\partial x }+Y(-1+X)\sum\nolimits_{i=0}^{n}\!\mu^{i}\frac{\partial F_{i}}{\partial y}=(-m _{0}+c(m_{1}-m_{0})X)\sum\nolimits_{i=0}^{n}\!\mu^{i}F_{i}.\]
We claim that \(m_{1}=m_{0}=0\). By equating the terms with \(\mu^{i}\), for \(i=0,1\), we obtain
\[\begin{split} L(F_{0})&=(-m_{0}+c(m_{1}-m_{0})X)F_{ 0},\\ L(F_{1})&=(-m_{0}+c(m_{1}-m_{0})X)F_{1}+XY\frac{ \partial F_{0}}{\partial x},\end{split} \tag{24}\]
where \(L\) is a vector field
\[L=X(1+cX)\frac{\partial}{\partial X}+Y(-1+X)\frac{\partial}{\partial Y}.\]
Solving the first equation in (24) and substituting its solution into the second equation, we obtain
\[\begin{split} F_{0}\left(X,Y\right)=&\ G_{0}\left( YX\left(Xc+1\right)^{\frac{-1-c}{c}}\right)\left(Xc+1\right)^{m_{1}}X^{-m_{0}}, \end{split}\]
\[\begin{split} F_{1}(X,Y)&=\frac{(Xc+1)^{m_{1}}X^{- m_{0}}}{\Gamma(\frac{2c-1}{c})\Gamma(\frac{c-1}{c})}\Bigg{(}-D(G_{0})\left(YX(Xc+1) ^{(\frac{c-1}{c})}\right)X^{3}\Gamma(\frac{c-1}{c})\\ &\qquad\qquad\Gamma(\frac{2c-1}{c})Y^{2}(Xc+1)^{(\frac{-2c-2}{c} )}\text{hypergeom}([1,\frac{2c-1}{c}],[2],-Xc)c\\ +M_{1}(Xc+1)^{(\frac{-c-1}{c})}+\Gamma(\frac{c-1}{c})\Big{(} \frac{1}{2}M_{2}D(G_{0})\left(YX(Xc+1)^{(\frac{-c-1}{c})}\right)\\ Y^{2}X(Xc+1)^{(\frac{-2c-2}{c})}+G_{1}\left(YX(Xc+1)^{\frac{-c-1}{c }}\right)\Gamma(\frac{2c-1}{c})\Big{)}\Bigg{)},\end{split}\]
where
\[M_{1}=G_{0}\big{(}YX\,(Xc+1)^{\frac{-c-1}{c}}\,\big{)}Y\Gamma(\frac{2c-1}{c}) \Bigg{(}c\Big{(}\Psi(\frac{2c-1}{c})m_{0}+m_{0}\ln(X)+m_{0}\ln(c)\]
\[+\text{hypergeom}\left([1,1,\frac{2c-1}{c}],[2,2],-Xc\right)(m_{0}-m_{1})\,Xc+ m_{0}(\Gamma-1)\Big{)}X\Gamma\left(\frac{2c-1}{c}\right)\]
\[-\frac{1}{2}m_{0}\text{hypergeom}\left([1,1,\frac{3c-1}{c}],[2,3],-Xc\right) \Bigg{)}\Gamma\left(\frac{2c-1}{c}\right)X^{2}c^{2}\]
\[-\Gamma\left(\frac{c-1}{c}\right)\Bigg{(}Xc(m_{0}-m_{1})\Psi(\frac{c-1}{c})+Xc \,(m_{0}-m_{1})\ln(X)\]
\[+Xc(m_{0}-m_{1})\ln(c)+c\Gamma\left(m_{0}-m_{1}\right)X-m_{0}\Bigg{)},\]
and
\[M_{2}=\Bigg{(}\Big{(}2X(c-1)\Psi(\frac{2c-1}{c})+2X(c-1)\ln(X)+2X(c-1)\ln(c)-2\]
\[+2\Gamma(c-1)X\Big{)}\Gamma(\frac{2c-1}{c})+\Big{(}(-2\Psi(\frac{3c-1}{c})-2 \ln(X)-2\ln(c)\]
\[+(-2\text{hypergeom}([1,1,\frac{3c-1}{c}],[2,2],-Xc)c\]
\[+2\text{hypergeom}([1,1,\frac{3c-1}{c}],[2,2],-Xc))X-2\Gamma+2)\Gamma(\frac{3 c-1}{c})\]
\[+\Gamma(\frac{4c-1}{c})Xc\,\text{hypergeom}([1,1,\frac{4c-1}{c}],[2,3],-Xc) \Big{)}cX\Bigg{)}.\]
where \(G_{0},G_{1}\in\mathbb{C}[X,Y]\) have weight degrees \(n\) and \(n-1\), respectively. Since \(F_{1}\) is a polynomial and \(c\neq 0\), we must have \(m_{0}=0\) and \(m_{1}-m_{0}=0\). This gives \(k_{2}=0\).
When \(c=0\), the restriction of (22) to \(y=0\) reads \(x\,d\tilde{h}/dx=(-m_{0}+\alpha_{1}x)\tilde{h}\), whose solution \(\tilde{h}=d_{2}x^{-m_{0}}e^{\alpha_{1}x}\) can be polynomial only if \(\alpha_{1}=0\). Hence the equation (22) takes the form
\[x(1-y)\frac{\partial f}{\partial x}+y(-1+x)\frac{\partial f}{\partial y}=-m_{ 0}f. \tag{25}\]
We write \(f=\sum\nolimits_{j=0}^{n}f_{j}\), where each \(f_{j}=f_{j}(x,y)\) is a homogeneous polynomial of degree \(j\) in \(x\) and \(y\), with \(f_{n}\neq 0\). By computing the terms of degree \(n+1\) in (25), we obtain
\[-xy\frac{\partial f_{n}}{\partial x}+xy\frac{\partial f_{n}}{\partial y}=0,\]
whose solution is
\[f_{n}(x,y)=T_{1}(x+y),\]
where \(T_{1}(x+y)\) is a homogeneous polynomial of degree \(n\). The terms of degree \(n\) in (25) satisfy
\[x\frac{\partial f_{n}}{\partial x}-xy\frac{\partial f_{n-1}}{\partial x}-y \frac{\partial f_{n}}{\partial y}+xy\frac{\partial f_{n-1}}{\partial y}=-m_{ 0}f_{n}.\]
Solving the partial differential equation above, we get
\[\begin{split} f_{n-1}\left(x,y\right)=&-\frac{1}{x+y} \left(\ln\left(x\right)+\ln\left(-y\right)\right)\left(x+y\right)\mathrm{D} \left(T_{1}\right)\left(x+y\right)\\ &-\ln\left(-y\right)m_{0}T_{1}\left(x+y\right)+\left(x+y\right)T _{2}\left(x+y\right)\\ &+\ln\left(x\right)m_{0}T_{1}\left(x+y\right).\end{split} \tag{26}\]
Since \(f_{n-1}\) must be a polynomial, we obtain \(m_{0}=0\), which leads to \(\alpha_{0}=0\), and so \(k_{2}=0\). The proof is now complete.
**Proposition 2**.: _The system (21) has no formal first integrals when \(c>0\)._
**Proof.** We assume that \(f(x,y)\) is a formal first integral of system (21) with \(c>0\). Then it satisfies
\[x(1-y+cx)\frac{\partial f}{\partial x}+y(-1+x)\frac{\partial f}{\partial y}=0. \tag{27}\]
We write \(f\) as \(f=\sum{}_{k\geq 0}f_{k}(x)y^{k}\), where each \(f_{k}(x)\) is a formal power series in the variable \(x\). Denote the restriction of \(f\) to \(y=0\) by \(f_{0}=f_{0}(x)\). Then, from equation (27),
\[x(1+cx)\frac{df_{0}}{dx}=0,\]
and its solution is \(f_{0}=d_{0}\), where \(d_{0}\) is a constant. Then we can write \(f=f_{0}+yg(x,y)=d_{0}+yg(x,y)\), where \(g=\sum{}_{k\geq 0}f_{k+1}(x)y^{k}\). Now, the function \(g\) must satisfy the equation
\[x(1-y+cx)\frac{\partial g}{\partial x}+y(-1+x)\frac{\partial g}{\partial y}=-( -1+x)g. \tag{28}\]
It suffices to show that
\[f_{k+1}(x)=0\quad\text{for}\quad k\geq 0. \tag{29}\]
Since the restriction of \(g\) to \(y=0\) is \(f_{1}=f_{1}(x)\), from (28) we obtain
\[x(1+cx)\frac{df_{1}}{dx}=-(-1+x)f_{1},\]
and its solution is
\[f_{1}=d_{1}x(1+cx)^{-1-\frac{1}{c}}, \tag{30}\]
where \(d_{1}\) is a constant. By calculating the coefficient of \(y\) in (28), we see
\[x(1+cx)\frac{df_{2}}{dx}-x\frac{df_{1}}{dx}=-2(-1+x)f_{2},\]
or equivalently,
\[x(1+cx)\frac{df_{2}}{dx}+d_{1}x(-1+x)(1+cx)^{-2-\frac{1}{c}}+2(-1+x)f_{2}=0. \tag{31}\]
Multiplying (31) by \((1+cx)^{2+\frac{1}{c}}\), gives
\[x(1+cx)^{3+\frac{1}{c}}\frac{df_{2}}{dx}+d_{1}x(-1+x)+2(-1+x)(1+cx)^{2+\frac{1 }{c}}f_{2}=0. \tag{32}\]
Evaluating (32) on \(x=-\frac{1}{c}\), we obtain that \(-\frac{d_{1}(-1-\frac{1}{c})}{c}=0\), and since \(c>0\), this implies \(d_{1}=0\). Then, from (30), we obtain \(f_{1}=0\). We now assume that (29) is true for \(k=0,\ldots,l-2\), and we will prove it is also true for \(k=l-1\). By the induction hypothesis, we have
\[f_{1}=\cdots=f_{l-1}=0\quad\text{and}\quad g=\sum{}_{k\geq 0}f_{k+1}(x)y^{k}=y^{l-1}\sum{}_{k\geq 0}f_{k+l}(x)y^{k}.\]
By introducing \(g\) in (28) and determining the coefficient of \(y^{l-1}\), we see that \(f_{l}\) satisfies
\[x(1+cx)\frac{df_{l}}{dx}+(-1+x)(l-1)f_{l}=-(-1+x)f_{l},\]
which yields
\[f_{l}=d_{l}x^{l}(1+cx)^{-\frac{(1+c)l}{c}}, \tag{33}\]
where \(d_{l}\) is a constant. By calculating the coefficient of \(y^{l}\) in (28), we obtain
\[x(1+cx)\frac{df_{l+1}}{dx}-x\frac{df_{l}}{dx}+l(-1+x)f_{l+1}=-(-1+x)f_{l+1},\]
or equivalently,
\[\begin{split} d_{l}\,l\,x^{l}\left(-1+x\right)\left(cx+1\right)^{-\frac{(1+c)l}{c}-1}+\left(cx^{2}+x\right)\frac{\mathrm{d}}{\mathrm{d}x}f_{l+1}\left(x\right)\\ +f_{l+1}\left(x\right)\left(-1+x\right)\left(l+1\right)=0.\end{split} \tag{34}\]
Multiplying (34) by \((1+cx)^{1+\frac{(1+c)l}{c}}\) gives
\[\begin{split}\left(\left(cx^{2}+x\right)\frac{\mathrm{d}}{\mathrm{d}x}f_{l+1}\left(x\right)+f_{l+1}\left(x\right)\left(-1+x\right)(l+1)\right)\left(cx+1\right)^{\frac{lc+l+c}{c}}\\ +x^{l}d_{l}\,l\left(-1+x\right)=0.\end{split} \tag{35}\]
Evaluating (35) on \(x=-\frac{1}{c}\), we obtain that \(d_{l}\,l\,(-\frac{1}{c})^{l}(-1-\frac{1}{c})=0\), and since \(c>0\), this implies \(d_{l}=0\). Then, from (33), we obtain \(f_{l}=0\); that is, (29) holds for \(k=l-1\). So we have by induction that \(f_{k}=0\) for all \(k\geq 1\), which yields \(g=0\). Consequently, \(f=d_{0}\), and hence the system (21) has no formal first integrals.
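The elementary integrations used in this proof are easy to double-check symbolically; for instance, the following SymPy sketch (illustrative only) verifies that (30) solves the linear equation for \(f_{1}\):

```python
import sympy as sp

x, c, d1 = sp.symbols('x c d1', positive=True)

f1 = d1*x*(1 + c*x)**(-1 - 1/c)        # candidate solution (30)
lhs = x*(1 + c*x)*sp.diff(f1, x)       # x(1+cx) f1'
rhs = -(-1 + x)*f1                     # right-hand side of the f1 equation
print(sp.simplify(lhs - rhs))          # 0
```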
## 4. Darboux polynomials with non-zero cofactors
It is obvious that \(x\), \(y\), and \(z\) are Darboux polynomials of (1). We show that system (1) has no Darboux polynomial of degree greater than one. The proof of Theorem 1.1 follows from the lemmas below.
**Proof of Theorem 1.1**. We consider two cases.
**Case 1.** When \(a=0\), the system (1) reduces to
\[\begin{split}\dot{x}&=x(1-y+cx),\\ \dot{y}&=y(-1+x),\\ \dot{z}&=-bz,\end{split} \tag{36}\]
then any cofactor of (36) must be of the form
\[k=\alpha_{0}+\alpha_{1}x+\alpha_{2}y+\alpha_{3}z. \tag{37}\]
**Lemma 4.1**.: _Let \(f\) be an irreducible Darboux polynomial of degree greater than one with non-zero cofactor (37). Then \(\alpha_{0}=\alpha_{1}=\alpha_{2}=\alpha_{3}=0\)._
**Proof.** Assume that \(f\) is a Darboux polynomial of degree \(n\geq 2\) of system (36) with a non-zero cofactor \(k\). Then \(f\) satisfies
\[x(1-y+cx)\frac{\partial f}{\partial x}+y(-1+x)\frac{\partial f}{\partial y}-bz \frac{\partial f}{\partial z}=kf. \tag{38}\]
Let \(\bar{f}\) be the restriction of \(f\) to \(x=0\); then from equation (38), \(\bar{f}\) satisfies
\[-y\frac{\partial\bar{f}}{\partial y}-bz\frac{\partial\bar{f}}{\partial z}=( \alpha_{0}+\alpha_{2}y+\alpha_{3}z)\bar{f}, \tag{39}\]
whose general solution is
\[\bar{f}=y^{-\alpha_{0}}\,F_{1}\Big{(}zy^{-b}\Big{)}\exp\Big{(}-\alpha_{2}\,y-\frac{\alpha_{3}}{b}z\Big{)}, \tag{40}\]
where \(F_{1}\) is an arbitrary function.
Since \(\bar{f}\) must be a polynomial, we get \(\alpha_{2}=\alpha_{3}=0\). We now expand \(f\) in powers of the variable \(z\) so that \(f=\sum_{j=0}^{n}f_{j}z^{j}\), where each \(f_{j}=f_{j}(x,y)\) is a polynomial in the variables \(x\) and \(y\). The restriction of \(f\) to \(z=0\) is \(f_{0}\), which is a Darboux polynomial of system (36) restricted to \(z=0\); by the irreducibility of \(f\), \(f_{0}\neq 0\). Moreover, system (36) restricted to \(z=0\) yields system (21), and so Lemma 3.2 guarantees that \(\alpha_{0}=\alpha_{1}=0\). Therefore, \(k=0\), which means that the system (36) has no Darboux polynomials of degree greater than one with non-zero cofactor. \(\square\)
**Case 2.**\(a>0\). To prove this case, we shall need the following lemmas.
**Lemma 4.2**.: _Let \(f\) be an irreducible Darboux polynomial of degree greater than one with non-zero cofactor as in (6). Then \(\alpha_{5}=\alpha_{7}=\alpha_{8}=\alpha_{9}=0\), and \(\alpha_{4}=N_{4}a,\alpha_{6}=-N_{6}a\), where \(N_{4},N_{6}\in\mathbb{N}\cup\{0\}\)._
**Proof.** Let \(f(x,y,z)=\sum_{i=0}^{n}f_{i}(x,y,z)\) be an irreducible Darboux polynomial of system (1) with \(n\geq 2\), where each \(f_{i}\) is a homogeneous polynomial of degree \(i\) for \(i=0,1,\ldots,n\).
Clearly \(f_{n}\neq 0\). Then \(f\) satisfies
\[x(1-y+cx-axz)\frac{\partial f}{\partial x}+y(-1+x)\frac{\partial f}{\partial y}+z(-b+ax^{2})\frac{\partial f}{\partial z}=kf. \tag{41}\]
Equating the homogeneous components of (41) degree by degree yields the restrictions on the cofactor stated in the lemma, and in particular shows that on the invariant plane \(x=0\) the cofactor reduces to \(\alpha_{2}y\); otherwise \(f_{n}=0\),
a contradiction. Since \(f_{0}\) is independent of \(x\), it is a Darboux polynomial of system (1) restricted to \(x=0\) that satisfies
\[-y\frac{\partial f_{0}}{\partial y}-bz\frac{\partial f_{0}}{\partial z}=(\alpha_ {2}y)f_{0}. \tag{43}\]
We write \(f_{0}\) as the sum of its homogeneous parts as \(f_{0}=\sum\limits_{i=0}^{m}f_{0,i}(y,z)\), where each \(f_{0,i}=f_{0,i}(y,z)\) is a homogeneous polynomial in its variables of degree \(i\) and \(0\leq m\leq n\). Since \(f_{0}\neq 0\), then \(f_{0,m}\neq 0\). Computing the terms of degree \(m+1\) in (43), we have \((\alpha_{2}y)f_{0,m}=0\), and since \(f_{0,m}\neq 0\), we must have \(\alpha_{2}=0\). This completes the proof of Theorem 1.1.
## 5. Exponential Factors
**Proof of Theorem 1.2**. Let \(E=\exp(\frac{h}{f})\) be an exponential factor of the system (1) with cofactor \(L\), where \(h,f\in\mathbb{C}[x,y,z]\) with \((h,f)=1\). Then, from the definition of an exponential factor and in view of Proposition 1, either \(f\) is a constant, in which case we can take \(f=1\), or \(f\) is a Darboux polynomial of system (1), in which case, from Theorem 1.1, \(E\) can be of the form \(E=\exp(\frac{h}{x^{s_{1}}y^{s_{2}}z^{s_{3}}})\), where \(s_{1}\), \(s_{2}\), and \(s_{3}\) are non-negative integers and \(h\in\mathbb{C}[x,y,z]\) is coprime with \(x\), \(y\), and \(z\). Obviously, when \(s_{1}=s_{2}=s_{3}=0\), we are in the previous case.
We first prove that the system (1) has no exponential factors of the form \(\exp(\frac{h}{x^{s_{1}}y^{s_{2}}z^{s_{3}}})\). For this purpose, we assume that \(\exp(\frac{h}{x^{s_{1}}y^{s_{2}}z^{s_{3}}})\) is an exponential factor of the system with at least one of \(s_{1}\), \(s_{2}\), \(s_{3}\) positive, and derive a contradiction. Clearly, by (7), \(h\) satisfies
\[\dot{x}\frac{\partial h}{\partial x}+\dot{y}\frac{\partial h}{\partial y}+\dot{z}\frac{\partial h}{\partial z}-h\Big{(}s_{1}\frac{\dot{x}}{x}+s_{2}\frac{\dot{y}}{y}+s_{3}\frac{\dot{z}}{z}\Big{)}=Lx^{s_{1}}y^{s_{2}}z^{s_{3}}. \tag{44}\]
We distinguish the following cases.
**Case 1**: When \(s_{1}>0\). Evaluating (44) on \(x=0\) and setting \(\bar{h}=h|_{x=0}\), we see that \(\bar{h}\) satisfies
\[-y\frac{\partial\bar{h}}{\partial y}-bz\frac{\partial\bar{h}}{\partial z}= \bar{h}\big{(}s_{1}(1-y)-s_{2}-bs_{3}\big{)}.\]
Clearly \(\bar{h}\neq 0\) because \((h,x)=1\). The right-hand side of the equation above has degree one higher than the left-hand side, which is a contradiction.
**Case 2**: When \(s_{1}=0\) and \(s_{2}>0\). Evaluating (44) on \(y=0\) and setting \(\tilde{h}=h|_{y=0}\), we find that \(\tilde{h}\) satisfies
\[x(1+cx-axz)\frac{\partial\tilde{h}}{\partial x}+z(-b+ax^{2})\frac{\partial \tilde{h}}{\partial z}=\tilde{h}\big{(}s_{2}(-1+x)+s_{3}(-b+ax^{2})\big{)}. \tag{45}\]
Clearly, \(\tilde{h}\neq 0\), since \((h,y)=1\). Moreover, we note that \(\tilde{h}\) is a Darboux polynomial of the vector field \(\mathcal{Y}\). Then, from Lemma 3.1, we consider the following subcases.
**Subcase 2.1**. If \(a>0\). Then system (10) has Darboux polynomials \(x\) and \(z\), so we can write \(\tilde{h}=Gx^{m_{1}}z^{m_{2}}\), where \(G\) is a constant and \(m_{1}\), \(m_{2}\) are non-negative integers such that \(m_{1}+m_{2}>0\). Substituting \(\tilde{h}\) into equation (45), we deduce
\[-am_{1}xz+am_{2}x^{2}+cm_{1}x-bm_{2}+m_{1}=as_{3}x^{2}-bs_{3}+s_{2}x-s_{2}.\]
Then clearly \(am_{1}=0\), so \(m_{1}=0\). This implies that \(\tilde{h}(x,z)=\tilde{h}(z)=Gz^{m_{2}}\). Substituting \(\tilde{h}(z)\) into (45) and simplifying, we derive
\[(ax^{2}-b)m_{2}=as_{3}x^{2}-bs_{3}+s_{2}x-s_{2}. \tag{46}\]
Comparing the coefficients of \(x\) in (46), it is clear that \(s_{2}=0\), which is a contradiction.
**Subcase 2.2**. If \(a=0\). In this case, \(\tilde{h}\) must be of the form \(\tilde{h}=G\,x^{bv}z^{v}\), where \(G\) is a constant and \(v\) is a non-negative integer such that \(bv\) is also a non-negative integer. The cofactor of \(\tilde{h}\) is \(vbc\,x\), and from (45), we obtain
\[s_{2}(-1+x)+s_{3}(-b)=vbc\,x,\]
which implies \(s_{2}=-bs_{3}\leq 0\), and this is a contradiction.
**Case 3**: When \(s_{1}=s_{2}=0\) and \(s_{3}>0\). From equation (44) with \(z=0\), we get
\[x(1-y+cx)\frac{\partial\hat{h}}{\partial x}+y(-1+x)\frac{\partial\hat{h}}{ \partial y}=\hat{h}s_{3}(-b+ax^{2}), \tag{47}\]
where \(\hat{h}=h|_{z=0}\) and \(\hat{h}\neq 0\) because \(h\) is coprime with \(z\). We emphasize that \(\hat{h}\) is a Darboux polynomial of the vector field \(\mathcal{Z}\) with a non-zero cofactor. Therefore, by Lemma 3.2, we get that \(\hat{h}=Gx^{m_{1}}y^{m_{2}}\), where \(G\) is a constant and \(m_{1}\), \(m_{2}\) are non-negative integers such that \(m_{1}+m_{2}>0\). Then from (47), we see that
\[(c\,m_{1}+m_{2})x-m_{1}y+m_{1}-m_{2}=s_{3}(-b+ax^{2}). \tag{48}\]
We consider two subcases depending on \(a\) and \(b\). Obviously, at least one of \(a>0\) or \(b>0\) is required for the existence of the Darboux polynomial \(z\).
**Subcase 3.1**. If \(a>0\). Computing the coefficient of \(x^{2}\) in (48), we get \(as_{3}=0\). Hence, \(s_{3}=0\), which is a contradiction.
**Subcase 3.2**. If \(a=0\) and \(b>0\). Then equation (48) implies
\[(c\,m_{1}+m_{2})x-m_{1}y+m_{1}-m_{2}=-bs_{3}. \tag{49}\]
Computing the coefficients of \(y\) and \(x\) in (49), we obtain \(m_{1}=m_{2}=0\). Consequently, \(s_{3}=0\), which is a contradiction.
**Case 4**: \(s_{1}=s_{2}=s_{3}=0\). In this case, equation (44) becomes
\[x(1-y+cx-axz)\frac{\partial h}{\partial x}+y(-1+x)\frac{\partial h}{\partial y }+z(-b+ax^{2})\frac{\partial h}{\partial z}=L, \tag{50}\]
where \(L=l_{0}+l_{1}x+l_{2}y+l_{3}z+l_{4}x^{2}+l_{5}xy+l_{6}xz+l_{7}y^{2}+l_{8}yz+l_{ 9}z^{2}\), with \(l_{i}\in\mathbb{C}\) for \(i=0,\ldots,9\). Here, we consider different subcases depending on \(a\) and \(c\).
**Subcase 4.1**. When \(a>0\) and \(c>0\). We write \(h\) in the form \(h=\sum\nolimits_{j=0}^{n}h_{j}(x,y,z)\), where each \(h_{j}\) is a homogeneous polynomial of degree \(j\) in the variables \(x,y,z\). Firstly, assume that \(n\geq 3\). The terms of degree \(n+2\) in (50) are
\[-ax^{2}z\frac{\partial h_{n}}{\partial x}+ax^{2}z\frac{\partial h_{n}}{ \partial z}=0,\]
whose solution is \(h_{n}=W_{n}(y,x+z)\), where \(W_{n}\) is an arbitrary \(C^{1}\) function. Using the fact that \(h_{n}\) is a homogeneous polynomial of degree \(n\), we can write \(h_{n}=\sum\nolimits_{j=0}^{n}a_{j}y^{n-j}(x+z)^{j}\), where \(a_{j}\in\mathbb{C}\). The terms of degree \(n+1\) in (50) satisfy
\[x(-y+cx)\frac{\partial h_{n}}{\partial x}-ax^{2}z\frac{\partial h_{n-1}}{ \partial x}+xy\frac{\partial h_{n}}{\partial y}+ax^{2}z\frac{\partial h_{n-1} }{\partial z}=0.\]
or equivalently,
\[-ax^{2}z\frac{\partial h_{n-1}}{\partial x}+ax^{2}z\frac{\partial h _{n-1}}{\partial z}+(cx^{2}-xy)\sum\nolimits_{j=0}^{n}ja_{j}y^{n-j}(x+z)^{j-1}\] \[+xy\sum\nolimits_{j=0}^{n}(n-j)a_{j}y^{n-j-1}(x+z)^{j}=0,\]
The function
\[h_{n-1}=\frac{1}{2a(x+z)}(-4\operatorname{arctanh}(\frac{x-z}{x +z})yB+A((2xc+2zc-4y)\operatorname{arctanh}(\frac{x-z}{x+z})\] \[-\ln(-(x+z)xz)c(x+z)))+W_{n-1}(y,x+z),\]
satisfies the partial differential equation above, where \(A=\sum\nolimits_{j=0}^{n}ja_{j}y^{n-j}(x+z)^{j-1}\), \(B=\sum\nolimits_{j=0}^{n}a_{j}(-n+j)y^{n-j-1}(x+z)^{j}\), and \(W_{n-1}\) is an arbitrary function of \(y\) and \(x+z\). Since \(h_{n-1}\) must be a homogeneous polynomial of degree \(n-1\), we must have \(A=0\) and \(B=0\), which implies that \(a_{j}=0\) for \(j=0,1,\ldots,n\), and thus \(h_{n}=\sum\nolimits_{j=0}^{n}a_{j}y^{n-j}(x+z)^{j}=0\). This contradicts the fact that \(h_{n}\) is a polynomial of degree \(n\geq 3\). Then we must have \(n\leq 2\). Therefore
\[h=h_{0}+h_{1}x+h_{2}y+h_{3}z+h_{4}x^{2}+h_{5}xy+h_{6}xz+h_{7}y^{2}+h_{8}yz+h_{9 }z^{2}.\]
Substituting \(h\) in (50), we get that
\[l_{0}=0,l_{1}=h_{1},l_{2}=-h_{2},l_{3}=-bh_{3},l_{4}=ch_{1}+2h_{4},l_{5}=-h_{1}+h_ {2},l_{6}=-bh_{6}+h_{6},\]
\[l_{7}=-2h_{7},l_{8}=-bh_{8}-h_{8},l_{9}=-2bh_{9},\]
where \(h_{3}=h_{1}\) and \(h_{4}=h_{5}=h_{6}=h_{7}=h_{8}=h_{9}=0\), while \(h_{0}\), \(h_{1}\), and \(h_{2}\) remain free. Then \(h=h_{0}+(x+z)h_{1}+yh_{2}\) with \(L=(cx^{2}+(-y+1)x-bz)h_{1}+y(x-1)h_{2}\).
**Subcase 4.2**. When \(a>0\) and \(c=0\). From equation (50), we have
\[x(1-y-axz)\frac{\partial h}{\partial x}+y(-1+x)\frac{\partial h}{\partial y}+ z(-b+ax^{2})\frac{\partial h}{\partial z}=L. \tag{51}\]
By proceeding in a similar way as above, we get
\[h_{n-1}=\frac{y}{a(x+z)}(A+B)(\ln(x)-\ln(-z))+W_{n-1}(y,x+z),\]
where \(A=\sum\nolimits_{j=0}^{n}\!ja_{j}y^{n-j}(x+z)^{j-1},B=\sum\nolimits_{j=0}^{n}\!a_{j}(-n+j)y^{n-j-1}(x+z)^{j}\), and \(W_{n-1}\) is an arbitrary polynomial in \(y\) and \(x+z\). Since \(h_{n-1}\) admits only polynomial solutions, we must have \(A+B=0\), and hence \(a_{j}=\frac{n!}{j!(n-j)!}A_{0}\) for \(j=0,1,\ldots,n\). Hence, \(h_{n}=\sum\nolimits_{j=0}^{n}\!\frac{n!}{j!(n-j)!}A_{0}y^{n-j}(x+z)^{j}\) and \(h_{n-1}=W_{n-1}(y,x+z)\). Since \(h_{n-1}\) must be a homogeneous polynomial of degree \(n-1\), we write \(h_{n-1}=\sum\nolimits_{j=0}^{n}\!b_{j}y^{n-j-1}(x+z)^{j}\), where \(b_{j}\in\mathbb{C}\). Now, computing the homogeneous part of degree \(n\) in (51) yields
\[-ax^{2}z\frac{\partial h_{n-2}}{\partial x}-xy\frac{\partial h_{n-1}}{ \partial x}+x\frac{\partial h_{n}}{\partial x}-y\frac{\partial h_{n}}{ \partial y}+xy\frac{\partial h_{n-1}}{\partial y}-bz\frac{\partial h_{n}}{ \partial z}+ax^{2}z\frac{\partial h_{n-2}}{\partial z}=0,\]
which is
\[-ax^{2}z\frac{\partial h_{n-2}}{\partial x}-xy\frac{\partial\sum \nolimits_{j=0}^{n}\!b_{j}y^{n-j-1}(x+z)^{j}}{\partial x}+x\frac{\partial\sum \nolimits_{j=0}^{n}\!\frac{n!}{j!(n-j)!}A_{0}y^{n-j}(x+z)^{j}}{\partial x}\] \[-y\frac{\partial\sum\nolimits_{j=0}^{n}\!\frac{n!}{j!(n-j)!}A_{0} y^{n-j}(x+z)^{j}}{\partial y}+xy\frac{\partial\sum\nolimits_{j=0}^{n}\!b_{j}y^{n-j-1} (x+z)^{j}}{\partial y}\] \[-bz\frac{\partial\sum\nolimits_{j=0}^{n}\!\frac{n!}{j!(n-j)!}A_{0} y^{n-j}(x+z)^{j}}{\partial z}+ax^{2}z\frac{\partial h_{n-2}}{\partial z}=0.\]
Solving it, we get
\[h_{n-2}=\frac{1}{ax(x+z)^{2}}\big{(}-2\operatorname{arctanh}( \frac{x-z}{x+z})xy(x+z)C-2\operatorname{arctanh}(\frac{x-z}{x+z})xy(x+z)D\] \[+(-x\left(x-y+z\right)\ln\left(-z\right)+x\left(x-y+z\right)\ln \left(x\right)+\left(x+z\right)\left(bx+bz+y\right))\] \[nA_{0}\left(x+z+y\right)^{n-1}\big{)}+\text{W}_{n-2}\left(y,x+z \right),\]
where \(C=\sum\limits_{j=0}^{n}(-n+j+1)b_{j}y^{n-j-2}(x+z)^{j}\), \(D=\sum\limits_{j=0}^{n}jb_{j}y^{n-j-1}(x+z)^{j-1}\), and \(W_{n-2}\) is an arbitrary polynomial in \(y\) and \(x+z\). Since \(h_{n-2}\) must be a homogeneous polynomial of degree \(n-2\) and \(n\geq 3\), we must have \(A_{0}=0\), so \(h_{n}=0\), contradicting the fact that \(h_{n}\) is a polynomial of degree \(n\geq 3\). Hence we must have \(n\leq 2\). Thus, \(h=h_{0}+h_{1}x+h_{2}y+h_{3}z+h_{4}x^{2}+h_{5}xy+h_{6}xz+h_{7}y^{2}+h_{8}yz+h_{9}z^{2}\). By substituting \(h\) in (51), we can obtain
\[h=h_{0}+(x+y+z)^{2}h_{1}+(x+z)h_{2}+yh_{3},\]
and
\[L=-2(x+y+z)(bz-x+y)h_{1}+\big{(}(h_{3}-h_{2})y+h_{2}\big{)}x-bzh_{2}-yh_{3}.\]
This completes the proof of statement (a).
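As a sanity check, the pair \((h,L)\) obtained in Subcase 4.2 can be verified symbolically; the SymPy sketch below (illustrative only) confirms that \(\mathcal{X}h=L\) for system (1) with \(c=0\):

```python
import sympy as sp

x, y, z, a, b, h0, h1, h2, h3 = sp.symbols('x y z a b h0 h1 h2 h3')

# Vector field of system (1) with c = 0 (Subcase 4.2)
fx = x*(1 - y - a*x*z)
fy = y*(-1 + x)
fz = z*(-b + a*x**2)

h = h0 + (x + y + z)**2*h1 + (x + z)*h2 + y*h3
L = (-2*(x + y + z)*(b*z - x + y)*h1
     + ((h3 - h2)*y + h2)*x - b*z*h2 - y*h3)

Xh = fx*sp.diff(h, x) + fy*sp.diff(h, y) + fz*sp.diff(h, z)
print(sp.expand(Xh - L))  # 0, so X(h) = L as claimed
```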
**Subcase 4.3**. When \(a=0\) and \(c>0\). In this case, we have
\[x(1-y+cx)\frac{\partial h}{\partial x}+y(-1+x)\frac{\partial h}{\partial y}+z (-b)\frac{\partial h}{\partial z}=l_{0}+l_{1}x+l_{2}y+l_{3}z, \tag{52}\]
where \(l_{i}\in\mathbb{C}\) for \(i=0,\ldots,3\). First, we assume that \(b>0\). We decompose \(h\) as a sum of polynomials in the variable \(x\) as \(h=\sum\limits_{j=0}^{n}h_{j}(y,z)x^{j}\), where each \(h_{j}\in\mathbb{C}[y,z]\). Assume that \(n\geq 1\); then the coefficient of \(x^{n+1}\) in (52) is
\[c\,n\,h_{n}+y\frac{\partial h_{n}}{\partial y}=0,\]
and its solution is \(h_{n}=W_{n}(z)y^{-cn}\). Since \(h_{n}\) must be a polynomial and \(n\geq 1\), \(c>0\), we must have \(h_{n}=0\) whenever \(n\geq 1\). For \(n=0\), we have \(h=h_{0}(y,z)\), and comparing the coefficients of \(x\) in (52) gives
\[y\frac{\partial h_{0}}{\partial y} =l_{1}, \tag{53}\] \[-y\frac{\partial h_{0}}{\partial y}-bz\frac{\partial h_{0}}{ \partial z} =l_{0}+l_{2}y+l_{3}z.\]
The solution of the first equation in (53), is
\[h_{0}=l_{1}\ln(y)+W_{0}(z).\]
The function \(h_{0}\) is a polynomial only if \(l_{1}=0\); putting \(h_{0}=W_{0}(z)\) in the second equation in (53), we obtain
\[W_{0}(z)=(-\frac{l_{2}}{b}y-\frac{l_{0}}{b})\ln(z)-\frac{l_{3}}{b}z.\]
Since \(W_{0}(z)\) must be a polynomial, we get \(l_{0}=l_{2}=0\) and \(W_{0}=-\frac{l_{3}}{b}z\). Then \(h=-\frac{l_{3}}{b}z\) with \(L=l_{3}z\).
If \(b=0\), then equation (52) becomes
\[x(1-y+cx)\frac{\partial h}{\partial x}+y(-1+x)\frac{\partial h}{\partial y}=l_{0} +l_{1}x+l_{2}y, \tag{54}\]
so by eliminating the variable \(z\) and repeating the previous steps, we get
\[c\,n\,h_{n}+y\frac{\partial h_{n}}{\partial y}=0,\quad\text{so that},\ \ \ h_{n}=d_{1}y^{-cn},\,d_{1}\in\mathbb{C}.\]
Hence, \(h_{n}=0\) if \(n\geq 1\). For \(n=0\), we have \(h=h_{0}(y)\), and by comparing the coefficients of \(x\) from (54), we obtain
\[\begin{split} y\frac{\partial h_{0}}{\partial y}& =l_{1},\\ -y\frac{\partial h_{0}}{\partial y}&=l_{0}+l_{2}y. \end{split} \tag{55}\]
Solving the first equation gives \(h_{0}=l_{1}\ln(y)\), and it is a polynomial only if \(l_{1}=0\). This implies that \(h=h_{0}=0\). Consequently, the system (1) has no exponential factors for \(a=b=0\) and \(c>0\).
**Subcase 4.4**. When \(a=0\) and \(c=0\). The equation (50) takes the form
\[x(1-y)\frac{\partial h}{\partial x}+y(-1+x)\frac{\partial h}{\partial y}+z(-b )\frac{\partial h}{\partial z}=l_{0}+l_{1}x+l_{2}y+l_{3}z. \tag{56}\]
If \(b=0\), then the system has an exponential factor, as shown in part (a) of Theorem 1.3. So from now on, we assume that \(b\neq 0\). We proceed in a similar way to the proof of Subcase 4.1. We write \(h\) in the form \(h=\sum_{j=0}^{n}h_{j}(x,y,z)\), where each \(h_{j}\) is a homogeneous polynomial of degree \(j\) in its variables. Assume \(n\geq 2\). As before, we will show that \(h\) has degree \(n\leq 1\). The terms of degree \(n+1\) in (56) are
\[-xy\frac{\partial h_{n}}{\partial x}+xy\frac{\partial h_{n}}{\partial y}=0,\]
whose solution is \(h_{n}=W_{n}(x+y,z)\), where \(W_{n}\) is an arbitrary polynomial in \(x+y\) and \(z\). Since \(h_{n}\) is homogeneous of degree \(n\), we can write \(h_{n}=\sum_{j=0}^{n}a_{j}z^{n-j}(x+y)^{j}\), where \(a_{j}\in\mathbb{C}\). By computing the terms of degree \(n\) in equation (56), we obtain
\[h_{n-1}=-2\,b\,B\operatorname{arctanh}\Big{(}\frac{x-y}{x+y}\Big{)}-A\ln(-xy)+W_{n-1}(x+y,z),\]
where \(A=\sum_{j=0}^{n}ja_{j}(x+y)^{j-1}z^{n-j}\), \(B=\sum_{j=0}^{n}a_{j}(n-j)(x+y)^{j-1}z^{n-j}\), and \(W_{n-1}\) is an arbitrary function of \(x+y\) and \(z\). Since \(h_{n-1}\) must be a homogeneous polynomial of degree \(n-1\), we must have \(A=0\) and \(B=0\), which implies that \(a_{j}=0\) for \(j=0,1,\ldots,n\), and thus \(h_{n}=0\), which is a contradiction. Hence \(n\leq 1\). Therefore \(h=h_{0}+h_{1}x+h_{2}y+h_{3}z\). Substituting \(h\) in (56), we get \(h=h_{2}(x+y)+h_{3}z\) and \(L=h_{2}(x-y)-b\,h_{3}z\). This completes the proof of statement (b).
## 6. First Integrals
The proof of Theorem 1.3 follows by direct computations, and it is omitted.
**Proof of Theorem 1.4**. According to Theorem 2.2, the system (1) has a Darboux first integral if and only if there exist \(\lambda_{i},\mu_{j}\in\mathbb{C}\), not all zero, such that equation (8) is satisfied. It follows from Theorem 1.1 that the system (1) has three Darboux polynomials with cofactors \(K_{1}=1-y+cx-axz\), \(K_{2}=-1+x\), and \(K_{3}=-b+ax^{2}\). Now, by Theorem 1.2, when \(a>0\) and \(c>0\), there are two exponential factors with cofactors \(L_{1}=cx^{2}-xy-bz+x\) and \(L_{2}=y(x-1)\). So equation (8) is equivalent to
\[\lambda_{1}(1-y+cx-axz)+\lambda_{2}(-1+x)+\lambda_{3}(-b+ax^{2})+\mu_{1}(cx^{2 }-xy-bz+x)+\mu_{2}(y(x-1))=0.\]
Solving it, we get \(\lambda_{1}=\lambda_{2}=\lambda_{3}=\mu_{1}=\mu_{2}=0\). In short, there are no first integrals of Darboux type in this case.
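The linear system above is small, but it can also be handed to a computer algebra system; a minimal SymPy sketch (with our own variable names) is:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a, b, c = sp.symbols('a b c', positive=True)
l1, l2, l3, m1, m2 = sp.symbols('l1 l2 l3 m1 m2')

# lambda_i times the Darboux cofactors plus mu_j times the
# exponential-factor cofactors must vanish identically
expr = (l1*(1 - y + c*x - a*x*z) + l2*(-1 + x) + l3*(-b + a*x**2)
        + m1*(c*x**2 - x*y - b*z + x) + m2*y*(x - 1))

# All monomial coefficients in x, y, z must be zero
eqs = sp.Poly(expr, x, y, z).coeffs()
print(sp.solve(eqs, [l1, l2, l3, m1, m2]))  # only the zero solution
```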
**Proof of Theorem 1.5**. The proof of Theorem 1.5 can be obtained easily from Theorem 1.1 and Corollary 1, as well as Lemmas 2.1 and 2.3.
**Proof of Theorem 1.6**. If \(b=0\), then system (1) has the form
\[\begin{split}\dot{x}&=x(1-y+cx-axz),\\ \dot{y}&=y(-1+x),\\ \dot{z}&=ax^{2}z,\end{split} \tag{57}\]
Let \(f\) be a formal first integral of system (57). Without loss of generality, we can assume that \(f\) has no constant terms. We write \(f=\sum_{j\geq 0}f_{j}(x,y)z^{j}\), where every \(f_{j}(x,y)\) is a formal power series in the variables \(x\) and \(y\). We consider two cases.
**Case 1**: If \(f\) is not divisible by \(z\). In this case, \(f_{0}=f_{0}(x,y)\) is a formal first integral of system (57) restricted to \(z=0\). Note that system (57), restricted to \(z=0\), becomes system (21). Consequently, \(f_{0}\) is also a formal first integral of system (21). However, we proved in Proposition 2 that system (21) has no formal first integral, so we have a contradiction.
**Case 2**: If \(f\) is divisible by \(z\). In this case, we write \(f=z^{l}g\), where \(l\geq 1\) and \(g\) is not divisible by \(z\). Furthermore, \(g\) is a formal power series that satisfies
\[x(1-y+cx-axz)\frac{\partial g}{\partial x}+y(-1+x)\frac{\partial g}{\partial y }+ax^{2}z\frac{\partial g}{\partial z}=a\,l\,x^{2}g.\]
Now we introduce the change of variable \(g=\exp(-x)T\). Then, since \(g\) is a formal series in the variables \(x,y\), and \(z\), we have that \(T\) is a formal power series in the same variables, and it satisfies
\[x(1-y+cx-axz)\frac{\partial T}{\partial x}+y(-1+x)\frac{\partial T}{\partial y}+ ax^{2}z\frac{\partial T}{\partial z}=(-ax^{2}z+(al+c)x^{2}-xy+x)T, \tag{58}\]
after dividing by \(\exp(-x)\). We write \(T\) as \(T=\sum_{j\geq 0}T_{j}z^{j}\), where every \(T_{j}=T_{j}(x,y)\) is a formal power series in the variables \(x\) and \(y\). Furthermore, since \(g\) is not divisible by \(z\), we have that \(T\) is not divisible by \(z\), so \(T_{0}=T_{0}(x,y)\neq 0\). It follows that at least one of \(\tilde{T}_{0}=\tilde{T}_{0}(x)\neq 0\) or \(\bar{T}_{0}=\bar{T}_{0}(y)\neq 0\) must hold, where \(\tilde{T}_{0}(x)\) is the restriction of \(T_{0}\) to \(y=0\), and \(\bar{T}_{0}(y)\) is the restriction of \(T_{0}\) to \(x=0\). Otherwise, \(T\) would be divisible by \(z\), a contradiction. Without loss of generality, we can assume that \(\tilde{T}_{0}\neq 0\). Moreover, if we restrict equation (58) to \(y=z=0\) and simplify it, we obtain that
\[(1+cx)\frac{d\tilde{T}_{0}}{dx}=(1+(al+c)x)\tilde{T}_{0}. \tag{59}\]
Now we consider two subcases.
**Subcase 2.1**\(\tilde{T}_{0}\) is not divisible by \((1+cx)\). In this case, since \(\tilde{T}_{0}\neq 0\) and \(al\neq 0\) from (59), we get a contradiction.
**Subcase 2.2**\(\tilde{T}_{0}\) is divisible by \((1+cx)\). We write \(\tilde{T}_{0}(x)=(1+cx)^{m}h(x)\) with \(m\geq 1\) and \(h(x)\) being a formal power series in the variable \(x\) that is not divisible by \((1+cx)\). Then from (59) we obtain \(h\) must satisfy, after dividing by \((1+cx)^{m}\),
\[(1+cx)\frac{dh}{dx}=(1+(al+c)x-cm)h. \tag{60}\]
Since \(cm>0\) and \(al>0\), we have from (60) that \(h\) must be divisible by \((1+cx)\), a contradiction. So system (1) has no formal first integral if \(b=0\).
Now we consider system (1) with \(a\neq 0\), \(c\neq 0\), and \(b\neq 0\). We provide the proof of Theorem 1.7 in order to prove Theorem 1.6 when \(b>0\).
**Proof of Theorem 1.7**. Since \(b\) is one of the parameters in the system, we can rewrite it as
\[\begin{split}\dot{x}&=x(1-y+cx-axz),\\ \dot{y}&=y(-1+x),\\ \dot{z}&=z(-b+ax^{2}),\\ \dot{b}&=0.\end{split} \tag{61}\]
In other words, we add a new variable, \(b\), that was a parameter in system (1). Note that a non-constant function \(f=f(b)\) is a first integral of system (61), but it is not a first integral of the system (1).
We assume that \(f=f(x,y,z,b)\) is a formal power series first integral of system (61). Expanding \(f\) in powers of the variable \(b\), we get \(f=\sum\nolimits_{k\geq 0}f_{k}(x,y,z)b^{k}\), where each \(f_{k}\) is a formal series in its variables. We can write \(f=f_{0}+bg\), where \(g=\sum\nolimits_{k\geq 0}f_{k+1}(x,y,z)b^{k}\) is a formal series in the variables \(x,y,\) and \(z\). Since \(f(x,y,z,0)\) is a formal first integral of the system (1) with \(b=0\), and since \(a\neq 0\) and \(c\neq 0\), we are in the assumptions of Theorem 1.6 for \(b=0\); applying it, we get that \(f(x,y,z,0)=f_{0}=d_{0}\). We claim that
\[f_{k+1}=d_{k+1},\quad k\geq 0, \tag{62}\]
where \(d_{k+1}\) are constants for \(k\geq 0\). Then, since \(f\) is a first integral, it satisfies \(\mathcal{X}f=0\). So, the function \(g\) satisfies the equation
\[x(1-y+cx-axz)\frac{\partial g}{\partial x}+y(-1+x)\frac{\partial g}{\partial y}+z(-b+ax^{2})\frac{\partial g}{\partial z}=0. \tag{63}\]
Moreover, \(f_{1}=g(x,y,z,0)\) satisfies (63) restricted to \(b=0\), that is,
\[x(1-y+cx-axz)\frac{\partial f_{1}}{\partial x}+y(-1+x)\frac{\partial f_{1}}{ \partial y}+(ax^{2}z)\frac{\partial f_{1}}{\partial z}=0.\]
Hence, \(f_{1}\) is a formal power series first integral of system (1) with \(b=0\), and by assumption we additionally have \(a\neq 0\) and \(c\neq 0\). So, from Theorem 1.6 for \(b=0\), we obtain that \(f_{1}=d_{1}\), a constant. This proves (62) for \(k=0\). Now we assume (62) is true for \(k=0,\ldots,j-1\) with \(j\geq 1\), and we will prove it for \(k=j\). Since \(g=d_{1}+\sum\nolimits_{k\geq 1}f_{k+1}b^{k}\), then \(f=d_{0}+bd_{1}+b\sum\nolimits_{k\geq 1}f_{k+1}b^{k}\), and by the induction hypothesis, we have
\[f=\sum\nolimits_{k=0}^{j}d_{k}b^{k}+b^{j+1}\sum\nolimits_{k\geq j}f_{k+1}b^{k-j},\]
so \(\sum\nolimits_{k\geq j}f_{k+1}b^{k-j}\) is a formal first integral of system (61). Consequently, \(f_{j+1}\) is a formal first integral of system (1) with \(b=0\), and from part (a), with \(a\neq 0\), \(c\neq 0\), we get \(f_{j+1}=d_{j+1}\), and this proves the claim for \(k=j\). Then, from (62), we get that \(f=\sum\nolimits_{k\geq 0}d_{k}b^{k}\), which finishes the proof of the theorem.
2310.09096 | Consensus Formation Among Mobile Agents in Networks of Heterogeneous
Interaction Venues | Exploring the collective behavior of interacting entities is of great
interest and importance. Rather than focusing on static and uniform
connections, we examine the co-evolution of diverse mobile agents experiencing
varying interactions across both space and time. Analogous to the social
dynamics of intrinsically diverse individuals who navigate between and interact
within various physical or digital locations, agents in our model traverse a
complex network of heterogeneous environments and engage with everyone they
encounter. The precise nature of agents internal dynamics and the various
interactions that nodes induce are left unspecified and can be tailored to suit
the requirements of individual applications. We derive effective dynamical
equations for agent states which are instrumental in investigating thresholds
of consensus, devising effective attack strategies to hinder coherence, and
designing optimal network structures with inherent node variations in mind. We
demonstrate that agent cohesion can be promoted by increasing agent density,
introducing network heterogeneity, and intelligently designing the network
structure, aligning node degrees with the corresponding interaction strengths
they facilitate. Our findings are applied to two distinct scenarios: the
synchronization of brain activities between interacting individuals, as
observed in recent collective MRI scans, and the emergence of consensus in a
cusp catastrophe model of opinion dynamics. | Guram Mikaberidze, Sayantan Nag Chowdhury, Alan Hastings, Raissa M. DSouza | 2023-10-13T13:32:20Z | http://arxiv.org/abs/2310.09096v1 | # Consensus Formation Among Mobile Agents in Networks of Heterogeneous Interaction Venues
###### Abstract
Exploring the collective behavior of interacting entities is of great interest and importance. Rather than focusing on static and uniform connections, we examine the co-evolution of diverse mobile agents experiencing varying interactions across both space and time. Analogous to the social dynamics of intrinsically diverse individuals who navigate between and interact within various physical or digital locations, agents in our model traverse a complex network of heterogeneous environments and engage with everyone they encounter. The precise nature of agents' internal dynamics and the various interactions that nodes induce are left unspecified and can be tailored to suit the requirements of individual applications. We derive effective dynamical equations for agent states which are instrumental in investigating thresholds of consensus, devising effective attack strategies to hinder coherence, and designing optimal network structures with inherent node variations in mind. We demonstrate that agent cohesion can be promoted by increasing agent density, introducing network heterogeneity, and intelligently designing the network structure, aligning node degrees with the corresponding interaction strengths they facilitate. Our findings are applied to two distinct scenarios: the synchronization of brain activities between interacting individuals, as observed in recent collective MRI scans, and the emergence of consensus in a cusp catastrophe model of opinion dynamics.
## I Introduction
In recent years, the scientific community has made impressive progress in comprehending the complexities of interacting systems. The methodology of network science [1] has provided a powerful framework for this quest. It has unveiled fresh insights into the role of the network structure, wielding profound influence over collective behavior [2; 3], in diverse realms spanning from the networks of cortical neurons to the fabric of society. However, most studies have focused on scenarios where the interactions between units remain constant over time, neglecting numerous realistic situations with time-varying interactions [4], such as person-to-person communication [5; 6], cooperative dynamics of animal groups [7] and rational individuals [8], and robot and vehicle movements [9; 10], among others.
Here, we study the _co-evolution of diverse mobile agents with interactions varying across space and time_. Consider how social consensus emerges when individuals, each with a unique thinking pattern, navigate between and interact within varied physical or digital locations. Accordingly, in our model, various agents navigate a complex network of diverse locations, interacting with everyone they meet along the way (see Fig. 1). To maintain realism, we assume that nodes can facilitate a range of interactions, while the specific forms of agents' internal dynamics and interactions are deliberately left unspecified and can be chosen based on the application. Our approach yields concise effective dynamical equations for agent states in the weak coupling limit. These equations serve as critical tools to explore thresholds of consensus, crucial in opinion dynamics, devise effective strategies to hinder coherence, and design optimal network structures with inherent node variations in mind.
We find that the effective interaction strength, and thus the agents' coherence, can be enhanced by: increasing the number of agents, reducing the network size, aligning node degrees with the interactions they induce, or broadening the degree distribution of the network. The latter serves as a prime example of converse symmetry breaking [11], since discrepancies in node degrees promote unity among agent states. We also find that a strategic approach to disrupt coherence involves targeting high-degree nodes due to their extensive influence on collective behavior.
For validation and applications, we will introduce specific internal dynamics and interactions for agents. As the first example, we will delve into an intriguing line of experimental research that examines the "conceptual alignment" or "brain-to-brain synchronization" between interacting individuals [12; 13; 14]. Such experiments often
utilize collective MRI and EEG brain scans [15]. This neuroimaging technique is called "Hyperscanning" and it simultaneously records brain activity from multiple individuals during social interactions or coordinated tasks. Recent observations have shown that brain activity patterns in response to stimuli become synchronized and remain so after engaging in a common discussion [16]. Similarly, the pupils of conversing individuals contract and dilate in synchrony [17]. To represent these interactions, we will model the participants as Kuramoto agents that synchronize during constructive discussions and desynchronize during disruptive interactions. This example is similar to the metapopulation model [18] where Kuramoto agents perform a degree-biased random walk on a network. However, in contrast to Ref. [18], our work considers non-uniform couplings and allows them to be repulsive.
As the second application, we will explore a mathematical model of polarization within and across individuals [19]. This model incorporates internal dynamics based on a cusp catastrophe of opinion, which is a function of external influence and individuals' attention to the subject matter. We will incorporate this model as the agents' internal dynamics and study the feasibility of consensus.
In both application systems, the individuals will navigate various social settings, including online platforms like the comments sections of news articles and social media posts, as well as in-person gatherings such as offices, schools, bars, and book clubs. Each interaction venue is considered a distinct node in the network. During these gatherings, agents engage in interactions with all other participants present in the same node. Experimental studies have shown that exposure to a different point of view can lead to either convergence [20] or divergence [21] in opinions. It is plausible that the nature of the interaction setting plays an important role in this outcome. Thus, in our model, the network nodes represent diverse environments inducing various interactions among the visitors. For example, interactions in a bar are expected to be different from interactions in a debate club. Some nodes will foster consensus by introducing attractive (cohesive) interactions between interacting agents, while others may contribute to discord by imposing repulsive (disruptive) interactions.
The literature on mobile agents [22; 23; 24; 25; 26; 27] remains limited and most studies have focused on two simplistic assumptions. First, they consider mobile agents moving randomly on a continuous two- or three-dimensional plane. In contrast, we use a novel approach assuming that mobile agents traverse a complex network, hopping node to node along the edges. This movement exposes them to varying sets of neighbors. Second, the interactions in the literature are usually fixed and independent of the agents' absolute location. Our model relaxes this assumption by allowing the locations to dictate how the agents interact. The local interactions on the nodes promote local coherence of the internal dynamics whereas the random walk continuously updates the interacting subsets of agents, facilitating global coherence. It is important to note that our proposed model only partially reflects the complexities of real-world scenarios. Nonetheless, in an effort to capture the richness of natural settings, we incorporate both cohesive and disruptive interactions [28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. The interplay between positive and negative couplings holds significant relevance in neuronal networks composed of both excitatory and inhibitory neurons [38; 39]. Similarly, in social networks, one can discern the coexistence of contrarians alongside conformists [40], giving rise to starkly contrasting dynamics evident in phenomena such as political elections or the spread of rumors. The coexistence of these two types of couplings allows our model to capture a wide range of realistic settings [41]. By introducing these advancements to the study of mobile agents, we aim to expand our understanding of complex systems and their collective behaviors, acknowledging the inherent simplifications of our model while embracing its potential to capture essential aspects of real-world dynamics.
With the aforementioned objectives in mind, we embark on addressing the following pivotal questions through analytical means:
1. _Can coherence be achieved among mobile agents, even under disruptive influences? We aim to determine the critical threshold analytically, indicating the number and strength of disruptive nodes required to fully disrupt coherence._
2. _How does the network topology impact the collective behavior among these mobile agents when subjected to the combined influence of attractive and repulsive interactions? Can we discern which network topology is more robust in the face of repulsive interactions?_
Figure 1: Network of heterogeneous locations hosting diverse mobile agents. Agents move around the network and interact with other agents. Types of agent interactions depend on the host.

To unravel the answers to these inquiries, we first provide an in-depth exposition of our model in Sec. II. In Sec. III, we give a comprehensive analytical derivation of our main result, the effective equations of agents' internal states. We also discuss the general insights offered by them. In Sec. IV we focus on the application example of Brain-to-Brain synchronization, and, for the first time in Sec. IV.1, we introduce specific free evolution and interactions into our system. We also need to specify the distribution of disruptive and cohesive nodes. Section IV.2 considers the "untargeted attacks", where disruptive nodes are selected uniformly at random. In contrast, Sec. IV.3 considers "targeted attacks," where the disruptive nodes are selected among the most well-connected nodes. Sections IV.2 and IV.3 comprise various subsections, each dedicated to exploring a distinct network topology, namely: (i) Regular [42], (ii) Random [43], (iii) Small-world [44], and (iv) Scale-free [45; 46]. For each attack strategy and each network topology, we derive the threshold of synchronization analytically and compare it with extensive numerical calculations. In Sec. IV.4, we briefly discuss the scenario of targeting low-degree nodes. Next, in Sec. V we move on to the second application, the cusp catastrophe model of opinion dynamics. Section V.1 introduces the applicable free evolution and interaction functions and computes analytically the condition of consensus formation among mobile agents under untargeted attacks. We additionally confirm the result through numerical validation. Finally, Sec. VI presents the discussion and conclusions.
## II Mathematical model
We consider a finite network of \(n\) vertices. The connectivity of this network is characterized by an adjacency matrix \(A=[A_{\alpha\beta}]_{n\times n}\), where \(A_{\alpha\beta}=1\) (or 0) indicates the presence (or absence) of a link between nodes \(\alpha\) and \(\beta\). We also impose the following assumptions on the network: it is connected, undirected (\(A_{\alpha\beta}=A_{\beta\alpha}\)), and devoid of self-loops (\(A_{\alpha\alpha}=0\)). The degree of a node \(\alpha\) is defined in the conventional way as \(d_{\alpha}=\sum_{\beta=1}^{n}A_{\alpha\beta}\). Greek indices number the nodes, while Latin indices are reserved for enumerating the agents, which we discuss next.
We randomly place \(N\) mobile agents on this network of \(n\) nodes. After every fixed time interval \(\Delta T\), each agent jumps to one of the nodes adjacent to its current position. Consider the \(i\)-th agent, located at node \(\alpha\) at time \(t\). At the time \((t+\Delta T)\), this agent will hop to one of the node-\(\alpha\)'s neighbor nodes, say \(\beta\), with a uniform probability \(\frac{A_{\alpha\beta}}{d_{\alpha}}\). Once the agent has made its jump, it interacts with all the other agents present in node \(\beta\) at that time. The state \(\phi_{i}\) of the mobile agent \(i\) (\(i=1,2,3,\cdots,N\)), situated on node \(\alpha\) (\(\alpha=1,2,3,\cdots,n\)), evolves according to the following equation:
\[\dot{\phi}_{i}=F_{i}(\phi_{i})+\sum_{j\in O_{\alpha}}H_{\alpha}(\phi_{i}, \phi_{j}). \tag{1}\]
The term \(F_{i}(\phi_{i})\) describes the agent's natural, free evolution. The subscript of \(F_{i}\) explicitly enumerates the diversity of the agents. \(H_{\alpha}(\phi_{i},\phi_{j})\) describes how agents interact with each other within node \(\alpha\). The subscript of \(H_{\alpha}\) enumerates the diversity of nodes or locations. At every time instance, each node \(\alpha\) hosts a particular subset of mobile agents \(O_{\alpha}\), and these subsets change after random walk iterations. We continue this hopping process, which involves local interactions, for a significant number of iterations until a stationary state is reached.
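A minimal simulation sketch of this process is given below. The graph model, all parameter values, and the particular choices of \(F_{i}\) and \(H_{\alpha}\) (a Kuramoto-type interaction with node-dependent coupling signs, anticipating Sec. IV) are illustrative assumptions on our part, not prescriptions of the model; note that the random walk is synchronous, with all agents hopping at the same instants:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

n, N, DT, dt = 50, 200, 0.1, 0.01
G = nx.barabasi_albert_graph(n, 2, seed=0)   # connected venue network
k = rng.choice([1.0, -0.5], size=n)          # cohesive/disruptive venues
omega = rng.standard_normal(N)               # agents' natural frequencies

def F(i, phi_i):                             # free evolution F_i
    return omega[i]

def H(alpha, phi_i, phi_j):                  # node-dependent interaction H_alpha
    return k[alpha] * np.sin(phi_j - phi_i)

phi = rng.uniform(0, 2*np.pi, N)             # agent states
pos = rng.integers(0, n, N)                  # agents' current venues

for _ in range(200):
    # integrate Eq. (1) over one waiting interval Delta T
    for _ in range(int(DT / dt)):
        dphi = np.array([
            F(i, phi[i]) + sum(H(pos[i], phi[i], phi[j])
                               for j in range(N)
                               if j != i and pos[j] == pos[i])
            for i in range(N)])
        phi = phi + dt * dphi
    # random-walk step: each agent hops to a uniformly chosen neighbor
    pos = np.array([rng.choice(list(G.neighbors(v))) for v in pos])

print(np.abs(np.exp(1j*phi).mean()))         # order parameter of the final state
```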
## III Analytical findings
To start our analysis, we use a master equation describing the random walk of a single mobile agent on the network. Let us assume that the random walk begins at a node \(\delta\). We use the notation \(P_{\alpha\delta}(t)\) to represent the probability of finding the agent at node \(\alpha\) after a specific time \(t\). This probability can be expressed recursively using the master equation
\[P_{\alpha\delta}(t)=\sum_{\beta}A_{\alpha\beta}\frac{P_{\beta\delta}(t- \Delta T)}{d_{\beta}}. \tag{2}\]
The random walk iterations occur regularly at intervals of \(\Delta T\), therefore, \(P_{\beta\delta}(t-\Delta T)\) represents the probabilities during the last iteration. The master equation states that the agent will be in node \(\alpha\) if, during the previous iteration, it was in one of the neighboring nodes \(\beta\) (probability given by \(P_{\beta\delta}(t-\Delta T)\)), and then it jumped to node \(\alpha\) (probability given by \(\frac{1}{d_{\beta}}\)). As the time \(t\) approaches infinity, Eq. (2) reaches a stationary state where the probability distribution becomes independent of time \(t\) and the starting node \(\delta\)[77]. Thus, in the stationary state, we have the following equation,
\[P_{\alpha}=\sum_{\beta}A_{\alpha\beta}\frac{P_{\beta}}{d_{\beta}}. \tag{3}\]
We can easily verify that \(\frac{P_{\alpha}}{d_{\alpha}}=c\) (with \(c\) independent of \(\alpha\)) is a solution of Eq. (3). Pulling out \(\frac{P_{\beta}}{d_{\beta}}\) as a common factor, the remaining sum evaluates to \(d_{\alpha}\). Therefore, the expression \(\frac{P_{\alpha}}{d_{\alpha}}=c\) satisfies the stationary state condition. After normalization, the probability of finding the specific agent in node \(\alpha\) can be expressed as
\[P_{\alpha}=\frac{d_{\alpha}}{\sum_{\beta}d_{\beta}}. \tag{4}\]
In a connected network, any node can be reached from any other node. This means that the Markov chain corresponding to such a random walk is irreducible and therefore the stationary state given by Eq. (4) must be unique.
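Equation (4) is easy to probe numerically. In the sketch below (the graph model, its parameters, and the seeds are arbitrary choices for illustration), the visit frequencies of a long unbiased walk approach the degree-proportional weights:

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
G = nx.barabasi_albert_graph(100, 3, seed=1)  # connected test network

v = 0
visits = np.zeros(G.number_of_nodes())
for _ in range(200_000):                      # one long random walk
    v = rng.choice(list(G.neighbors(v)))
    visits[v] += 1

deg = np.array([d for _, d in G.degree()])
empirical = visits / visits.sum()
predicted = deg / deg.sum()                   # Eq. (4)
print(np.abs(empirical - predicted).max())    # small, shrinking with walk length
```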
Considering that we have a total of \(N\) agents, the expected number of agents in node \(\alpha\) can be calculated using the following equation:
\[|O_{\alpha}|=\frac{Nd_{\alpha}}{\sum_{\beta}d_{\beta}}. \tag{5}\]
Next, we will utilize the averaging theory to establish the equations for the internal states of the mobile agents in the weak coupling limit. We define the weak coupling limit by demanding that the interaction time scale is much longer than the random walk time scale. In this case the averaging theory [78] allows us to replace the weak, fast-shifting interaction terms in Eq. (1) with their averaged values over all agents. This approximation becomes exact in the limit of infinitely separated time scales, where \(\Delta T\to 0+\). One can equivalently interpret this as a rapid random walk instead of weak interactions. In terms of opinion dynamics, this limit is justified by the fact that changing one's opinion significantly is unlikely after just one interaction. By averaging over all possible neighbors, Eq. (1) can be simplified as follows
\[\dot{\phi}_{i} =F_{i}(\phi_{i})+\sum_{j\in O_{\alpha}}\frac{1}{N}\sum_{l=1}^{N}H_ {\alpha}(\phi_{i},\phi_{l}), \tag{6}\] \[=F_{i}(\phi_{i})+|O_{\alpha}|\frac{1}{N}\sum_{l=1}^{N}H_{\alpha}( \phi_{i},\phi_{l}),\] \[=F_{i}(\phi_{i})+\frac{d_{\alpha}}{\sum_{\beta=1}^{n}d_{\beta}} \sum_{l=1}^{N}H_{\alpha}(\phi_{i},\phi_{l}).\]
Recall that, in the given expression, node \(\alpha\) represents the current location of agent \(i\). Building upon the reasoning discussed earlier, we can proceed by averaging the interaction terms originating from node \(\alpha\) across all possible nodes \(\delta\) that the agent could visit. This averaging considers the appropriate probability weights \(P_{\delta}\) associated with each node \(\delta\). Hence, we obtain
\[\dot{\phi}_{i}=F_{i}(\phi_{i})+\sum_{\delta=1}^{n}P_{\delta}\frac{d_{\delta}} {\sum_{\beta=1}^{n}d_{\beta}}\sum_{l=1}^{N}H_{\delta}(\phi_{i},\phi_{l}). \tag{7}\]
To make further progress, we assume that the interaction functions of different nodes relate to each other through scaling \(H_{\delta}(\phi_{i},\phi_{j})=k_{\delta}H(\phi_{i},\phi_{j})\), where \(k_{\delta}\) is the coupling strength between the mobile agents in node \(\delta\).
\[\begin{split}\dot{\phi}_{i}&=F_{i}(\phi_{i})+\sum\nolimits_{\delta=1}^{n}P_{\delta}\frac{k_{\delta}d_{\delta}}{\sum\nolimits_{\beta=1}^{n}d_{\beta}}\sum\nolimits_{l=1}^{N}H(\phi_{i},\phi_{l}),\\ &=F_{i}(\phi_{i})+\frac{\sum\nolimits_{\delta=1}^{n}k_{\delta}d_{\delta}^{2}}{\left(\sum\nolimits_{\beta=1}^{n}d_{\beta}\right)^{2}}\sum\nolimits_{l=1}^{N}H(\phi_{i},\phi_{l}),\\ &=F_{i}(\phi_{i})+\frac{\frac{1}{n}\sum\nolimits_{\delta=1}^{n}k_{\delta}d_{\delta}^{2}}{n\left(\frac{1}{n}\sum\nolimits_{\beta=1}^{n}d_{\beta}\right)^{2}}\sum\nolimits_{l=1}^{N}H(\phi_{i},\phi_{l}),\\ &=F_{i}(\phi_{i})+\frac{\left\langle d^{2}k\right\rangle}{n\left\langle d\right\rangle^{2}}\sum\nolimits_{l=1}^{N}H(\phi_{i},\phi_{l}),\\ &=F_{i}(\phi_{i})+\frac{\tilde{k}}{N}\sum\nolimits_{l=1}^{N}H(\phi_{i},\phi_{l}).\end{split} \tag{8}\]
In the given expression, the notation \(\langle\cdot\rangle\) represents a simple, unweighted average taken over all nodes. It is important to observe that the outcome is a differential equation resembling the original equation (1). However, this time the system is globally coupled with an effective coupling strength denoted as \(\tilde{k}\). The resulting effective dynamical equation is
\[\dot{\phi}_{i} =F_{i}(\phi_{i})+\frac{\tilde{k}}{N}\sum_{j=1}^{N}H(\phi_{i},\phi _{j}) \tag{9}\] \[\tilde{k} =\frac{N}{n}\frac{\left\langle d^{2}k\right\rangle}{{\left\langle d \right\rangle}^{2}}.\]
These effective differential equations for the agent states are the main analytical result of our work. The effective coupling strength \(\tilde{k}\) depends on several factors, including the size of the network, the number of agents, the network topology, and the distribution of the coupling strengths. Therefore, by varying any of these parameters, the effective coupling strength can be altered, leading to different dynamics and collective behaviors in the network of interacting mobile agents. Below, we will discuss the insights readily available from Eq. (9), as well as its detailed consequences for different applications in Secs. IV and V.
At this stage, several observations can be made without delving into the specific details of the dynamics.
* _First, due to the weighting by the squared node degree, nodes with higher degrees have a greater influence on the system's behavior. These highly connected nodes play a more significant role in shaping the overall dynamics of the system._
* _Second, when considering the number of agents \(N\) and the network size \(n\), an inverse relationship can be observed. As the number of agents increases and the network size decreases, the agents become more concentrated within the network. This
concentration leads to a higher frequency of interactions among any given pair of agents, resulting in an increased effective coupling strength. This density-dependent synchronization threshold is closely linked to phenomena like bacterial infection, biofilm formation, and bioluminescence, unveiling quorum-sensing transitions in coupled systems [47; 48; 49; 50]._
* _Third, the term_ \(\langle d^{2}k\rangle\) _in the effective coupling informs an intelligent design of the network structure, with nodes' inherent variations in mind. In particular, correlating the node degrees with their coupling strengths enhances the effective interactions._
Finally, it is informative to examine the scenario where all nodes possess an identical positive coupling strength. In this case, the coupling term can be factored out of the expectation, resulting in \(\langle d^{2}k\rangle=k\langle d^{2}\rangle\). Consequently, the expression for the effective coupling in Eq. (9) simplifies to:
\[\tilde{k}=\frac{N}{n}\frac{\langle d^{2}\rangle}{\langle d\rangle^{2}}k. \tag{10}\]
This observation reveals a counter-intuitive finding: degree heterogeneity, which refers to variation in the number of connections among network nodes, actually enhances the effective coupling and promotes coherence among the agents. On the other hand, when network nodes have similar degrees, the similarity in agent states decreases exemplifying the converse symmetry breaking phenomenon [11]. This result becomes more intuitive when interpreted in the context of opinion dynamics. When there are several highly connected hubs in the network that serve as focal points for discussions or interactions, and most agents are concentrated within these hubs, it becomes easier to reach a consensus or synchronization among the agents. In contrast, if there are many small discussion venues or nodes with equal popularity, the process of achieving consensus becomes more challenging.
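The factor \(\langle d^{2}\rangle/\langle d\rangle^{2}\) in Eq. (10) can be compared across standard network models; a brief sketch (model parameters are arbitrary illustrations):

```python
import numpy as np
import networkx as nx

def coupling_boost(G):
    d = np.array([deg for _, deg in G.degree()], dtype=float)
    return (d**2).mean() / d.mean()**2            # <d^2>/<d>^2 of Eq. (10)

n = 1000
print(coupling_boost(nx.random_regular_graph(6, n, seed=0)))   # exactly 1
print(coupling_boost(nx.erdos_renyi_graph(n, 6/n, seed=0)))    # about 1 + 1/<d>
print(coupling_boost(nx.barabasi_albert_graph(n, 3, seed=0)))  # well above 1
```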
## IV Exploring synchronization: random walk of Kuramoto agents
The exploration of synchronization has a fascinating history that dates back to Huygens' classical pendulum experiment [51] and Winfree's pioneering work [52] on coupled oscillators for circadian rhythms. Winfree discovered that synchronization spontaneously emerges when the coupling strength between oscillators exceeds a critical value, resembling a phase transition. Building upon this, Kuramoto [53] simplified the model and derived an exact analytical solution, sparking widespread interest in the dynamics of coupled oscillators [54]. Kuramoto's model has been extended to various systems beyond circadian rhythms in recent years. Examples include firing neurons [55], chorusing frogs [56], and even audiences clapping in perfect unison at concerts [57]. The study of Kuramoto oscillators synchronizing has also provided insights into diverse phenomena, such as the behavior of power grids [58], phase locking in Josephson junction arrays [59], the feedback between the oscillatory and cascading dynamics [60], and even the unexpected wobbling of London's Millennium Bridge on its opening day [61]. Remarkable progress has been achieved in understanding how different network structures influence the synchronization behavior of coupled Kuramoto oscillators [62].
It is worth highlighting that the domains of swarming and synchronization share numerous commonalities, residing at the intersection of nonlinear dynamics and statistical physics. However, it is regrettable that these fields have remained largely disconnected, calling for additional research attention. In the study of swarming, the primary emphasis lies in understanding how individuals move collectively, often overlooking the internal dynamics within each agent. Conversely, studies on synchronization predominantly delve into the intricacies of oscillators' internal dynamics, paying less attention to their motion. This disparity in focus presents an intriguing opportunity for further exploration and integration of ideas from both fields. By bridging this gap and combining insights from swarming and synchronization [63; 64], we can gain a deeper understanding of collective behaviors in complex systems.
Below we discuss swarming and synchronizability of oscillators motivated by recent neuro-sociological studies [12; 13; 14; 16; 17]. These experiments show that brain activities of interacting individuals get synchronized. We will consider mobile agents that synchronize upon interactions with others. After interacting for some time, they move to other locations in a network of interaction venues, where they interact with a new set of agents, and so on. We represent the agents' internal dynamics and their interactions through the most widely studied model of synchronization, the Kuramoto dynamics.
First, we will describe the specific setup of Kuramoto oscillators in our model (Sec. IV.1). Then, we will compute explicitly the synchronizability condition for various network topologies and compare it with simulations for various attack strategies (Secs. IV.2, IV.3, IV.4).
### Kuramoto model as internal dynamics of mobile agents
To progress with the analysis, we fix the free evolution function \(F_{i}(\phi_{i})=\omega_{i}\) and the interaction function \(H(\phi_{i},\phi_{j})=\sin(\phi_{j}-\phi_{i})\) in accordance with Kuramoto dynamics. Here, \(\omega_{i}\) represents the natural frequency of agent \(i\) sampled from a unimodal, symmetric distribution denoted \(g(\omega)\). We can numerically investigate the system's dynamics and validate our theoretical analysis.
A video showcasing the random movement and synchronization of such Kuramoto agents can be found at [65]. With these choices, Eq. (9) can be expressed as
\[\dot{\phi}_{i}=\omega_{i}+\frac{\tilde{k}}{N}\sum_{j=1}^{N}\sin(\phi_{j}-\phi_{i}). \tag{11}\]
To analyze synchronization, we employ the conventional Kuramoto order parameter \(r=|\frac{1}{N}\sum_{j=1}^{N}\exp\left(\hat{i}\phi_{j}\right)|\), where \(\hat{i}=\sqrt{-1}\). Here the averaging happens over all \(N\) agents. For incoherent states, the order parameter vanishes in the thermodynamic limit of agents \(N\rightarrow\infty\), while once synchronization emerges, the order parameter attains a positive value in this limit. The synchronizability condition for globally coupled Kuramoto oscillators, as described in Ref. [62], can be expressed as
\[\tilde{k}>\frac{2}{\pi\|g(\omega)\|_{\infty}}. \tag{12}\]
The notation \(\|g(\omega)\|_{\infty}\) refers to the L-infinity norm of the distribution \(g(\omega)\) and is calculated as the maximum value of \(g(\omega)\). Combining Eqs. (9) and (12), we get the synchronization condition for the full model
\[\frac{\langle d^{2}k\rangle}{\langle d\rangle^{2}}>\frac{n}{N}\frac{2}{\pi\|g( \omega)\|_{\infty}}. \tag{13}\]
It depends on the joint degree and coupling distributions through the term \(\langle d^{2}k\rangle\). The computation of the expectation terms in this equation relies on the specific distributions used to generate the network under consideration. In order to obtain accurate results using this expression, it is necessary to consider the thermodynamic limit of the network size \(n\rightarrow\infty\). In this limit, the degree distributions become exact and more accurately represent the statistical properties of the network structure.
To validate our results through simulations, we need to specify the frequency distribution \(g(\omega)\). As is customary in many studies, we select the normal distribution:
\[g(\omega)=\frac{1}{\Delta\omega\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\frac{ \omega-\omega_{0}}{\Delta\omega}\right)^{2}\right). \tag{14}\]
Here, \(\omega_{0}\) represents the mean or central value of the frequencies, and \(\Delta\omega\) is the standard deviation or width of the distribution. The normal distribution is a widely used choice in modeling various systems, including Kuramoto oscillators, due to its abundance in the real world, mathematical tractability, and symmetry.
The maximum value of the normal distribution \(g(\omega)\) occurs at the mean frequency \(\omega=\omega_{0}\). By substituting \(\omega=\omega_{0}\) into Eq. (14), we obtain
\[\|g(\omega)\|_{\infty}=g(\omega_{0})=\frac{1}{\Delta\omega\sqrt{2\pi}}. \tag{15}\]
Consequently, the synchrony condition given in Eq. (13) can be rewritten as
\[\frac{\langle d^{2}k\rangle}{\langle d\rangle^{2}}>\sqrt{\frac{8}{\pi}}\frac{ n}{N}\Delta\omega. \tag{16}\]
In this form, the condition relates the joint degree and coupling distributions to the network size \(n\), the number of agents \(N\), and the width of the frequency distribution \(\Delta\omega\). It provides a criterion for synchronization based on these parameters, indicating the necessary condition for achieving synchronization in the system of Kuramoto oscillators.
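As a direct check of this criterion, one can integrate the globally coupled dynamics of Eq. (11) with normally distributed frequencies and measure \(r\) on either side of the threshold \(\tilde{k}_{c}=\sqrt{8/\pi}\,\Delta\omega\). The sketch below is a minimal Euler integration of our own devising; the step size, integration time, and random seed are arbitrary choices rather than values used in the paper.

```python
import numpy as np

def order_parameter(phi):
    """Kuramoto order parameter r = |<exp(i*phi)>| over all N agents."""
    return np.abs(np.mean(np.exp(1j * phi)))

def simulate_reduced(k_eff, N=500, d_omega=1.0, t_max=200.0, dt=0.01, seed=1):
    """Euler integration of the reduced globally coupled dynamics, Eq. (11)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, d_omega, N)          # g(omega) of Eq. (14)
    phi = rng.uniform(0.0, 2 * np.pi, N)
    for _ in range(int(t_max / dt)):
        z = np.mean(np.exp(1j * phi))            # mean field r * exp(i*psi)
        # Im[z * exp(-i*phi_i)] equals (1/N) * sum_j sin(phi_j - phi_i)
        phi += dt * (omega + k_eff * np.imag(z * np.exp(-1j * phi)))
    return order_parameter(phi)

k_c = np.sqrt(8 / np.pi)                         # threshold from Eq. (16) at d_omega = 1
print(simulate_reduced(0.5 * k_c))               # incoherent: r of order N**-0.5
print(simulate_reduced(2.0 * k_c))               # synchronized: r clearly positive
```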
### Untargeted attacks
We begin our analysis by examining the simplest scenario of untargeted attacks where nodes are chosen uniformly at random to be corrupted, i.e., assigned a negative, repulsive coupling. Then the coupling distribution is not influenced by the node degree, and any node has an equal probability of being repulsive, regardless of its degree. In this case, we can observe that the joint distribution term becomes independent and separates into two individual terms
\[\langle d^{2}k\rangle=\langle d^{2}\rangle\langle k\rangle. \tag{17}\]
This equation simplifies the analysis, allowing us to examine the behavior of each term independently. Consequently, Eq. (16) becomes
\[\langle k\rangle>\sqrt{\frac{8}{\pi}}\frac{n}{N}\frac{\langle d\rangle^{2}}{ \langle d^{2}\rangle}\Delta\omega. \tag{18}\]
The quantity \(\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}\) is always non-negative, and it cannot exceed \(1\), since the nonnegativity of the variance implies \(\langle d\rangle^{2}\leq\langle d^{2}\rangle\). The extreme cases for this quantity are observed in two types of networks. In regular networks, where each node has the same degree, we have \(\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}=1\). On the other hand, in scale-free networks with a degree distribution characterized by a power-law exponent \(1<\gamma\leq 3\), we find that \(\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}=0\) (explicitly derived in Sec. IV.2.4). _In summary, scale-free networks with this range of power-law exponents are the most robust to untargeted attacks, while regular networks are the most vulnerable._ This finding perfectly aligns with the structural robustness of the giant connected component in complex networks under random removal of nodes [66; 67]. By understanding the behavior of the quantity \(\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}\) and its implications for network robustness, we gain valuable insights into the interplay between network structure and the impact of untargeted attacks.
In order to examine how the repulsive couplings impact the system, let us consider the Bernoulli distribution for the coupling strengths. Each node is assigned a negative coupling strength \(k_{-}\) with probability \(p\) (such nodes are called disruptors or corrupted nodes), or a positive coupling strength \(k_{+}\) with probability \((1-p)\):
\[\begin{split} Pr(k=k_{-})&=p,\\ Pr(k=k_{+})&=1-p.\end{split} \tag{19}\]
Hence, the average coupling becomes
\[\langle k\rangle=p(k_{-}-k_{+})+k_{+}. \tag{20}\]
By substituting the value of \(\langle k\rangle\) into Eq. (18), we obtain the expression for the critical fraction \(p_{c}\) of corrupted nodes needed to achieve complete incoherence:
\[p_{c}=\frac{1}{k_{+}-k_{-}}\left(k_{+}-\sqrt{\frac{8}{\pi}}\frac{n}{N}\frac{ \langle d\rangle^{2}}{\langle d^{2}\rangle}\Delta\omega\right). \tag{21}\]
It should be noted that the equation may yield non-physical values such as \(p_{c}<0\) or \(p_{c}>1\). This implies that no critical value of \(p_{c}\) separates the synchronized and incoherent phases. For instance, such a scenario can arise when the positive coupling \(k_{+}\) lacks sufficient strength to synchronize the system, even in the absence of corrupted nodes. Another extreme scenario can occur if the disruptor coupling \(k_{-}\) is set to a positive value \(k_{-}>0\), accompanied by a narrow frequency distribution \(\Delta\omega\to 0\). This inevitably leads to synchrony.
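The sketch below, a hypothetical helper of our own, evaluates Eq. (21) and returns `None` in exactly the non-physical situations just described; the second call illustrates a positive coupling too weak to synchronize the system even at \(p=0\).

```python
import numpy as np

def p_c_untargeted(k_plus, k_minus, n, N, d_ratio, d_omega):
    """Critical disruptor fraction from Eq. (21); d_ratio = <d>^2 / <d^2>.

    Returns None when the formula yields a non-physical value outside [0, 1]."""
    p = (k_plus - np.sqrt(8 / np.pi) * (n / N) * d_ratio * d_omega) / (k_plus - k_minus)
    return p if 0.0 <= p <= 1.0 else None

# Regular network (d_ratio = 1) with the parameters of Fig. 2:
print(p_c_untargeted(k_plus=1.0, k_minus=-1.0, n=100, N=500, d_ratio=1.0, d_omega=1.0))
# k_+ too weak to synchronize even without disruptors -> no physical threshold:
print(p_c_untargeted(k_plus=0.05, k_minus=-1.0, n=100, N=500, d_ratio=1.0, d_omega=1.0))
```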
Next, we take a closer look at how different network topologies affect the synchronization of mobile agents. We specifically focus on understanding how the critical fraction \(p_{c}\) changes when we use various network structures.
#### IV.2.1 Regular networks
For regular networks such as random regular networks, complete graphs, regular lattices, or any other regular network where each node has the same degree \(d\), a simplification can be made. In such cases, the expression \(\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}\) evaluates to \(1\), resulting in the critical coupling equation (21) being reduced to
\[p_{c}=\frac{1}{k_{+}-k_{-}}\left(k_{+}-\sqrt{\frac{8}{\pi}}\frac{n}{N}\ \Delta \omega\right). \tag{22}\]
We put our findings to the test by comparing them to simulations on regular networks. The outcome is depicted in Fig. 2. As anticipated, when the corrupted nodes possess strong negative couplings, fewer of them are required to create disorder. This figure is drawn with the parameters fixed at \(k_{+}=1\), \(d=3\), \(\Delta\omega=1\), \(n=100\), \(N=500\), and \(\Delta T=0.001\). Moving forward, Fig. 3 reveals that _the condition for synchronization remains unchanged, regardless of the network connectivity_. We will see below that what matters instead is the degree fluctuations. Our analytical finding in Eq. (22) aligns well with these numerical simulations, as \(p_{c}\) does not depend on the degree of the regular network. In other words, the ability for synchronization to occur is not influenced by the specific way the network is structured, as long as it has a regular degree distribution. The figure was generated using the following parameter values: \(k_{+}=1\), \(k_{-}=-1\), \(\Delta\omega=1\), \(n=100\), \(N=500\), and \(\Delta T=0.001\).
Throughout our study, the plots serve as visual representations of crucial parameter values that define the boundary between synchronized and incoherent phases. To generate each point on these plots, we keep all parameters fixed except the one represented on the \(y\)-axis. We then conduct simulations from the beginning, including the generation of the network, for different values of the \(y\)-axis parameter. During these simulations, we measure the global synchronization order parameter, denoted as \(r\), in the stationary state. We observe how \(r\) changes as a function of the \(y\)-axis parameter.
In the incoherent phase, where the system lacks synchronization, \(r\) is equal to \(0\). On the other hand, when the system exhibits synchronization, \(r\) takes on values greater than \(0\), with full synchrony approaching a value close to \(1\). With this information, we fit the measured
Figure 2: **Synchronizability of regular networks under untargeted attacks**: As the repulsive coupling strength \(k_{-}<0\) decreases, a smaller fraction \(p_{c}\) becomes sufficient to disrupt the synchronization among the Kuramoto oscillators. The solid curve represents our analytically derived result (Eq. (22)), matching with the numerical simulations in orange data points. Throughout our analysis, we maintain fixed values for the other parameters: \(k_{+}=1\), \(d=3\), \(\Delta\omega=1\), \(n=100\), \(N=500\), and \(\Delta T=0.001\). To validate our findings, we conduct multiple numerical simulations and plot the results, showing the mean value along with the standard error. This comprehensive approach ensures the robustness and reliability of our conclusions.
data using a heuristic curve that captures the behavior of \(r\). When the incoherent phase lies below the critical point \((y<y_{c})\), we employ the curve expression \(r(y)=\frac{2}{\pi}\tan^{-1}(c(y-y_{c}))\theta(y-y_{c})\), where \(\theta(\cdot)\) represents the Heaviside step function. Conversely, when the incoherent phase is above the critical point \((y>y_{c})\), we use the expression \(r(y)=\frac{2}{\pi}\tan^{-1}(c(y_{c}-y))\theta(y_{c}-y)\).
The values of \(c\) and \(y_{c}\) are determined by fitting the curve to the data using the root-mean-square method. The extracted value of \(y_{c}\) corresponds to the critical value of the \(y\)-axis parameter. To ensure accuracy, we repeat this entire process multiple times, generating a sample of \(y_{c}\) measurements. In the plots, we present the mean value of this sample, along with the standard error, providing an indication of the reliability and precision of our findings.
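A compact version of this fitting step, assuming the incoherent phase lies below the critical point, could look as follows; the synthetic `r_data` is generated here only to make the snippet self-contained, whereas in our study the data come from measured stationary order parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def r_heuristic(y, c, y_c):
    """Heuristic curve r(y) = (2/pi) * arctan(c*(y - y_c)) * theta(y - y_c)."""
    return (2 / np.pi) * np.arctan(c * (y - y_c)) * (y >= y_c)

y_data = np.linspace(0.0, 2.0, 41)               # swept y-axis parameter
rng = np.random.default_rng(2)
r_data = r_heuristic(y_data, 3.0, 0.8) + 0.02 * rng.normal(size=y_data.size)

# Root-mean-square fit; p0 should be a rough guess near the transition.
(c_fit, y_c_fit), _ = curve_fit(r_heuristic, y_data, r_data, p0=(2.0, 0.7))
print(y_c_fit)                                   # extracted critical parameter value
```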
#### IV.2.2 Random networks
The case of random networks is different from regular networks because the degrees of nodes are no longer equal. In fact, the probability that a node has a certain degree can be described by the binomial distribution [66]
\[Pr(d)=\binom{n-1}{d}\kappa^{d}(1-\kappa)^{n-1-d}. \tag{23}\]
Here \(n\) represents the size of the network and \(\kappa\) is the probability that two randomly chosen nodes are connected.
We can calculate the average degree \(\langle d\rangle\) and average squared degree \(\langle d^{2}\rangle\) of the network using the following formulas:
\[\begin{split}\langle d\rangle&=\kappa(n-1),\\ \langle d^{2}\rangle&=\kappa(1-\kappa)(n-1)+\kappa^{ 2}(n-1)^{2}.\end{split} \tag{24}\]
With this, the synchronization condition (21) reduces to
\[p_{c}=\frac{1}{k_{+}-k_{-}}\left(k_{+}-\sqrt{\frac{8}{\pi}}\frac{n}{N}\frac{ \kappa(n-1)}{1-\kappa+\kappa(n-1)}\Delta\omega\right). \tag{25}\]
Interestingly, when the condition \(\kappa\neq 1\) is satisfied, we observe that the ratio \(\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}=\frac{\kappa(n-1)}{1-\kappa +\kappa(n-1)}\) is strictly less than \(1\). This finding has significant implications; it indicates that the critical probability \(p_{c}\) is higher for random networks compared to regular networks: _random networks generally exhibit greater robustness and are capable of withstanding a larger number of corrupted nodes than regular networks_. It is worth noting that when \(\kappa=1\), random networks become fully connected and therefore regular, hence both expressions (22) and (25) yield the same results. To provide visual evidence supporting this observation, we have included a comparison with simulations in Fig. 4 where we keep fixed the parameter values at \(k_{+}=1\), \(\kappa=0.03\), \(\Delta\omega=1\), \(n=100\), \(N=500\), and \(\Delta T=0.001\). The close correspondence between the analytical predictions and the numerical data further validates the accuracy and reliability of our theoretical framework.
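As a quick numerical cross-check of this comparison, the sketch below (our own) evaluates Eq. (25) for the parameters of Fig. 4 and contrasts the result with the regular-network threshold of Eq. (22).

```python
import numpy as np

def p_c_random(k_plus, k_minus, n, N, kappa, d_omega):
    """Critical disruptor fraction on a random network, Eq. (25)."""
    ratio = kappa * (n - 1) / (1 - kappa + kappa * (n - 1))   # <d>^2 / <d^2>
    return (k_plus - np.sqrt(8 / np.pi) * (n / N) * ratio * d_omega) / (k_plus - k_minus)

# Parameters of Fig. 4; the random network tolerates a larger fraction of
# disruptors than a regular one (ratio = 1), as derived in the text.
print(p_c_random(1.0, -1.0, n=100, N=500, kappa=0.03, d_omega=1.0))   # ~0.38
print((1.0 - np.sqrt(8 / np.pi) * (100 / 500)) / 2.0)                 # ~0.34 (regular)
```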
#### IV.2.3 Small world networks
In the case of the Watts-Strogatz model for small-world networks, the degree distribution [68] is described by the equation:
Figure 3: **Degree independence of synchronizability in regular networks**: The disruption of coherence among the Kuramoto oscillators does not appear to depend on the degree \(d\) of each node in the regular network. This observation is similar to our analytical result (Eq. (22)), represented by the solid line in the graph. We maintain a consistent set of parameter values throughout our analysis, with \(k_{+}=1\), \(k_{-}=-1\), \(\Delta\omega=1\), \(n=100\), \(N=500\), and \(\Delta T=0.001\).
Figure 4: **Synchronizability of random networks under untargeted attacks**: Critical fraction of disruptors necessary to destroy synchrony as a function of disruptors’ coupling strength on a random network. Solid curve presents our analytical result Eq. (25) while the datapoints come from numerical simulations. Fixed parameter values are \(k_{+}=1\), \(\kappa=0.03\), \(\Delta\omega=1\), \(n=100\), \(N=500\), and \(\Delta T=0.001\).
\[\begin{split}\text{Pr}(d)=& e^{-qK}\sum_{m=0}^{\min(d-K,K)} \binom{K}{m}(1-q)^{m}q^{K-m}\\ &\times\frac{(Kq)^{d-K-m}}{(d-K-m)!},\quad\text{for}\quad d\geq K \end{split} \tag{26}\]
where \(2K\) represents the degree of the original lattice (before rewiring), and \(q\) is the rewiring probability. We could not obtain a closed-form expression for the synchronization threshold in this case due to the complicated nature of the degree distribution. Instead, we utilize Eq. (26) to numerically determine the expectations \(\langle d\rangle\) and \(\langle d^{2}\rangle\) and subsequently apply them in Eq. (21) to predict the critical fraction of corrupted nodes. The results, complemented by simulation data, are illustrated in Fig. 5 and show good agreement. The simulations in Fig. 5 are produced with the parameter values fixed at \(n=100\), \(K=2\), \(k_{-}=-1\), \(k_{+}=1\), \(N=500\), \(\Delta\omega=1\), and \(\Delta T=0.001\).
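One way to carry out this numerical step, sketched below under the assumption that the infinite sums may be truncated at a sufficiently large degree, is to tabulate the pmf of Eq. (26) term by term and feed the resulting moments into Eq. (21); the helper name `ws_pmf` and the truncation at degree 60 are our own choices.

```python
import numpy as np
from math import comb, exp, factorial

def ws_pmf(d, K, q):
    """Watts-Strogatz degree pmf of Eq. (26); zero for d < K."""
    if d < K:
        return 0.0
    total = sum(comb(K, m) * (1 - q) ** m * q ** (K - m)
                * (K * q) ** (d - K - m) / factorial(d - K - m)
                for m in range(min(d - K, K) + 1))
    return exp(-q * K) * total

K, q = 2, 0.3
ds = np.arange(K, 60)                             # truncate the infinite support
pr = np.array([ws_pmf(d, K, q) for d in ds])
d_mean = np.sum(ds * pr)                          # should be close to 2K = 4
d2_mean = np.sum(ds**2 * pr)
print(d_mean, d2_mean, d_mean**2 / d2_mean)       # last ratio enters Eq. (21)
```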
#### IV.2.4 Scale-free networks
When we look at scale-free networks, where the node degree follows a power-law distribution \(Pr(d)\propto d^{-\gamma}\), we discover that the system behavior changes qualitatively depending on the value of the exponent \(\gamma\). For \(2<\gamma\leq 3\), the second moment \(\langle d^{2}\rangle\) diverges while the first moment \(\langle d\rangle\) remains finite, and hence the ratio \(\langle d\rangle^{2}/\langle d^{2}\rangle\) vanishes. Similarly, for \(1<\gamma\leq 2\) we have \(\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}\approx\frac{(3-\gamma)(\gamma-1)}{(2-\gamma)^{2}}\lim\limits_{D\to\infty}\frac{(D^{2-\gamma}-1)^{2}}{D^{3-\gamma}-1}=0\), where \(D\) is the largest degree. Thus, Eq. (21) reduces to
\[p_{c}=\frac{k_{+}}{k_{+}-k_{-}}. \tag{27}\]
However, for \(\gamma>3\), we have
\[\begin{split}\langle d\rangle&=\frac{\zeta(\gamma- 1)}{\zeta(\gamma)},\\ \langle d^{2}\rangle&=\frac{\zeta(\gamma-2)}{\zeta( \gamma)},\end{split} \tag{28}\]
where \(\zeta(\gamma)\) is the Riemann zeta function. Then, Eq. (21) yields
\[p_{c}=\frac{1}{k_{+}-k_{-}}\left(k_{+}-\sqrt{\frac{8}{\pi}}\frac{n}{N}\frac{ \zeta(\gamma-1)^{2}}{\zeta(\gamma-2)\zeta(\gamma)}\Delta\omega\right). \tag{29}\]
Figure 6 presents numerical data on how \(p_{c}\) depends on \(\gamma\). For values of \(\gamma\) below \(3\), \(p_{c}\) remains constant. Above \(3\), agents become easier to desynchronize, resulting in a lower value of \(p_{c}\). Even though the analytical curve and the numerical data show the same trend, the curve is clearly outside the error bars. This happens because the error bars show the precision of the numerical data and
Figure 5: **Synchronizability of small world networks under untargeted attacks**: We begin with a lattice structure where each node has a degree of \(2K\). We rewire the links with a probability \(q\). While keeping other parameters fixed at \(n=100\), \(K=2\), \(k_{-}=-1\), \(k_{+}=1\), \(N=500\), \(\Delta\omega=1\), and \(\Delta T=0.001\), we generate various small world networks by varying the value of \(q\). Subsequently, we plot the numerically simulated \(p_{c}\) for each of these networks alongside the analytical findings (Eqs. (21) and (26)). Remarkably, the results from our numerical simulations exhibit an impressive agreement with our theoretical analysis. This confirms the accuracy and reliability of our analytical predictions.
Figure 6: **Synchronizability of scale-free networks under untargeted attacks**: Our analytical result (Eq. (27)) reveals that the critical fraction \(p_{c}\) of nodes with repulsive coupling, beyond which synchronization becomes unattainable, remains constant regardless of the power law degree exponent \(\gamma\), as long as \(1<\gamma\leq 3\). As a result, when \(\gamma\) falls within the range of \((1,3]\), we observe a horizontal line in our analysis, while for \(\gamma>3\), the solid line (Eq. (29)) demonstrates a decreasing trend. We conduct numerical simulations on scale-free networks with \(n=1000\) vertices and \(N=5000\) mobile agents to further verify this analytical understanding. Other parameters are kept fixed at \(k_{-}=-1\), \(k_{+}=1\), \(\Delta\omega=1\), and \(\Delta T=0.0001\).
not the accuracy. The accuracy, on the other hand, is controlled by the extent to which we were able to reproduce the infinite time-scale separation and the thermodynamic limits of network size and agent numbers. For the thermodynamic limits, one would need to send \(N\to\infty\) and \(n\to\infty\), keeping \(n/N\) constant all the while. And the infinite time-scale separation is attained by sending \(\Delta T\to 0\). Improving the simulations in either of these aspects is computationally costly and can be realized only to an extent. This figure was created using specific parameters: \(n=1000\), \(k_{-}=-1\), \(k_{+}=1\), \(N=5000\), \(\Delta\omega=1\), and \(\Delta T=0.0001\). In other words, compared with previous examples, we increased \(n\) and \(N\), and decreased \(\Delta T\) by one order of magnitude each. The numerical results gradually approach our theoretical findings. Yet, we still see the finite size effects in Fig. 6. This should not be a surprise since scale-free networks are extremely sensitive to finite size effects [69; 70]. This topic will be discussed further in Sec. IV.3.4.
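For completeness, Eqs. (27) and (29) are straightforward to evaluate with a standard zeta-function routine; the sketch below is our own and uses `scipy.special.zeta`, treating the \(\gamma\leq 3\) branch separately.

```python
import numpy as np
from scipy.special import zeta   # zeta(x) is the Riemann zeta function

def p_c_scale_free(gamma, k_plus, k_minus, n, N, d_omega):
    """Critical fraction under untargeted attacks, Eqs. (27) and (29)."""
    if gamma <= 3.0:                 # <d^2> diverges: maximally robust, Eq. (27)
        return k_plus / (k_plus - k_minus)
    ratio = zeta(gamma - 1) ** 2 / (zeta(gamma - 2) * zeta(gamma))
    return (k_plus - np.sqrt(8 / np.pi) * (n / N) * ratio * d_omega) / (k_plus - k_minus)

for g in (2.0, 2.5, 3.5, 4.5):       # constant for gamma <= 3, then decreasing
    print(g, p_c_scale_free(g, 1.0, -1.0, n=1000, N=5000, d_omega=1.0))
```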
### Targeted Attacks: Unveiling the Impact of Targeting High-Degree Nodes
In this new approach, we aim to strategically assign the repulsive coupling strength \(k_{-}\) by targeting the highest degree nodes in the networks. These nodes are particularly influential as they have a greater impact on the collective behavior. To implement this strategy, we sort the nodes in ascending order based on their degrees and select a fraction \(p\) from the end of this sorted list. The nodes in this selected fraction will be assigned a negative coupling \(k_{-}\), while the remaining nodes will have a positive coupling \(k_{+}\). This targeted assignment ensures that the most highly connected nodes, which have the potential to disrupt synchronization more efficiently, are equipped with the negative coupling, while other nodes maintain a positive coupling.
To determine the synchronization condition in this targeted attack scenario, we must calculate the term \(\langle d^{2}k\rangle\) that appeared in Eq. (13). First, we determine the cutoff degree \(d_{c}\in\mathbb{Z}\) beyond which nodes are targeted and assigned with negative coupling \(k_{-}\). This cutoff is determined by the consistency equation:
\[p=\sum_{d=d_{c}}^{\infty}\Pr(d)=1-\Pr(d\leq d_{c}), \tag{30}\]
where \(\Pr(d\leq d_{c})\) represents the cumulative probability distribution of node degrees. It's important to note that \(p\) represents a fraction of nodes with values ranging from 0 to 1, while \(d\) denotes integer values for node degrees. Consequently, there might not be a clean integer cutoff \(d_{c}\) that isolates an arbitrary fraction \(p\) of all nodes. In such cases, we can resort to continuous approximations of the sums or, when applicable, start the summation at \(d=\lceil d_{c}\rceil\) and include only the fraction \(\lceil d_{c}\rceil-d_{c}\) of nodes with degree \(\lfloor d_{c}\rfloor\). Similar interpretations apply to sums with non-integer bounds.
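One possible discrete implementation of this bookkeeping, written by us purely for illustration, solves Eq. (30) on a tabulated pmf and reports both the integer cutoff and the fraction of the boundary degree class that must additionally be corrupted.

```python
import numpy as np
from scipy.stats import binom

def cutoff_degree(pr, p):
    """Solve the consistency equation (30) on a discrete degree pmf.

    pr[d] = Pr(degree = d) for d = 0..len(pr)-1.  Returns the smallest integer
    d_c whose tail mass does not exceed p, plus the fraction of the
    degree-(d_c - 1) class needed to corrupt exactly a fraction p of nodes
    (assumes that boundary class has nonzero probability)."""
    tail = pr[::-1].cumsum()[::-1]            # tail[d] = Pr(degree >= d)
    d_c = int(np.argmax(tail <= p))           # first index with tail <= p
    boundary = (p - tail[d_c]) / pr[d_c - 1] if d_c > 0 else 0.0
    return d_c, boundary

pr = binom.pmf(np.arange(100), 99, 0.05)      # binomial degrees, Eq. (23)
print(cutoff_degree(pr, p=0.1))
```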
For the targeted attack scenario, the expression for the joint distribution term becomes:
\[\langle d^{2}k\rangle=k_{+}\sum_{d=1}^{d_{c}-1}d^{2}\Pr(d)+k_{-}\sum_{d=d_{c} }^{\infty}d^{2}\Pr(d). \tag{31}\]
By utilizing this expression along with Eq. (16), we can determine the critical coupling strength \(k_{-}^{c}\) required to disrupt synchronization for the corrupted nodes:
\[k_{-}^{c}=\frac{-1}{\sum\limits_{d=d_{c}}^{\infty}d^{2}\Pr(d)}\left(k_{+}\sum \limits_{d=1}^{d_{c}-1}d^{2}\Pr(d)-\frac{n\sqrt{8}\langle d\rangle^{2}\Delta \omega}{N\sqrt{\pi}}\right). \tag{32}\]
The advantage of targeting higher-degree nodes becomes evident when comparing the term \(\frac{\langle d^{2}k\rangle}{\langle d\rangle^{2}}\) (the left-hand side of Eq. (13)) for the two attack strategies. By sorting the nodes \(\alpha\) based on their degrees (non-decreasing order), we observe that since the highest degree nodes possess the negative coupling \(k_{-}\), the sequence \(k_{\alpha}\) becomes non-increasing. Utilizing Chebyshev's sum inequality, we obtain:
\[\frac{\langle d^{2}k\rangle}{\langle d\rangle^{2}}\leq\frac{\langle d^{2} \rangle\langle k\rangle}{\langle d\rangle^{2}}. \tag{33}\]
We recognize the upper bound in Eq. (33) as the left-hand side of Eq. (13) for the untargeted attack scenario. This inequality indicates that _the synchronization condition (13) is more difficult to satisfy under targeted attacks, signifying a weaker network robustness in this case_. It is worth noting that the inequality in Eq. (33) is not strict, and some networks may exhibit equal robustness against both types of attacks.
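The inequality is easy to verify numerically for a concrete degree distribution; the self-contained sketch below (our own construction) greedily assigns \(k_{-}\) from the highest degree class downward and compares \(\langle d^{2}k\rangle\) with the untargeted value \(\langle d^{2}\rangle\langle k\rangle\) of Eq. (17).

```python
import numpy as np
from scipy.stats import binom

n, kappa, p, k_plus, k_minus = 100, 0.05, 0.2, 1.0, -1.0
d = np.arange(n, dtype=float)
pr = binom.pmf(np.arange(n), n - 1, kappa)     # binomial degree pmf, Eq. (23)

# Targeted: fill k_- from the highest degree class down until mass p is used up.
tail = pr[::-1].cumsum()[::-1]                 # Pr(degree >= d)
corrupted = np.clip(p - (tail - pr), 0.0, pr)  # per-class mass receiving k_-
d2k_targeted = np.sum(d**2 * (k_minus * corrupted + k_plus * (pr - corrupted)))

# Untargeted: couplings independent of degree, Eq. (17).
d2k_untargeted = np.sum(d**2 * pr) * (p * k_minus + (1 - p) * k_plus)

print(d2k_targeted, d2k_untargeted)            # targeted <= untargeted, Eq. (33)
```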
#### IV.3.1 Regular networks
In the case of regular networks, degree-targeted attacks do not provide any advantage over untargeted attacks. This happens due to the absence of any strategic targets in regular networks, where each node contributes equally to the global dynamics. Thus the synchronization condition remains governed by Eq. (22).
As we found in the last section, the system is less robust under targeted attacks than under untargeted attacks. The more heterogeneous the degrees, the more strategic targets exist. Thus regular networks represent an edge case with no added benefit from targeting, and as we will see later, scale-free networks with \(1<\gamma\leq 3\) become the least robust under targeted attacks. All this may suggest that regular networks should be the most robust topologies under targeted attack, but this is not so.
The reason for this lies in the interplay between degree heterogeneity and the synchronization ability of nodes with positive couplings. While degree heterogeneity enhances the influence of highly connected corrupted nodes with repulsive coupling strength \(k_{-}<0\), it also improves the synchronization capability of nodes with positive couplings. As a result, the overall effect is not straightforward. Even under targeted attacks, heterogeneous networks may exhibit easier synchronization compared to regular networks. The degree distribution of the most robust network depends on various factors, such as the fraction \(p\) of corrupted nodes, the coupling strengths, and the distribution of frequencies. The dynamics of the system play a crucial role in determining the specific characteristics of the most robust network structure.
#### IV.3.2 Random networks
In random networks, the degrees of nodes are distributed binomially, as described by Eq. (23). Analytically working with the binomial distribution can be challenging, so we make use of the normal approximation \(\Pr(d)\approx\frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(d-\mu)^{2}}{2\sigma^{2}}\right)\). The mean of this approximation is given by \(\mu=(n-1)\kappa=\langle d\rangle\), and the standard deviation is \(\sigma=\sqrt{(n-1)(1-\kappa)\kappa}\). With this approximation, Eq. (30) can be expressed as:
\[\begin{split} p&=1-\Pr(d\leq d_{c})\\ &=1-\frac{1}{2}\left(1+\operatorname{erf}\left(\frac{d_{c}-\mu}{ \sigma\sqrt{2}}\right)\right)\\ &=\frac{1}{2}\left(1-\operatorname{erf}\left(\frac{d_{c}-\mu}{ \sigma\sqrt{2}}\right)\right).\end{split} \tag{34}\]
Solving for \(d_{c}\), we obtain:
\[\begin{split} d_{c}&=\mu+\sigma\sqrt{2}\operatorname{ erf}^{-1}(1-2p)\\ &=(n-1)\kappa+\sqrt{2(n-1)(1-\kappa)\kappa}\operatorname{erf}^{-1}(1-2p). \end{split} \tag{35}\]
Using these expressions, we can compute the critical corrupted coupling strength \(k_{-}^{c}\) through Eq. (32). However, the closed-form solution, obtained by evaluating the sums as integrals of the normal approximation, is lengthy and not explicitly presented here. Figure 7 provides a comparison between the analytical results and simulation data. In this figure, we set the parameters as follows: \(n=100\), \(k_{+}=1\), \(\Delta\omega=1\), \(\kappa=0.05\), \(N=500\), and \(\Delta T=0.001\). The plot shows that as the magnitude of \(k_{-}^{c}\) increases, the critical fraction \(p\) decreases. This means that only a smaller fraction of nodes with a higher repulsive coupling strength is needed to disrupt the synchronization among the mobile agents. The trend observed in this figure is similar to the untargeted attack case (cf. Fig. 4), where higher values of \(k_{-}\) require a smaller fraction \(p_{c}\) to destroy the coherence among the Kuramoto oscillators. This suggests that _increasing the repulsive coupling strength makes the synchronization more vulnerable, regardless of whether the attack is targeted or untargeted_.
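Although the closed form is lengthy, the computation itself is short. Below is a sketch of our own that combines the normal approximation, `scipy.special.erfinv` for Eq. (35), and a direct evaluation of the sums in Eq. (32) on an integer degree grid.

```python
import numpy as np
from scipy.special import erfinv

def k_minus_critical_random(p, k_plus, n, N, kappa, d_omega):
    """Critical disruptor coupling for targeted attacks on random networks,
    Eqs. (32) and (35), under the normal approximation of Eq. (23)."""
    mu = (n - 1) * kappa                       # <d>
    sigma = np.sqrt((n - 1) * kappa * (1 - kappa))
    d_c = mu + sigma * np.sqrt(2.0) * erfinv(1 - 2 * p)        # Eq. (35)
    d = np.arange(n, dtype=float)
    pdf = np.exp(-((d - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    top = d >= d_c
    s_top = np.sum(d[top] ** 2 * pdf[top])     # sum over d >= d_c of d^2 Pr(d)
    s_low = np.sum(d[~top] ** 2 * pdf[~top])   # sum over d < d_c of d^2 Pr(d)
    rhs = np.sqrt(8 / np.pi) * (n / N) * mu**2 * d_omega
    return -(k_plus * s_low - rhs) / s_top     # Eq. (32)

print(k_minus_critical_random(p=0.1, k_plus=1.0, n=100, N=500, kappa=0.05, d_omega=1.0))
```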
#### IV.3.3 Small world networks
Here, we explore the relationship between the critical negative coupling strength \(k_{-}^{c}\) and the rewiring probability \(q\) in small world networks. The degree distribution of small world networks is given by Eq. (26), which is quite complex. Therefore, we could not obtain a closed-form solution for \(k_{-}^{c}\) in this case. Instead, we adopt a numerical approach to calculate \(k_{-}^{c}\). First, we numerically solve Eq. (30) to find the cutoff degree \(d_{c}\). Then, we directly evaluate Eq. (32) to determine the critical repulsive coupling strength \(k_{-}^{c}\).
The results obtained from this numerical approach are plotted alongside the simulations in Fig. 8. We compare the values of \(k_{-}\) obtained numerically with those obtained from Eqs. (30) and (32), providing a visual representation of the agreement between theory and practice. Figure 8 illustrates the impact of rewiring probability \(q\) on the critical negative coupling strength \(k_{-}^{c}\) required to disrupt synchronization in small-world networks. The parameters used in the simulations are \(n=100\), \(K=2\), \(p=0.1\), \(k_{+}=1\), \(N=500\), \(\Delta\omega=1\), and \(\Delta T=0.0001\). The figure shows that as the rewiring probability \(q\) increases, the magnitude of the critical negative coupling strength \(k_{-}^{c}\) decreases. This means that with a higher probability of rewiring, a relatively smaller magnitude of the negative coupling strength is sufficient to disrupt the coherence among the phase oscillators and lead to desynchronization.
Note that increasing the rewiring probability \(q\) has the opposite effect on the system under untargeted (Fig.
Figure 7: **Synchronizability of random networks under targeted attacks**: In this figure, we set the parameters as follows: \(n=100\), \(k_{+}=1\), \(\Delta\omega=1\), \(\kappa=0.05\), \(N=500\), and \(\Delta T=0.001\). The solid line represents our analytical result (see Eqs. (32) and (35)), which is well aligned with the numerical simulations.
5) and targeted (Fig. 8) attacks. While higher rewiring facilitated synchrony during untargeted attacks, it hindered synchronizability under targeted attacks. This is because the original lattice is completely uniform and thus regular, while rewiring introduces degree fluctuations that can be exploited during targeting.
We should also address the unexpected corner in Fig. 8, occurring at \(q=0.06\). It is directly related to a very similar corner in the plot of the average degree of the top \(10\%\) highest degree nodes as a function of \(q\) (see the analytic curve in the inset of Fig. 8). This is caused by the fact that for \(q<0.06\) there are not enough nodes with degree 5 and higher, so nodes with degree 4 make the cutoff. For \(q>0.06\) all the selected nodes have degree 5 or higher. Thus below \(q=0.06\), an infinitesimal increment of \(q\) replaces degree-4 nodes with higher degree nodes, whereas above \(q=0.06\), the same increment of \(q\) replaces degree-5 nodes with higher degree nodes, resulting in a discontinuously larger gain in the average degree of the top \(10\%\) of most well connected nodes.
#### IV.3.4 Scale-free networks
In the case of scale-free networks with degree distribution \(\Pr(d)\propto d^{-\gamma}\), where \(1<\gamma\leq 3\), the left-hand side of Eq. (13) diverges to negative infinity (since \(\frac{\langle d^{2}k\rangle}{\langle d\rangle^{2}}\rightarrow-\infty\)), indicating absolute vulnerability to targeted attacks. In other words, any finite fraction of the most connected nodes being corrupted can disrupt synchronization, regardless of the magnitude of the negative coupling strength \(k_{-}\). This contrasts sharply with the untargeted scenario, in which the same networks are highly robust. Targeted attacks corrupt hubs, whereas untargeted attacks corrupt nodes at random, which are predominantly leaves and other low-degree nodes.
This observation aligns with earlier findings [71, 72, 73, 74, 75, 76] on the structural robustness of heterogeneous networks. Heterogeneous networks, which include scale-free networks as a special case, are characterized by a wide range of node degrees. They are structurally robust against random node removal because the majority of nodes have low degrees and their removal does not significantly affect the overall connectivity. However, when we selectively remove important nodes, such as hubs, the network structure becomes fragmented, and its robustness is compromised [66, 67]. This fragility to preferential attacks on hubs is a consequence of the inherent structure of scale-free networks, where a small number of highly connected nodes play a crucial role in maintaining the overall connectivity and coherence. Therefore, _our findings highlight the dual nature of scale-free networks--they possess robustness against random disruptions but exhibit fragility when targeted attacks are directed towards hubs for \(1<\gamma\leq 3\)_. These results resonate with the earlier studies [71, 72, 73, 74, 75, 76] on the structural robustness and vulnerability of heterogeneous networks, emphasizing the intricate relationship between network topology, targeted attacks, and system dynamics.
For scale-free networks with \(\gamma>3\), we can explicitly compute the summation terms in Eq. (32) as shown in the following equation
\[\begin{split}\sum_{d=d_{c}}^{\infty}d^{2}\Pr(d)&= \sum_{d=d_{c}}^{\infty}\frac{d^{2-\gamma}}{\zeta(\gamma)}=\frac{\zeta(\gamma-2,d_{c})}{\zeta(\gamma)},\\ \sum_{d=1}^{d_{c}-1}d^{2}\Pr(d)&=\sum_{d=1}^{d_{c}- 1}\frac{d^{2-\gamma}}{\zeta(\gamma)}\\ &=\sum_{d=1}^{\infty}\frac{d^{2-\gamma}}{\zeta(\gamma)}-\sum_{d=d _{c}}^{\infty}\frac{d^{2-\gamma}}{\zeta(\gamma)}\\ &=\frac{\zeta(\gamma-2)-\zeta(\gamma-2,d_{c})}{\zeta(\gamma)}. \end{split} \tag{36}\]
Here \(\zeta(\cdot,\cdot)\) represents the Hurwitz zeta function. Combining this with Eqs. (28) and (32), we get
\[\begin{split} k_{-}^{c}=-\Bigg{(}& k_{+}\bigg{(} \frac{\zeta(\gamma-2)}{\zeta(\gamma-2,d_{c})}-1\bigg{)}\\ &-\frac{\zeta(\gamma-1)^{2}}{\zeta(\gamma)\zeta(\gamma-2,d_{c})} \sqrt{\frac{8}{\pi}}\frac{n}{N}\Delta\omega\Bigg{)}.\end{split} \tag{37}\]
Now we calculate the cutoff degree \(d_{c}\) in terms of \(p\). It must be chosen such that it separates the top \(p\) fraction
Figure 8: **Synchronizability of small world networks under targeted attacks**: As the rewiring probability \(q\) increases, the strength of negative coupling \(k_{-}^{c}\) needed to disrupt synchronization decreases. Thus, with more rewiring, a weaker negative coupling can disrupt the coherence among the nodes and lead to desynchronization. In the initial ring lattice structure, the network is regular, making it quite resilient under targeted attacks. However, as rewiring increases, the degree fluctuations grow, creating strategic targets and making the attack more effective. Other parameters: \(n=100\), \(K=2\), \(p=0.1\), \(k_{+}=1\), \(N=500\), \(\Delta\omega=1\), and \(\Delta T=0.0001\). The inset shows the analytically computed average degree of top \(10\%\) of most well connected nodes as a function of the rewiring probability \(q\) for \(K=2\) in the thermodynamic limit \(n\rightarrow\infty\).
of nodes. In mathematical terms, this is expressed as
\[p=\sum_{d=d_{c}}^{\infty}\Pr\left(d\right)=\frac{1}{\zeta(\gamma)}\sum_{d=d_{c}}^{ \infty}d^{-\gamma}=\frac{\zeta(\gamma,d_{c})}{\zeta(\gamma)}. \tag{38}\]
Inverting this, we get
\[d_{c}=\zeta^{-1}\left(\gamma,\zeta(\gamma)p\right), \tag{39}\]
where \(\zeta^{-1}(x,y)\) denotes the inverse of the Hurwitz zeta function with \(x\) fixed: \(\zeta^{-1}(x,\zeta(x,y))=y\).
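Because \(\zeta(\gamma,d)\) decreases monotonically in \(d\), the inversion in Eq. (39) reduces to a one-dimensional search; the sketch below is our own and assumes the cutoff lies below an arbitrary bound `d_max`. Note that `scipy.special.zeta(x, q)` evaluates the Hurwitz zeta function.

```python
import numpy as np
from scipy.special import zeta

def d_c_scale_free(gamma, p, d_max=10**6):
    """Invert Eq. (38): smallest integer d with zeta(gamma, d) <= p * zeta(gamma)."""
    target = p * zeta(gamma)                  # right-hand side of Eq. (38)
    lo, hi = 1, d_max
    while lo < hi:                            # integer bisection
        mid = (lo + hi) // 2
        if zeta(gamma, mid) <= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

def k_minus_critical_sf(gamma, p, k_plus, n, N, d_omega):
    """Critical disruptor coupling for gamma > 3, Eq. (37)."""
    hz = zeta(gamma - 2, d_c_scale_free(gamma, p))   # Hurwitz zeta of the tail
    return -(k_plus * (zeta(gamma - 2) / hz - 1)
             - zeta(gamma - 1) ** 2 / (zeta(gamma) * hz)
             * np.sqrt(8 / np.pi) * (n / N) * d_omega)

print(d_c_scale_free(3.5, 0.1))               # cutoff isolating the top 10% of nodes
print(k_minus_critical_sf(3.5, 0.1, 1.0, n=1000, N=5000, d_omega=1.0))
```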
Figure 9 presents the relationship between the critical negative coupling strength and the scale-free exponent \(\gamma\). When it comes to targeted attacks on scale-free networks, the dynamics are highly influenced by the system's size, and our theoretical derivations are valid in the thermodynamic limit, i.e., for \(n,N\rightarrow\infty\) and in the rapid movement limit of mobile agents, i.e., for \(\Delta T\to 0\). Unfortunately, the simulations with scale-free networks are highly sensitive to finite size effects [69, 70], and due to the limited computation capacity, it is challenging to perfectly match numerical simulations with analytic predictions. However, as expected, increasing the network size \(n\) and the number of mobile agents \(N\) while maintaining a constant ratio \(\frac{n}{N}\) brings the simulations closer to the thermodynamic limit and improves the agreement with analytic predictions.
The finite size effects can be understood intuitively as follows. In scale-free networks, the degrees of the corrupted hubs are limited in finite-sized systems. As a consequence, these hubs have less influence on the synchronization dynamics compared to hubs in larger networks. In order to disrupt the synchronization, limited-degree hubs need to possess stronger negative couplings. This requirement arises because their reduced influence necessitates a more potent disruptive force to achieve incoherence.
Overall, these findings highlight the intricate relationship between network size, topology, and the effectiveness of targeted attacks on scale-free networks. They remind us of the complex interplay between system properties, highlighting the importance of considering various factors when assessing the vulnerability of networks to targeted attacks.
### Targeted Attacks: Unveiling the Impact of Targeting Low-Degree Nodes
The mathematical results derived in the previous section can also be applied to a scenario where the low-degree nodes are targeted instead of the high-degree ones. In this case, we exchange the roles of \(k_{-}\) and \(k_{+}\) so that the nodes with the highest degrees now have positive couplings represented by \(k_{+}\). We also substitute \(p\) with \(1-p\) while using \(p\) to describe the fraction of disruptive nodes.
Under this type of targeted attack strategy, regular networks behave identically to the previous two cases. However, heterogeneous networks become even more robust compared to untargeted attacks. To understand this, we sort the nodes based on their degrees, where \(d_{\alpha}\) represents a non-decreasing sequence. Since the highest degree nodes now have positive couplings \(k_{+}\), the coupling strength \(k_{\alpha}\) also exhibits a non-decreasing trend. By applying Chebyshev's sum inequality, we can establish the following relationship:
\[\frac{\langle d^{2}k\rangle}{\langle d\rangle^{2}}\geq\frac{\langle d^{2} \rangle\langle k\rangle}{\langle d\rangle^{2}}. \tag{40}\]
This inequality indicates that _the synchronization condition is more easily satisfied when lower-degree nodes are targeted compared to the untargeted case_. It further reinforces the enhanced robustness of the network under this targeted attack strategy.
## V Exploring Opinion Dynamics: Random Walk of Polarized Agents
To showcase the generality and applicability of our results, we next consider a different type of internal dynamics and interactions: the cusp catastrophe model of polarization within and across individuals [19], which is based on the Ising model of opinions.
Let us first consider the internal dynamics of one individual as described in Ref. [19]. Each person forms
Figure 9: **Synchronizability of scale-free networks under targeted attacks**: The vulnerability of scale-free networks to targeted attacks depends on the network’s size and the distribution of connections. Achieving a perfect match between theoretical predictions and simulations can be tricky due to sensitivity of scale-free networks to finite size effects, and computational constraints. Nonetheless, larger networks exhibit behavior closer to the analytical predictions, while smaller networks display more significant deviation. Hubs with limited-degree have a diminished ability to disrupt synchronization, demanding stronger negative couplings to achieve the same disruptive effect. Other parameters: \(p=0.1\), \(k_{+}=1\), \(\Delta\omega=1\), and \(\Delta T=0.0001\).
their attitude about a subject matter based on an interconnected network of issues related to this subject. For example, if the subject is meat consumption, the issues could consist of beliefs (meat consumption doesn't affect climate), feelings (loves steak), and behavioral patterns (eats burgers). Each of these issues is treated as a binary node \(x_{i}=-1,1\) indicating if the node label holds true for the given person. The overall opinion is given by an average over all subparts of attitude, i.e., network nodes (note that the network is inside the individual's head; we are not discussing human-to-human interactions yet). The edge weights are given by \(\omega_{ij}\). Additionally, one considers external influences \(\tau_{i}\) affecting each issue (all their friends eat burgers) and attention to the subject matter \(\mathscr{A}\) (how important the person thinks this topic is).
When the attention \(\mathscr{A}\) is low, the connected issues can be misaligned, \(x_{i}\neq x_{j}\) (they can think that meat consumption affects the environment and still eat lots of burgers). However, as the person spends more and more time thinking about the topic, the cognitive dissonance tends to align the nodes with each other and with the external influence. In other words, high attention implies a lower value of the misalignment function
\[\mathcal{H}=-\sum_{i}\tau_{i}x_{i}-\sum_{i,j}\omega_{ij}x_{i}x_{j}. \tag{41}\]
This equation is known as the Ising model and is well studied in physics. The analogue of _high attention_ in opinion dynamics is _low temperature_ in the Ising model, since both result in a lower value of Eq. (41). The overall opinion \(\phi\) is analogous to the magnetization in the Ising model. Magnetization, in turn, exhibits a cusp catastrophe behavior as a function of temperature and external influence in the Ising model. This can be directly translated to opinions: the opinion changes smoothly as a function of external influence \(I\) for a low value of attention, while for high attention, hysteresis appears and, depending on the initial state, the agent's opinion may be positive or negative for the same attention \(\mathscr{A}\) and external influence \(I\). The normal form dynamical equation describing a cusp catastrophe in its stationary states is given below:
\[\dot{\phi}=f(\phi)=-\phi^{3}+(\mathscr{A}-\mathscr{A}_{c})\phi+I. \tag{42}\]
Here \(\phi\) stands for opinion, \(\mathscr{A}\) indicates the attention to the subject matter, \(\mathscr{A}_{c}\) stands for the critical value of attention beyond which the hysteresis appears, and \(I\) describes the external influence coming from interactions with other individuals. For an in-depth study of this model, along with the description of interactions and different real-world examples, see Ref. [19].
### Cusp catastrophe as internal dynamics of mobile agents
The internal variable of mobile agents that stood for the phase \(\phi_{i}\in\mathbb{S}^{1}\) will now be a real number \(\phi_{i}\in\mathbb{R}\) denoting the opinion of the agent (note that the result in Eq. (9) remains the same; in fact, the internal state could even be a vector or a tensor). We consider the internal dynamics of agent \(i\) to be given by Eq. (42)
\[\dot{\phi}_{i}=f(\phi_{i})=-\phi_{i}^{3}+(\mathscr{A}-\mathscr{A}_{c})\phi_{i }+I. \tag{43}\]
For the sake of simplicity, we consider that agents have a constant high value of attention \(\mathscr{A}>\mathscr{A}_{c}\), and that the external influence experienced by each agent depends linearly on the neighbors' opinions. The coupling constants again vary between the discussion venues through which the agents move randomly.
\[\begin{split}&\dot{\phi}_{i}=F(\phi_{i})+k_{\alpha}\sum_{j\in O_{ \alpha}}H(\phi_{i},\phi_{j}),\\ & F(\phi_{i})=-\phi_{i}^{3}+(\mathscr{A}-\mathscr{A}_{c})\phi_{i },\\ & H(\phi_{i},\phi_{j})=\phi_{j}.\end{split} \tag{44}\]
Here the interaction term \(H(\phi_{i},\phi_{j})\) ensures that agents with positive opinions affect their neighbors in the positive direction proportionally to their conviction level (as long as the coupling \(k_{\alpha}\) is positive). For friendly, constructive discussions \(k_{\alpha}\) will be positive, meaning that the listener takes the speaker's words at face value. For antagonistic interactions the coupling may well be negative, indicating that the listener will want to distance themselves from the speaker.
Employing Eq. (9), we can write down the state equations in the weak coupling limit
\[\begin{split}&\dot{\phi}_{i}=-\phi_{i}^{3}+(\mathscr{A}- \mathscr{A}_{c})\phi_{i}+\frac{\tilde{k}}{N}\sum_{j=1}^{N}\phi_{j},\\ &\tilde{k}=\frac{N}{n}\frac{\langle d^{2}k\rangle}{\langle d \rangle^{2}}.\end{split} \tag{45}\]
As expected, the result is of the same form as Eq. (43), but for globally coupled agents. We will initiate the agents with polarized opinions. If the effective coupling \(\tilde{k}\) is large, the agents will achieve a consensus, whereas for low values of \(\tilde{k}\) the agents will remain polarized. We can find the critical effective coupling necessary for the consensus using bifurcation analysis. Treating \((\mathscr{A}-\mathscr{A}_{c})\) as a positive constant, and \(I\) as a parameter in Eq. (43), we evaluate the bifurcation conditions \(f(\phi_{i})=0\) and \(f^{\prime}(\phi_{i})=0\) to get the bifurcation curve
\[I=\pm\frac{2(\mathscr{A}-\mathscr{A}_{c})^{\frac{3}{2}}}{3\sqrt{3}}. \tag{46}\]
Without loss of generality, we focus on the positive solution and compute the two equilibrium points for the opinion
\[\begin{split}\phi^{-}&=-\sqrt{\frac{\mathscr{A}- \mathscr{A}_{c}}{3}},\\ \phi^{+}&=2\sqrt{\frac{\mathscr{A}-\mathscr{A}_{c}}{3 }}.\end{split} \tag{47}\]
This, in turn, helps us calculate the interaction term \(I\). Let us assume that in the initial state the opinions are divided into a fraction \(q\) that holds a negative opinion and the rest \((1-q)\) that thinks positively. Then the influence of population opinions on each individual is
\[\begin{split} I&=\frac{\tilde{k}}{N}\sum_{j=1}^{N} \phi_{j}=\tilde{k}\big{(}q\phi^{-}+(1-q)\phi^{+}\big{)}\\ &=\tilde{k}\sqrt{\frac{\mathscr{A}-\mathscr{A}_{c}}{3}}(2-3q). \end{split} \tag{48}\]
The consensus appears when the interaction term Eq. (48) exceeds the bifurcation value Eq. (46). This yields the condition
\[\tilde{k}>\frac{2(\mathscr{A}-\mathscr{A}_{c})}{6-9q}. \tag{49}\]
Note that since we considered only the positive solution for the bifurcation curve, Eq. (49) is relevant only when the positive opinion prevails, i.e., \(q<0.5\). For the reversed scenario, the symmetry of the problem implies that one simply needs to replace \(q\) by \(1-q\) in Eq. (49). This condition for consensus applies to the globally coupled system Eq. (45). Now we can use the expression for \(\tilde{k}\) to arrive at the general consensus condition for the random walking agents
\[\frac{\langle d^{2}k\rangle}{\langle d\rangle^{2}}>\frac{n}{N}\frac{2(\mathscr{ A}-\mathscr{A}_{c})}{6-9q}. \tag{50}\]
Equation (9) predicts that the impact of the network topology and the coupling distribution (or the attack strategy) remains independent of the internal dynamics of the agents (compare Eq. (50) with Eq. (13)). In order to avoid redundancy, we only present the numerical experiments with a regular network under untargeted attacks.
The consensus condition for untargeted corruption of a fraction \(p\) of discussion venues is given by a derivation identical to Eq. (21)
\[p_{c}=\frac{1}{k_{+}-k_{-}}\left(k_{+}-\frac{2(\mathscr{A}-\mathscr{A}_{c})}{6 -9q}\frac{n}{N}\frac{\langle d\rangle^{2}}{\langle d^{2}\rangle}\right). \tag{51}\]
Figure 10 shows the numerical validation of Eq. (51) with a random 3-regular network of \(n=100\) nodes and \(N=1000\) agents. The initial split of opinions is \(q=0.1\), the attention \((\mathscr{A}-\mathscr{A}_{c})=1\), the positive coupling \(k_{+}=0.1\), and the time interval \(\Delta T=0.001\).
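For readers wishing to reproduce this behavior, a minimal Euler integration of the reduced dynamics Eq. (45), written by us for illustration, is given below; the initial opinions are placed at the polarized equilibria of Eq. (47), and the step size, horizon, and seed are arbitrary choices.

```python
import numpy as np

def simulate_opinions(k_eff, A=1.0, q=0.1, N=1000, t_max=50.0, dt=0.01, seed=3):
    """Euler integration of Eq. (45); A stands for (script A - script A_c)."""
    rng = np.random.default_rng(seed)
    # A fraction q starts at the negative equilibrium of Eq. (47), the rest positive.
    phi = np.where(rng.random(N) < q, -np.sqrt(A / 3.0), 2 * np.sqrt(A / 3.0))
    for _ in range(int(t_max / dt)):
        phi = phi + dt * (-phi**3 + A * phi + k_eff * np.mean(phi))
    return phi

k_c = 2 * 1.0 / (6 - 9 * 0.1)                 # consensus threshold of Eq. (49)
print(np.min(simulate_opinions(0.5 * k_c)))   # < 0: polarization persists
print(np.min(simulate_opinions(2.0 * k_c)))   # > 0: consensus on the positive opinion
```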
## VI Discussion and Conclusions
In this study, we have explored the dynamics of mobile agents as they navigate a complex network and interact with diverse sets of neighbors in different environments during their movements. The agents' internal dynamics are described by arbitrary first-order differential equations, and their interactions are governed by an arbitrary function of agent states. The network nodes act as interaction venues for the agents and exhibit heterogeneity. This variation between the nodes is modeled by a parameter known as the coupling constant, which regulates the strength of interactions between agents within the node.
Our analytic framework is validated in two distinct scenarios motivated by different applications. In both cases, we consider individuals moving through a network of interaction venues, including offices, bars, online chats, social media comments sections, news articles, and other physical or digital locations. The internal dynamics and interactions of individuals vary between the two applications.
The first application is synchronization of brain activity among groups of interacting individuals. This phenomenon has been observed in various recent experiments, employing techniques such as MRI, EEG, and eye tracking [12; 13; 14; 15; 16; 17]. We model this behavior by representing agents as Kuramoto oscillators, which tend to synchronize upon interactions, and explore the global synchronizability of the system.
The second application pertains to a cusp catastrophe model of opinion dynamics [19]. This recently published
Figure 10: **Emergence of consensus in regular networks under untargeted attacks**: Critical fraction of disruptive nodes \(p_{c}\) as a function of their coupling strength \(k_{-}\). \(p_{c}\) is the fraction of disruptors in the network beyond which consensus becomes impossible. Other parameters: \(q=0.1\), \((\mathscr{A}-\mathscr{A}_{c})=1\), \(k_{+}=0.1\), and \(\Delta T=0.001\).
model delves into polarization within and across individuals, highlighting how Ising-like interactions between related issues lead to the emergence of hysteresis in opinion dynamics when attention to the subject matter is high. We incorporate this cusp catastrophe model as the internal dynamics in our random-walking model to examine the possibility of consensus.
Based on our analysis, which becomes exact in the limit of weak couplings, we derive effective differential equations governing the evolution of agents' internal states Eq. (9). Our analysis accounts for the network topology and coupling heterogeneity, incorporated in the expression for the effective coupling. Through our analytical findings we can make several general observations: small networks with many agents facilitate a strong effective coupling, high-degree nodes exert a strong influence on the system behavior, and node degree fluctuations play a crucial role in stimulating interactions. Additionally, we find that designing the network structure intelligently, with inherent node variations in mind, can improve its functionality. In particular, aligning node degrees with their respective couplings enhances interactions.
An important strength of our analysis lies in its ability to accommodate diverse network nodes, both in terms of their degrees and their internal dynamics. The coupling constants associated with nodes can be arbitrarily distributed, enabling us to explore the interplay of positive and negative couplings. Nodes with negative couplings can be interpreted as disruptors of the system. Moreover, our approach allows for the selection of disruptive nodes to be dependent on the network topology, leading to different attack strategies and notions of robustness concerning these attacks.
To investigate the effect of different network topologies on facilitating coherence under disruptive influences, we consider two distinct methods of introducing nodes with negative couplings. First, we randomly select a portion of nodes and assign them negative coupling strengths. Second, we employ a more sophisticated approach by specifically targeting high-degree nodes and analyzing the consequences of this preferential placement.
Through detailed analysis, we provide analytical proof that under untargeted attacks, scale-free networks with a power-law exponent (\(\gamma\)) between 1 and 3 exhibit the highest robustness, while regular networks are the most vulnerable. Random networks fall somewhere in between these extremes. However, when attackers strategically target high-degree nodes, the response of the system changes. Scale-free networks with \(1<\gamma\leq 3\) become the weakest, their coherence easily disrupted even with mildly negative coupling strengths (\(k_{-}\)). This reversal is well illustrated in small world networks too, where increasing the rewiring probability makes coherence easier under untargeted attacks but harder under targeted attacks. The networks that were previously the most robust under untargeted attacks now become the most susceptible when targeted. This finding aligns with previous studies exploring complex networks' structural robustness [76]. Regular networks, which were initially vulnerable, show higher robustness under targeted attacks than heterogeneous networks. However, it is important to note that the most robust network topology under targeted attacks is not universally fixed and depends on various factors. The heterogeneity of node degrees in the network plays a crucial role. On one hand, heterogeneity promotes coherence when it comes to nodes with positive coupling. However, this very heterogeneity also creates potential targets for disruptors aiming to destabilize the network.
Our analysis indicates that achieving coherence becomes more challenging when targeted attacks are directed toward higher-degree nodes, demonstrating a decreased network robustness. Additionally, we investigate the effects of targeting lower-degree nodes with repulsive couplings and find that the coherence conditions are more readily satisfied under this preferential attack. In addition to formulating a comprehensive analytical solution, we supplement our research with extensive numerical experiments to corroborate our discoveries. While the majority of these outcomes reveal robust concurrence with our analytical conclusions, it becomes evident that scale-free networks exhibit pronounced finite-size effects. In conclusion, the relationship between network topology and internal coherence is complicated. The most robust network structure under targeted attacks depends on agents' internal dynamics, interaction function, coupling strengths, and the proportion of disruptors. Gaining a deep understanding of these intricate details will enable us to identify the network structures that are most robust when faced with strategic attacks. By unraveling this interplay, we can enhance our ability to design robust networks capable of withstanding and recovering from disruptions.
###### Acknowledgements.
G.M. and R.M.D. express sincere gratitude for the support provided by Army Research Award number W911NF-23-1-0087. S.N.C. and A.H. would like to acknowledge the support of the National Science Foundation under Grant No. 1840221.
|
2302.08199 | Deep learning based surrogate modeling for thermal plume prediction of groundwater heat pumps | The ability for groundwater heat pumps to meet space heating and cooling demands without relying on fossil fuels, has prompted their mass roll out in dense urban environments. In regions with high subsurface groundwater flow rates, the thermal plume generated from a heat pump's injection well can propagate downstream, affecting surrounding users and reducing their heat pump efficiency. To reduce the probability of interference, regulators often rely on simple analytical models or high fidelity groundwater simulations to determine the impact that a heat pump has on the subsurface aquifer and surrounding heat pumps. These are either too inaccurate or too computationally expensive for everyday use. In this work, a surrogate model was developed to provide a quick, high accuracy prediction tool of the thermal plume generated by a heat pump within heterogeneous subsurface aquifers. Three variations of a convolutional neural network were developed that accepts the known groundwater Darcy velocities as discrete two-dimensional inputs and predicts the temperature within the subsurface aquifer around the heat pump. A data set consisting of 800 numerical simulation samples, generated from random permeability fields and pressure boundary conditions, was used to provide pseudo-randomized Darcy velocity fields as input fields and the temperature field solution for training the network. The subsurface temperature field output from the network provides a more realistic temperature field that follows the Darcy velocity streamlines, while being orders of magnitude faster than conventional high fidelity solvers | Kyle Davis, Raphael Leiteritz, Dirk Pflüger, Miriam Schulte | 2023-02-16T10:29:16Z | http://arxiv.org/abs/2302.08199v1 |

# Deep learning based surrogate modeling for thermal plume prediction of groundwater heat pumps
###### Abstract
The ability of groundwater heat pumps to meet space heating and cooling demands without relying on fossil fuels has prompted their mass roll-out in dense urban environments. In regions with high subsurface groundwater flow rates, the thermal plume generated from a heat pump's injection well can propagate downstream, affecting surrounding users and reducing their heat pump efficiency. To reduce the probability of interference, regulators often rely on simple analytical models or high-fidelity groundwater simulations to determine the impact that a heat pump has on the subsurface aquifer and surrounding heat pumps. These are either too inaccurate or too computationally expensive for everyday use. In this work, a surrogate model was developed to provide a quick, high-accuracy prediction tool of the thermal plume generated by a heat pump within heterogeneous subsurface aquifers. Three variations of a convolutional neural network were developed that accept the known groundwater Darcy velocities as discrete 2D inputs and predict the temperature within the subsurface aquifer around the heat pump. A data set consisting of 800 numerical simulation samples, generated from random permeability fields and pressure boundary conditions, was used to provide pseudo-randomized Darcy velocity fields as input fields and the temperature field solution for training the network. The subsurface temperature field output from the network provides a more realistic temperature field that follows the Darcy velocity streamlines, while being orders of magnitude faster than conventional high-fidelity solvers.
## 1 Introduction
The challenges of climate change and growing energy costs are forcing cities to increase their usage of renewable energies. The European Union 2030 climate and energy framework requires a minimum reduction of 40% in greenhouse gas emissions, with a minimum of 32% renewable energy usage [1]. Energy usage can be significantly reduced by lowering the space heating and cooling demands of buildings, which have been met predominantly with fossil-fuel-based resources. An increasingly popular option is to use shallow groundwater heat pumps, which have proved to be an effective alternative within cities.
Rolling out groundwater heat pumps on a city-wide scale is not without its challenges. City regulators must decide how many heat pumps can be effectively installed, how much they can be used, and where they can be placed. The mass roll-out of heat pumps within a confined urban environment often leads to negative interference, where one heat pump changes the temperature of the water within a shared subsurface aquifer, reducing the efficiency of downstream heat pump systems. Additionally, the synergistic use of heat pump systems is often overlooked due to the added complexity of determining their mutual interactions. Therefore, monitoring any negative interactions and improving the overall efficiency by optimizing the usage and location of the heat pumps is imperative.
Numerous approaches to optimize heat pump usage and layout exist. Most methods involve either performing expensive numerical groundwater simulations using classical physics-based solvers [2, 3, 4], or using a cheaper approximation of the heat transfer within the subsurface [5, 6]. The classical numerical simulations offer highly accurate solutions for the subsurface thermal field, especially in the case of heterogeneous groundwater properties, but at the expense of computational cost and increased run-time. The simulations often require extensive model preparation, calibration and validation, before running on large computing clusters. Only a few variations of the layout of heat pumps can be simulated within a reasonable time frame, limiting the practicality of this method for large cities with potentially thousands of heat pumps. Alternatively, using simplified approximations of the subsurface thermal profile allows for almost real-time solutions of the subsurface thermal profile induced by the heat pump, but at the expense of accuracy, and is only valid for certain groundwater properties.
A surrogate model that is able to reproduce the classical numerical simulation results, while being orders of magnitude cheaper, provides an attractive solution to the problem. This can be used to create a virtual model of the subsurface domain, allowing city and energy planners to perform optimization studies that are otherwise computationally infeasible, or to provide a fast and accurate online monitoring tool of the city's subsurface. Many virtual models, or digital twins, of cities typically focus only on the above-surface world and rarely model what happens underneath.
In order to provide the level of accuracy as the high-fidelity solver, the surrogate model must be able to account for the subsurface heterogeneity. Popular methods for creating a reduced order model (ROM) are projection-based methods, such as proper orthogonal decomposition [7, 8, 9]. Projection methods use snapshots of input-output pair information, where previous high-fidelity simulation runs are used to obtain these pairs. Once a ROM is generated, access to the underlying numerical model is typically required to build the surrogate model. This is sometimes problematic if the underlying
numerical model is inaccessible, or if a truly black-box surrogate model is required.
Recently, deep learning has been used for black-box surrogate modeling of physical problems with artificial neural networks (ANN) [10, 11], requiring access to the input-output data pairs only and not the underlying numerical model or internal simulation solver. ANNs have already been applied to modeling reacting flows [12, 13], predicting airflow over airfoils [14] and modeling fluid flow within porous media [15, 16]. In addition to surrogate modeling, ANNs can provide coarse-to-fine mesh mapping, where the network learns to approximate the fine-scale information with access to the coarse-scale features only [17], providing a middle-ground feature between a ROM and a high-fidelity solution. Critical to the success of ANNs are the data, where generating sufficient simulation data to train an ANN is often a stumbling block for physics-based surrogate models. Physics-informed neural networks have been developed due to the underlying difficulty of obtaining expensive numerical simulation data [18]. Instead of using the input-output data pairs, the network is trained knowing that the output must satisfy a partial differential equation (PDE) that defines the physical problem. Therefore, only input data is required and the network loss function consists of the mismatch between the output result and the PDE that it must satisfy. This has been applied to numerous physics-based surrogate models where the governing equations are known [19, 20, 21].
The objective of our study was to utilize artificial neural networks to create a fast surrogate model that solves for the local subsurface thermal field due to the presence of a groundwater heat pump, while accounting for the subsurface heterogeneity of the permeability and Darcy velocity fields. The surrogate model can be used to perform heat pump layout and usage optimization, which may consist of thousands of layout configurations, or serve as a fast approximation to build into an online evaluation tool for groundwater heat pump management.
## 2 Method
### Open-loop groundwater heat pumps
Shallow groundwater heat pumps are devices that transfer heat to and from a shallow aquifer beneath the surface in order to provide space heating or cooling. Open-loop groundwater heat pumps, depicted in figure 1, function by extracting water from the subsurface aquifer, passing the water through a heat exchanger, and re-injecting the water back into the subsurface. When operating in heating mode, i.e., heating a building, the energy in the water is passed into the building through the heat exchanger, thereby cooling the water, which is re-injected at a lower temperature back into the aquifer. Conversely, in cooling mode the energy is passed from the building into the fluid, warming up the water, which is re-injected at a higher temperature back into the aquifer. The natural flow of the groundwater within the subsurface aquifer causes the re-injected water, now at an elevated or reduced temperature relative to the rest of the aquifer, to be
pulled along with the movement of the groundwater, causing a thermal plume to form downstream of the injection well. If this plume reaches the extraction well of a nearby downstream heat pump, the heightened or lowered temperature of the groundwater within the plume may reduce the efficiency of the downstream system to the extent that it becomes unusable. When a small urban area is densely populated with heat pumps, the overall efficiency may be reduced more than if only a handful of well-placed heat pumps were used.
### Subsurface modelling
Before any heat pump layout optimization study or heat pump monitoring can be performed, a suitable model for the groundwater temperature is required. Two popular methods exist to model the thermal plume that develops from an open-loop heat pump. The linear advective heat transport model (LAHM) from [23] is a common analytical formulation that defines the change in groundwater temperature around the heat pump over time compared to the far-away background temperature
\[\Delta T(x,y,t)=\frac{Q\cdot\Delta T_{inj}}{4\cdot n_{e}\cdot M\cdot v_{a}\cdot\sqrt{\pi\cdot\alpha_{T}}}\cdot\exp\left(\frac{x-r}{2\cdot\alpha_{L}}\right)\cdot\frac{1}{\sqrt{r}}\cdot\operatorname{erfc}\left(\frac{r-v_{a}\cdot t/R}{2\cdot\sqrt{v_{a}\cdot\alpha_{L}\cdot t/R}}\right), \tag{1}\]
where \(\Delta T(x,y,t)\) is the time-dependent temperature difference at coordinates \(x\) and \(y\) (the heat pump is at coordinates \(x=0\) and \(y=0\)) at time \(t\), \(Q\) is the injection mass flow rate, \(\Delta T_{inj}\) is the difference between the background temperature and the injection temperature, \(v_{a}\) is the groundwater velocity at the heat pump injection well, \(r\) is the radial distance from the injection well, \(M\) is the aquifer thickness, and \(\alpha_{L}\) and \(\alpha_{T}\) are the longitudinal and tangential dispersivity values. The disadvantage of the LAHM is that the plume depends only on the velocity magnitude in the subsurface aquifer at the heat pump location and assumes constant subsurface properties; no heterogeneity of the subsurface is accommodated. The uni-directional thermal plumes determined using the LAHM are shown in figure 2 as examples of the solution.

Figure 1: Open-loop groundwater heat pump, where the flow of groundwater causes a plume to develop that stretches downstream of the injection well. Image modified from [22]
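For reference, equation (1) translates directly into a short function. The sketch below assumes SI-consistent inputs; the default parameter values are illustrative placeholders rather than the paper's calibration, and \(n_{e}\) and \(R\) (not defined in the text) are treated as the effective porosity and retardation factor commonly used with the LAHM.

```python
import numpy as np
from scipy.special import erfc

def lahm_delta_t(x, y, t, Q=1e-4, dT_inj=5.0, n_e=0.25, M=2.0,
                 v_a=1e-5, alpha_L=5.0, alpha_T=0.5, R=2.0):
    """Direct transcription of equation (1); singular at the well (r = 0)."""
    r = np.sqrt(x**2 + y**2)  # radial distance from the injection well
    pre = Q * dT_inj / (4 * n_e * M * v_a * np.sqrt(np.pi * alpha_T))
    return (pre * np.exp((x - r) / (2 * alpha_L)) / np.sqrt(r)
            * erfc((r - v_a * t / R) / (2 * np.sqrt(v_a * alpha_L * t / R))))
```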
The LAHM has already been integrated into an online evaluation tool shown in figure 2 (a), which determines the iso-lines of the change in subsurface temperature \(\Delta T(x,y,t)\) due to the presence of a heat pump. This provides a fast surrogate model, but does not model the heterogeneity, as shown in figure 2 (b), where the LAHM solution iso-lines are overlaid onto the classical physics numerical simulation results using a 3D groundwater mechanics solver. The classical solver accounts for subsurface heterogeneity, causing non-uniform Darcy velocity streamlines and a varying direction of the thermal plume.
More complex 3D numerical groundwater models can provide a more accurate solution for the groundwater temperature field. The governing equations for the subsurface groundwater model are derived from [24]. The subsurface model applies to a single-phase, variably saturated fluid, defined on a single space-time domain. The subsurface fluid flow is governed first by the conservation of mass,
\[\frac{\partial}{\partial t}\left(\varphi s\eta\right)+\nabla\cdot\left(\eta\mathbf{q}\right)=Q_{w}, \tag{2}\]
with the porosity \(\varphi\) [-], saturation ratio \(s\) [\(m^{3}m^{-3}\)], molar density \(\eta\) [\(kmolm^{-3}\)], the Darcy velocity \(\mathbf{q}\) and mass source/sink term \(Q_{w}\) [\(kmol\cdot m^{-3}\cdot s^{-1}\)]. The groundwater temperature is modeled by the inclusion of the conservation of energy,
\[\frac{\partial}{\partial t}\left(\varphi s\eta U+\left(1-\varphi\right)\rho_{r}c_{p}T\right)+\nabla\cdot\left(\eta\mathbf{q}H-\kappa\nabla T\right)=Q_{e}, \tag{3}\]
with the rock density \(\rho_{r}\), heat capacity \(c_{p}\) and thermal conductivity \(\kappa\) of the porous medium–fluid mixture and the energy source/sink term \(Q_{e}\). The fluid enthalpy \(H\) is related to the internal energy of the water through the expression
\[U=H-\frac{P}{\eta}. \tag{4}\]
Figure 2: Temperature iso-lines of the LAHM analytical solution applied to an online groundwater heat pump planning and optimization tool (left) and a comparison between the LAHM analytical solution temperature iso-lines overlaid onto the 2D numerical solution (right).
The Darcy velocity \(\mathbf{q}=(q_{x},q_{y},q_{z})^{T}\) in \([m\cdot s^{-1}]\) is defined as
\[\mathbf{q}=-\frac{\mathbf{K}(s)}{\mu}\nabla\left(P-\rho gz\right), \tag{5}\]
with the relative permeability field \(\mathbf{K}(s)\) \([m^{2}]\), the viscosity \(\mu\) \([Pa\cdot s]\), subsurface water pressure \(P\) \([Pa]\), gravitational constant \(g\) \([m\cdot s^{-2}]\) and the relative reference height \(z\) \([m]\). A constant-pressure boundary condition for the subsurface model is defined on the boundary of the domain \(\delta\Omega_{P}\), inducing a Darcy velocity field via equation (5) throughout \(\Omega_{P}\). The boundary condition of a heat pump can be specified by either injecting energy into the domain via the heat flux \(Q_{e}\), or by injecting a mass of water \(Q_{w}\) at a predetermined temperature. The latter method more closely resembles the operation of an open-loop heat pump, which pumps water back into the subsurface at a new temperature that is dependent on the extraction well temperature.
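For a horizontal 2D slice with a spatially constant pressure gradient, equation (5) reduces to a simple cell-wise product. The sketch below assumes saturated flow, a constant water viscosity and a negligible gravity term; the viscosity value is an assumed constant, not taken from the paper.

```python
MU = 1.0e-3  # dynamic viscosity of water in Pa*s (assumed value)

def darcy_velocity(k_field, dp_dx, dp_dy):
    """Darcy velocity components from a permeability field (m^2)
    and constant pressure gradients (Pa/m), per equation (5)."""
    return -k_field / MU * dp_dx, -k_field / MU * dp_dy
```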
Commercial heat pumps operate with a constant temperature difference across the heat exchanger, and alter the amount of energy passed to/from the groundwater by increasing or decreasing the mass flow rate extracted from the subsurface aquifer. The difference between the injection well \(T_{inj}\) and extraction well \(T_{ext}\) temperature is defined as \(\Delta T=T_{inj}-T_{ext}=5^{\circ}\)C when operating in cooling mode and \(\Delta T=-5^{\circ}\)C when operating in heating mode. The extraction and injection mass flow rate is defined from the amount of energy passed into or from the fluid and \(\Delta T\) by
\[\dot{m}=\frac{\dot{Q}}{c_{p}\Delta T}\,, \tag{6}\]
with the energy transferred to/from the fluid per unit time \(\dot{Q}\) \([W]\), the specific heat of water (assumed constant) \(c_{p}=4184\) \([J\cdot kg^{-1}\cdot K^{-1}]\) and the temperature difference \(\Delta T\).
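As a quick sanity check of equation (6): a heat pump exchanging a hypothetical 10 kW with the aquifer at the fixed \(\Delta T\) of 5 K would need to circulate roughly half a liter per second.

```python
c_p = 4184.0      # J/(kg K), specific heat of water
dT = 5.0          # K, fixed temperature difference across the heat exchanger
Q_dot = 10_000.0  # W, illustrative heat demand (assumed value)
m_dot = Q_dot / (c_p * dT)  # = 0.478 kg/s, i.e., about 0.48 l/s of water
```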
### Neural network design
#### 2.3.1 Input data:
The neural network operation is divided into two parts: offline and online stages. The offline stage involves generating and processing the training data, training the network and finally testing the network accuracy. The online stage involves feeding the input to the network and obtaining the output during normal usage of the network. The offline step is typically computationally expensive, but is tolerable as this is only performed once in order to generate a network. The online step is intended to be performed thousands if not millions of times, thereby requiring a fast prediction performance in the online step. To minimize the computational cost of the network prediction step, the surrogate model must not rely on performing any numerical simulation during the online step and therefore, all input data must be readily available. A baseline simulation of the entire subsurface of Munich already exists and contains the subsurface permeability, pressure and Darcy velocity throughout the city's subsurface.
These data can simply be extracted from the baseline simulation result and input into the network during the online phase without having to perform any expensive numerical simulation.
For this study of building a surrogate model, we ignore the time-dependent behavior of the thermal plume propagation and only consider the steady-state solution. Examining equation 3 and ignoring the time derivative shows that the thermal plume depends on an advection term and a diffusion term. We assume that the plume behavior is dominated by the advection term of this equation, and figure 2 strengthens this assumption as the thermal plume tends to follow the velocity streamlines. As the subsurface Darcy velocity is readily available and likely to provide meaningful information to the network for predicting the plume behavior, we use the Darcy velocity components \(q_{x}\) and \(q_{y}\) as inputs into the network. Previous work on building a surrogate model of the subsurface flow accepted the groundwater permeability as input and predicted the groundwater pressure and Darcy velocities [19]. However, this is only applicable for scenarios with a constant pressure boundary condition, and therefore not applicable for general use such as with our surrogate model.
#### 2.3.2 Output data:
To be useful, the surrogate model must produce its output orders of magnitude faster than the high-fidelity model, while remaining accurate. For our specific application of optimization and condition monitoring, only the temperature output is required to build a practical surrogate model. The network input data (obtained from the baseline simulation data) does not include the influence that the heat pump has on the Darcy velocities or thermal field. The baseline simulations do not have the required output data necessary to train the network using supervised learning, which must therefore be sourced elsewhere. Smaller 2D numerical simulations are performed in order to generate input-output data pairs for the network training and testing. These simulations are much smaller than the baseline simulation case, such that the data generation time is feasible for our model.
#### 2.3.3 Network architecture:
The available input and output data structure must be considered when designing the network architecture. The output of the 2D numerical groundwater simulation, which will be used as input and output of the network, can be treated as an image where the value at each pixel defines the Darcy velocity (input) or temperature (output) at the cell centers of the finite volume mesh. The input data are provided as a two-channel image \(\varphi_{in}\in\mathbb{R}^{2\times 65\times 65}\) of the Darcy velocity \(q_{x}\) and \(q_{y}\) of size \(65\times 65\) pixels each (\(\varphi_{in,1}\) and \(\varphi_{in,2}\)). The network output \(\varphi_{out}\in\mathbb{R}^{65\times 65}\) is a single-channel image of \(65\times 65\) pixels of the temperature field. The location of the heat pump is always at the center pixel. An image size of \(65\times 65\) was selected so that an equal number of pixels surround the center pixel in each direction. The image-like nature of the input and output data naturally allows the use of a convolutional neural network (CNN), which is favorable for image-like data. Previous work on predicting the behavior of physical systems has shown that CNNs are well suited to image-like data [14, 19, 17, 25].
Within this work, we built and evaluated three variations of the TurbNet architecture by Thuerey et al. [14], which is also a variant of the U-Net architecture by Ronneberger et al. [26]. The TurbNet architecture features skip connections between the encoding and decoding steps.
Each layer of the CNN applies a 2D convolution operation to the input image by sweeping a fixed-size kernel (e.g., 4\(\times\)4) over the input and multiplying it with the underlying image data. The kernel weights \(\mathbf{W}_{c}\) are parameters which are optimized during the network's training procedure. By having these weights shared for the whole image, the convolutional layer essentially learns to pick up translation-invariant features. A user-defined number of features, each having its own kernel, can then be learned to create multiple outputs per layer. Usually, after each convolutional layer an aggregation operation is applied, such as a pooling operation where multiple pixels are combined to create a coarser output. Recursively continuing this process allows the network to pick up different features at multiple scales.
For the U-Net architecture this process is reversed after reaching a pre-defined bottleneck state where the coarsest features are represented. This bottleneck is then followed by inverse convolutions to reconstruct higher-resolution images for the output path. This is comparable to simple auto-encoder architectures which also feature an encoder-bottleneck-decoder structure.
The entire encoding-decoding procedure can be characterized as a mapping \(S\), from the input images and kernel weights to the prediction output \(\varphi_{pred}\),
\[S\left(\mathbf{W}_{c},\varphi_{in,1},\varphi_{in,2}\right)\mapsto\varphi_{ pred}. \tag{7}\]
The network is trained to minimize the difference between the known output \(\varphi_{out}\) and the network prediction output \(\varphi_{pred}\) by varying the kernel weights \(\mathbf{W}_{c}\) and other network parameters \(\theta\)
\[\operatorname*{arg\,min}_{W_{c},\theta}\sum_{i}^{N_{data}}\|\varphi_{pred}- \varphi_{out}\|. \tag{8}\]
Both the encoding and decoding steps are divided into multiple layers, each of which contains a rectified linear unit (ReLU) activation function, a \(2\times 2\) max-pooling layer with a stride of 2 for down-sampling the image, and batch normalization. The image down-sampling halves the image size in each direction (the first convolution layer only reduces the image size from \(65\times 65\) to \(64\times 64\)). The total number of trainable parameters of the network can be increased by increasing the number of layers or increasing the number of initial features. This also affects the accuracy of the network.
The three architectures tested are:
1. TurbNet-Geo: 6 layers with skip connections,
2. TurbNet-Geo-Light: 4 layers with skip connections,
3. TurbNet-Geo-NoSkip-Light: 4 layers without skip connections (TurbNet-Geo-Light without the skip connections).
With the latter two architectures, we test the model performance with regard to fewer network parameters and the elimination of skip connections. The skip connections add a concatenation step between the encoding and decoding layers in order to retain features from the encoding steps. The TurbNet-Geo-Light network architecture is illustrated in figure 3. The two-channel input is provided on the left, with the network output on the right.
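A minimal PyTorch sketch of such a 4-level encoder-decoder with skip connections is given below. The layer counts, kernel sizes and feature growth are assumptions for illustration, not the authors' exact configuration (in particular, the paper states that the bottleneck reaches \(32\times\) the initial features, whereas this sketch reaches \(8\times\)).

```python
import torch
import torch.nn as nn

class TurbNetGeoLightSketch(nn.Module):
    """Encoder-decoder with skip connections mapping 2x65x65 -> 1x65x65."""
    def __init__(self, f=32):
        super().__init__()
        def down(ci, co):  # halve the spatial size
            return nn.Sequential(nn.Conv2d(ci, co, 4, stride=2, padding=1),
                                 nn.BatchNorm2d(co), nn.ReLU(inplace=True))
        def up(ci, co):    # double the spatial size
            return nn.Sequential(nn.ConvTranspose2d(ci, co, 4, stride=2, padding=1),
                                 nn.BatchNorm2d(co), nn.ReLU(inplace=True))
        self.inc = nn.Conv2d(2, f, 2)              # 65 -> 64
        self.d1, self.d2, self.d3 = down(f, 2*f), down(2*f, 4*f), down(4*f, 8*f)
        self.u3, self.u2, self.u1 = up(8*f, 4*f), up(8*f, 2*f), up(4*f, f)
        self.outc = nn.ConvTranspose2d(2*f, 1, 2)  # 64 -> 65

    def forward(self, x):
        x0 = self.inc(x)                        # f  x 64 x 64
        x1 = self.d1(x0)                        # 2f x 32 x 32
        x2 = self.d2(x1)                        # 4f x 16 x 16
        x3 = self.d3(x2)                        # 8f x  8 x  8
        y = self.u3(x3)                         # 4f x 16 x 16
        y = self.u2(torch.cat([y, x2], dim=1))  # 2f x 32 x 32
        y = self.u1(torch.cat([y, x1], dim=1))  # f  x 64 x 64
        return self.outc(torch.cat([y, x0], dim=1))  # 1 x 65 x 65
```

As a sanity check, `TurbNetGeoLightSketch()(torch.randn(1, 2, 65, 65))` returns a tensor of shape `(1, 1, 65, 65)`, matching the input and output image sizes described above.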
### Data generation
In order to study whether the CNN is able to predict the thermal plume with sufficient accuracy, a simple 2D groundwater model is utilized. This is sufficient for two reasons: firstly, the network only accepts 2D images as inputs and secondly, the thermal plume only influences the local area around a heat pump. This allows for relatively cheap numerical simulations to be performed to obtain the Darcy velocities without the influence of the heat pump, followed by re-running the same model but with the heat pump activated to obtain the thermal plume profile. As the thermal plume profiles are not available from the current baseline simulation, it would be too costly to run multiple large, high-fidelity simulations of a large area of the city, only to obtain local temperature profiles around the heat pumps. Therefore, the data generation procedure is divided into 4 steps:
1. Input field generation - generates the numerical simulation input files with randomized permeability fields and pressure boundary conditions,
2. Without-heat-pump evaluation - generates the Darcy velocity fields without the influence of a heat pump,
3. With-heat-pump evaluation - the "Without-heat-pump" simulation is repeated but with the heat pump activated,
4. Data manipulation - pre-processing of the input and output data for the network.

Figure 3: TurbNet-Geo architecture containing skip connections for the temperature plume prediction. The network accepts a 2-channel input of the \(x\) and \(y\)-direction Darcy velocity magnitudes and outputs a single channel of the temperature field. The bottleneck contains \(32\times\) the amount of initial features.
All input and output training and testing data of the numerical groundwater simulations were generated using the subsurface simulation software PFLOTRAN v3 [27]. The data generation simulations were performed on a workstation with an AMD Ryzen Threadripper 3960X 24-core Processor at the University of Stuttgart, running Ubuntu 20.04.
#### 2.4.1 Input field generation:
The prediction quality of the network is dependent on the quality of the training data. Therefore, the purpose of the input field generation procedure is to generate a variety of Darcy velocities and flow directions. To generate this variety of velocity data, randomized permeability fields are combined with randomized pressure gradient boundary conditions to generate seemingly arbitrary Darcy velocities that are within a suitable velocity range. For each data sample (one permeability field and one pressure gradient boundary condition), two simulations are performed: the first with no heat pump active, generating the velocity field without a heat pump; the second with an active heat pump, injecting water at a different temperature than the background groundwater temperature to generate the thermal plume.
The simulation domain covers an area of 130m\(\times\)130m\(\times\)2m, divided into 65\(\times\)65\(\times\)1 finite volume grid cells (each grid cell is 2m\(\times\)2m\(\times\)2m) for a total of 4225 cells. The heat pump is placed at location \((33,33)\) to be in the center of the domain. A permeability field is generated by placing pilot points on a uniformly spaced grid of either 4\(\times\)4 (16 pilot points) or 6\(\times\)6 (36 pilot points) within the simulation domain. A random value is generated for each pilot point between 1.13 \(\times 10^{-7}\) and 3.77 \(\times 10^{-11}\). The values at the pilot points are interpolated onto the PFLOTRAN mesh using radial basis function interpolation with a thin-plate-spline basis function, to create the randomized permeability field. The interpolation step from a coarse mesh of pilot points to a fine PFLOTRAN mesh avoids sudden changes in the permeability, which could occur if a random value were assigned to each mesh cell. The large difference in orders of magnitude creates both high and low permeability regions, allowing for non-uniform velocity streamlines. The pressure gradient boundary condition is generated by randomly selecting two values between \([-0.0006,0.0006]\) and applying these in the \(x\)-direction and \(y\)-direction. The magnitude of the values generates realistic groundwater Darcy velocities and allows the direction of flow at the heat pump location to point anywhere over the full 360\({}^{\circ}\).
Finally, the combination of each permeability field and pressure boundary condition
resulted in 800 unique input data samples. The small PFLOTRAN simulation domain allowed for the fast generation of data, with each set of 25 samples taking approximately \(180s\).
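The pilot-point construction can be reproduced with SciPy's RBF interpolator. In this sketch the permeability values are sampled log-uniformly and interpolated in log-space to avoid negative permeabilities; both choices are assumptions on top of the paper's description, which does not state the sampling distribution.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
domain, n_cells, n_pilot = 130.0, 65, 4  # 130 m side, 65 cells, 4x4 pilot grid

# Pilot points on a coarse uniform grid with random log-permeabilities.
px = np.linspace(0.0, domain, n_pilot)
pts = np.stack(np.meshgrid(px, px), axis=-1).reshape(-1, 2)
log_k = rng.uniform(np.log10(3.77e-11), np.log10(1.13e-7), len(pts))

# Thin-plate-spline interpolation onto the 65x65 cell centers.
cx = (np.arange(n_cells) + 0.5) * domain / n_cells
cells = np.stack(np.meshgrid(cx, cx), axis=-1).reshape(-1, 2)
k = 10 ** RBFInterpolator(pts, log_k, kernel='thin_plate_spline')(cells)
k_field = k.reshape(n_cells, n_cells)

# Random pressure gradient components for the x- and y-directions.
grad_p = rng.uniform(-0.0006, 0.0006, size=2)
```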
#### 2.4.2 Without-heat-pump evaluation:
The input data for the CNN requires the Darcy velocities without the influence of a heat pump in the region. This recreates the conditions of extracting the velocities from a region of the baseline simulation domain without a heat pump. Therefore, the heat pump mass flow rate was set to zero and the PFLOTRAN simulation was run for a period of 365 days, with the Darcy velocities extracted at day 365 to obtain a pseudo-steady-state solution.
#### 2.4.3 With-heat-pump evaluation:
Supervised training of the network requires the correct temperature field to evaluate the prediction accuracy. Therefore, each simulation performed in the "Without-heat-pump evaluation" was rerun with the heat pump activated. The heat pump mass flow rate was set to 0.05 \(l/s\) and the injection temperature to \(15^{\circ}\)C, against the background temperature of \(10^{\circ}\)C.
#### 2.4.4 Data pre-processing:
The accuracy of the CNN model can be greatly improved by suitable pre-processing of the data. Firstly, the background of \(10^{\circ}\)C is subtracted from the temperature output \(T_{Target}\) such that the temperature far away from the heat pump is around \(0^{\circ}\)C. Next, each data sample field (Darcy velocities \(q_{x}\) and \(q_{y}\) and temperature field \(T_{Target}\)) is normalized to the range \([-1,1]\) over the whole data set. To obtain the correct output, the inverse operation is applied to the network's temperature prediction.
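A minimal sketch of this scaling and its inverse, assuming the extrema are computed once over the entire data set:

```python
def to_network_range(field, lo, hi):
    """Linearly map a field to [-1, 1] using data-set-wide min/max."""
    return 2.0 * (field - lo) / (hi - lo) - 1.0

def from_network_range(pred, lo, hi):
    """Inverse mapping, applied to the network's raw temperature prediction."""
    return 0.5 * (pred + 1.0) * (hi - lo) + lo

# Temperature pipeline: subtract the 10 °C background before scaling,
# and add it back after the inverse mapping.
```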
#### 2.4.5 Network limitations:
The current network design still suffers from some practical limitations. Firstly, the network can only predict the groundwater temperature under the assumption that the background temperature is completely uniform. Secondly, it is assumed that the injection temperature is exactly \(5^{\circ}\)C higher than the background temperature and that the flow rate is a constant 0.05 l/s. However, the purpose of this study was to determine whether CNNs are capable of being a reasonably good surrogate model. Tackling further practical usage aspects is planned for future studies.
### Experimental setup
The three networks were trained using the Adam optimizer [28] for 50,000 epochs with a fixed learning rate of 0.0005 and a batch size of 64. Each network was trained using 4, 8, 16 and 32 initial features, with the total number of trainable parameters per network shown in table 1.
A total of 800 samples were generated, of which 650 were used for training and 150 for testing (validation). A data-driven loss function was used to compare the network
output with the actual results from the numerical simulation training data. The mean squared error is defined as
\[MSE=\frac{1}{N_{data}}\sum_{i}^{N_{data}}\left(T_{Pred}^{i}-T_{Target}^{i}\right)^ {2}, \tag{9}\]
where \(T_{Pred}^{i}\) is the temperature prediction for sample \(i\), \(T_{Target}^{i}\) is the known solution, and \(N_{data}\) is the total number of data samples used for either training or testing.
The neural networks were all built using PyTorch and trained using an NVIDIA GeForce RTX 3090 GPU with 24 GB of RAM at the University of Stuttgart. The training and testing data, as well as the Python code for the network, is provided in the DaRUS dataset "Replication Data for: Geothermal-ML - predicting thermal plume from groundwater heat pumps" [29].
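The training procedure itself reduces to a standard supervised loop; the sketch below reuses the model sketch from section 2.3.3 and assumes a `train_loader` yielding (velocity, temperature) image batches — the loader name and structure are placeholders, not the released code's API.

```python
import torch

model = TurbNetGeoLightSketch(f=32)  # sketch from section 2.3.3
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
mse = torch.nn.MSELoss()             # data-driven loss of equation (9)

for epoch in range(50_000):
    for v, t in train_loader:        # assumed (q, T) tensor batches
        opt.zero_grad()
        loss = mse(model(v), t)
        loss.backward()
        opt.step()
```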
## 3 Results and Discussion
The following section provides the results and discussion for all three networks: TurbNetGeo (TNG), TurbNet-Geo-Light (TNG-L) and TurbNet-Geo-NoSkip-Light (TNG-NS-L). The training and testing loss for the three network architectures and varying initial features are shown in table 2 at the 50,000\({}^{th}\) epoch.
The training loss for all three networks with 4 and 8 initial features is at least an order of magnitude larger than with 16 and 32 initial features. The TNG-NS-L has the largest training loss at 50,000 epochs for both 16 and 32 initial features. The TNG network has the lowest training loss for both 16 and 32 initial features, but only marginally lower, despite having 4 times the number of trainable parameters of the other two networks, making each network evaluation more expensive. Despite having fewer trainable parameters than the other networks, the TNG-NS-L has the lowest testing loss. Overall, the three network architectures show similar training losses for the same number of initial features. This indicates that even the smallest network architecture is already expressive enough to capture the prediction task at hand.
The training loss across all 50,000 epochs for all three network architectures with 16 and 32 initial features is shown in Figure 4. For each case, the loss value is only plotted for every 500\({}^{th}\) epoch, with no smoothing performed on the data. Examining
| Init. Feat. | TurbNetGeo | TurbNetGeo-Light | TurbNetGeo-NoSkip-Light |
| --- | --- | --- | --- |
| 4 | 77,033 | 18,697 | 17,221 |
| 8 | 306,257 | 73,873 | 68,041 |
| 16 | 1,221,281 | 293,665 | 270,481 |
| 32 | 4,877,633 | 1,171,009 | 1,078,561 |

Table 1: Number of trainable parameters for the convolutional neural networks. The number of parameters increases when increasing the number of initial features extracted in the first convolutional layer.
the training loss over time in Figure 4, there is little difference between the networks with an equivalent number of 16 or 32 initial features. It is clear that the number of initial features influences the training loss more than the number of network layers.
The testing loss, measured every 10,000\({}^{th}\) epoch, is shown for all three network architectures with 16 and 32 initial features in Figure 5. Minor, if any, improvement in the testing loss is observed after the first test at 10,000 epochs, indicating that additional training does not aid in improving the real-world capability of the network in predicting the thermal plume on unseen Darcy velocity data. Reducing the testing loss further would perhaps require more training and testing data, or a fundamental change to the network.
For each network, a total of 150 test samples were evaluated and categorized according to the maximum error \(\epsilon_{max}\) (in \({}^{\circ}\)C) across all pixels:
1. good: \(\epsilon_{max}<\) 0.5
2. medium: \(0.5<\epsilon_{max}<1.0\)
3. bad: \(1.0<\epsilon_{max}\)
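This categorization is a one-liner to apply to a prediction-target image pair (a sketch; array shapes and units as in the data set):

```python
import numpy as np

def categorize(pred, target):
    """Classify a prediction by its maximum per-pixel error in °C."""
    e_max = np.abs(pred - target).max()
    if e_max < 0.5:
        return "good"
    if e_max < 1.0:
        return "medium"
    return "bad"
```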
The total number of predictions categorized as "good", "medium" and "bad" for each
| Init. Feat. | TNG Loss | TNG Test Loss | TNG-L Loss | TNG-L Test Loss | TNG-NS-L Loss | TNG-NS-L Test Loss |
| --- | --- | --- | --- | --- | --- | --- |
| 4 | \(1.19\cdot 10^{-5}\) | 0.0659 | \(1.38\cdot 10^{-5}\) | 0.0556 | \(2.40\cdot 10^{-5}\) | 0.0224 |
| 8 | \(6.22\cdot 10^{-6}\) | 0.0455 | \(3.93\cdot 10^{-6}\) | 0.0423 | \(1.26\cdot 10^{-6}\) | 0.0173 |
| 16 | \(2.22\cdot 10^{-7}\) | 0.0812 | \(3.31\cdot 10^{-7}\) | 0.0318 | \(3.96\cdot 10^{-7}\) | 0.0171 |
| 32 | \(2.16\cdot 10^{-7}\) | 0.0433 | \(2.17\cdot 10^{-7}\) | 0.0270 | \(2.66\cdot 10^{-7}\) | 0.0159 |

Table 2: Training and testing loss of all networks with varying number of initial features (TNG: TurbNetGeo, TNG-L: TurbNetGeo-Light, TNG-NS-L: TurbNetGeo-NoSkip-Light). The three networks were trained with 4, 8, 16 and 32 initial features ('Init. Feat.'), which varies the total number of trainable parameters in the network. The training loss ('Loss') and the testing loss ('Test Loss') are evaluated at the 50,000\({}^{th}\) epoch.
Figure 4: Training loss for all three network architectures TurbNet-Geo (’TNG’), TurbNet-Geo-Light (’TNG-Light’) and TurbNet-NoSkip-Light (’TNG-NoSkip’), were evaluated for 16 and 32 initial features, denoted as ’-16’ and ’-32’, respectively. The training loss is plotted at every 200\({}^{th}\) epoch.
network and varying number of initial features, is shown in table 3. The TNG network performs only moderately better than TNG-L, with 24 and 22 "good" prediction samples, respectively. The TNG-NS-L has 15 "good" predictions, while the TNG is a much larger network with more trainable parameters. To reduce the network size as much as possible while maintaining reasonably good predictions, we focus on the TNG-L and TNG-NS-L networks as the preferred networks for further analysis. Comparing these two, the TNG-L has more test samples categorized as good for 16 and 32 initial features and more samples categorized as medium for 16 initial features. Even though the testing loss is lower for the TNG-NS-L network, there may be a few pixels where the error is large, forcing test samples to be categorized as "medium" or "bad". We therefore select the TNG-L with 32 initial features as the default network for the rest of the analysis.
Table 3 categorizes the predictions for the test set into "good", "medium" and "bad" predictions based on the maximum point-wise error. For the TurbNetGeo-Light (TNG-L) and TurbNetGeo-NoSkip-Light (TNG-NS-L) networks with 16 and 32 initial features, box plots (box and whisker plot) of the per-pixel error magnitude across all
| Init. Feat. | TNG Good | TNG Medium | TNG Bad | TNG-L Good | TNG-L Medium | TNG-L Bad | TNG-NS-L Good | TNG-NS-L Medium | TNG-NS-L Bad |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 4 | 0 | 5 | 145 | 0 | 3 | 147 | 0 | 1 | 149 |
| 8 | 1 | 50 | 99 | 1 | 28 | 121 | 1 | 43 | 106 |
| 16 | 7 | 81 | 62 | 14 | 56 | 80 | 8 | 64 | 78 |
| 32 | 24 | 66 | 60 | 22 | 63 | 65 | 15 | 69 | 66 |

Table 3: Number of samples classified as 'good', 'medium' and 'bad' predictions for all three network architectures (TNG: TurbNetGeo, TNG-L: TurbNetGeo-Light, TNG-NS-L: TurbNetGeo-NoSkip-Light). Any prediction with maximum absolute error \(|\epsilon_{max}|<0.5^{\circ}\)C is defined as 'good', any prediction with \(0.5^{\circ}\)C \(<|\epsilon_{max}|<1^{\circ}\)C is defined as 'medium', and all others are defined as 'bad'.
Figure 5: Testing loss for all three network architectures TurbNet-Geo (‘TNG’), TurbNet-Geo-Light (‘TNG-Light’) and TurbNet-NoSkip-Light (‘TNG-NoSkip’), were evaluated for 16 and 32 initial features, denoted as ’-16’ and ’-32’, respectively. The testing loss is plotted at every 5000\({}^{th}\) epoch.
150 test samples are plotted in Figure 6(A) - (D). Each box provides the median \(Q_{2}\) (middle black line), lower quartile \(Q_{1}\) and upper quartile \(Q_{3}\) (box edges). The top and bottom edges (whiskers) define the 1.5\(\times\) interquartile range (IQR), i.e., 1.5\(\cdot\)(\(Q_{3}\) - \(Q_{1}\)) values of the error magnitude. For each test (A) to (D), the box plots were generated using values where the error was more than 0.2\({}^{\circ}\)C to ignore minor fluctuations of the prediction output error in regions where the background temperature was 10\({}^{\circ}\)C, i.e., to ignore errors far away from the thermal plume. Furthermore, all points within an inner region of 7 pixels from the heat pump injection site in each direction are shown in the 'Inner' plots and all other points are shown in the 'Outer' plots. This highlights whether the error is, in general, larger near the heat pump injection site or further downstream in the thermal plume. For all box plots, the median error is below 0.4\({}^{\circ}\)C, indicating that at least half of the pixels where the error is above 0.2\({}^{\circ}\)C are also below 0.5\({}^{\circ}\)C (the "good" criterion). The largest upper quartile occurs for the TNG-L-16 'Inner' with a magnitude of 0.73\({}^{\circ}\)C, meaning 75% of error values are beneath this value. As these plots also include the "bad" predictions, this shows that the networks are able to provide reasonably good predictions, and only a few pixels have a large error that causes the "bad" categorization. The 'Inner' box plots have a larger interquartile range and top whisker than the 'All' plot as the error tends to be slightly larger in the middle of the domain, leaving fewer small error values to pull the box plot down.
Examining the training and testing loss conveys how well the network performs overall, and the box plots tell us how many pixels are considered good, medium and bad. However, they do not indicate where or how the greatest errors occur, i.e., whether they occur at the groundwater heat pump (GWHP) location itself or because the thermal plume is unable to follow the streamlines. Therefore, a qualitative assessment of the network's performance is required to gain further insight. Four different network predictions for the TNG-L
Figure 6: Per-pixel error magnitude box plots for the TurbNet-Geo-Light and TurbNet-Geo-NoSkip-Light network architectures with 16 and 32 initial features for all 150 test samples. Each box provides the median \(Q_{2}\), lower quartile \(Q_{1}\) and upper quartile \(Q_{3}\). The top and bottom edges (whiskers) define the 1.5 interquartile range (1.5\(\cdot\)(\(Q_{3}\) - \(Q_{1}\))) values of the error magnitude. Only error values larger than 0.2\({}^{\circ}\)C were added to the dataset for the box plots.
network with 32 initial features from the "good", "medium" and "bad" categories are shown in Figure 7, Figure 8 and Figure 9, respectively. Finally, the difference between the TurbNetGeo-Light (TNG-L) and the TurbNetGeo-NoSkip-Light (TNG-NS-L) with 32 initial features from the "bad" category is exemplified on four test samples in Figure 10 and Figure 11.
In Figure 7 the maximum error occurs at the heat pump location for all four samples. However, the second sample (second row) has very small errors in the thermal plume in comparison to the others. The top and bottom samples indicate a good ability to follow the streamline as its path changes. The similar-looking middle samples show how the
Figure 7: TurbNet-Geo-Light: ”good” network prediction for four test samples with 32 initial features. The test sample numbers listed from top to bottom: 35, 96, 100 and 136.
thermal plume is able to spread out when the streamlines diverge (second row), whereas the thermal plume remains narrow when the streamlines remain straight (third row). As expected, the "good" category performs well in general.
Four test samples from the "medium" category for the TNG-L with 32 initial features are shown in Figure 8. The top two samples have a very low error magnitude in the thermal plume, and was categorized into "medium" due to a small error spike at the heat pump location. The large error occurs when the GWHP location in the plume prediction is shifted one or two pixels compared to the target solution, shifting the
Figure 8: TurbNet-Geo-Light: ”medium” network prediction for four test samples with 32 initial features. The test sample numbers listed from top to bottom: 16, 76, 99 and 12.
maximum temperature pixel. If shifted in the wrong direction, i.e., if the plume extends downwards but the GWHP pixel location is shifted upwards, a large difference exists at this shifted location only and artificially inflates the maximum error value. However, the error values within the thermal plumes themselves are well within acceptable limits. The bottom two samples show that the plume can morph with the streamlines, but not well enough to keep the error below 0.5\({}^{\circ}\)C.
Four test samples from the "bad" category for the TNG-L with 32 initial features are shown in Figure 9. Each sample illustrates a different problem that can occur in the predictions.
Figure 9: TurbNet-Geo-Light: ”bad” network prediction for four test samples with 32 initial features. The test sample numbers listed from top to bottom: 20, 52, 66 and 139.
The top sample cannot accurately capture the temperature far downstream, whereas the second sample does not follow the streamline at all, but cuts across it instead. This is in contrast to our assumption that the thermal plume follows the streamline. The third sample cannot capture the high temperature at the heat pump location, where the plume is also relatively wide due to having a low Darcy velocity at the heat pump. This causes the higher temperature to diffuse outwards instead of being dragged downstream. Finally, the bottom sample is unable to capture the thermal profile near the heat pump.
Another four test samples from the "bad" category for the TNG-NS-L (first and third image) and TNG-L (second and fourth image) with 32 initial features, comparing the ability of the two networks, are shown in Figure 10 and Figure 11, each with two samples. For sample 20 in Figure 10, the TNG-NS-L network provides a better prediction than the TNG-L network, which cannot predict the entire plume correctly. The largest error occurs further downstream of the GWHP. However, the TNG-L network provides a better prediction of the downstream plume compared to the TNG-NS-L for sample 51.
In Figure 11, the prediction for the first sample is similar for both networks, where the largest error is close to the GWHP. However, the TNG-NS-L network has a wider plume and does not quite follow the streamline as well as the TNG-L network. The final prediction is difficult for both networks. Both fail to predict the maximum temperature directly downstream of the GWHP and also over-predict the width of the plume.
Comparing the two networks, each has its own strengths and weaknesses. The TNG-NS-L sometimes outperforms the TNG-L, but it is not clear that it is necessarily better for all cases. The only clear benefit of the TNG-NS-L would be the simpler design and fewer trainable parameters for training. The provided examples are only a small set of the samples available for comparison and were chosen from the set of 150 samples to highlight problems that may occur in the network prediction. Even for the "bad" category, the predictions are not completely unusable, but there are edge cases where the prediction would not be suitable to use in place of the high-fidelity solver. However, the CNN is suitable as an initial pre-processing step for the high-fidelity optimization problem, where highly interacting heat pumps can be identified. Therefore, the recommended network is the TNG-L with 32 initial features, followed by the TNG-NS-L with 32 initial features. This network is suitable to implement inside the online evaluation tool of figure 2 (a) to complement the LAHM.
Figure 10: TurbNet-Geo-Light versus TurbNet-Geo-NoSkip-Light: ”bad” network prediction for two test samples with 32 initial features.
## 4 Conclusion
In this paper, we introduced a novel method for predicting the thermal plume downstream propagation from open-loop groundwater heat pump injection wells.
Figure 11: TurbNet-Geo-Light versus TurbNet-Geo-NoSkip-Light: ”bad” network prediction for two samples with 32 initial features.
Conventional methods for modeling heat pumps are either too expensive or lack a suitable level of accuracy. By understanding the governing equations for the subsurface water temperature and studying readily available simulation data, a comparatively simple convolutional neural network was built that accepts the subsurface Darcy velocities as input and outputs the thermal profile due to the presence of a heat pump. The network was based on a modified U-Net architecture to create three network variants: one comprising 6 layers with skip connections (TurbNet-Geo), one with 4 layers with skip connections (TurbNet-Geo-Light) and one with 4 layers without skip connections (TurbNet-NoSkip-Light).
Training and testing data were generated for small 2D domains by creating random permeability and pressure gradient boundary conditions, and performing numerical groundwater simulations with PFLOTRAN to obtain the Darcy velocities and temperature field. A total of 800 input-output samples were generated, with 650 samples used to train the network and 150 samples used to test the network prediction.
The large TurbNet-Geo architecture was found to be only marginally more accurate than the smaller network architectures, while requiring nearly 4 times the number of trainable parameters. The prediction results were classified into good, medium and bad according to the criteria \(|\epsilon_{max}|<0.5\), \(0.5<|\epsilon_{max}|<1.0\) and \(1.0<|\epsilon_{max}|\) (in \({}^{\circ}\)C), respectively. It was found that only a few pixels in the prediction caused the predictions to be classified as medium or bad. Notably, 75% of all pixels where the error was larger than 0.2\({}^{\circ}\)C were also below 0.73\({}^{\circ}\)C, within the "medium" classification. Examining the thermal plume profiles of each network, each architecture showed good agreement between the prediction and the ground truth for most of the test samples. There were a few samples where the network struggled to reasonably determine the shape of the plume following the streamlines. However, this was limited to a few samples only, and these were presented in the discussion. Overall, the networks performed well as a fast surrogate-model alternative to the high-fidelity solver. Future work will include training the network on larger datasets, larger domains, and more complex 3D domains where generating the training data is significantly more expensive.
## 5 Funding
Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy--EXC 2075--390740016. We acknowledge the support by the Stuttgart Center for Simulation Science (SimTech).
## 6 Conflicts of Interest
The funders had no role in the design of the study, in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results. |
2303.07183 | Large-Scale Integrated Vector-Matrix Multiplication Processor Based on
Monolayer MoS2 | Led by the rise of the internet of things, the world is experiencing
exponential growth of generated data. Data-driven algorithms such as signal
processing and artificial neural networks are required to process and extract
meaningful information from it. They are, however, seriously limited by the
traditional von-Neuman architecture with physical separation between processing
and memory, motivating the development of in-memory computing. This emerging
architecture is gaining attention by promising more energy-efficient computing
on edge devices. In the past few years, two-dimensional materials have entered
the field as a material platform suitable for realizing efficient memory
elements for in-memory architectures. Here, we report a large-scale integrated
32x32 vector-matrix multiplier with 1024 floating-gate field-effect transistors
(FGFET) that use monolayer MoS2 as the channel material. In our wafer-scale
fabrication process, we achieve a high yield and low device-to-device
variability, which are prerequisites for practical applications. A statistical
analysis shows the potential for multilevel and analog storage with a single
programming pulse, allowing our accelerator to be programmed using an efficient
open-loop programming scheme. Next, we demonstrate reliable, discrete signal
processing in a highly parallel manner. Our findings set the grounds for
creating the next generation of in-memory processors and neural network
accelerators that can take advantage of the full benefits of semiconducting van
der Waals materials for non-von Neuman computing. | Guilherme Migliato Marega, Hyun Goo Ji, Zhenyu Wang, Mukesh Tripathi, Aleksandra Radenovic, Andras Kis | 2023-03-13T15:24:08Z | http://arxiv.org/abs/2303.07183v1 | # Large-Scale Integrated Vector-Matrix Multiplication Processor Based on Monolayer MoS2
###### Abstract
**Led by the rise of the internet of things, the world is experiencing exponential growth of generated data. Data-driven algorithms such as signal processing and artificial neural networks are required to process and extract meaningful information from it. They are, however, seriously limited by the traditional von-Neuman architecture with physical separation between processing and memory, motivating the development of in-memory computing. This emerging architecture is gaining attention by promising more energy-efficient computing on edge devices. In the past few years, two-dimensional materials have entered the field as a material platform suitable for realizing efficient memory elements for in-memory architectures. Here, we report a large-scale integrated 32\(\times\)32 vector-matrix multiplier with 1024 floating-gate field-effect transistors (FGFET) that use monolayer MoS2 as the channel material. In our wafer-scale fabrication process, we achieve a high yield and low device-to-device variability, which are prerequisites for practical applications. A statistical analysis shows the potential for multilevel and analog storage with a single programming pulse, allowing our accelerator to be programmed using an efficient open-loop programming scheme. Next, we demonstrate reliable, discrete signal processing in a highly parallel manner. Our findings set the grounds for creating the next generation of in-memory processors and neural network accelerators
that can take advantage of the full benefits of semiconducting van der Waals materials for non-von Neuman computing.**
Over the past decade, billions of sensors from connected devices have been used to translate physical signals and information to the digital world. Due to their limited computing power, sensors integrated into embedded remote devices often transmit raw and unprocessed data to their hosts. However, the high energy cost of wireless data transmission[1] affects device autonomy and data transmission bandwidth. Improving their energy efficiency would open a new range of applications while reducing the environmental footprint. This motivates the desire to shift data processing from remote hosts to local sensor nodes so that data transmission would be limited to structured and valuable data. In this context, the von Neumann architecture, with its separation of memory and logic units, is widely seen as the most critical limiting factor for the efficiency of computing systems in general and edge-based devices in particular. The separation between the processing and memory imposed by the von Neumann architecture requires that the data be sent back and forth between the two during data and signal processing or inference in neural networks. This intense data communication between the memory and the processing unit already accounts for a third of the energy spent in scientific computing[2].
The desire to overcome the von Neumann communication bottleneck[3],[4] motivates the rise of in-memory computing architectures in which memory, logic and processing operations are collocated. Such processing-in-memory devices are especially suitable for performing vector-matrix multiplication, which is the key operation for data processing and the most intensive calculation for implementing machine-learning algorithms. By taking advantage of the memory's physical layer to perform the multiply-and-accumulate operation (MAC), this architecture overcomes the von Neumann communication bottleneck. So far, this processing strategy has shown promise for applications such as solving linear[5, 6] and differential
equations[7], signal and image processing[8], and in artificial neural network accelerators[9, 10, 11, 12]. However, the search for the ultimate material and device for realizing this type of processor is still ongoing. Several devices have been studied for in-memory computing, from resistive random access memories (RRAM) to ferroelectric memories (FeFET)[13, 14, 15, 16, 3]. More recently, two-dimensional materials have shown promise in the field of beyond-CMOS devices[17, 18, 19, 20, 21, 22] and in-memory and in-sensor computing[23, 24, 25, 26]. Floating-gate field-effect transistors (FGFET) based on monolayer MoS\({}_{2}\) have been shown to be scalable[27, 28, 25]. They can be used for logic-in-memory[29] or in-memory computing, building perceptron layers. Here, they are projected to offer more than an order of magnitude improvement in power efficiency compared to CMOS-based circuits[28]. Even though these past realizations have sparked interest and highlighted the promise of two-dimensional materials for in-memory computing, further progress and real-world applications require wafer-scale fabrication and large or very-large system integration. Currently, demonstrations of wafer-scale integration of 2D-semiconductor-based circuits have been limited to photodetectors[30, 31, 32, 33] or traditional analog and digital integrated circuits[34, 35, 36, 37, 38]. However, full-wafer and large-system integration involving 2D-based non-volatile memories that can perform general-purpose computation is missing. The realization of such a system would allow the next generation of in-memory processors to reap all the benefits of 2D materials and open the way to realizing non-von Neumann computing systems based on 2D materials.
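Conceptually, the analog MAC carried out by such a memory matrix is just Ohm's law plus Kirchhoff's current law: input voltages applied along the rows multiply the programmed device conductances, and the column currents accumulate the products. A numerical sketch with illustrative (assumed) conductance and voltage ranges:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-9, 1e-6, size=(32, 32))  # programmed conductances (S), assumed range
v = rng.uniform(0.0, 0.1, size=32)          # row input voltages (V), assumed range

i_out = v @ G  # column currents: I_j = sum_i V_i * G_ij, one MAC per column
```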
To bring this next generation of in-memory processors closer to reality, we demonstrate a chip containing a 32\(\times\)32 floating-gate field-effect transistor matrix with 1024 memory devices per chip and an 83.1% yield (please refer to the Supplementary Information for more details). The working devices show similar I\({}_{\text{DS}}\) versus V\({}_{\text{G}}\) hysteresis and characterization behavior. During the fabrication, we use wafer-scale metalorganic chemical vapor deposited (MOCVD) monolayer MoS\({}_{2}\) as the channel material, and the entire fabrication process is carried out in a 4-inch-line cleanroom. We further demonstrate multi-bit data storage in each device with a single programming pulse, allowing us to overcome the need for write-verify schemes and making the programming considerably faster. Finally, we show that our devices can be employed for in-memory computing by performing discrete signal processing with different kernels in a highly parallelized manner.
### Memory Matrix
Here, we approach in-memory computing by exploiting charge-based memories using monolayer MoS\({}_{2}\) as a channel material. Specifically, we fabricated floating-gate field-effect transistors (FGFETs) to take advantage of the electrostatic sensitivity of 2D semiconductors[17]. To enable the realization of larger arrays, we organized our FGFETs in a matrix in which we can address individual memory elements by selecting the corresponding row and column. Figures 1a and b show the three-dimensional rendering of the memory matrix and the detailed structure of each FGFET, respectively. The matrix configuration allows a denser topology and maps directly onto vector-matrix multiplication. Our memories are controlled by local 2 nm/40 nm Cr/Pt gates fabricated in a gate-first approach. This allows us to improve the growth of the dielectric by atomic layer deposition[34] and minimize the number of processing steps the 2D channel is exposed to, resulting in an improved yield. The floating gate is a 5 nm Pt layer sandwiched between 30 nm HfO\({}_{2}\) (blocking oxide) and 7 nm HfO\({}_{2}\) (tunnel oxide). Next, we etch vias in the HfO\({}_{2}\) to electrically connect the bottom (M1) and top (M2) metal layers. This is required for routing the source and drain signals without an overlap. Wafer-scale MOCVD-grown MoS\({}_{2}\) is transferred on top of the gate stack and etched to form the transistors' channels. Details about material quality and characterization can be found in the Supplementary Information. Finally, 2 nm/60 nm Ti/Au is patterned and evaporated on top, forming the transistors' drain-source contacts as well as the second metal layer. Further details about the fabrication can be found in the Methods section and in the Supplementary Information. Figure 1c shows the optical image of the fabricated chip containing 32 rows and 32 columns, a total of 1024 memories. In the image, the source lines are accessed from the bottom, the drain lines from the right, and the gate lines from the left.
Our memories are based on standard flash memories. The memory mechanism relies on shifting the neutral threshold voltage (V\({}_{\text{TH0}}\)) by changing the amount of charge in the trapping layer (\(\Delta\)Q), i.e., the platinum floating gate in our case. When a high positive/negative bias is applied to the gate, the band alignment starts favoring the tunneling of electrons into/out of the floating gate from the semiconductor, changing the carrier concentration in the trapping layer. We define our memory window (\(\Delta\)V\({}_{\text{TH}}\)) as the difference between the threshold voltages of the forward and reverse sweeps, taken at a constant current level. Our previous work verified the programming mechanism by fitting our experimental curves with a device simulation model[25, 27]. Since the memory effect relies entirely on a charge-based process, flash memories tend to have better reliability and reproducibility than emerging memories whose behavior is material dependent, such as resistive random-access memories (RRAM) and phase-change memories (PCMs)[3]. We designed and manufactured a custom device interface board (DIB) to facilitate the characterization of the memory array; a detailed description is given in the Supplementary Information. Figure 1d shows the I\({}_{\text{DS}}\) versus V\({}_{\text{G}}\) sweeps performed for each device. The fabrication presents a yield of 83.1% and good reliability and reproducibility. The relatively high OFF-state current is due to the limited resolution of the analog-to-digital converters used in the setup; high-resolution single-device measurements confirm typical OFF-state currents on the order of pA. Figure 1e shows the ON and OFF current distribution over the memory matrix. Both ON and OFF currents are taken at V\({}_{\text{DS}}\) = 100 mV, creating two distinct planes. The ON and OFF currents show a good distribution over the entire matrix. Further detailed single-device characterization can be found in the Supplementary Information, confirming the performance of the devices as memories with good retention and endurance stabilities. We show that the devices have a statistically similar memory window of \(\Delta\)V\({}_{\text{TH}}\) = 4.30 \(\pm\) 0.25 V. This value is smaller than the one extracted from single-device measurements due to the higher slew rates (5 V/s) required for time-effective characterization of the 1024 devices in the matrix.
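As a rough, textbook-level relation (our own addition, not a statement of the device model in refs. [25, 27]), the threshold shift produced by a stored floating-gate charge \(\Delta\)Q scales with the blocking-oxide (control-gate-to-floating-gate) capacitance C\({}_{\text{BO}}\):

\[\Delta V_{\rm TH}\approx-\frac{\Delta Q}{C_{\rm BO}},\]

so injecting electrons (\(\Delta Q<0\)) shifts the threshold to more positive gate voltages and lowers the read current.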
### Open-Loop Programming
The similarity of the devices motivates us to pursue a statistical study of the memories' programming behavior. In the context of in-memory computing, an open-loop programming analysis is fundamental: standard write-verify approaches may be too time-consuming when programming a large flash memory array, and a statistical understanding of the memory states obtained in open loop is essential for improving performance and speed.
We perform the experiment such that each device is independently addressed by selecting the corresponding row (i) and column (j). Analog switches on the device interface board keep a low-impedance path in the selected row (i) / column (j) and high impedance in the remaining rows and columns (Supplementary Information). This ensures that a potential difference is only applied to the desired device, avoiding unwanted programming. For the same reason, we divide the device programming and reading into two independent stages. During the programming phase, the corresponding gate line (row) and source line (column) are selected, and programming pulses with parameters T\({}_{\text{PULSE}}\) and V\({}_{\text{PULSE}}\) are applied to the gate. Due to the tunneling nature of the device, only two terminals are required to generate the band bending needed for charge injection into the floating gate. After the pulse, the gate voltage is changed to V\({}_{\text{READ}}\), which is low enough to prevent reprogramming of the memory state. In the reading phase, the drain line is also connected, and the conductance value is probed by applying a voltage V\({}_{\text{DS}}\) to the drain. This two-stage procedure is required because we are using a three-terminal device; both gate and drain share the same row, and consequently the entire row is biased when the gate and drain lines are engaged. If high gate voltages were applied while the drain line is connected, the whole row would be reprogrammed, causing the loss of information in the memories. Figure 2a illustrates this two-stage programming procedure.
For the subsequent measurements, we used V\({}_{\text{READ}}=-3\) V, V\({}_{\text{DS}}=1\) V, and T\({}_{\text{PULSE}}=100\) ms. Before each measurement, we reset the memories by applying a positive 10 V pulse, which puts the devices into a low-conductance state. Due to parasitic resistances in the matrix, a linear compensation of the digital gains is applied (see Supplementary Information for further details). The compensation method improves the programming reliability of the devices by an order of magnitude. We estimate a programming error of 500 errors per million for programming 1 bit and 1 error per million for programming the erase state. Figures 2b, c show the distribution of memory states after pulses of different intensities, V\({}_{\text{PULSE}}=+10\) V, -4 V, -6 V, -8 V, and -10 V, in both linear and logarithmic representations. We observe that, on a linear scale, an increase in the pulse amplitude is accompanied by a higher memory state value and a larger spread. On the other hand, by analyzing the logarithm of the state value, we can see that the memory has well-defined storage states. This leads us to conclude that this memory has the potential for reliable and scalable multivalued storage without write-verify algorithms at an acceptable programming error rate.
Figure 2d shows the spatial distribution of the states over the entire chip. We observe that the memory states form a plane of constant value for each programming voltage V\({}_{\text{PULSE}}\). Finally, Figure 2e shows the empirical cumulative distribution function (ECDF) of the logarithmic representation. These results support the possibility of multivalued programming, as discussed previously, and indicate that the memory elements can be used for storing analog weights for in-memory computing.
### States and Vector-Matrix Multiplications
With the open-loop analysis completed, in Figure 3a we plot the memory states (\(<\)w\(>\)) as a function of the programming voltage (V\({}_{\text{PROG}}\)). We define four equally distributed states (2-bit resolution) to be programmed as discrete weights in the matrix for the vector-matrix multiplication (please refer to the Supplementary Information for more details). To analyze the effectiveness of the processor for vector-matrix operations, we compare, in Figure 3b, the normalized theoretical value (y\({}_{\text{THEORY}}\)) with the normalized experimental value (y\({}_{\text{EXP}}\)) obtained over several dot-product operations. The linear regression of the experimental points yields a line with parameters \(\mathbf{a}=0.988\pm 0.008\) and \(\mathbf{b}=-0.129\pm 0.003\) for y\({}_{\text{EXP}}\) = \(\mathbf{a}\cdot\)y\({}_{\text{THEORY}}\) + \(\mathbf{b}\), while the shaded area corresponds to a 95% confidence interval. An ideal processor would converge to \(\mathbf{a}=1\) and \(\mathbf{b}=0\) with a confidence interval that converges to the linear fit. In our case, the processor has a linear behavior approaching the ideal case, with a large spread and a slight non-linearity of the experimental values. We attribute this behavior to the non-ideality of the memories and to the quantization error due to the limited resolution of the states. The offset in parameter \(\mathbf{b}\) can be explained by a non-perfect OFF state of the memories, seen at y\({}_{\text{THEORY}}\) = 0, but it does not affect the observed linear trend. We conclude that we can perform multiply-accumulate operations with reasonable accuracy. This operation is needed for diverse types of algorithms, such as signal processing and inference in artificial neural networks.
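To make the quantization-error argument concrete, the sketch below maps ideal weights onto four equally spaced states and compares the resulting dot product with the ideal one. All values, and the helper `quantize`, are our own illustrative constructions, not measured data.

```python
import numpy as np

rng = np.random.default_rng(2)
levels = np.linspace(0.0, 1.0, 4)   # four equally spaced states (2-bit resolution)

def quantize(w):
    """Snap each ideal weight in [0, 1] to the nearest programmable state."""
    return levels[np.abs(w[:, None] - levels[None, :]).argmin(axis=1)]

w_ideal = rng.uniform(0.0, 1.0, 32)          # target kernel weights
x = rng.uniform(0.0, 0.1, 32)                # input (read) voltages
y_theory = w_ideal @ x                       # ideal dot product
y_exp = quantize(w_ideal) @ x                # dot product with 2-bit weights
print(f"relative quantization error: {abs(y_exp - y_theory) / y_theory:.3f}")
```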
### Signal Processing
Next, we configure this accelerator to perform signal processing to demonstrate a real-world scenario and application. For signal processing, the input signal (x) is convolved with a kernel (h), resulting in the processed signal (y). Depending on the nature of the kernel elements, different types of processing can be achieved. Here, we limit ourselves to three kernels that perform, respectively, low-pass filtering, high-pass filtering, and feedthrough. All the kernels run in parallel within a single processing cycle, demonstrating the efficiency of this processor in targeting data-centric problems through parallelized processing. Figure 4a shows the convolution operation and the different kernels used for processing the input signal. To encode negative kernel values into the memories' conductance values, we split the kernel (h) into one kernel containing only the positive values (h\({}^{+}\)) and one containing the absolute values of the negative entries (h\({}^{-}\)), so that only positive numbers are encoded, with a direct relation to the conductance values (G). After the processing, the outputs of the positive (y\({}^{+}\)) and negative (y\({}^{-}\)) kernels are subtracted (y\({}^{+}\) - y\({}^{-}\)), resulting in the final signal (y).
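A short sketch of the differential-kernel trick described above (illustrative only; the kernel values are toy numbers of our own):

```python
import numpy as np

def signed_conv(x, h):
    """Convolve with a signed kernel h using two non-negative kernels,
    h+ (positive entries) and h- (|negative entries|), and a differential
    readout y = y+ - y-, mirroring the conductance-encoded kernels."""
    h_pos = np.clip(h, 0.0, None)
    h_neg = np.clip(-h, 0.0, None)
    return (np.convolve(x, h_pos, mode="same")
            - np.convolve(x, h_neg, mode="same"))

h_highpass = np.array([-1.0, 2.0, -1.0])     # toy high-pass kernel
x = np.sin(np.linspace(0.0, 20.0 * np.pi, 1000))
y = signed_conv(x, h_highpass)
```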
Figure 4b shows the comparison between the original weights and the weights transferred into the memory matrix using the previously described open-loop programming scheme. To simplify the transfer, we normalize the weight values for each kernel. We observe a good agreement between the original and experimental values. Next, to verify the effectiveness of the processing, we construct our input signal (x) as a sum of sinusoidal waves with different frequencies. In this way, we can easily probe the behavior of the filters at different frequencies without creating an overly complex signal. Since the signal has positive and negative values, the signal amplitude must fall within the linear region of the device operation. Thus, we restrict the signal range from -100 mV to 100 mV around V\({}_{\text{READ}}=0\). Figure 4c shows the fast Fourier transform of the simulated processed signals on the left and of the experimental signals on the right. The grey line in both the simulated and measured panels is the fast Fourier transform of each kernel, giving a guideline for the predicted behavior of each operation. We highlight that the experimental processing of all three filters matches the theoretical values as well as the prototype filter quite well.
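The verification in Figure 4c can be emulated numerically as follows; the sampling rate, tones, and moving-average kernel are our own toy choices:

```python
import numpy as np

fs = 1000.0                                   # toy sampling rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)
# Sum of sinusoids scaled into the +/-100 mV linear window of the devices.
x = 0.05 * (np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 50 * t))

h = np.ones(8) / 8.0                          # toy low-pass (moving-average) kernel
y = np.convolve(x, h, mode="same")

freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
X, Y = np.abs(np.fft.rfft(x)), np.abs(np.fft.rfft(y))
# Comparing Y against X, and against |FFT(h)|, reproduces the kind of
# kernel-response check shown for the measured filters.
```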
Here, we have demonstrated large-scale integration of 2D materials as the semiconducting channel in an in-memory processor. We demonstrated the reliability and reproducibility of our devices both in terms of characterization and in the statistical similarity of the programming states in open-loop programming. The processor carries out vector-matrix multiplications and demonstrates its functionality by performing discrete signal processing. This functionality and level of integration represent a milestone for in-memory computing, allowing in-memory processors to reap all the benefits of 2D materials and bringing new functionality to edge devices for the Internet of Things.
## Acknowledgments
We thank Z. Benes (CMI) for help with electron-beam lithography. We acknowledge support from the European Union's Horizon 2020 research and innovation program under grant agreements 829035 (QUEFORMAL), 785219, and 881603 (Graphene Flagship Core 2 and Core 3), 964735 (EXTREME-IR) from the H2020 European Research Council (ERC, grants no 682332 and 899775) as well as from the CCMX Materials Challenge grant 'Large area growth of 2D materials for device integration'. Device preparation was carried out in part in the EPFL Centre of MicroNanotechnology (CMI).
## Methods
### Wafer Scale Memory Fabrication
The fabrication starts with a silicon substrate with a 270 nm thick SiO\({}_{2}\) insulating layer. The first metal layer and FGFET gates were fabricated by photolithography using an MLA150 advanced maskless aligner with a bilayer LOR 5A/AZ 1512 resist. The 2 nm/40 nm Cr/Pt gate metals were evaporated using an e-beam evaporator under high vacuum. After resist removal in dimethyl sulfoxide (DMSO), DI water and O\({}_{2}\) plasma are used to further clean and activate the surface for HfO\({}_{2}\) deposition. The blocking oxide is deposited by thermal atomic layer deposition using TEMAH and water as precursors. The 5 nm Pt floating gates were patterned by photolithography and deposited using the same process as described previously. With the same atomic layer deposition system, we deposit the 7 nm tunnel oxide layer. The MoS\({}_{2}\) is then transferred onto the substrate, patterned by photolithography, and etched by O\({}_{2}\) plasma.
Drain-source electrodes are patterned by photolithography and 2 nm/60 nm Ti/Au is evaporated in the same machine. To increase the adhesion of contacts and the MoS\({}_{2}\) onto the substrate, a 200 \({}^{\circ}\)C annealing step is performed in high vacuum. The devices have a W/L ratio of 49.5 \(\upmu\)m/ 3.1 \(\upmu\)m.
### Device Passivation
The fabricated device is first wire-bonded onto a 145-pin PGA chip carrier. The device is heated inside an Ar glovebox at 135 \({}^{\circ}\)C for 12 hours, which removes the adsorbed water from the device surface. After the in-situ annealing in the glovebox, a lid is glued onto the chip carrier using a high-vacuum epoxy and cured in an argon atmosphere. This protects the device from oxygen and water.
### Transfer procedure
The MOCVD-grown material is first spin-coated with PMMA A2 at 1500 rpm for 60 s and baked at 180 \({}^{\circ}\)C for 5 min. Next, we attach a 135 \({}^{\circ}\)C thermal-release tape onto the MoS\({}_{2}\) sample and detach it from the sapphire in deionized water. After this, we dry the film and transfer it onto the patterned substrate, then bake the stack at 55 \({}^{\circ}\)C for 1 hour. We remove the thermal-release tape by heating on a hot plate at 130 \({}^{\circ}\)C. Next, we immerse the sample in an acetone bath to clean off the tape's polymer residues. Finally, we transfer the wafer to an isopropanol bath and dry it in air.
### MOCVD Growth
Monolayer MoS\({}_{2}\) was grown using the MOCVD method. Mo(CO)\({}_{6}\), Na\(\cdot\)MoO\({}_{4}\), and diethyl sulfide (DES) were used as precursors. NaCl was spin-coated as a catalyst. A pre-annealed 3-inch c-plane sapphire wafer with a small off-cut angle (\(<\) 0.2\({}^{\circ}\)) was used as the growth substrate (UniversityWafer Inc.). The CVD reaction was performed using a home-built furnace system with a 4-inch quartz tube reactor and mass flow controllers connected to Ar, H\({}_{2}\), O\({}_{2}\), and metalorganic precursors (Mo(CO)\({}_{6}\) and DES). For the MoS\({}_{2}\) crystal growth, the reactor was heated to 870 \({}^{\circ}\)C at ambient pressure for 20 minutes.
### Electrical Measurements
The electrical measurements were performed using a custom device interface board connected to a CompactRIO (cRIO-9056) running a Real-Time LabVIEW server. We have the modules NI-9264 (16 channels analog output), NI-9205 (32 channels analog inputs), and NI-9403 (Digital IO) installed.
## Author contributions
A.K. initiated and supervised the project. G.M.M. fabricated the devices, designed and prepared the measurement setup, and performed the device characterization and remaining measurements. H.G. and Z.W. grew the two-dimensional material and assisted in material characterization under the supervision of A.R. M.T. performed HRTEM for the characterization of devices and materials. A.K. and G.M.M. analyzed the data. The manuscript was written by A.K. and G.M.M. with the input of all authors.
## Competing financial interests
The authors declare no competing financial interests.
## Data availability
The data that support the findings of this study are available in Zenodo at [http://dx.doi.org/XXXX](http://dx.doi.org/XXXX).
|
2308.04920 | Influences of dynamical disruptions on the evolution of pulsars in
globular clusters | By comparing the physical properties of pulsars hosted by core-collapsed
(CCed) and non-core-collapsed (Non-CCed) globular clusters (GCs), we find that
pulsars in CCed GCs rotate significantly slower than their counterparts in
Non-CCed GCs. Additionally, radio luminosities at 1.4 GHz in CCed GCs are
higher. These findings are consistent with the scenario that dynamical
interactions in GCs can interrupt angular momentum transfer processes and
surface magnetic field decay during the recycling phase. Our results suggest
that such effects in CCed GCs are stronger due to more frequent disruptions of
compact binaries. This is further supported by the observation that both
estimated disruption rates and the fraction of isolated pulsars are
predominantly higher in CCed GCs. | Kwangmin Oh, C. Y. Hui, Jongsuk Hong, J. Takata, A. K. H. Kong, Pak-Hin Thomas Tam, Kwan-Lok Li, K. S. Cheng | 2023-08-09T12:38:09Z | http://arxiv.org/abs/2308.04920v1 | # Influences of dynamical disruptions on the evolution of pulsars in globular clusters
###### Abstract
By comparing the physical properties of pulsars hosted by core-collapsed (CCed) and non-core-collapsed (Non-CCed) globular clusters (GCs), we find that pulsars in CCed GCs rotate significantly slower than their counterparts in Non-CCed GCs. Additionally, radio luminosities at 1.4 GHz in CCed GCs are higher. These findings are consistent with the scenario that dynamical interactions in GCs can interrupt angular momentum transfer processes and surface magnetic field decay during the recycling phase. Our results suggest that such effects in CCed GCs are stronger due to more frequent disruptions of compact binaries. This is further supported by the observation that both estimated disruption rates and the fraction of isolated pulsars are predominantly higher in CCed GCs.
keywords: stars: binaries: general -- stars: pulsars: general -- globular clusters: general
## 1 Introduction
Millisecond pulsars (MSPs) are characterized by fast rotations with rotational periods \(P_{\rm rot}\) typically shorter than a few tens of milliseconds and relatively weak surface magnetic fields \(B_{s}\lesssim 10^{9}\) G (Manchester et al., 2005; Hui & Li, 2019). In order to achieve such fast rotation, MSPs are generally believed to have gone through an accretion phase, during which neutron stars gain angular momentum transferred from their companion stars (Alpar et al., 1982; Radhakrishnan & Srinivasan, 1982; Fabian et al., 1983). This is commonly referred to as the recycling process. During recycling, mass accretion on the neutron star surface can potentially lead to magnetic field decay, as shown in (Cumming et al., 2004), which might account for the weak dipolar field strength inferred from observations.
MSPs can be further separated into two subgroups according to their locations: those residing in globular clusters (GCs) and those in the Galactic field (GF). Owing to the high stellar densities in GCs, the formation of MSPs inside a cluster can be influenced by intracluster dynamical processes (cf. Sigurdsson & Phinney, 1995; Ivanova et al., 2008; Hui et al., 2010; Ye et al., 2019). While primary encounter interactions, such as tidal capture or direct collision with a giant, can facilitate binary formation (Fabian et al., 1975; Press & Teukolsky, 1977; Lee & Ostriker, 1986; Lombardi et al., 2006; Fregeau & Rasio, 2007; Ye et al., 2022), subsequent encounters (referred to as secondary encounters hereafter) can play a role in disrupting binaries (Verbunt & Freire, 2014).
Many studies have shown that dynamical interactions in GCs can lead to an enlarged MSP population compared with the GF, where MSP formation relies on binary evolution alone (e.g. Ye et al., 2019; Hui et al., 2010; Ivanova et al., 2008). This is consistent with the well-known fact that the formation rate per unit mass of low-mass X-ray binaries (LMXBs), the progenitors of MSPs, is orders of magnitude larger in GCs than in the GF (Katz, 1975; Clark, 1975). Although many more LMXBs can be assembled in GCs, the mass-transfer processes can be interrupted by subsequent encounters (Verbunt & Freire, 2014). Such intricate dynamics could potentially lead to differences in the properties of MSPs in GCs compared to those in the GF.
The sample sizes of the currently known populations of MSPs in the GF and in GCs are comparable, which allows a reasonable comparison of the properties between these two populations. In a recent study, Lee et al. (2023) performed a systematic comparison of the rotational, orbital, and X-ray properties of MSPs in GCs and the GF. They found that MSPs in GCs generally rotate more slowly than those in the GF. There is also an indication that the surface magnetic fields of GC MSPs are stronger than those of GF MSPs. These findings are consistent with the scenario that the recycling processes of GC MSPs were interrupted by secondary encounters, leading to shortened epochs for both angular momentum transfer and possible magnetic field decay.
Based on the photometric concentrations, GCs can be classified into core-collapsed (CCed) and non-core-collapsed (Non-CCed) (Harris 1996, 2010 edition). A core collapse in a GC is likely a result of gravothermal instability (cf. Lynden-Bell & Wood 1968), which can significantly affect the kinematic properties.
While the number of X-ray sources in GCs generally correlates with the primary encounter rate \(\Gamma\)\({}^{1}\) (Pooley et al., 2003), Bahramian et al. (2013) found that CCed GCs have fewer X-ray sources than Non-CCed GCs for the same value of \(\Gamma\) (see Figure 9 in Bahramian et al. (2013)). This might indicate that the dynamical status of CCed GCs is different from that of Non-CCed GCs, which can leave an imprint on the evolution of compact binaries. Therefore, it is reasonable to speculate that the properties of GC MSPs may be further diversified between CCed and Non-CCed GCs.
Footnote 1: \(\Gamma\propto\rho_{c}^{1.5}r_{c}^{2}\), where \(\rho_{c}\) and \(r_{c}\) are the density and radius of the cluster core respectively.
Motivated by the aforementioned findings, we aim to explore potential differences in the properties of pulsars within CCed and Non-CCed GCs by conducting a statistical analysis of selected parameters. In Section 2, we describe our procedure for preparing the data for analysis. The results of statistical analysis are given in Section 3 and their implications will be discussed in Section 4.
## 2 Data Preparation
First, we selected a sample of 280 pulsars in 38 different GCs from the Australia Telescope National Facility (ATNF) Pulsar Catalogue (Manchester et al., 2005, ver. 1.70). In this work, we collected the following parameters from the Catalogue: the rotational period \(P_{\rm rot}\), the orbital period \(P_{b}\), and the radio luminosity in L-band, \(L_{\rm 1.4GHz}\). In addition, we adopted the X-ray luminosities \(L_{\rm x}\) (0.3-8 keV) of 56 X-ray emitting MSPs from Table 2 in Lee et al. (2023).
Observationally, it is a common practice to classify whether a GC is CCed or Non-CCed based on its surface brightness profile (e.g. Trager et al., 1995; Harris, 1996; Rivera Sandoval et al., 2018). Owing to the increased stellar density towards the cluster centre, a GC is defined as CCed if its surface brightness profile exhibits a power law until the limit of observational resolution (Rivera Sandoval et al., 2018; Trager et al., 1995). On the other hand, Non-CCed GCs typically exhibit a flattened profile towards their centres and follow a King profile (Trager et al., 1995; King, 1966).
In Section 3.1, we adopted the classifications given by Harris (1996, 2010 version) in determining whether a GC is CCed or Non-CCed. Using these labels, we divided our samples accordingly and compared their properties.
## 3 Statistical Analysis & Results
### Core-Collapsed GCs versus Non-Core-Collapsed GCs
We conducted a detailed statistical analysis to compare the aforementioned selected properties of pulsars in CCed and Non-CCed GCs. For each population, we first constructed the unbinned empirical cumulative distribution functions (eCDFs) of the parameters, which are shown in Figure 1. By visual inspection, these properties appear to differ between the two populations. To quantify the possible differences, we used a two-sample Anderson-Darling (A-D) test to investigate whether they are significant. In this work, we consider the difference between two eCDFs to be significant if the \(p\)-value inferred from the A-D test is \(<0.05\). The results of the A-D test are summarized in Table 1.
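For readers who wish to reproduce this kind of comparison, the sketch below shows a two-sample A-D test and an unbinned eCDF in Python with SciPy; the arrays are synthetic stand-ins for the catalogued \(P_{\rm rot}\) values, not the actual data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for P_rot (ms) of the two GC classes.
p_cced = rng.lognormal(np.log(5.2), 0.5, size=65)
p_noncced = rng.lognormal(np.log(4.5), 0.5, size=214)

# Two-sample Anderson-Darling test; note SciPy clips the returned
# significance level to the range [0.001, 0.25].
res = stats.anderson_ksamp([p_cced, p_noncced])
print(res.statistic, res.significance_level)

def ecdf(x):
    """Unbinned empirical CDF: sorted values vs. cumulative fraction."""
    xs = np.sort(x)
    return xs, np.arange(1, xs.size + 1) / xs.size
```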
We found that the distributions of \(P_{\rm rot}\) and \(L_{\rm 1.4GHz}\) in CCed GCs and Non-CCed GCs are significantly different, with \(p\)-values inferred from the A-D test of 0.003 and 0.014, respectively. From the distributions of \(P_{\rm rot}\) shown in the upper-right panel of Figure 1, one can see that the pulsars in CCed GCs generally rotate more slowly than those in Non-CCed GCs. The medians of \(P_{\rm rot}\) in the CCed and Non-CCed populations are 5.24 ms and 4.45 ms, respectively.

For \(L_{\rm 1.4GHz}\), the distributions of these two groups of GC pulsars are clearly different (lower-left panel of Figure 1): the pulsars in the CCed GCs are more powerful radio emitters. The medians of \(L_{\rm 1.4GHz}\) in CCed and Non-CCed GCs are 4.29 mJy kpc\({}^{2}\) and 1.4 mJy kpc\({}^{2}\), respectively.
Figure 1 suggests that \(P_{b}\) of the pulsars in CCed GCs are shorter, indicating that they have tighter orbits compared to those in Non-CCed GCs. This finding is consistent with our understanding that pulsars with longer orbital periods in CCed GCs are more likely to have been disrupted by dynamical interactions. However, the \(p\)-value obtained from the A-D test is 0.24, which falls short of our pre-defined criterion for claiming a significant difference between the two groups. This result may be due to the small sample size.
While Lee et al. (2023) compared the MSP properties between the GF and GC populations and identified differences between them, they did not separately compare GF MSPs with those in CCed GCs and Non-CCed GCs. To complement the analysis conducted by Lee et al. (2023) as well as our aforementioned investigations, we further compared MSP properties among the populations in the GF, CCed GCs, and Non-CCed GCs.
For the comparison with the MSP properties in the GF, we followed the same selection criterion as Lee et al. (2023), selecting pulsars with \(P_{\rm rot}<20\) ms in all three populations (i.e. GF, CCed GCs, and Non-CCed GCs). This procedure avoids including non-recycled GF pulsars in this part of the analysis. The eCDFs of \(P_{\rm rot}\), \(P_{b}\), \(L_{\rm 1.4GHz}\), and \(L_{\rm x}\) are shown in Figure 2. The results of the A-D tests are summarized in Table 1.
From the distribution of \(P_{\rm rot}\), it is obvious that the rotation of MSPs in the GF is significantly faster than in CCed and Non-CCed GCs. Moreover, the difference between CCed GCs and the GF is larger than that between Non-CCed GCs and the GF. We also find that the \(P_{b}\) of GF MSPs is significantly longer than in GCs, regardless of whether they are CCed or Non-CCed. All these findings align with the scenario suggested by Lee et al. (2023), which posits that intracluster dynamics result in the formation of more tightly bound binaries and the interruption of the recycling process.
For the distributions of luminosities in X-rays and radio, we found statistically significant differences between GF and GC MSPs (see Table 1). However, given the current sample, it is difficult to rule out the possibility that such differences result from the observational bias between the GF and GCs (see the discussion in Section 4).
Since \(P_{\rm rot}\) and \(L_{\rm 1.4GHz}\) of the MSPs in CCed GCs are found to be significantly different from those in Non-CCed GCs and the GF, we further examined their distributions by computing kernel density estimates (KDEs). The results are shown in Figure 3. The panel for \(P_{\rm rot}\) clearly shows that the peaks of the density distributions shift systematically towards larger values from the GF (which lacks dynamical interactions) to the CCed GCs (which have the largest disruption rates among the three populations; see Table 2). The peaks of the \(P_{\rm rot}\) KDEs for the GF, Non-CCed GCs, and CCed GCs are at 3.6 ms, 4.6 ms, and 5.0 ms, respectively. For \(L_{1.4\rm GHz}\), the KDEs of the GF and CCed GC populations peak at 1.8 mJy kpc\({}^{2}\) and 2.8 mJy kpc\({}^{2}\), respectively. In the case of Non-CCed GC MSPs, it is interesting to note that there appear to be two peaks in the \(L_{1.4\rm GHz}\) KDE, located at 1.3 mJy kpc\({}^{2}\) and 8.0 mJy kpc\({}^{2}\). However, there are only 7 Non-CCed GC MSPs with \(L_{1.4\rm GHz}\gtrsim 3\) mJy kpc\({}^{2}\) in the current sample, which does not allow us to determine whether such a multi-modal distribution is genuine or simply a fluctuation due to the small sample.
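A minimal sketch of how such a KDE peak can be located (Gaussian kernel with SciPy's default Scott's-rule bandwidth; the input array is a random placeholder for the catalogued periods or luminosities):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
logp = np.log10(rng.lognormal(np.log(5.0), 0.4, size=100))  # stand-in for log10 P_rot

kde = gaussian_kde(logp)                      # Gaussian kernel density estimate
grid = np.linspace(logp.min() - 0.3, logp.max() + 0.3, 512)
peak = 10.0 ** grid[np.argmax(kde(grid))]     # mode of the smoothed density
print(f"KDE peak at {peak:.1f} ms")
```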
Verbunt & Freire (2014a) compared the fraction of isolated pulsars in GCs with the corresponding disruption rate \(\gamma\propto\rho_{c}^{0.5}r_{c}^{-1}\), where \(\rho_{c}\) and \(r_{c}\) represent the central density and core radius, respectively (cf. Tab. 1 in Verbunt & Freire 2014a). In their work, they considered a sample of only 14 GCs. Since our sample is now almost three times larger, it is legitimate to revisit this comparison. For computing \(\gamma\), we adopted \(\rho_{c}\) and \(r_{c}\) from Harris (1996, 2010 edition). In Table 2, we compare the numbers of isolated pulsars \(N_{s}\) and binary pulsars \(N_{b}\) in 37 GCs with their corresponding \(\gamma\). GLIMPSE01 is excluded from this part of the analysis because we cannot find its structural parameters in the literature.
We proceeded to examine whether there is any correlation between the fraction of isolated pulsars \(f_{s}=N_{s}/(N_{s}+N_{b})\) and \(\gamma\) with the non-parametric Spearman's rank test, which yields a \(p\)-value of 0.014. This indicates that the correlation between these two quantities is significant, prompting us to perform a regression analysis to obtain an empirical relation between \(f_{s}\) and \(\gamma\). However, in view of the small pulsar populations in most GCs, we notice that \(f_{s}\) is very sensitive to \(N_{s}\) and \(N_{b}\). In particular, many of the GCs have \(f_{s}=0\) (Table 2).
To address this issue, we note that Laplace smoothing is a well-established technique for handling categorical data with small sample sizes (e.g. Manning et al. 2008; Gelman et al. 2013). By adding a smoothing parameter \(\alpha\) to the observed counts, the method stabilizes the estimates and avoids zero empirical probabilities. With Laplace smoothing, we obtained the smoothed estimate of \(f_{s}\) as \(\tilde{f}_{s}=\frac{N_{s}+\alpha}{N_{s}+N_{b}+2\alpha}\), with \(\alpha\) taken to be 1.
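The smoothing itself is a one-liner; the example rows below are taken from Table 2 (M 15 and M 10):

```python
import numpy as np

def smoothed_isolated_fraction(n_s, n_b, alpha=1.0):
    """Laplace-smoothed isolated-pulsar fraction,
    f_s = (N_s + alpha) / (N_s + N_b + 2 * alpha)."""
    n_s, n_b = np.asarray(n_s, float), np.asarray(n_b, float)
    return (n_s + alpha) / (n_s + n_b + 2.0 * alpha)

# M 15: N_s = 8, N_b = 1 -> 0.818...; M 10: N_s = 0, N_b = 2 -> 0.25.
print(smoothed_isolated_fraction([8, 0], [1, 2]))
```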
In Figure 4, we show the scatter plot of \(\tilde{f}_{s}\) against log \(\gamma\) for our sample. It is obvious that the disruption rates of CCed GCs are generally larger than those of Non-CCed GCs. Furthermore, GCs with \(\tilde{f}_{s}\gtrsim 0.5\) are predominantly CCed GCs with \(\gamma\) more than ten times larger than the conventional reference level in M4. These findings are fully consistent with the results reported by Verbunt & Freire
Figure 1: Comparisons of eCDFs of the selected pulsar properties between CCed GCs and Non-CCed GCs. The bracketed numbers in the legends show the corresponding sample sizes.
(2014a). By fitting a linear model \(\tilde{f}_{s}=a\log\gamma+b\) to the data, with each GC weighted by the number of detected pulsars, we found best-fit parameters of \(a=0.12\pm 0.05\) and \(b=0.38\pm 0.05\) (\(1\sigma\) uncertainties) for this empirical relation. To test whether the result of the linear regression is sensitive to the adopted smoothing parameter, we repeated the analysis by varying \(\alpha\) from 2 to 5. We found that the results obtained with different \(\alpha\) values all lie within the 95% confidence band shown in Figure 4 for the case of \(\alpha=1\).
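The rank correlation and the count-weighted fit can be reproduced along the following lines. Only six Table 2 rows are shown for brevity (the published fit uses all 37 GCs), and the exact weighting scheme of the paper is our assumption.

```python
import numpy as np
from scipy import stats

# Rows from Table 2: (N_b, N_s, gamma) for 47 Tuc, M 15, M 30,
# Terzan 5, NGC 6752, Terzan 1.
n_b = np.array([19, 1, 2, 24, 1, 0])
n_s = np.array([10, 8, 0, 20, 8, 7])
gamma = np.array([6.57, 8.89, 25.42, 13.00, 18.81, 12.13])

f_tilde = (n_s + 1.0) / (n_s + n_b + 2.0)     # Laplace smoothing, alpha = 1
log_gamma = np.log10(gamma)

rho, p = stats.spearmanr(log_gamma, f_tilde)  # non-parametric rank correlation
# Weighted least squares: numpy's w multiplies the residuals, so weighting
# each GC by its pulsar count means passing sqrt(N_s + N_b).
a, b = np.polyfit(log_gamma, f_tilde, deg=1, w=np.sqrt(n_s + n_b))
print(rho, p, a, b)
```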
### Alternative Classification by Unsupervised Clustering
While the aforementioned analyses show that the \(P_{\rm rot}\) and \(L_{\rm 1.4GHz}\) of the MSPs in CCed GCs and Non-CCed GCs are significantly different, possible ambiguity in the conventional CCed/Non-CCed classification can hamper the robustness of this conclusion. As mentioned in Section 2, this classification is determined by the structure of the brightness profiles. In case the central part of a GC is poorly resolved, the CCed/Non-CCed classification is subject to uncertainties.
This concern is reflected by the central concentration parameter \(c\) given in Harris (1996, 2010 version), which is defined as the logarithm of the ratio between the tidal radius \(r_{t}\) and the core radius \(r_{c}\). \(c\) is deduced from surface brightness profile fitting (Trager et al., 1995; King, 1966). For most of the CCed GCs, no reasonable fit can be obtained and an upper bound of \(c=2.5\) is adopted instead (cf. Trager et al., 1995; Harris, 1996). While \(c\) provides a simple parameter for characterizing the structure, we note that our sample spans the ranges \(c=0.79-2.07\) and \(c=1.63-2.5\) for Non-CCed and CCed GCs, respectively. Such heavily overlapping ranges of \(c\) indicate that the CCed/Non-CCed classification in Harris (1996, 2010 version) is not without ambiguity.
\begin{table}
\begin{tabular}{l c||c c c} \hline & CCed vs Non-CCed\({}^{1}\) & CCed vs GF\({}^{2}\) & Non-CCed vs GF\({}^{2}\) & GCs vs GF\({}^{3}\) \\ \hline \(P_{\rm rot}\) & 0.003 & 0.002 & 0.023 & 0.001 \\ \(P_{b}\) & 0.242 & 0.001 & \(9\times 10^{-5}\) & \(10^{-7}\) \\ \(L_{\rm 1.4GHz}\) & 0.014 & 0.001 & 0.004 & 0.001 \\ \(L_{x}\) & 0.315 & 0.137 & 0.078 & 0.030 \\ \hline \end{tabular}
\end{table}
Table 1: Null hypothesis probabilities of A-D test for comparing \(P_{\rm rot}\), \(P_{b}\), \(L_{\rm 1.4GHz}\) and \(L_{x}\) among CCed GCs, Non-CCed GCs and GF.
Figure 2: Comparisons of eCDFs of the selected pulsar properties among CCed GCs, Non-CCed GCs, and GF. The bracketed numbers in the legends show the corresponding sample sizes.
On the other hand, the disruption rates \(\gamma\) in Table 2 might provide a more quantitative measure of the dynamical status of a GC (Verbunt & Freire, 2014). For example, in Figure 4, we have seen that the fraction of isolated pulsars \(f_{\rm s}\) is generally correlated with \(\gamma\), though the spread of the data from the best-fit linear model is rather wide.
Individually, the parameters \(\gamma\) and \(c\) might not allow an unambiguous classification of GCs. This motivates us to examine whether the classification can be improved by combining both parameters.
To derive the classification rules in the plane spanned by \(\gamma\) and \(c\), we employed the Gaussian Mixture Model (GMM) algorithm. GMM is a probabilistic model that assumes the data originate from a mixture of a finite number of Gaussian components. We considered a set of models with the number of mixture components ranging from 1 to 9. We utilized the CRAN Mclust package (version 5.4.6; Scrucca et al., 2016) for the model fitting and computed the likelihood, \(L\), of each model. Model selection is based on the Bayesian information criterion (BIC; Schwarz, 1978): BIC = \(2\ln L-k\ln N\), where \(k\) and \(N\) are the number of estimated parameters and the sample size, respectively. We found that the optimal BIC requires three 2-dimensional Gaussian components to model our adopted data in the \(\gamma-c\) plane. In Figure 5, the three groups clustered by GMM are represented by symbols of different colours. According to their brightness concentration, we refer to these groups as "Sparse (S)", "Intermediate (I)", and "Dense (D)" hereafter. The corresponding labels for each GC are given in Table 2. Under this classification scheme, the S group consists purely of Non-CCed GCs and the D group comprises only CCed GCs. The I group contains a mixture of both Non-CCed and CCed GCs.
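The paper performs this step with the R package Mclust; an analogous selection in Python with scikit-learn would look like the sketch below. The data array here is a random stand-in for the 37 (\(\log\gamma\), \(c\)) pairs, and note that scikit-learn's BIC uses the opposite sign convention to the one quoted above, so it is minimised rather than maximised.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Stand-in for the (log10 gamma, c) pairs of the 37 GCs in Table 2.
X = np.column_stack([rng.normal(0.5, 0.6, 37), rng.normal(1.8, 0.4, 37)])

# Fit 1-9 components and keep the model with the best (lowest) BIC.
models = [GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(X) for k in range(1, 10)]
best = min(models, key=lambda m: m.bic(X))
labels = best.predict(X)          # group assignment (e.g. S / I / D) per GC
print(best.n_components, np.bincount(labels))
```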
These three groups are well separated in the \(\gamma-c\) plane without much overlap (Figure 5). The averaged isolated pulsar fractions \(\langle f_{\rm s}\rangle\) in the S, I, and D groups are 0.14, 0.24, and 0.64, respectively, which increase progressively. This suggests that the alternative classification is not unreasonable, and it prompts us to re-examine the possible differences in pulsar properties among these three groups. The comparisons of their eCDFs of \(P_{\rm rot}\), \(P_{b}\), \(L_{\rm 1.4GHz}\), and \(L_{x}\) are shown in Figure 6. The corresponding \(p\)-values inferred from the A-D test are summarized in Table 3.
In comparing \(P_{\rm rot}\) between the S and D groups, we found that the pulsars in the D group generally rotate more slowly than those in the S group, and the difference is statistically significant (\(p=7\times 10^{-3}\)). The distribution of \(L_{\rm 1.4GHz}\) of the S group is also significantly different from that of the D group (\(p=0.013\)), with the pulsars of the D group significantly more luminous in L-band than those of the S group. These results are fully consistent with those inferred from the comparison between the Non-CCed and CCed populations presented in Section 3.1 (cf. Figure 1 and Table 1).
For the I group, which consists of both Non-CCed and CCed GCs, the \(P_{\rm rot}\) distribution is very similar to that of the S group (see Figure 6). Examining the composition of this group, we found that \(\sim 90\%\) of the pulsars in the I group originate from Non-CCed GCs, dominated by the populations in 47 Tuc and Terzan 5, which might account for the similarity. Nevertheless, despite the fact that the \(L_{\rm 1.4GHz}\) sample of the I group is also dominated by Non-CCed pulsars, with a contribution of 83%, its distribution is comparable to that of the D group.
We would like to point out that the selection effect on the sample
Figure 4: Relation between the fraction of isolated pulsars estimated by Laplace smoothing, \(\tilde{f}_{s}\), and the disruption rate log \(\gamma\). The symbol sizes scale with the actual number of observed isolated pulsars. The straight line represents the best-fit linear model, with the 95% confidence band illustrated by the shaded region.
Figure 3: Kernel density estimates for the distributions of log \(P_{\rm rot}\) and log \(L_{\rm 1.4GHz}\) of MSPs in CCed GCs, Non-CCed GCs, and the GF.
of \(L_{1.4\rm GHz}\) might prevent us from drawing any firm conclusion in comparing this property among the three groups. While the sample size for \(P_{\rm rot}\) is 279, only 59 pulsars have measurements of \(L_{1.4\rm GHz}\) available for analysis. This effect is particularly obvious in the I group, whose sample size is reduced from 179 for \(P_{\rm rot}\) to 29 for \(L_{1.4\rm GHz}\). This can be accounted for by the fact that only sufficiently bright pulsars can have their radio fluxes reliably measured. It is uncertain whether the \(L_{1.4\rm GHz}\) distribution of the I group will remain comparable to that of the D group when fainter pulsars are included. Pulsar surveys with improved sensitivity might help to resolve this issue in the future.
## 4 Summary & Discussion
Motivated by the recent work by Lee et al. (2023) which has identified the differences in various properties between the GC and GF pulsar populations, we proceed to investigate whether the variation of intracluster dynamics between CCed and Non-CCed GCs can further diversify the pulsar properties (see Figure 1 and Figure 2).
We found that pulsars in CCed GCs generally rotate slower than those in Non-CCed GCs. This is consistent with the notion that secondary encounters in CCed GCs are enhanced (Verbunt & Freire, 2014), which presumably results in the prevalence of isolated MSPs and fewer X-ray binaries than in Non-CCed GCs with comparable primary encounter rates (Bahramian et al., 2013; Verbunt & Freire, 2014; Kremer et al., 2022). The increased binary disruption efficiency in CCed GCs likely interrupts the angular momentum transfer at an earlier stage of recycling. Consequently, the slower rotation of pulsars in CCed GCs is not unexpected (see also Ivanova et al., 2008).
If the recycling process is halted at an earlier epoch, not only does a more slowly rotating pulsar result, but we should also expect a stronger surface magnetic field than in their counterparts in Non-CCed GCs, because the magnetic field decay due to mass transfer is suppressed (see the discussion in Lee et al., 2023). For pulsars, the strength of the dipolar surface magnetic field can be estimated from the rotational period \(P_{\rm rot}\) and the corresponding spin-down rate \(\dot{P}_{\rm rot}\), namely \(B_{s}\approx\sqrt{\frac{3c^{3}I}{8\pi^{2}R_{NS}^{6}}\dot{P}_{\rm rot}P_{\rm rot}}\), where \(c\) is the speed of light, \(I\) is the moment of inertia, and \(R_{NS}\) is the radius of the neutron star. However, such estimation for the pulsars in GCs is complicated by the accelerations in the gravitational potential of a GC, which can bias the measurement of \(\dot{P}_{\rm rot}\). Up to now, there are only a handful of GC pulsars with their intrinsic \(\dot{P}_{\rm rot}\) estimated
\begin{table}
\begin{tabular}{c c c c c c c} \hline Name & \(N_{b}\) & \(N_{s}\) & \(r_{c}\) & \(\log\rho_{c}\) & \(\gamma\) & Class \\ & & & (pc) & \((L_{\odot}\,{\rm pc}^{-3})\) & \((\gamma_{\rm M4})\) & \\ \hline \multicolumn{7}{c}{Non-core-collapsed GCs} \\ \hline
47 Tuc & 19 & 10 & 0.36 & 4.88 & 6.57 & I \\ M 10 & 2 & 0 & 0.77 & 3.54 & 0.67 & S \\ M 12 & 2 & 0 & 0.79 & 3.23 & 0.42 & S \\ M 13 & 4 & 2 & 0.62 & 3.55 & 0.52 & S \\ M 14 & 5 & 0 & 0.79 & 3.36 & 0.25 & S \\ M 2 & 6 & 0 & 0.32 & 4.00 & 1.05 & I \\ M 22 & 2 & 2 & 1.33 & 3.63 & 0.59 & S \\ M 28 & 10 & 4 & 0.24 & 4.86 & 7.88 & I \\ M 3 & 6 & 0 & 0.37 & 3.57 & 0.62 & I \\ M 4 & 1 & 0 & 1.16 & 3.64 & 1.00 & I \\ M 5 & 6 & 1 & 0.44 & 3.88 & 1.02 & I \\ M 53 & 4 & 1 & 0.35 & 3.07 & 0.21 & S \\ M 71 & 5 & 0 & 0.63 & 2.83 & 0.40 & S \\ M 92 & 1 & 0 & 0.26 & 4.30 & 2.53 & I \\ NGC 1851 & 9 & 6 & 0.09 & 5.09 & 12.44 & I \\ NGC 5986 & 1 & 0 & 0.47 & 3.41 & 0.40 & S \\ NGC 6440 & 4 & 4 & 0.14 & 5.24 & 13.53 & I \\ NGC 6441 & 3 & 6 & 0.13 & 5.26 & 10.93 & I \\ NGC 6517 & 3 & 14 & 0.06 & 5.29 & 26.82 & I \\ NGC 6539 & 1 & 0 & 0.38 & 4.15 & 1.55 & I \\ NGC 6652 & 2 & 0 & 0.1 & 4.48 & 6.71 & I \\ NGC 6712 & 1 & 0 & 0.76 & 3.18 & 0.29 & S \\ NGC 6749 & 2 & 0 & 0.62 & 3.30 & 0.35 & S \\ NGC 6760 & 1 & 1 & 0.34 & 3.89 & 1.35 & I \\ Omega Cen & 8 & 10 & 2.37 & 3.15 & 0.12 & S \\ Terzan 5 & 24 & 20 & 0.16 & 5.14 & 13.00 & I \\ \hline \multicolumn{8}{c}{Core-collapsed GCs} \\ \hline M 15 & 1 & 8 & 0.14 & 5.05 & 8.89 & D \\ M 30 & 2 & 0 & 0.06 & 5.01 & 25.42 & D \\ M 62 & 9 & 0 & 0.22 & 5.16 & 9.82 & I \\ NGC 362 & 5 & 1 & 0.18 & 4.74 & 5.85 & I \\ NGC 6342 & 1 & 1 & 0.05 & 4.97 & 27.76 & D \\ NGC 6397 & 2 & 0 & 0.05 & 5.76 & 254.79 & D \\ NGC 6522 & 0 & 6 & 0.05 & 5.48 & 55.13 & D \\ NGC 6544 & 3 & 0 & 0.05 & 6.06 & 275.92 & I \\ NGC 6624 & 2 & 10 & 0.06 & 5.30 & 36.40 & D \\ NGC 6752 & 1 & 8 & 0.17 & 5.04 & 18.81 & D \\ Terzan 1 & 0 & 7 & 0.04 & 3.85 & 12.13 & D \\ \hline \end{tabular}
* **Note**: Number of binary pulsars \(N_{b}\) and isolated pulsars \(N_{s}\) from Manchester et al. (2005). Core radius \(r_{c}\) and central luminosity density \(\rho_{c}\) from Harris (1996, 2010 edition). Disruption rates \(\gamma\propto\rho_{c}^{0.5}r_{c}^{-1}\) from Eq. 2 in Verbunt & Freire (2014), normalized to the value of M4. The class labels in the seventh column represent the groups of Sparse (S), Intermediate (I), and Dense (D) as determined by GMM (see Sec. 3.2).
\end{table}
Table 2: Updated statistics of single and binary pulsars as well as the structural parameters of GCs.
Figure 5: Unsupervised classification of GCs in a plane spanned by the disruption rate \(\gamma\) and the central concentration parameter \(c\) with the method of 2-dimensional Gaussian Mixture Model (GMM).
\begin{table}
\begin{tabular}{l c c c} \hline & S vs D & S vs I & D vs I \\ \hline \(P_{\rm rot}\) & 0.007 & 0.898 & 0.0002 \\ \(P_{\rm b}\) & 0.612 & 0.214 & 0.472 \\ \(L_{1.4\rm GHz}\) & 0.013 & 0.012 & 0.902 \\ \(L_{\rm x}\) & 0.844 & 0.522 & 0.406 \\ \hline \end{tabular}
\end{table}
Table 3: Null hypothesis probabilities of A-D test for comparing \(P_{\rm rot}\), \(P_{b}\), \(L_{1.4\rm GHz}\) and \(L_{\rm x}\) among S, I and D groups as classified by GMM.
(cf. Tab. 4 in Lee et al., 2023) and therefore, we are not able to directly compare the \(B_{s}\) of the pulsars in CCed and Non-CCed GCs.
On the other hand, as a pulsar radiates by tapping its rotational energy, the radiation power should be proportional to the spin-down power \(\dot{E}\), which is expressed as \(\dot{E}=4\pi^{2}I\dot{P}_{\rm rot}P_{\rm rot}^{-3}\propto B_{s}^{2}P_{\rm rot}^{-4}\), where \(I\) is the moment of inertia. Therefore, the radio luminosity \(L_{1.4\rm GHz}\) can be treated as a proxy for probing \(B_{s}\) of the GC pulsars.
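For a concrete sense of scale, the two relations above can be evaluated for fiducial neutron-star parameters (\(I=10^{45}\) g cm\({}^{2}\), \(R_{NS}=10^{6}\) cm); the example \(P_{\rm rot}\) and \(\dot{P}_{\rm rot}\) values below are illustrative, not taken from the sample.

```python
import numpy as np

C = 2.998e10                       # speed of light (cm/s)
I_NS, R_NS = 1.0e45, 1.0e6         # moment of inertia (g cm^2), radius (cm)

def spin_down_power(p, pdot):
    """Edot = 4 pi^2 I Pdot / P^3, in erg/s."""
    return 4.0 * np.pi**2 * I_NS * pdot / p**3

def surface_field(p, pdot):
    """Dipolar B_s = sqrt(3 c^3 I P Pdot / (8 pi^2 R^6)), in gauss."""
    return np.sqrt(3.0 * C**3 * I_NS * p * pdot / (8.0 * np.pi**2 * R_NS**6))

p, pdot = 5.0e-3, 1.0e-20          # a 5 ms pulsar with an intrinsic Pdot of 1e-20
print(f"Edot ~ {spin_down_power(p, pdot):.2e} erg/s, "
      f"B_s ~ {surface_field(p, pdot):.2e} G")
```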
Our analysis indicates that the \(L_{1.4\rm GHz}\) of the pulsars in CCed GCs is significantly higher than that of those in Non-CCed GCs (cf. Figure 1). Together with the fact that the \(P_{\rm rot}\) values of CCed GC pulsars are longer than those in Non-CCed GCs, we can infer that the \(B_{s}\) of CCed GC pulsars are stronger than those in Non-CCed GCs, in line with our aforementioned speculation.
To investigate whether the difference in \(L_{1.4\rm GHz}\) is genuine, we further checked whether it could be a result of an observational effect: for a GC close to us, a flux-limited survey will uncover fainter sources than in more distant GCs. To examine this issue, we compared the distances \(d\) of the CCed and Non-CCed GCs in our sample; the results are shown in Figure 7. The medians of \(d\) for CCed and Non-CCed GCs are 6.8 kpc and 6.9 kpc, respectively. With the A-D test, we do not find any
Figure 6: Comparisons of eCDFs of the selected pulsar properties among S, I and D groups as determined by GMM. The bracketed numbers in the legends show the corresponding sample sizes.
Figure 7: Comparison of eCDFs of the distance between CCed GCs and Non-CCed GCs in our sample.
significant difference between these two eCDFs (\(p>0.05\)). Hence, we conclude that the difference in \(L_{\rm 1.4GHz}\) between CCed and Non-CCed GCs is genuine.
On the other hand, the A-D test indicates that the differences in \(L_{\rm 1.4GHz}\) and \(L_{x}\) between the GF and GC MSP populations are statistically significant. However, we notice that many GF MSPs are located in our proximity. The medians of \(d\) for radio-selected MSPs in GCs and GF in our sample are found to be 6.9 kpc and 1.7 kpc, respectively. A-D test yields a \(p\)-value of \(\sim 10^{-22}\) which indicates a very significant difference between their distributions of \(d\). Consequently, the excess at the lower end of the distribution of \(L_{\rm 1.4GHz}\) for GF MSPs (Figure 2) can be a result of observational bias. This bias also affects the comparison of \(L_{x}\) between MSPs in GCs (median \(d=4.9\) kpc) and GF (median \(d=1.2\) kpc) in our sample.
In conclusion, our results demonstrate that CCed and Non-CCed GC pulsar populations exhibit differences in their rotation rates and radio luminosities, with CCed GC pulsars rotating slower and having higher radio luminosities. This supports the idea that the recycling process is halted earlier in CCed GCs, leading to stronger surface magnetic fields and slower rotations.
To further examine the effect of dynamical interactions on the structure of the surface magnetic field, we compare the radio beam sizes of MSPs in the GF, CCed GCs, and Non-CCed GCs. The beam sizes can be estimated by \(\Delta\phi=W_{50}/P_{\rm rot}\), where \(W_{50}\) is the pulse width at 50% of the peak, in units of time, as obtained from the ATNF catalog (Manchester et al., 2005). The comparisons of \(\Delta\phi\) among the three populations are given in Figure 8.
It is interesting to note that the \(\Delta\phi\) of MSPs in the GF is smaller than those in Non-CCed and CCed GCs. With the A-D test, we find that \(\Delta\phi\) of the GF population is significantly smaller than those of Non-CCed MSPs (\(p\)-value = 0.01) and CCed MSPs (\(p\)-value = 0.02). We also note that the \(\Delta\phi\) from Non-CCed GCs is apparently smaller than that from CCed GCs, although the A-D test does not yield a \(p\)-value below our pre-defined criterion.
These results conform with the expectation that different recycling histories can lead to different surface magnetic field structures. Chen & Ruderman (1993) argued that mass accretion could reduce the polar cap radius and hence the size of the open field line region. This notion is supported by Kramer et al. (1998), which found the open angle of GF MSPs is smaller than that expected from the dipolar geometry (cf. Fig. 12 in their paper).
The fact that the \(\Delta\phi\) of GF MSPs is smaller than those of GC MSPs is consistent with the scenario that the accretion phase of GC MSPs is shortened by dynamical disruption, as suggested by Lee et al. (2023). Since the disruption rate is generally higher in CCed GCs (see Table 2 & Figure 4), the MSPs in CCed GCs should have a larger beam size than those in Non-CCed GCs. However, a firm conclusion is precluded by the current sample size. With more samples available in the future, the comparison of \(\Delta\phi\) between these two classes of GC MSPs should be revisited.
We have to point out caveats in the comparison of \(\Delta\phi\) presented here. First, owing to the complexity of radio pulse profiles, \(W_{50}\) should be considered a poor estimator of the size of the emission beam. Second, the beam size should be a function of observing frequency, but such information is not available in the ATNF catalog. A more accurate determination of the emission geometry should be derived from fitting polarization data. Therefore, we strongly encourage a dedicated study comparing the emission geometry of MSPs in the GF and GCs with radio polarization, which can help to scrutinize our hypothesis.
Lastly, we would like to emphasize that all the aforementioned discussions are based on the conventional CCed/Non-CCed classification of GCs, which relies on photometric measurements (Trager et al., 1995; Harris, 1996). In Section 3.2, we have pointed out a possible ambiguity of this conventional classification scheme. Bianchini et al. (2018) have also mentioned that there is no robust connection between the photometric central concentration and the dynamical state of a GC.
By combining the central concentration parameter \(c\) and the dynamical measure of the disruption rate \(\gamma\), we have shown that the GCs in our sample can be divided into three groups (Figure 5). The two groups maximally separated in the \(\gamma-c\) plane, namely the S and D groups, comprise purely Non-CCed GCs and CCed GCs, respectively (cf. Table 2). In comparing the distributions of \(P_{\rm rot}\) and \(L_{\rm 1.4GHz}\) between these two groups, the differences remain statistically significant. On the other hand, the intermediate I group contains a mixture of CCed and Non-CCed GCs. Both the flux-limited sample and the strong bias of the I group towards pulsars from a few Non-CCed GCs (e.g. 47 Tuc and Terzan 5) preclude any conclusive comparison with the other two groups.
This has also raised a concern that the classification scheme of GCs might not be unique. In view of the complex evolution of GCs (e.g. Ivanova et al., 2006; Hong et al., 2017), the description of both dynamical status and structure of GCs can be more complicated than the binary classification as simple as CCed or Non-CCed. For example, by examining the radial distribution of blue stragglers, Ferraro et al. (2012) have shown that the dynamical age of GCs can be divided into three groups. With a more comprehensive classification scheme proposed by further studies, the differences in pulsar properties among different groupings can be reexamined.
## Acknowledgements
K.O. is supported by the National Research Foundation of Korea grants 2022R1F1A1073952 and 2022R1A6A3A13071461. C.Y.H. is supported by the research fund of Chungnam National University and by the National Research Foundation of Korea grant 2022R1F1A1073952. J.T. is supported by the National Key Research and Development Program of China (grant No. 2020YFC2201400) and the National Natural Science Foundation of China (NSFC, grant No. 12173014). A.K.H.K. is supported by the National Science and Technology Council of Taiwan through grant 111-2112-M-007-020.
Figure 8: Comparison of eCDFs of the estimates of radio beam sizes \(\Delta\phi\) of MSPs in CCed GCs, Non-CCed GCs, and the GF.
P.H.T. is supported by NSFC grant No. 12273122 and the China Manned Space Project (No. CMS-CSST-2021-B09). K.L.L. is supported by the National Science and Technology Council of the Republic of China (Taiwan) through grant 111-2636-M-006-024, and he is also a Yushan Young Fellow supported by the Ministry of Education of the Republic of China (Taiwan).
## Data Availability
The data underlying this article were accessed from Chandra Data Archive ([https://cda.harvard.edu/chaser/](https://cda.harvard.edu/chaser/)) and ATNF ([https://www.atnf.csiro.au/research/pulsar/psrcat/](https://www.atnf.csiro.au/research/pulsar/psrcat/)).
|
2310.11801 | Towards Quantum Dynamics Simulation of Physical Systems: A Survey | After the emergence of quantum mechanics and the realisation that it is needed
for an accurate understanding of physical systems, numerical methods were used
to carry out quantum mechanical treatments. With increasing system correlations
and size, numerical methods proved rather inefficient, and there was a need to
simulate quantum mechanical phenomena on actual quantum computing hardware.
Now, with noisy quantum computing machines that have been built and made
available to use, realising quantum simulations is edging towards a practical
reality. In this paper, we review the progress that has been made in the
field of quantum simulations on actual quantum computing hardware and discuss
some fascinating fields into which it has expanded. We also review the
different software tool-sets available to date, which lay the foundation for
realising quantum simulations in a much more comprehensive manner. | Rikteem Bhowmick, Navaneeth Krishnan Mohan, Devesh Kumar, Rohit Chaurasiya, Nixon Patel | 2023-10-18T08:45:35Z | http://arxiv.org/abs/2310.11801v1 | # Towards Quantum Dynamics Simulation of Physical Systems: A Survey
###### Abstract
After the emergence of quantum mechanics and the realisation that it is needed for an accurate understanding of physical systems, numerical methods were used to carry out quantum mechanical treatments. With increasing system correlations and size, numerical methods proved rather inefficient, and there was a need to simulate quantum mechanical phenomena on actual quantum computing hardware. Now, with noisy quantum computing machines that have been built and made available to use, realising quantum simulations is edging towards a practical reality. In this paper, we review the progress that has been made in the field of quantum simulations on actual quantum computing hardware and discuss some fascinating fields into which it has expanded. We also review the different software tool-sets available to date, which lay the foundation for realising quantum simulations in a much more comprehensive manner.
## 1 Introduction
The study of quantum mechanics dates back to the 19th century, with Young's double-slit experiment and the seminal work on black-body radiation by Gustav Kirchhoff and Ludwig Boltzmann in 1862. Around 1900, Max Planck suggested the quantization of electromagnetic energy to explain black-body radiation, and in 1905 Albert Einstein postulated that light is made of individual photons, invoking Planck's hypothesis to explain the photoelectric effect. In 1926, Erwin Schrodinger formulated the wave equation, which laid the foundation of our understanding of the dynamics of a quantum system.
To explain the dynamics of an atom coupled to an electromagnetic field, the _master equation_ was formulated, governing the equation of motion for the reduced density operator of an atom interacting with a multi-mode EM field, thus building the foundation of the study of quantum mechanical light-matter interaction. Once quantum mechanical treatment became essential, numerical methods for solving the relevant differential equations were devised wherever analytical methods proved intractable. Moreover, with the increasing system size and number of variables required to solve a quantum system accurately, numerical methods were either inefficient or classical computing machines (HPCs and supercomputers) were incapable of handling the large computations required, which is especially true when there is strong correlation between the parties of the system[1]. In 1982, Richard P. Feynman described his realisation that in order to replicate the dynamics of a quantum system, the underlying computing machine must itself work by quantum mechanical laws; only then can one accurately simulate the dynamical evolution of any quantum system with appropriate resource utilisation.
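As a minimal illustration of the kind of computation such numerical treatments involve, the sketch below integrates a Lindblad master equation for a single driven, decaying two-level atom with a simple Euler stepper; the Rabi rate, decay rate, and step size are arbitrary illustrative choices, not taken from any particular reference.

```python
# Minimal sketch: Euler integration of a Lindblad master equation for a
# driven, spontaneously decaying two-level atom (illustrative parameters).
import numpy as np

sm = np.array([[0, 1], [0, 0]], dtype=complex)      # lowering operator |0><1|
H = 0.5 * (sm + sm.conj().T)                        # drive Hamiltonian (Rabi rate 1)
gamma = 0.2                                         # spontaneous-emission rate

def lindblad_rhs(rho):
    """drho/dt = -i[H, rho] + gamma (L rho L^+ - {L^+ L, rho}/2), with L = sm."""
    comm = -1j * (H @ rho - rho @ H)
    diss = gamma * (sm @ rho @ sm.conj().T
                    - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return comm + diss

rho = np.diag([0.0, 1.0]).astype(complex)           # start in the excited state
dt, steps = 0.01, 500
for _ in range(steps):
    rho = rho + dt * lindblad_rhs(rho)
print("excited-state population at t = 5:", rho[1, 1].real)
```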
In this paper, we explore how quantum simulations have been progressing towards reality in recent years and how recent realisations have mimicked the behavior of quantum systems with available universal programmable quantum computing devices as well as special-purpose quantum computing machines. We also discuss a few areas where quantum simulations have drastically changed our understanding and perspective in recent years. We have structured our paper in the following manner: in Section 2 we discuss quantum simulation for simulating quantum systems and processes and describe the different classes of quantum simulation, namely _digital_, _analog_, and _digital-analog_ simulations. Section 3 discusses various transformation techniques used for mapping a physical system to qubits. In Section 5, we discuss different algorithms for Hamiltonian
2305.04704 | Operational Markovianization in Randomized Benchmarking | A crucial task to obtain optimal and reliable quantum devices is to quantify
their overall performance. The average fidelity of quantum gates is a
particular figure of merit that can be estimated efficiently by Randomized
Benchmarking (RB). However, the concept of gate-fidelity itself relies on the
crucial assumption that noise behaves in a predictable, time-local, or
so-called Markovian manner, whose breakdown can naturally become the leading
source of errors as quantum devices scale in size and depth. We analytically
show that error suppression techniques such as Dynamical Decoupling (DD) and
Randomized Compiling (RC) can operationally Markovianize RB: i) fast DD reduces
non-Markovian RB to an exponential decay plus longer-time corrections, while on
the other hand, ii) RC generally does not affect the average, but iii) it
always suppresses the variance of such RB outputs. We demonstrate these effects
numerically with a qubit noise model. Our results show that simple and
efficient error suppression methods can simultaneously tame non-Markovian noise
and allow for standard and reliable gate quality estimation, a fundamentally
important task in the path toward fully functional quantum devices. | Pedro Figueroa-Romero, Miha Papič, Adrian Auer, Min-Hsiu Hsieh, Kavan Modi, Inés de Vega | 2023-05-08T13:37:18Z | http://arxiv.org/abs/2305.04704v2 | # Operational Markovianization in Randomized Benchmarking
###### Abstract
A crucial task to obtain optimal and reliable quantum devices is to quantify their overall performance. The average fidelity of quantum gates is a particular figure of merit that can be estimated efficiently by Randomized Benchmarking (RB). However, the concept of gate-fidelity itself relies on the crucial assumption that noise behaves in a predictable, time-local, or so-called Markovian manner, whose breakdown can naturally become the leading source of errors as quantum devices scale in size and depth. We analytically show that error suppression techniques such as Dynamical Decoupling (DD) and Randomized Compiling (RC) can operationally _Markovianize_ RB: _i)_ fast DD reduces non-Markovian RB to an exponential decay plus longer-time corrections, while on the other hand, _ii)_ RC generally does not affect the average, but _iii)_ it always suppresses the variance of such RB outputs. We demonstrate these effects numerically with a qubit noise model. Our results show that simple and efficient error suppression methods can simultaneously tame non-Markovian noise and allow for standard and reliable gate quality estimation, a fundamentally important task in the path toward fully functional quantum devices.
## I Introduction
The characterization of noise in quantum information processors will remain a necessary and unavoidable task to build fault-tolerant devices. Some of the most common sets of techniques to achieve this can be said to fall within an interval comprising tomographic techniques on one end and Randomized Benchmarking (RB) techniques on the other: while the former can provide a detailed description of noise with an exponential measurement and sampling overhead in system size, the latter can estimate coarse average error rates efficiently [1; 2].
Due to the simplicity of RB-based protocols, they have become ubiquitous both for small systems [3; 4; 5; 6; 7; 8], as well as a steppingstone for scalable techniques for more ambitious learning of quantum noise [9; 10; 11; 12; 13]. However, the manageable analytical behavior of the RB data (namely, an exponential decay capturing average gate-fidelities, with State Preparation and Measurement (SPAM) errors isolated as multiplicative and offset constants) is guaranteed only under highly simplified and unrealistic assumptions about the noise. More realistic regimes have actively been under investigation [14; 15] and can still benefit from RB's simplicity.
Arguably, however, one of the most difficult simplifications to relax in any characterization technique is the one assuming that noise can be associated with each individual quantum gate separately. In reality, noise is e.g., introduced by the nature of the quantum device itself or by inherent limits in the control that the experimenter has, and errors occurring at a given time can propagate and affect future errors. These kinds of correlations and their effects are generally known as non-Markovianity [16; 17; 18] and can be fully described in the quantum realm through multi-time generalizations of quantum channels [19; 20; 21]. A signature aspect of non-Markovian noise in RB is the non-trivial non-exponential decay of the data rendered by it [22; 23; 24; 25; 26], which makes the information about the noise and its correlations remarkably hard to analyze. Moreover, the key feature of RB of robustness against SPAM errors ceases to hold if any of these are correlated. This is depicted schematically in Figure 1.
While characterizing and better understanding non-Markovian noise is an active area of
research [27; 28; 29; 30; 31; 32; 33], various control and error suppression techniques, such as Dynamical Decoupling (DD) [34] and Randomized Compiling (RC) [35], have been shown to remove non-Markovian noise effects to a statistically significant extent. This can further be viewed through a resource-theoretic lens [36] as keeping local-system information in exchange for the consumption of temporal correlations. Generally, in the Markovian regime, both DD and RC are well-known to enhance the quality of quantum computations with a low resource overhead [37; 38; 39; 40], and combining RB with quantum control has previously been done successfully, e.g., to optimize the quantum control itself [41], or to demonstrate enhanced average gate-fidelities [42].
**Main Results (informal)**: In this manuscript, we show that for a broad class of non-Markovian noise models, DD can effectively and efficiently _Markovianize_ RB, i.e., remove non-Markovian non-exponential deviations, allowing for a straightforward prediction of enhanced average gate-fidelities with low overhead. In other words, DD converts non-Markovian correlations into more tractable quantum noise. We exemplify this effect numerically on a qubit with an _XY4_ sequence. Moreover, we analyze the effect of tailoring non-Markovian noise into Pauli noise within the accessible subsystem of interest in non-Markovian RB, finding that, while RC does not Markovianize RB in the same sense that DD does, the uncertainty in the outputs gets suppressed, so that average outputs can be almost as precise as analytical ones. We exemplify these effects with the same numerical model for a single qubit strongly interacting with another.
Our results show that coherent noise suppression and decoupling schemes in RB can both efficiently Markovianize and allow the extraction of enhanced average gate fidelities that accurately capture, and do not overestimate, all statistically relevant error rates.
## II RB and operational modeling of quantum noise
### Setup and notation
Detail on the notation we employ in the main text, as well as in the derivations for our results, can be read in full in Appendix A.1. Here we will consider a composite Hilbert space labeled SE, comprised of a system of interest S, and an environment \(\mathsf{E}\), which we assume to be inaccessible. \(\mathsf{E}\) could simply be a subset of idle or ancillary qubits within a quantum device [26]. We will label their respective dimensions as \(d_{\mathsf{S}}=2^{n_{s}}\), for \(n_{s}\) qubits in \(\mathsf{S}\), and \(d_{\mathsf{E}}\).

Figure 1: **Randomized Benchmarking (RB) under Markovian and non-Markovian noise**: Cartoons of sample circuits of the RB protocol for a given initial state \(\rho_{\text{S}}\), sequence of randomly sampled quantum gates \(\{\mathcal{G}_{i}\}\) with “undo” (compiled inverse of the sequence) gate \(\mathcal{G}_{m+1}\), and final measurement \(M\), subject to (a) Markovian noise, where errors are uncorrelated with each other and can be associated to each individual gate, and (b) non-Markovian noise, where an external (quantum) system –i.e., an environment– can serve as a memory, correlating errors in time and propagating their information to the final measurement. Example outputs of RB associated with (a) and (b) are shown on the top and bottom right, correspondingly; detail in notation and meaning will be explained throughout the manuscript, here we only point out the distinction that non-Markovian RB decays are generally non-trivially non-exponential, with the standard notion of an individual average gate-fidelity being ill-defined.
We present our results with the uniformly distributed \(n_{s}\)-qubit Clifford group as our gate-set and denote individual gates with the symbol \(\mathcal{G}_{i}\), \(i=1,2,\ldots\), although the general treatment with finite groups can be seen in the Appendices. We denote initial states as \(\rho_{\mathsf{S}}\) and measurements as \(M\), both on system \(\mathsf{S}\), and which can be chosen arbitrarily, as long as the noiseless expectation \(\operatorname{tr}(M\rho_{\mathsf{S}})\) is known. While the RB protocol can be read in detail in Appendix A.3, the three main input components in a RB experiment are \(i\)) the gate-set \(\{\mathcal{G}_{i}\}_{i=1}^{m}\) for a given integer \(m\), \(ii\)) the initial state \(\rho_{\mathsf{S}}\), and \(iii\)) the measurement element or observable \(M\).
We will denote by \(\mathbf{E}_{\mathcal{G}}\) the uniform average over the gate-set \(\{\mathcal{G}_{i}\}\), and reserve the notation \(\mathcal{F}_{m}\) for a so-called average-sequence fidelity, defined here as \(\mathcal{F}_{m}:=\mathbf{E}_{\mathcal{G}}\operatorname{tr}[M^{\prime} \mathcal{G}_{m+1}^{\prime}\circ\mathcal{G}_{m}^{\prime}\circ\cdots\mathcal{G }_{1}^{\prime}(\rho_{\mathsf{S}}^{\prime})]\), where the primed terms denote the real (noisy) implementations of the corresponding initial state, measurement, and gates, and where \(\mathcal{G}_{m+1}:=\mathcal{G}_{m}^{-1}\circ\cdots\mathcal{G}_{1}^{-1}\).
Finally, we refer to a quantum channel being a "\(\mathsf{X}\) channel", or as a "channel acting on \(\mathsf{X}\)", when it maps inputs from space \(\mathsf{X}\) to outputs in \(\mathsf{X}\).
### RB: Markovian vs. non-Markovian
Up to a few more parameters, like the number of gates sampled or the number of samples to generate, a full-fledged framework for RB exists [15] when the underlying noise is assumed to be effectively uncorrelated in time, i.e., Markovian. This allows us to estimate average gate fidelities, which despite being a rough figure of merit, are an essential component for the characterization of noise in a quantum device. In the Markovian, time-stationary, and gate-independent noise regime, the outputs of a RB experiment over gate sequences of length \(m\) are estimates that can be fitted to a function of the form
\[\mathcal{F}_{m}=A\;p^{m}+B, \tag{1}\]
so-called an average sequence fidelity, where here \(p\lesssim 1\) is a _quality factor_ capturing the noise solely associated to gates, and both \(0\leq A,B\leq 1\) are constants isolating the SPAM errors. The average gate-fidelity [43] of the noisy gates with respect to the ideal ones, \(\mathsf{F}_{\mathsf{avg}}\), is then related to the quality factor \(p\) simply as (see e.g., [4])
\[\mathsf{F}_{\mathsf{avg}}=p+\frac{(1-p)}{d_{\mathsf{S}}}. \tag{2}\]
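As a concrete illustration (not part of the original protocol specification), fitting Eq. (1) to sequence-fidelity data and converting the quality factor into an average gate-fidelity via Eq. (2) can be sketched as follows, here on synthetic data generated from the model itself:

```python
# Sketch: fit the Markovian RB decay F_m = A p^m + B and extract F_avg.
# The "measured" fidelities are synthetic (A = B = 0.5, p = 0.99, small noise).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
d_s = 2                                              # single qubit
m = np.arange(1, 101)
f_m = 0.5 * 0.99**m + 0.5 + rng.normal(0, 0.002, m.size)

def decay(m, A, p, B):
    return A * p**m + B

(A, p, B), _ = curve_fit(decay, m, f_m, p0=[0.5, 0.98, 0.5])
f_avg = p + (1 - p) / d_s                            # Eq. (2)
print(f"p = {p:.4f}, F_avg = {f_avg:.4f}")
```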
The Markovian assumption is effectively equivalent to an environment, \(\mathsf{E}\), dissipating or _forgetting_ any information of its interaction with system \(\mathsf{S}\) at any given time, thereby just introducing noise locally at such time and reducing the purity of the noisy outputs, effectively as if \(\mathsf{E}\) was not there at all. But several factors in the advancement and scaling up of quantum devices, including being able to probe smaller timescales and having larger systems strongly interacting with each other, make the Markov approximation implausible. Non-Markovianity, i.e., the presence of (non-negligible) temporal correlations, implies that anything other than \(\mathsf{S}\), in particular other qubits, can play the role of an \(\mathsf{E}\)_with memory_ propagating undesired noise correlations in time [44], and that noisy processes cannot be thought of as individual quantum channels associated independently to the ideal quantum gates.
A formal definition of non-Markovianity can be seen in Appendix A.2; in particular, we work within the process tensor framework [19], which generalizes the notion of stochastic processes to quantum theories [45] and contains both the classical notion and the various criteria for quantum non-Markovianity known hitherto [20; 21].
It has been shown in [25] that the generalization of Eq. (1) to the non-Markovian gate-independent [46] noise case, has the (generally non-exponential) form
\[\mathcal{F}_{m}=\operatorname{tr}\left\{M\operatorname{tr}_{\mathsf{E}}\left[ \left(\mathcal{Q}_{m,A}^{\prime}+\mathcal{Q}_{m,B}^{\prime}\right)\rho \right]\right\}, \tag{3}\]
where the \(\mathcal{Q}_{m,\bullet}^{\prime}\) are generalizations of quality factors, so-called _quality maps_ of associated sequence length \(m\), and here \(\rho\) is a noisy \(\mathsf{SE}\) initial state depending on the prepared \(\rho_{\mathsf{S}}\) (which can generally get correlated with \(\mathsf{E}\)). Quality maps are, generically, Completely Positive (CP) maps which can be further understood as averaged multi-linear maps (so-called process-tensors [21]) taking the set of \(m\) ideal digital
gates \(\{\mathcal{G}_{i}\}_{i=1}^{m}\) as input and being uniformly averaged over all of these. More generally than Markovian quality factors, they encode both SPAM and fidelity-like information of the noise across a whole RB process, reducing to \(A\), \(B\) and \(p\) of Eq. (1) in the Markovian limit. In general, Eq. (3) is difficult to work with even in the time-stationary [47] noise regime, as opposed to the Markovian case, although it has been previously studied in [24; 25].
To derive both the decays in Eq. (1) and Eq. (3), and an explicit mathematical expression of all the quantities involved, we need a mathematical model of both Markovian and non-Markovian noise.
### Modeling of quantum noise
Markovian noise can be generically modeled by defining noisy gates, now possibly non-unitary quantum channels, as \(\tilde{\mathcal{G}}:=\Lambda\circ\mathcal{G}\), where \(\mathcal{G}\) is the ideal digital gate, and \(\Lambda\) is any CP (trace non-increasing) map on \(\mathbf{S}\) which we refer to as a noise channel. In general, noise channels could depend on which specific gate is applied, \(\tilde{\mathcal{G}}_{i}=\Lambda_{\mathcal{G}}\circ\mathcal{G}_{i}\), or at which time-step \(i\) in a gate-sequence such gate is applied, \(\tilde{\mathcal{G}}_{i}=\Lambda_{i}\circ\mathcal{G}\) (or, of course, depend on both, \(\tilde{\mathcal{G}}_{i}=\Lambda_{\mathcal{G}_{i}}\circ\mathcal{G}\)). To generalize this to non-Markovian (temporally correlated) noise we have to take \(\mathsf{E}\) into account. That is, we now model the whole \(\mathsf{SE}\) noisy map \(\tilde{\mathcal{G}}:=\Lambda\circ(\mathcal{I}_{\mathsf{E}}\otimes\mathcal{G})\), with \(\Lambda\) now a CP map on \(\mathsf{SE}\). Sequential applications \(\tilde{\mathcal{G}}_{j}\circ\cdots\circ\tilde{\mathcal{G}}_{k}\) can give rise to temporal correlations among the corresponding \(\Lambda\) noise maps, as formalized by the process tensor framework [19; 21].
The average sequence fidelity in Eq. (3) corresponds simply to the explicit evaluation of the uniform average over Clifford gates, \(\mathbf{E}_{\mathcal{G}}\), in
\[\mathcal{F}_{m}:=\mathbf{E}_{\mathcal{G}}[\text{tr}[M\,\text{tr}_{\mathsf{E} }\circ\Lambda\circ\mathcal{G}_{m+1}\circ\cdots\circ\Lambda\circ\mathcal{G}_{ 1}(\rho)]]\,, \tag{4}\]
where \(\mathcal{G}_{m+1}:=\mathcal{G}_{1}^{-1}\circ\cdots\circ\mathcal{G}_{m}^{-1}\) and \(\rho:=\Lambda(\rho_{\mathsf{E}}\otimes\rho_{\mathsf{S}})\) for given fiducial initial states of \(\mathbf{S}\) and \(\mathsf{E}\), \(\rho_{\mathsf{S}}\) and \(\rho_{\mathsf{E}}\), respectively. This is such that when \(\Lambda=\mathcal{I}\), the average sequence fidelity equals \(\text{tr}(M\rho_{\mathsf{S}})\). Implicitly, together the gate-independent and time-stationary noise assumptions mean \(\Lambda_{\mathcal{G}_{i}}=\Lambda_{i}=\Lambda\), for all gates \(\mathcal{G}_{i}\) and time-steps \(i\). A circuit representation of a RB sequence sample with non-Markovian time-non-stationary noise can be seen in Fig. 2; terms \(\Lambda_{0}\) and \(\Lambda_{m+1}\) are interpreted jointly as SPAM noise terms.
As detailed in Appendix A.4, in the Markovian case, Eq. (4) leads to Eq. (1) with \(p\) depending solely on the average gate-fidelity of \(\Lambda\), as in Eq. (11), and \(A\), \(B\) depending solely on SPAM errors, as in Eq. (12). More generally, in [24], Eq. (4) was shown to be of the form of Eq. (3), with the quality maps \(\mathcal{Q}_{m,A}^{\prime}\) and \(\mathcal{Q}_{m,B}^{\prime}\) given explicitly as in Eq. (10) of Appendix B. This implies as well that, in the Markovian, time-stationary noise approximation regime, Eq. (3) reduces to the exponential decay of Eq. (1).
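A direct Monte Carlo evaluation of Eq. (4) in the Markovian limit makes this reduction tangible. The sketch below assumes gate-independent depolarizing noise and uses Haar-random single-qubit gates as a stand-in for the Clifford average (either suffices here, as both form unitary 2-designs); with \(\rho_{\mathsf{S}}=M=|0\rangle\!\langle 0|\) and depolarizing strength \(\lambda\), one expects \(p=1-\lambda\), \(A=(1-\lambda)/2\) and \(B=1/2\).

```python
# Monte Carlo sketch of Eq. (4) with Markovian depolarizing noise; the sampled
# averages should follow F_m = A p^m + B with p = 1 - lam.
import numpy as np
from scipy.stats import unitary_group

lam = 0.02                                           # depolarizing strength

def depolarize(rho):
    return (1 - lam) * rho + lam * np.eye(2) / 2

def sequence_fidelity(m, rng):
    rho = np.diag([1.0, 0.0]).astype(complex)        # rho_S = |0><0|, also M
    total = np.eye(2, dtype=complex)
    for _ in range(m):
        g = unitary_group.rvs(2, random_state=rng)
        total = g @ total                            # composed ideal gate
        rho = depolarize(g @ rho @ g.conj().T)
    undo = total.conj().T                            # G_{m+1} = (G_m ... G_1)^{-1}
    rho = depolarize(undo @ rho @ undo.conj().T)
    return rho[0, 0].real                            # tr(M rho)

rng = np.random.default_rng(0)
for m in (1, 5, 10, 20, 40):
    f = np.mean([sequence_fidelity(m, rng) for _ in range(200)])
    print(m, round(f, 4))
```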
We wish to be able to exploit the simplicity of the RB protocol in spite of noise being non-Markovian. In the following, we show that precisely this reduction of the non-Markovian case to the Markovian one can be effectively achieved operationally by interleaving DD sequences within the RB protocol.
## III Markovianization of RB with DD
The way DD works is by applying a sequence of pulses via a control Hamiltonian, \(H_{\text{ctrl}}\), in a way that effectively averages out undesired coupling terms in the total free-evolution Hamiltonian, \(H\), which for finite-dimensional systems other than having a finite largest singular value, can be arbitrary [48]. For infinite-dimensional \(\mathsf{E}\), at least a frequency cut-off is required [48], but a broader classification for which Hamiltonians are amenable to DD exists [49; 50]. Generally, DD can effectively decouple, albeit partially, a wide class of \(\mathsf{SE}\) Hamiltonians. Here we will
consider Universal Dynamical Decoupling (UDD), which is universal in the sense that it averages out errors up to the first order in the Magnus expansion of the evolution [34], independently of \(H\). Concretely, we refer to a unitary group \(\mathds{V}\) on \(\mathds{S}\) as a universally decoupling group whenever \(\sum_{v\in\mathds{V}}vXv^{\dagger}=O_{\mathds{E}}\otimes\mathds{1}_{\mathds{S}}\), for any \(\mathsf{SE}\) operator \(X\) and some \(\mathsf{E}\) operator \(O_{\mathsf{E}}\) and \(\mathds{1}_{\mathds{S}}\) the identity on \(\mathds{S}\).
In the setting of RB, we can model free evolution at any given time-step as the underlying noise \(\Lambda\) on the whole \(\mathsf{SE}\), so _Markovianizing_ Eq. (3) can effectively be accomplished by applying UDD between RB gates. As detailed in Appendix C, we consider ideal pulses generated by \(H_{\mathsf{ctrl}}(t)=\frac{\pi}{2}\sum_{k}\delta(t-t_{k})v_{k}\) at times \(t_{1}<t_{2}<\ldots<t_{\eta}\), where \(\delta\) is a Dirac delta and \(\eta=|\mathds{V}|\) is the number of elements in the decoupling group, i.e., infinitely strong instantaneous pulses with decoupling operators \(v_{k}\) at times \(t_{k}\).
Let us label the noise maps by an elapsed-time \(t\) between subsequent applications of any two gates as \(\Lambda^{(t)}\). Denoting the channels associated to the ideal decoupling operators as \(\mathcal{V}(\cdot):=v(\cdot)v^{\dagger}\), these pulses can be applied at evenly-spaced time-intervals \(\tau_{\mathsf{dd}}\) through \(\mathcal{S}_{\mathsf{dd}}^{(\tau_{\mathsf{dd}})}:=\odot_{\mathcal{V}\in \mathds{V}}\left(\mathcal{V}\circ\Lambda^{(\tau_{\mathsf{dd}})}\circ\mathcal{ V}^{\dagger}\right)\). Notice that we are taking the application of DD pulses to be strictly evenly spaced. As we will interleave these among the random RB gates, we will necessarily have free-evolution (noise) on the edge terms between the application of gates; taking either the first or last pulse as the identity, we can equivalently write
\[\mathcal{S}_{\mathsf{dd}}^{(\tau_{\mathsf{dd}})}=\Lambda^{(\tau_{\mathsf{dd} }/2)}\odot_{\tilde{\mathcal{V}}}\left(\tilde{\mathcal{V}}\circ\Lambda^{(\tau _{\mathsf{dd}})}\circ\tilde{\mathcal{V}}^{\dagger}\right)\circ\Lambda^{(\tau_{ \mathsf{dd}}/2)}, \tag{5}\]
with \(\tilde{\mathcal{V}}\) being non-identity elements of \(\mathds{V}\).
For example, for a single-qubit, \(\mathds{V}=\{\mathds{1},X,Y,Z\}\), the Pauli group, so that \(\mathcal{S}_{\mathsf{dd}}^{(\tau_{\mathsf{dd}})}=\mathcal{Z}\circ\Lambda^{( \tau_{\mathsf{dd}})}\circ\mathcal{Z}\circ\mathcal{Y}\circ\Lambda^{(\tau_{ \mathsf{dd}})}\circ\mathcal{Y}\circ\Lambda^{(\tau_{\mathsf{dd}})}\), where curly letters here are the maps associated to each Pauli operator, and which can be seen to be equivalent to the so-called \(XY4\) and \(XZ4\) sequences.
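The universal-decoupling identity quoted above is straightforward to verify numerically. The following sketch (with the convention, an assumption of ours, that \(\mathsf{S}\) is the first tensor factor) checks that averaging conjugation by the single-qubit Pauli group on \(\mathsf{S}\) leaves only an identity on the \(\mathsf{S}\) factor of an arbitrary \(\mathsf{SE}\) operator.

```python
# Numerical check: (1/4) sum_v (v (x) 1_E) X (v (x) 1_E)^† = 1_S/2 (x) tr_S(X),
# i.e. the single-qubit Pauli group is a universally decoupling group.
import numpy as np

I = np.eye(2, dtype=complex); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)

rng = np.random.default_rng(0)
X_se = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))  # arbitrary SE operator

avg = sum(np.kron(v, I) @ X_se @ np.kron(v, I).conj().T for v in (I, X, Y, Z)) / 4
tr_s = np.trace(X_se.reshape(2, 2, 2, 2), axis1=0, axis2=2)    # partial trace over S
assert np.allclose(avg, np.kron(np.eye(2) / 2, tr_s))
print("All non-identity S components were averaged out.")
```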
If we label by \(\tau_{\mathsf{fb}}\) the original elapsed time between application of RB Clifford gates, and if \(\tau_{\mathsf{fb}}\) further turns out to be the _minimum_ time between application of two subsequent gates on the given device, then we would generally need to consider no less than \(\tau_{\mathsf{dd}}=2\tau_{\mathsf{fb}}\), due to the edge terms in Eq. (5). Other considerations could come into play here in order to optimize \(\tau_{\mathsf{dd}}\), e.g., the fact that Clifford gates are composite or in general whether DD pulses, in particular, can be applied with \(\tau_{\mathsf{dd}}<\tau_{\mathsf{fb}}\).
We can model the application of ideal UDD within a RB protocol by interleaving a single sequence \(\mathcal{S}_{\mathsf{dd}}^{(\tau_{\mathsf{dd}})}\) among each ideal gate of a RB circuit, i.e., \(\mathcal{S}_{\mathsf{dd}}^{(\tau_{\mathsf{dd}})}\circ\mathcal{G}_{m+1}\circ \cdots\circ\mathcal{S}_{\mathsf{dd}}^{(\tau_{\mathsf{dd}})}\circ\mathcal{G}_ {1}\circ\mathcal{S}_{\mathsf{dd}}^{(\tau_{\mathsf{dd}})}\), as depicted in Fig. 3. Notice this will change the circuit depth, and in particular it will modify the total time between application of the RB gates to \(\eta\,\tau_{\mathsf{dd}}\); nevertheless, we obtain the following:
**Result 1**.: _Let \(\Lambda^{(t)}=\mathrm{e}^{t\mathcal{L}}\), where \(\mathcal{L}(\cdot):=-i[H,\cdot]+\mathcal{D}(\cdot)\) for a \(\mathsf{SE}\) time-independent Hamiltonian \(H\) and a dissipator \(\mathcal{D}=\mathcal{D}_{\mathsf{S}}+\mathcal{D}_{\mathsf{E}}\) with only local \(\mathsf{S}\) and/or \(\mathsf{E}\) contributions, and let \(\gamma\) be the diagonal matrix of relaxation rates of \(\mathcal{D}_{\mathsf{S}}:=\sum_{k}\gamma_{k}[L_{k}(\cdot)L_{k}^{\dagger}- \frac{1}{2}\{L_{k}^{\dagger}L_{k},\cdot\}]\), for some (unit-less) traceless and orthonormal operators \(L_{k}\). Then for \(\tau_{\mathsf{dd}}\ll 1/\operatorname{tr}(\gamma)\), the average sequence fidelity of length \(m\) for a RB experiment under time-stationary noise with single interleaved UDD sequences of the form of Eq. (5) satisfies,_
\[\mathcal{F}_{m}=A\;p_{\tau_{\mathsf{dd}}}^{m}+B+\mathcal{O}(\tau_{\mathsf{dd}} ^{2}), \tag{6}\]
_where \(A,B\) are SPAM constants for fixed \(\tau_{\mathsf{dd}}\), and the
quality factor is an \(\mathcal{O}(\tau_{\mathsf{dd}})\) term_
\[p_{\tau_{\mathsf{dd}}}=1-\eta\,\tau_{\mathsf{dd}}\,\frac{\operatorname{tr}( \gamma)}{d_{\mathsf{S}}-\frac{1}{d_{\mathsf{S}}}}, \tag{7}\]
_where \(\eta=|\mathds{V}|\) is the number of elements of the decoupling group \(\mathds{V}\)._
The proof is shown in Appendix C and the value of all constants is shown explicitly in Eq. (C21-C23).
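To get a feel for the magnitudes involved, Eqs. (6)-(7) can be evaluated directly; the sketch below plugs in the relaxation rates that will be used in Section V.1 (\(\gamma_{0}=0.002\), \(\gamma_{1}=0.007\)) for the single-qubit \(XY4\) case.

```python
# First-order quality factor of Eq. (7) and the implied F_avg of Eq. (2),
# for the XY4 decoupling group (eta = 4) on a single qubit (d_S = 2).
d_s, eta = 2, 4
tr_gamma = 0.002 + 0.007                   # gamma_0 + gamma_1 from Section V.1
for tau_dd in (0.015, 0.03, 0.06):         # tau_fb/2, tau_fb, 2*tau_fb at tau_fb = 0.03
    p = 1 - eta * tau_dd * tr_gamma / (d_s - 1 / d_s)
    f_avg = p + (1 - p) / d_s
    print(f"tau_dd = {tau_dd:5.3f}: p = {p:.6f}, F_avg = {f_avg:.6f}")
```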
The main message in Result 1 is that ideal-pulse UDD Markovianizes RB for a broad class of non-Markovian noise models, in the sense of reducing the average sequence fidelity of Eq. (3) to that of Eq. (1), to the first order in the DD sequence interval time \(\tau_{\mathsf{dd}}\). Notice the limiting case of \(\tau_{\mathsf{dd}}\to 0\) would be that when both the ideal DD pulses _and_ the random gates in the RB sequence are implemented infinitely fast, so that, of course, \(\mathcal{F}_{m}=1\), as no added idling time is being considered.
The continuous dynamical model for the noise as an SE Lindblad evolution encompasses a broad class of noise models, where the SE dynamics are Markovian while the reduced ones on S are non-Markovian. The only restriction is that the dissipator term, \(\mathcal{D}\), cannot have global SE contributions to the first \(\tau_{\mathsf{dd}}\) order for our result to hold, which would be the case if correlated SE information is dissipated. While in Section V.1 we take as an example a two-qubit system, motivated by more realistic two-level defect noise models, relevant in superconducting systems, many other realistic models of non-Markovianity would consider an infinite-dimensional E. We must point out that the only limiting factor to our theory in such cases would be exotic models having parameter domains where the dynamics cannot be dynamically decoupled [51].
In the absence of dissipation terms, \(\mathcal{F}_{m}=1+\mathcal{O}(\tau_{\mathsf{dd}}^{2})\), i.e., the average sequence fidelity is identically one, albeit only to the first \(\tau_{\mathsf{dd}}\) order. Notice that the modeling of noise in Result 1 explicitly takes into account the presence or absence of dissipation terms between applications of RB gates and DD pulses: e.g., Result 1 will hold with a dilated Hamiltonian \(H^{\prime}\) on a larger system \(\mathsf{SEE^{\prime}}\), only if the absence of global dissipation on SE is also implied.
The quality factor \(p_{\tau_{\mathsf{dd}}}\) is derived in Appendix C and it is related to the average gate-fidelity of the channel \(\mathcal{I}+\tau_{\mathsf{dd}}\,\mathcal{D}_{\mathsf{S}}\) with respect to the identity \(\mathcal{I}\); i.e., it quantifies the noise contribution from the dissipator \(\mathcal{D}_{\mathsf{S}}\), which is the sole generator of Markovian noise once the Hamiltonian has been averaged out in S and it is a trace-zero map, not CP in general. The final expression in terms of the relaxation rates, \(\gamma_{k}\), can then be seen to follow.
Higher-order, \(\mathcal{O}\left(\tau_{\mathsf{dd}}^{2}\right)\) terms, albeit still also containing Markovian contributions, will, in general, be non-Markovian; so while non-exponential deviations may get suppressed and allow to fit an exponential, the resulting decay is purely Markovian only to first order. Thus we can interpret the efficacy of digital UDD sequences, in both Markovianization of the average sequence fidelity and improvement in overall average gate-fidelity, mainly in terms of how fast they can be applied, relative to the time between application of the RB gates, \(\tau_{\mathsf{fb}}\). Given that multi-qubit Cliffords are composite gates, it might be possible to implement DD pulses in time-scales \(\tau_{\mathsf{dd}}<\tau_{\mathsf{fb}}\). As we see in Section V.1, in practice this might be the main limiting factor, together with considerations such as non-ideal, finite-width pulses, as well as imperfect applications of these.
Finally, realistic DD cannot be implemented as ideal instantaneous pulses, and these themselves can introduce control errors [52]. Furthermore, it is known that finite-width DD does not achieve perfect decoupling even to first time-order [34]. However, as long as pulses are sufficiently narrow, the noise they introduce is sufficiently small [52] and local, UDD will Markovianize RB to an extent close to the one predicted by Result 1. One can furthermore employ more elaborate DD techniques such as Concatenated DD [48], or devise optimized decoupling sequences, although we do not pursue that here.
## IV RB under Pauli-Twirled Noise
While the Markovianizing effect of DD in RB is somewhat expected, a prominent error-suppression technique that has recently been shown to suppress non-Markovian noise in a statistically significant way [35] is Randomized Compiling (RC) [40; 53]. RC can be understood as the operational way of tailoring arbitrary Markovian noise quantum channels into Pauli channels, which mathematically
corresponds to a mapping known as Pauli-twirling. As opposed to DD, RC is not a quantum control technique, but instead it relies on compiling a set of logically-equivalent circuits (i.e., with no increase in depth and effectively implementing the same quantum operations) where noisy gates are dressed with uniformly sampled random single-qubit Pauli gates; averaging over all such circuits approximately and efficiently implements a Pauli twirl.
Given any \(n_{s}\)-qubit quantum channel \(\Phi\), RC effectively and efficiently maps its \(\chi\)-matrix representation, \(\Phi(\cdot):=\sum_{ik}\chi_{ik}P_{i}\left(\cdot\right)P_{k}\), to \(\Phi^{\mathsf{P}}(\cdot)=\sum_{i}\chi_{ii}P_{i}\left(\cdot\right)P_{i}\), where here \(P\) are \(n_{s}\)-qubit Pauli operators. The channel \(\Phi^{\mathsf{P}}\) is generally a non-unitary (stochastic or incoherent) quantum channel known as a Pauli channel, and the \(\alpha_{i}:=\chi_{ii}\) define a probability distribution called Pauli error rates. It can be seen that average gate-fidelity (and thus RB) estimates precisely a \(\chi_{00}=\alpha_{0}\) term, corresponding to the identity term, \(P_{0}:=\mathds{1}\) (probability of no error happening). Crucially, RC suppresses all off-diagonal terms of the \(\chi\) matrix, including terms associated with coherent errors, which can give worst-case error rates orders of magnitude higher than stochastic errors [54], while leaving average gate-fidelity unchanged.
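For a single-qubit coherent error this is transparent: writing \(U=\sum_{i}c_{i}P_{i}\) gives \(\chi_{ik}=c_{i}c_{k}^{*}\), and twirling keeps only the diagonal \(|c_{i}|^{2}\). A minimal sketch, with an assumed small \(X\) over-rotation as the error:

```python
# Pauli twirl of a coherent single-qubit error: chi_ik = c_i conj(c_k) for
# U = sum_i c_i P_i; twirling keeps only the Pauli error rates |c_i|^2.
import numpy as np

I = np.eye(2, dtype=complex); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I, X, Y, Z]

theta = 0.1
U = np.cos(theta / 2) * I - 1j * np.sin(theta / 2) * X   # small X over-rotation

c = np.array([np.trace(P.conj().T @ U) / 2 for P in paulis])
chi = np.outer(c, c.conj())        # chi matrix of the bare coherent error
alpha = np.abs(c)**2               # diagonal left after the twirl
print("chi_00 before and after twirl:", chi[0, 0].real, alpha[0])  # unchanged
print("off-diagonal |chi_01| before twirl:", abs(chi[0, 1]))       # removed by twirl
```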
For the non-Markovian case, a Pauli-twirl solely on system \(\mathsf{S}\) on any given \(\mathsf{SE}\) quantum channel \(\Lambda\), in the \(\chi\)-matrix representation, takes the form
\[\Lambda^{\mathsf{Ps}}(\cdot):=\sum\chi_{\mu\nu,ii}(R_{\mu}\otimes P_{i})(\cdot) (R_{\nu}^{\dagger}\otimes P_{i}), \tag{8}\]
for some basis operators \(R\) on \(\mathsf{E}\), and coefficients \(\chi_{\mu\nu,ii}\) of the corresponding \(\mathsf{SE}\) Hermitian \(\chi\)-matrix. If we were to trace out \(\mathsf{E}\) on this channel, then, of course, the reduced channel is a Pauli channel, \(\operatorname{tr}_{\mathsf{E}}\Lambda^{\mathsf{Ps}}\left(\rho_{\mathsf{E}} \otimes\cdot\right)=\sum\alpha_{i}(\rho_{\mathsf{E}}^{\mathsf{P}})P_{i}(\cdot )P_{i}\), with \(\alpha_{i}\) being Pauli error-rates dependent on the \(\mathsf{E}\)-reduced output state \(\rho_{\mathsf{E}}^{\mathsf{P}}\). This tracing, however, generally occurs at the end of a computation or any multi-time process.
Operationally, however, the mapping \(\Lambda\mapsto\Lambda^{\mathsf{Ps}}\) can only be implemented by modifying the noisy gates \(\tilde{\mathcal{G}}:=\Lambda\circ(\mathcal{I}_{\mathsf{E}}\otimes\mathcal{G})\), as we don't have direct access to \(\Lambda\). This can be done through a so-called \(\mathcal{G}\)-twisted twirl [12] on system \(\mathsf{S}\), defined as
\[\tilde{\mathcal{G}}(\cdot)\mapsto 4^{-n_{s}}\sum_{P}P\tilde{\mathcal{G}} \left(\mathcal{G}^{\dagger}(P)\left(\cdot\right)\mathcal{G}(P)\right)P, \tag{9}\]
with sum over \(P\) all \(n_{s}\)-qubit ideal Pauli operators, and where (abusing notation) we denote as \(\mathcal{G}^{\dagger}(\cdot):=G^{\dagger}(\cdot)G\) the adjoint map of the noiseless Clifford gate \(\mathcal{G}\). This is depicted schematically in Fig. 4. RC thus approximates \(\mathsf{S}\)-Pauli twirls by randomly sampling \(\mathcal{G}_{i}\)-twisted twirls on all time steps and recompiling the Pauli gates, which can be done with a number of samples much smaller than \(4^{n_{s}}\) for larger \(n_{s}\)[40].
Similar to DD, where we worked with the ideal pulse limit, here we focus on the case where RC fully and perfectly tailors \(\Lambda\) into \(\Lambda^{\mathsf{P}}\). While for a single-qubit this can be done exactly, in general, the tailoring of noise into Pauli noise by RC can be quantified to occur to a large percentage with a small number of samples [53]. We thus point out the following:
**Result 2**.: _For any RB sequence fidelity given as \(\mathsf{f}_{m}[\{\Lambda_{i}\}_{i=0}^{m+1}]:=\operatorname{tr}[M\operatorname {tr}_{\mathsf{E}}\odot_{i=1}^{m+1}(\Lambda_{i}\circ\mathcal{G}_{i})\rho]\) of length \(m\), where here \(\rho:=\Lambda_{0}(\rho_{\mathsf{E}}\otimes\rho_{\mathsf{S}})\), the corresponding average sequence fidelity with \(\mathsf{S}\)-Pauli twirled noise, \(\mathbf{E}_{\mathsf{G}}\mathsf{f}_{m}[\{\Lambda_{i}^{\mathsf{Ps}}\}_{i=0}^{m+ 1}]\), remains in general non-exponential._
_Furthermore, when averaging is over uniformly distributed \(n_{s}\)-qubit Clifford gates,_
\[\mathbf{E}_{\mathsf{G}}\mathsf{f}_{m}\left[\{\Lambda_{i}\}_{i=0}^{m+1}\right] =\mathbf{E}_{\mathsf{G}}\mathsf{f}_{m}\left[\{\Lambda_{0},\Lambda_{m+1}\}\cup \{\Lambda_{i}^{\mathsf{Ps}}\}_{i=1}^{m}\right], \tag{10}\]
_where here \(\cup\) denotes the union of sets. On the other hand,_

\[\mathbf{V}_{\mathcal{G}}\mathrm{f}_{m}\left[\{\Lambda_{i}^{\mathrm{P}_{\mathrm{S }}}\}_{i=0}^{m+1}\right]\leq\mathbf{V}_{\mathcal{G}}\mathrm{f}_{m}\left[\{ \Lambda_{i}\}_{i=0}^{m+1}\right], \tag{11}\]

_where \(\mathbf{V}_{\mathcal{G}}\) is the variance over arbitrary gates \(\mathcal{G}\)._

Figure 4: **RB under non-Markovian \(\mathsf{S}\)-Pauli twirled noise**: In (a), a sample circuit for a standard RB experiment where noise \(\Lambda_{i}^{\mathsf{Ps}}\) is time-non-stationary, non-Markovian, and has been Pauli-twirled on subsystem \(\mathsf{S}\), while in (b) the operational definition for \(\Lambda_{i}^{\mathsf{Ps}}\) is shown explicitly as an average over ideal \(n_{s}\)-qubit Pauli terms \(P\); this is known as a \(\mathcal{G}_{i}\)-twisted twirl [12] on \(\mathsf{S}\), and RC accurately approximates it efficiently by randomly sampling single-qubit Pauli terms and compiling the outputs [53]. Only here we employ the notation \(\mathcal{G}^{\dagger}(\cdot):=G^{\dagger}(\cdot)G\).
The proof can be seen in Appendix D.
The first statement means that Pauli twirling does not Markovianize the average sequence fidelity as DD does, i.e., it does not turn non-Markovian non-exponential RB decays into exponential ones. In particular, this statement holds for any gate set other than the multi-qubit Clifford group, at least as long as it forms a finite group.
On the other hand, the second statement in Eq. (10) holds only for the Clifford group but it essentially means that RC, or any Pauli-noise tailoring technique, would at most have an effect (not necessarily Markovianizing or increasing the average sequence fidelity) on SPAM contributions when it comes to non-Markovian RB data. The reason for this is not RB per se, but rather average gate-fidelity as a figure of merit, as it only takes into account the probability of no error happening on S and not all the other error terms that Pauli twirling eliminates. That is, average gate-fidelity by definition is proportional only to the zeroth (identity) element of the \(\chi\)-matrix; explicitly, see Eq. (D11,D12) in Appendix D.1. This being said, it is still possible that there is a class of noise profiles where it is sufficient to S-Pauli-twirl the SPAM noise in order to Markovianize the average sequence fidelity.
Despite not Markovianizing the average sequence fidelity, Eq. (11) establishes that RC will, in general, reduce the variance of sequence fidelities [55]. The importance of this is twofold, it allows for both a confident diagnosis of non-exponential deviations and for an accurate estimation of meaningful error rates. While mathematically this result can be seen to follow simply because Pauli-twirling removes additive terms to the variance, it can be argued that, physically, the reason is that it reduces the coherence of noise, as precisely the expectation of the squared sequence fidelity is proportional to the so-called average unitarity. The unitarity, \(u\), is a figure of merit quantifying loss of purity due to noise [7], and satisfies \(u=1\) if noise is coherent, i.e., due to a unitary map, and \(u\leq 1\) otherwise. In the Markov case, \(u\geq p^{2}\)[56], which is saturated for depolarizing noise, i.e., Pauli channels with all \(\chi_{ii}\) terms, for \(i\neq 0\), being equal. In the non-Markov case, it is _expected_ as well that S-Pauli-twirling would only decrease or leave the total unitarity unchanged [57]. This, in a sense, was already pointed out in the original unitarity benchmarking proposal of [7], although as also mentioned there, it is less straightforward to give a concrete bound.
Finally, similar to the case of DD, here we have considered an exact implementation of Pauli twirling, i.e., a perfect application of RC. While this is unrealistic, in general RC incurs only a small Pauli sampling overhead to closely approximate an exact Pauli twirl [40]. We now give two concrete numerical examples of our results for both DD and RC within non-Markovian RB.
## V Numerical examples
### XY4 to Markovianize a qubit
As proof-of-principle, we consider one S qubit and one E qubit with time-stationary noise \(\Lambda^{(t)}=\mathrm{e}^{t\mathcal{L}}\) acting jointly on both qubits, where \(\mathcal{L}(\cdot):=-i[H,\cdot]+\mathcal{D}_{\mathrm{S}}(\cdot)\) for a Hamiltonian \(H=JXX+h_{x}(XI+IX)+h_{y}(YI+IY)\) with some constants \(J,h_{x},h_{y}\) and local S-dissipator \(\mathcal{D}_{\mathrm{S}}(\cdot):=\sum_{k}\gamma_{k}[L_{k}(\cdot)L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\cdot\}]\) with \(L_{0}=X\) and \(L_{1}=Z\). While this is an arbitrary dynamical model, similar qubit-to-qubit noise mechanisms are conceivable to arise, e.g., with two-level system defects in superconducting qubits [58], albeit with distinct spin interactions, dissipators, and varying strengths.
We compare interleaved DD pulses given by the \(XY4\) sequence, briefly described below Eq. (5) and depicted in Fig. 3, for DD sequences of \(\tau_{\mathrm{dd}}=\tau/2\), \(\tau\), and \(2\tau\) time-intervals, where we fixed \(\tau:=\tau_{\mathrm{fb}}\) to be the evolution time of the noise between applications of random Clifford gates in the original non-Markovian RB experiment. Outputs for particular constants in the noise model \(\Lambda^{(t)}\) can be seen in Fig. 5, where the non-Markovian analytical average sequence fidelity was computed according to [24], and the analytical decays to first \(\tau_{\mathrm{dd}}\) orders were computed according to Eq. (6) where \(\eta=4\).
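For readers wishing to reproduce this setup, a sketch of the model construction is given below: the \(\mathsf{SE}\) Lindbladian is assembled as a \(16\times 16\) superoperator in the column-stacking convention and exponentiated to yield \(\Lambda^{(\tau_{\mathsf{fb}})}\); the parameter values follow the caption of Fig. 5, while the implementation details (conventions, library calls) are our own choices.

```python
# Build the 1S+1E Lindbladian of this section as a superoperator and obtain
# Lambda^(tau_fb) = exp(tau_fb * L). Column stacking: vec(A rho B) = (B^T (x) A) vec(rho).
import numpy as np
from scipy.linalg import expm

I = np.eye(2, dtype=complex); X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0]).astype(complex)
kron = np.kron

J, hx, hy = 1.7, 1.47, -1.05
H = (J * kron(X, X) + hx * (kron(X, I) + kron(I, X))
     + hy * (kron(Y, I) + kron(I, Y)))                    # S is the first factor
jumps = [(0.002, kron(X, I)), (0.007, kron(Z, I))]        # L_0 = X, L_1 = Z on S

def liouvillian(H, jumps):
    d = H.shape[0]; Id = np.eye(d)
    L = -1j * (kron(Id, H) - kron(H.T, Id))               # -i[H, .]
    for g, A in jumps:                                    # dissipator terms
        AdA = A.conj().T @ A
        L += g * (kron(A.conj(), A) - 0.5 * (kron(Id, AdA) + kron(AdA.T, Id)))
    return L

tau_fb = 0.03
channel = expm(tau_fb * liouvillian(H, jumps))            # superoperator for Lambda^(tau_fb)
print(channel.shape)                                      # (16, 16)
```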
The main message is that DD effectively removes non-Markovian non-exponential deviations when the time scale at which DD sequences are applied satisfies \(\tau_{\sf dd}\ll 1/\operatorname{tr}(\gamma)\), and how well it does so (whether it outputs decays close to the purely exponential \(\mathcal{O}(\tau_{\sf dd})\) decay) depends mainly on whether \(\tau_{\sf dd}\) is small relative to the noise time scale, \(\tau_{\sf fb}\), of the original non-Markovian RB experiment.
In particular, interleaving DD pulses in time-intervals of \(\tau_{\sf dd}=2\tau_{\sf fb}\) does reduce non-exponential deviations, but still, non-Markovian noise dominates the decay. In general, this can be assessed by inspecting the separation of the numerical, all-\(\tau_{\sf dd}\)-order average decays with respect to the corresponding only first-\(\tau_{\sf dd}\)-order analytical decays; further discussion and an analysis of the time-scales for \(\tau_{\sf dd}\) relative to the chosen \(\tau_{\sf fb}\) can be seen in Appendix C.1, as captured by Fig. 9, where it is clear that this separation is suppressed and decays asymptotically become purely Markovian for \(\tau_{\sf dd}<\tau_{\sf fb}\).
Taking into account that pulses will have a finite width and might themselves be noisy due to implementation errors, the ability to perform good pulses in time scales shorter than that of the RB gates, would lead to a useful reduction of non-exponential deviations and enhancement of average sequence fidelities, as shown by the plots for time-intervals \(\tau_{\sf dd}=\tau_{\sf fb}\) and \(\tau_{\sf dd}=\tau_{\sf fb}/2\). Comparison with the corresponding analytical \(\tau_{\sf dd}\) decays, up to the first time-order, can help to estimate the contribution of non-Markovian and higher-order Markovian terms in the model.
Figure 5: **Interleaved DD in non-Markovian RB**: Average sequence fidelities (\(\mathcal{F}_{m}\)) for RB experiments on increasing sequence lengths (\(m\)) with the full, generally non-Markovian noise model \(\Lambda^{(\tau_{\sf fb})}\) (blue triangles), and interleaved ideal \(XY4\) sequences (green closed markers and black open markers) with time-intervals between pulses of \(\tau_{\sf dd}=\tau/2\), \(\tau\) and \(2\tau\), for a fixed \(\tau=\tau_{\sf fb}=0.03\). All numerical averages were taken over 40 samples, with red bands denoting the uncertainty of the mean; the non-Markovian analytical decay (blue upper triangle) was computed according to [24] and the \(\mathcal{O}(\tau_{\sf dd})\) analytical decays (black open markers) up to first time-order according to Eq. (6). The 1S+1E qubit model is given by \(\Lambda^{(t)}=\mathrm{e}^{t\mathcal{L}}\) where \(\mathcal{L}(\cdot):=-i[H,\cdot]+\mathcal{D}_{\sf S}(\cdot)\) for \(H=JXX+h_{x}(XI+IX)+h_{y}(YI+IY)\) and local \(\sf S\) dissipator \(\mathcal{D}_{\sf S}(\cdot):=\sum_{k}\gamma_{k}[L_{k}(\cdot)L_{k}^{\dagger}-\frac{1}{2}\{L_{k}^{\dagger}L_{k},\cdot\}]\); chosen parameters \(\rho_{\sf S}=M=|0\rangle\!\langle 0|\), \(J=1.7\), \(h_{x}=1.47\), \(h_{y}=-1.05\) and Lindblad terms \(L_{0}=X\), \(\gamma_{0}=0.002\), \(L_{1}=Z\), \(\gamma_{1}=0.007\); Lindblad evolution approximated up to \(\mathcal{O}(t^{10})\).
### Subsystem Pauli-twirled noise
We now consider the same 1S+1E qubit model, but instead of interleaving _XY4_ pulses we consider noise that has been Pauli-twirled on qubit \(\mathsf{S}\) (e.g., via RC). We fix a value for \(\tau\) and simply denote by \(\Lambda\) the noise channel for all time steps. We distinguish between SPAM noise terms \(\Lambda_{0}=\Lambda_{m+1}\) and the _bulk_ noise terms \(\Lambda_{1}=\Lambda_{2}=\ldots=\Lambda_{m}\) (i.e., all non-SPAM noise). Since we deal with a single qubit, and RC does not change the logical structure of quantum circuits (e.g., does not increase their depth), we take the limit of perfect Pauli-twirled noise on \(\mathsf{S}\).
In Fig. 6, we numerically demonstrate Result 2 by comparing the analytical average sequence fidelities (computed according to [24]) with different choices of either bare noise or \(\mathsf{S}\)-Pauli twirled noise for the bulk terms and the SPAM ones; we also show the behavior of the variance of the sequence fidelity with respect to sequence length for a set of 40 numerical samples, all either with bare noise or \(\mathsf{S}\)-Pauli twirled noise.
For the behavior of the average sequence fidelity with respect to SPAM, we should point out that an overall increase in fidelity due to \(\mathsf{S}\)-Pauli twirled noise is incidental in this example; other than this, it is clear that decays remain non-exponential and that whenever SPAM terms (\(\Lambda_{0}\) and \(\Lambda_{m+1}\)) coincide, it is irrelevant for the decay whether all other terms were \(\mathsf{S}\)-Pauli twirled or not. While we do not argue about the feasibility of operationally twirling SPAM noise, it is conceivable that this could be achieved at least partially on either the state preparation term or the measurement one (which in practice are indistinguishable).
For the case of the variance, we employed numerical averages over 40 RB samples. The \(\mathsf{S}\)-Pauli twirled variance gets clearly suppressed, and is lower than that of bare noise, for all sequence lengths. It stands out that the variance for bare noise itself changes considerably with increasing sequence length. While the variance for twirled noise is not entirely vanishing, it is considerably suppressed, given its relation to the unitarity mentioned above in Section IV; deriving such guarantees mathematically would make a strong point for the employment of RC in obtaining fidelities.
## VI Conclusions and discussion
We have shown that noise suppression techniques, such as Dynamical Decoupling (DD) and Randomized Compiling (RC), can be efficient tools for dealing with non-Markovian noise sources in Randomized Benchmarking (RB). In particular, _i_) Universal Dynamical Decoupling (UDD) applied with fast and narrow pulses reduces a wide class of non-Markovian non-exponential RB decays to an exponential decay plus perturbative corrections in time, _ii_) RC does not Markovianize RB in the same sense that DD does, and in fact leaves the sequence fidelity decay invariant up to State Preparation and Measurement (SPAM) noise, however, _iii_) RC is ensured to decrease, or at worst leave unchanged, the variance of RB sequence fidelities.

Figure 6: **The effect of RC on non-Markovian RB**: For the same model and parameters (fixed \(\tau_{\text{fb}}=0.03\) and \(\Lambda^{(\tau_{\text{fb}})}\) truncated at \(\mathcal{O}(\tau_{\text{fb}}^{10})\)) as Fig. 5, in (a), a comparison of the analytical average sequence fidelity decay (computed according to [24]) for combinations of bare noise (\(\Lambda\)) and \(\mathsf{S}\)-Pauli twirled noise (\(\Lambda^{\mathsf{Ps}}\)) on bulk terms \(\Lambda_{1}=\Lambda_{2}=\ldots=\Lambda_{m}\) and SPAM terms \(\Lambda_{0}=\Lambda_{m+1}\), showing that decays coincide up to SPAM terms; while in (b) the numerical variance over 40 samples of RB circuits, either all under bare noise or \(\mathsf{S}\)-Pauli twirled noise, showing that the variance generally gets suppressed for the latter.
Our results imply that standard noise suppression techniques can be valuable tools in taming, benchmarking, and optimizing non-Markovian noise. In particular, while we have dealt with the _standard_ (or original) RB protocol as a benchmarking framework, this approach is amenable to be adapted to any RB-based technique considering e.g., scalability or specific purpose metrics [15].
In the case of DD, while Markovianization is always achieved at short times, the full effective RB sequence fidelities can still be dominated by non-Markovian terms, so the main limitation for DD is the time-scale at which pulses can be applied, which for an enhancement of fidelities would need to be shorter than the time between applications of the RB gates. Our main Result 1 regarding DD is stated for a broad class of continuous noise models with mild restrictions on the global Hamiltonian and dissipators. More generally, however, for any noise dynamics, the timescales at which decoupling can be efficiently achieved are connected to such dynamics [59, 60], but not necessarily to whether they display non-Markovianity [51]. Furthermore, while they might be generally non-Markovian, some dynamics can hide non-Markovian effects [61, 62], in particular for our case in RB, and not display significant deviations. Finally, there is the fact that realistic DD pulses have a finite width and themselves can introduce errors; for our results, however, it is sufficient for them to be narrow and contain only small, local, time- and gate-independent errors.
For RC in our main Result 2, it is somewhat surprising that S-Pauli twirling does not Markovianize RB data since twirling also effectively decouples by averaging out all non-Pauli error terms [35]. Concretely, RB with the Clifford group, while it is clear that in the Markovian case it leaves average gate-fidelities unchanged, as opposed to DD, it was expected for it to have a decoupling effect that would impact _directly_ the average of the non-Markovian RB outputs. Nevertheless, RC can be extremely valuable in suppressing the uncertainty in the average sequence fidelity, which can ensure an accurate diagnosis of non-exponential deviations (as opposed to statistical fluctuations) and reliable estimation of error rates. Furthermore, as argued before, this reduction in variance due to RC is related to the amount of coherence of the noise, which in turn is intimately connected and relevant to average gate-fidelity and fault tolerant-relevant metrics such as diamond norm [54].
Noise suppression techniques, such as DD and RC, are a vital ingredient allowing for the possibility to take quantum computing beyond a noise-intermediate regime. Our results highlight the importance of incorporating such basic noise suppression techniques to deal with one of the most complicated sources of errors, namely non-Markovianity, not only for deployment but for basic error diagnostics and benchmarking. As a perspective, we expect these ideas to be useful, easily adapted, and enhanced, to other scalable and holistic benchmarking and characterization techniques, e.g., [9, 10, 11, 12, 13], most of which are the intellectual progeny of RB.
###### Acknowledgements.
We thank Jay Nath for valuable discussions, and PFR thanks Robin Blume-Kohout for helpful comments during the APS March Meeting 2023. KM acknowledges support from the Australian Research Council Discovery Projects DP210100597 and DP220101793. PFR, MP, AA, and IdV acknowledge support from the German Federal Ministry of Education and Research (BMBF) under Q-Exa (grant No. 13N16062), QSolid (grant No. 13N16161), and MUNIQC-SC (grant No. 13N16185). |
2304.13151 | Novel phase transition at the Unruh temperature | We consider a gas of massless fermions at a certain temperature T and
acceleration a. We find a second-order phase transition as the temperature T
approaches the Unruh temperature TU. The implications for hadronization of the
quark-gluon plasma produced in heavy-ion collisions (HIC) and for black-hole
physics are discussed. In particular, this novel phase transition may be
associated with thermalization in HIC, indicating its analogy with falling into
a black hole. | Georgy Yu. Prokhorov, Oleg V. Teryaev, Valentin I. Zakharov | 2023-04-25T20:59:13Z | http://arxiv.org/abs/2304.13151v2 | # Novel phase transition at the Unruh temperature
###### Abstract
In recent years, the theory of quantum phase transitions has developed rapidly. These are transitions at zero temperature which are associated with a change of the theory's parameters, such as couplings. In contrast, classical phase transitions occur "within" the same theory (in particular, with the same couplings) and are associated with a change in temperature. Within the framework of a simple model of Dirac fields in the Euclidean Rindler space, we establish an intermediate case in which the phase transition occurs at a finite temperature, but the temperature itself is of quantum origin (the Unruh temperature). Moreover, the phase transition point is uniquely associated with the behavior of individual levels: namely, at the Unruh temperature the two lowest Matsubara modes become singular on the Rindler horizon. This provides a new manifestation of the duality between the thermodynamic description and the geometric approach (the behavior of the quantum levels of particles living on a nontrivial geometric manifold). Although the considered example refers to the physics of black holes, we note the formal similarity of the Unruh temperature to the parameter characterizing quantum transitions in the theory of condensed matter.
Introduction
Accelerated systems are very similar to black holes, having an event horizon and associated thermal radiation. This is the well-known Unruh effect [1], and the radiation temperature
\[T_{U}=\frac{\hbar|a|}{2\pi k_{B}c}\,, \tag{1}\]
is the Unruh temperature. Here \(a_{\mu}=u^{\nu}\partial_{\nu}u_{\mu}\) is the proper acceleration, \(|a|=\sqrt{-a^{\mu}a_{\mu}}\) and \(u_{\mu}\) is the four-velocity.
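To see why such temperatures require enormous accelerations, it is instructive to restore SI units in Eq. (1); the short sketch below inverts it to give the acceleration corresponding to a given Unruh temperature.

```python
# Eq. (1) in SI units: T_U = hbar |a| / (2 pi k_B c)  =>  |a| = 2 pi k_B c T / hbar.
import math

hbar, c, k_B = 1.054571817e-34, 2.99792458e8, 1.380649e-23   # SI (CODATA) values

def unruh_acceleration(T_kelvin):
    return 2 * math.pi * k_B * c * T_kelvin / hbar           # m/s^2

print(f"|a| for T_U = 1 K: {unruh_acceleration(1.0):.2e} m/s^2")   # ~2.5e20 m/s^2
```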
The discovery of quark-gluon plasma at the Relativistic Heavy Ion Collider (RHIC) raised the question of describing processes such as hadronization and thermalization. In the series of papers [2; 3; 4], an explanation of thermalization based on the Unruh effect was proposed: the idea is that an extremely high acceleration is generated in heavy-ion collisions (and also in more elementary processes), at which the temperature (1) and the Unruh effect become significant.
Thus, the problem arises of considering the Unruh effect in a dense medium with a finite proper temperature \(T\), which, generally speaking, may differ from \(T_{U}\) (in most papers, these two temperatures coincide [2; 3; 5]). Despite the large number of works devoted to the Unruh effect, these aspects have not been discussed as extensively (see, however, [6; 7; 8; 9; 10]).
We investigate how the properties of a medium with finite acceleration and temperature change when these two parameters vary, that is, the phase diagram in coordinates \((T,|a|)\). We will show that two approaches - statistical and geometrical - predict that the temperature (1) is critical (in the general case, we can talk about an infinite set of critical points). The transition through this point is followed by a jump in heat capacity, which indicates a second-order phase transition. As usual, in the massless limit, the stress-energy tensor has a polynomial form, and this new phase transition manifests itself in the appearance of odd acceleration terms below the critical point, which is a nonperturbative effect.
From the point of view of the geometrical approach, the phase transition is associated with the singular behavior of the lowest Matsubara modes on the event horizon of the accelerated reference frame, and from the point of view of the statistical approach, it is related to the anti-Hermiticity of the spin part of the boost operator.
One of the most intriguing points is that comparing this phase transition with the known ones, we see that it is novel in the sense that it simultaneously has the features of a classical and quantum transition [11; 12].
We use the system of units \(e=\hbar=c=k_{B}=1\), and the metric signature \((+,-,-,-)\).
## II Singularity of the lowest Matsubara modes on the horizon
### Euclidean Rindler space
A medium with finite proper temperature \(T\) and acceleration \(a_{\mu}\) along the axis \(z\) can be described by considering quantized fields, in our case the Dirac fields (for simplicity, we consider the case of zero chemical potentials and massless fields), in the coordinates
\(x=(\varphi,\mathrm{x},\mathrm{y},\rho)\) with a metric of the form
\[ds^{2}=\frac{\rho^{2}}{\nu^{2}}d\varphi^{2}+d\mathrm{x}^{2}+d \mathrm{y}^{2}+d\rho^{2}\,, \tag{1}\]
which is the Euclidean version of the Rindler coordinates [13; 14]. This manifold \(\mathcal{M}=\mathbb{R}^{2}\otimes\mathcal{C}_{\nu}^{2}\) contains a 2D cone with an angular deficit of \(2\pi-2\pi/\nu\) and is an example of a well-known space with a conical singularity [13; 15; 16; 17; 18; 19; 20]. The \(\varphi\) coordinate corresponds to the periodic imaginary time, which is related to the proper (also imaginary) time \(\tau\) along the world line with constant \(\rho,\mathrm{x},\mathrm{y}\) as \(\tau=\rho\varphi/\nu\).
According to (1), the acceleration and temperature of the medium have a geometric interpretation: the distance \(\rho\) from the top of the cone to a point on the cone corresponds to the inverse acceleration \(|a|^{-1}\), and the length of the corresponding circle \(2\pi\rho/\nu\) corresponds to the inverse temperature \(T^{-1}\). The dictionary of correspondence between geometrical and hydrodynamical quantities has the form
\[Geometry\,\rightleftarrows\,Hydrodynamics:\quad\rho=|a|^{-1} \,,\quad\nu=\frac{2\pi T}{|a|}=\mathrm{const}\,. \tag{2}\]
For the metric (1) the curvature is zero everywhere, except for the cone peak, where it has a delta-function singularity [21]. At the same time, the point \(\rho=0\) corresponds to the horizon since \(g_{00}(\rho=0)=0\).
Due to (2), by changing the temperature or acceleration, we change the geometry of spacetime. The region \(T>|a|/2\pi\) (or \(\nu>1\)) is well studied [16; 17; 18; 20], and we will be interested in the transition through the point \(T=|a|/2\pi\) (or \(\nu=1\)) to the region \(T<|a|/2\pi\) (or \(\nu<1\)), when the cone angular deficit becomes negative.
### Two solutions for the modes
Let us now consider the massless Dirac field in space (1). We will use a symmetric (Euclidean) vierbein of the form [17] \(e^{\mu}_{(a)}=\mathrm{diag}(\nu/\rho,1,1,1)\), where we use brackets for vierbein indices1. The curved Euclidean Dirac matrices \(\gamma^{\mu}_{E}\) are related to the usual non-Euclidean matrices \(\gamma^{\mu}_{E}=\gamma^{(a)}_{E}e^{\mu}_{(a)}=e^{\mu}_{(a)}i^{-1+\delta_{0a}}\gamma^{(a)}_{N}\). The Dirac operator \(\not{D}_{x}=\gamma^{\mu}_{E}\nabla_{\mu}\) and its square determine the Euclidean Green's functions, respectively \(S_{E}(x;x^{\prime})\) and \(G_{E}(x;x^{\prime})\), related to each other (\(I_{4}\) is the \(4\times 4\) identity matrix)
Footnote 1: One can also choose another vierbein, but the final result does not depend on this choice.
\[\not{D}_{x}S_{E}(x;x^{\prime})=\not{D}_{x}^{2}G_{E}(x;x^{\prime}) =-I_{4}\frac{\delta^{4}(x-x^{\prime})}{\sqrt{g}}\,,\quad S_{E}(x;x^{\prime})= \not{D}_{x}G_{E}(x;x^{\prime})\,. \tag{3}\]
The function \(G_{E}(x;x^{\prime})\) can be constructed from the eigenmodes of the Laplacian-like operator \(\not{D}_{x}^{2}\) [20]
\[\not{D}_{x}^{2}\phi(x)=-\lambda^{2}\phi(x)\,, \tag{4}\]
which are antiperiodic in imaginary time \(\phi\left(\varphi+2\pi n\right)=(-1)^{n}\phi\left(\varphi\right)\) (the antiperiodicity follows from the choice of the vierbein). Of key importance is that the equation (4) has two independent solutions \(\phi_{q}^{+}\) and \(\phi_{q}^{-}\) differing in sign of the order of the Bessel function
\[\phi_{q}^{\pm}(x)=\frac{\sqrt{\nu}}{4\pi^{3/2}}\,e^{ip_{\rm x}x+ip_{\rm y}y+i(n +\frac{1}{2})\varphi}J_{\pm\beta_{s_{1}}}(\xi\rho)\,w_{(s_{1},s_{2})}\,, \tag{5}\]
where \(\beta_{s_{1}}=\nu(n+\frac{1}{2})-\frac{s_{1}}{2}\), \(\xi^{2}=\lambda^{2}-p_{\rm x}^{2}-p_{\rm y}^{2}\), \(w_{(\pm,+)}=(1,0,\pm 1,0)\) and \(w_{(\pm,-)}=(0,1,0,\mp 1)\), and \(n\) is an integer. Eigenfunctions (5) are actually Matsubara modes, which is clearly seen if the last term in the exponent is rewritten in terms of the imaginary proper time \(i(n+\frac{1}{2})\varphi=i\,\pi T(2n+1)\tau\). Solutions (5) are classified according to the eigenvalues \(q=(p_{\rm x},p_{\rm y},n+1/2,\lambda,is_{1}/2,s_{2}/2)\) of the mutually commuting operators \(\widehat{p}_{\rm x}=-i\partial_{\rm x}\), \(\widehat{p}_{\rm y}=-i\partial_{\rm y}\), \(\widehat{p}_{0}=-i\partial_{\varphi}\), \(\not{D}_{x}^{2}\), \(\Sigma_{0}=\frac{i}{4}[\gamma_{N}^{(0)},\gamma_{N}^{(3)}]\) and \(\Sigma_{3}=\frac{i}{4}[\gamma_{N}^{(1)},\gamma_{N}^{(2)}]\). \(\Sigma_{0}\) and \(\Sigma_{3}\) correspond to spin parts of the (non-Euclidean) generators of boost and angular momentum. In particular, we obtain that the boost eigenvalue is an imaginary number
\[\Sigma_{0}\phi(x) = s_{1}\frac{i}{2}\phi(x)\,,\quad s_{1}=\pm 1\,,\] \[\Sigma_{3}\phi(x) = s_{2}\frac{1}{2}\phi(x)\,,\quad s_{2}=\pm 1\,, \tag{6}\]
which is related to the anti-Hermiticity of the spin boost (we will show its role in Section III).
Considering that \(J_{a}(x)\sim x^{a}\) at \(x\to 0\), only one of the solutions (5) for each mode \(q\) is finite on the horizon \(\rho\to 0\). We impose the standard condition that the modes be finite on the horizon.
Consider now the two lowest Matsubara modes with \(n=0,s_{1}=1\) and \(n=-1,s_{1}=-1\). When passing through \(T=T_{U}\) we should change the solution so that it remains finite at \(\rho\to 0\). Then in the region \(T>|a|/2\pi\) we choose \(\phi_{(n=0,\,s_{1}=1)}^{+}\) and \(\phi_{(n=-1,\,s_{1}=-1)}^{-}\), while in the region \(|a|/6\pi<T<|a|/2\pi\) we change the choice to \(\phi_{(n=0,\,s_{1}=1)}^{-}\) and \(\phi_{(n=-1,\,s_{1}=-1)}^{+}\).
So, above and below \(T=T_{U}\) we obtain different solutions. Obviously, this situation will repeat when we go lower in temperature, that is, we have an infinite series of critical points
\[T_{c}=T_{U}/(2k+1)\,\rightleftarrows\,\nu_{c}=1/(2k+1)\,, \tag{7}\]
where \(k=0,1,2,\dots\). At each of these points, two modes \(n=k,s_{1}=1\) and \(n=-k-1,s_{1}=-1\) change their form from \(\phi_{(n=k,s_{1}=1)}^{+}\) to \(\phi_{(n=k,s_{1}=1)}^{-}\) and from \(\phi_{(n=-k-1,s_{1}=-1)}^{-}\) to \(\phi_{(n=-k-1,s_{1}=-1)}^{+}\). Effectively, the choice of the regular solution for every mode can be encoded by introducing the modulus
\[\phi_{q}^{finite}(x)=\frac{\sqrt{\nu}}{4\pi^{3/2}}e^{ip_{\rm x}x+ip_{\rm y}y+ i(n+\frac{1}{2})\varphi}J_{|\beta_{s_{1}}|}(\xi\rho)\,w_{(s_{1},s_{2})}\,. \tag{8}\]
If we now consider (8) as functions of \(T\), then at the points (7) these regular solutions develop peaks, as shown in Figure 1 on the left side, associated with a jump to the other of the two solutions in (5).
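The location of these peaks is easy to reproduce: near the horizon \(J_{|\beta_{s_{1}}|}(\xi\rho)\sim(\xi\rho/2)^{|\beta_{s_{1}}|}/\Gamma(|\beta_{s_{1}}|+1)\) is maximal where the order \(|\beta_{s_{1}}|\) vanishes. A minimal Python sketch of our own (the values \(|a|=1\) and \(\xi\rho=10^{-6}\) are illustrative choices) recovers the critical temperatures (7) as the peak positions seen in the left panel of Figure 1.

```python
import numpy as np
from scipy.special import jv

a, x = 1.0, 1e-6                    # |a| and xi*rho close to the horizon
Ts = np.linspace(0.02, 0.40, 2000)
nu = 2 * np.pi * Ts / a             # the dictionary (2)

for n in [0, 1, 2]:                 # lowest Matsubara modes with s1 = +1
    beta = nu * (n + 0.5) - 0.5
    J = jv(np.abs(beta), x)         # temperature-dependent factor in (8)
    print(n, Ts[np.argmax(J)])      # peaks at T = |a|/(2*pi*(2n+1)), cf. (7)
```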
In the next subsections, we will show that near the critical points (7) the form of the Green's functions and the mean values of the operators change.
### Green's functions
Green's function \(G_{E}(x;x^{\prime})\) can be constructed from the eigenmodes (8). Indeed, using (4) and the orthonormality it is easy to show that
\[G_{ij}^{E}(x;x^{\prime})=\sum_{s_{1},s_{2}}\sum_{n}\iiint dp_{\rm x}\,dp_{\rm y}\,\xi\,d\xi\frac{\phi_{q,i}^{finite}(x){\phi_{q,j}^{finite}}^{\dagger}(x^{\prime})}{\xi^{2}+p_{\rm x}^{2}+p_{\rm y}^{2}}\,, \tag{9}\]
where \(i\) and \(j\) are bispinor indices. After integrations and summations, we obtain
\[G^{E}(x;x^{\prime}|N_{0})=\frac{\nu\left[\sinh\big(\frac{1+\nu}{2}\vartheta-\vartheta\nu N_{0}\big)e^{(2N_{0}+1)\Delta\varphi\Sigma_{0}}-\sinh\big(\frac{1-\nu}{2}\vartheta-\vartheta\nu N_{0}\big)e^{(2N_{0}-1)\Delta\varphi\Sigma_{0}}\right]}{8\pi^{2}\rho\rho^{\prime}\sinh\vartheta\left[\cosh\left(\nu\vartheta\right)-\cos\Delta\varphi\right]}\,, \tag{10}\]
where \(\Delta x=x-x^{\prime}\), \(\cosh\vartheta=(\rho^{2}+{\rho^{\prime}}^{2}+\Delta{\rm x}^{2}+\Delta{\rm y}^{2})/(2\rho\rho^{\prime})\), and the integer part appears
\[N_{0}=\left\lfloor\frac{1}{2\nu}+\frac{1}{2}\right\rfloor=\left\lfloor\frac{|a|}{4\pi T}+\frac{1}{2}\right\rfloor\,, \tag{11}\]
which equals the number of complete revolutions by the angle \(2\pi\) that can be made on the cone \({\cal C}_{\nu}\), and is simultaneously equal to the number of pairs of Matsubara modes that have changed their solutions. We are primarily interested in the first critical point \(T=|a|/2\pi\). At \(T>|a|/2\pi\), when \(N_{0}=0\), Green's function \(G^{E}(x;x^{\prime}|0)\) corresponds to the well-known expression [17; 18; 20]. At \(|a|/6\pi<T<|a|/2\pi\) we obtain \(N_{0}=1\) and (10) leads to a new Green's function.
Figure 1: Left: the temperature-dependent part of the finite solution (8) as a function of temperature for the modes with different Matsubara frequencies and \(s_{1}=1\) near the horizon, \(\xi\rho=10^{-6}\), \(k={\rm const}\). Right: behavior of the stress-energy tensor near the first critical point \(T=|a|/2\pi\), according to the geometrical approach (14) and the heuristic statistical one (22), based on the Wigner function (20).
The function \(S_{E}(x;x^{\prime}|N_{0})\) can be obtained using (10) and (3). The functions \(G_{E}(x;x^{\prime}|N_{0})\) and \(S_{E}(x;x^{\prime}|N_{0})\) and the corresponding matrix elements are singular at \(x\to x^{\prime}\) and should be renormalized by subtracting their values at \(\nu=1\) (using the time variable \(\theta=\nu\varphi\)). In particular, in the case of \(S_{E}\) it is necessary to subtract \(S_{E}^{0}=S_{E}(\Delta\theta/\nu,\Delta{\rm x},\Delta{\rm y},\rho,\rho^{\prime})|_{\nu=1}\)
\[S_{E}^{ren}=S_{E}-S_{E}^{0}\,. \tag{12}\]
### Mean value of the stress-energy tensor
The mean value of the (non-Euclidean space) stress-energy tensor at a finite temperature is given by the standard formula [17; 20]
\[\langle T_{\mu\nu}\rangle=\frac{i}{4}\lim_{x^{\prime}\to x}\left( \gamma_{\mu}^{N}\nabla_{\nu}^{N,x}-\gamma_{\mu}^{N}\nabla_{\nu}^{N,x^{\prime} }+\mu\leftrightarrow\nu\right)S_{E}^{ren}(x;x^{\prime}|N_{0})\,, \tag{13}\]
which includes the curved non-Euclidean Dirac matrices \(\gamma_{N}^{\mu}=e_{(a)}^{N\,\mu}\gamma_{N}^{(a)}\) and the non-Euclidean vierbein. In the general case, (13) should include operators of the parallel transport from \(x\) to \(x^{\prime}\), but \(S_{E}^{ren}(x;x^{\prime}|N_{0})\) is finite in the limit \(x\to x^{\prime}\), and therefore the corresponding operators do not contribute.
Using (10) for \(N_{0}=0\) and \(N_{0}=1\), and passing to an arbitrary reference system from the fluid rest frame, we obtain
\[T>\frac{|a|}{2\pi} : \langle T_{\beta}^{\alpha}\rangle=\left(\frac{7\pi^{2}T^{4}}{60}+\frac{|a|^{2}T^{2}}{24}-\frac{17|a|^{4}}{960\pi^{2}}\right)\left(u^{\alpha}u_{\beta}-\frac{1}{3}\Delta_{\beta}^{\alpha}\right)\,, \tag{14}\] \[\frac{|a|}{6\pi}<T<\frac{|a|}{2\pi} : \langle T_{\beta}^{\alpha}\rangle=\left(\frac{127\pi^{2}T^{4}}{60}-\frac{11|a|^{2}T^{2}}{24}-\frac{17|a|^{4}}{960\pi^{2}}\right)\left(u^{\alpha}u_{\beta}-\frac{1}{3}\Delta_{\beta}^{\alpha}\right)\] \[+\left(\pi|a|T^{3}-\frac{T|a|^{3}}{4\pi}\right)\widetilde{\Delta}_{\beta}^{\alpha}\,,\]
where the projectors \(\Delta_{\beta}^{\alpha}=\delta_{\beta}^{\alpha}-u^{\alpha}u_{\beta}\) and \(\widetilde{\Delta}_{\beta}^{\alpha}=\Delta_{\beta}^{\alpha}+\frac{a^{\alpha}a_{\beta}}{|a|^{2}}\) onto the (hyper)surfaces orthogonal to \(u_{\mu}\), and to both \(u_{\mu}\) and \(a_{\mu}\), respectively, are introduced.
The stress-energy tensor at \(T>|a|/2\pi\) in the form (14) was obtained in [7; 8; 9], and the formula at \(|a|/6\pi<T<|a|/2\pi\) is new. We see that the stress-energy tensor, in particular, the energy density \(\varepsilon=\langle T^{\mu\nu}\rangle u_{\mu}u_{\nu}\), is continuous at the point \(T=T_{U}\), but the heat capacity is discontinuous: \(\frac{\partial\varepsilon}{\partial T}\big{|}_{T\to T_{U}+0}=\frac{4\pi^{2}T_{U}^{3}}{5}\) and \(\frac{\partial\varepsilon}{\partial T}\big{|}_{T\to T_{U}-0}=\frac{24\pi^{2}T_{U}^{3}}{5}\), which indicates a second-order phase transition. The energy density as a function of temperature is shown in Figure 1 on the right-hand side.
This phase transition, although it leaves the stress-energy tensor polynomial, leads to the appearance of odd terms in acceleration \(T^{3}|a|\) and \(T|a|^{3}\), which are obviously nonperturbative. Also, it can be verified that the stress-energy tensor is covariantly conserved in both cases. However, the trace is not equal to zero for \(T_{U}/3<T<T_{U}\)
\[\langle T_{\beta}^{\beta}\rangle=\frac{\nu(\nu^{2}-1)}{4\pi^{2}\rho^{4}}=2\pi T |a|\left(T^{2}-\frac{|a|^{2}}{4\pi^{2}}\right)\,. \tag{15}\]
This indicates that below the critical point the equation of state becomes anisotropic: there is an additional repulsive pressure due to the term with \(\widetilde{\Delta}^{\alpha}_{\beta}\) perpendicular to the acceleration. It can also be assumed that \(\langle T^{\beta}_{\beta}\rangle\) can play the role of an order parameter.
It can also be shown by direct calculation of the contribution of individual modes that a jump in (14) corresponds to a change in the contribution from the lowest Matsubara modes
\[\Delta\varepsilon=\Delta\varepsilon_{(n=0,\,s_{1}=1)}+\Delta \varepsilon_{(n=-1,\,s_{1}=-1)}=2\pi^{2}T^{4}-\frac{T^{2}|a|^{2}}{2}\,. \tag{16}\]
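These statements can be verified symbolically. The following sympy sketch of our own checks the heat-capacity values quoted above, the trace (15) (using that the first structure in (14) is traceless while \(\widetilde{\Delta}^{\alpha}_{\alpha}=2\)), and the identification (16) of the jump with the two lowest Matsubara modes.

```python
import sympy as sp

T, a = sp.symbols('T a', positive=True)

# Energy densities (coefficients of the traceless structure in (14)):
e_hi = 7*sp.pi**2*T**4/60 + a**2*T**2/24 - 17*a**4/(960*sp.pi**2)       # T > T_U
e_lo = 127*sp.pi**2*T**4/60 - 11*a**2*T**2/24 - 17*a**4/(960*sp.pi**2)  # T < T_U

# (16): the change across T_U equals the contribution of the two lowest modes
assert sp.simplify((e_lo - e_hi) - (2*sp.pi**2*T**4 - T**2*a**2/2)) == 0

# Heat capacity on both sides of the critical point T_U = |a|/(2*pi):
TU = a / (2*sp.pi)
c_hi = sp.diff(e_hi, T).subs(T, TU)
c_lo = sp.diff(e_lo, T).subs(T, TU)
print(sp.simplify(c_hi/TU**3), sp.simplify(c_lo/TU**3))  # 4*pi**2/5, 24*pi**2/5

# (15): the first structure in (14) is traceless and tr(Delta-tilde) = 2
trace_lo = 2*(sp.pi*a*T**3 - T*a**3/(4*sp.pi))
assert sp.simplify(trace_lo - 2*sp.pi*T*a*(T**2 - a**2/(4*sp.pi**2))) == 0
```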
### Cosmic string instability
The phase transition in an accelerated thermal system considered above has a direct analog for a cosmic string, since the metric (1), up to renaming the coordinates, describes the (Euclidean) cosmic string \(ds^{2}=d\tau^{2}+d\rho^{2}+\frac{\rho^{2}}{\nu^{2}}d\varphi^{2}+d\text{z}^{2}\) [13; 20; 22]. The corresponding Green's function can be obtained from (10) by the substitutions
\[G^{E}(x;x^{\prime}|N_{0})_{string}=G^{E}(x;x^{\prime}|N_{0})_{Rindler}\Big{|}_{ \text{x}\rightarrow\tau,\,\text{y}\rightarrow\text{z},\,\Sigma_{0} \rightarrow i\Sigma_{3}}\,. \tag{17}\]
At \(N_{0}=0\) the standard result is reproduced [17; 18; 20]. Analyzing the behavior of eigenmodes, we find that there are critical points (7). Below the first critical point \(\nu=1\), the cosmic string density \(\mu^{*}=\frac{\nu-1}{4\nu}\) becomes negative and the (non-Euclidean) stress-energy tensor changes
\[\nu>1: \langle T^{\alpha}_{\beta}\rangle=\frac{17-10\nu^{2}-7\nu^{4}}{2 880\pi^{2}\rho^{4}}\text{diag}\left(1,1,-3,1\right)\,, \tag{18}\] \[\frac{1}{3}<\nu<1: \langle T^{\alpha}_{\beta}\rangle=\frac{17+110\nu^{2}-127\nu^{4} }{2880\pi^{2}\rho^{4}}\text{diag}\left(1,1,-3,1\right)+\frac{\nu(\nu^{2}-1)}{ 8\pi^{2}\rho^{4}}\text{diag}\left(1,0,0,1\right)\,.\]
The expression at \(\nu>1\) is well known [17; 18; 20], while the case of \(\nu<1\) is new.
## III Statistical heuristic prediction
A medium with acceleration \(\mathbf{a}\) and finite temperature can be described without using conical geometry (1), considering only its statistical properties and effective macroscopic interaction associated with acceleration [6]
\[H\to H-\mathbf{a}\cdot\mathbf{K}\,, \tag{19}\]
where \(\mathbf{K}\) is the boost operator. The additional term in the Hamiltonian \(-\,\mathbf{a}\cdot\mathbf{K}\) is analogous to the better-known term with angular velocity and angular momentum \(-\,\mathbf{\Omega}\cdot\mathbf{J}\). In [23], an _ansatz_ of the statistical distribution function \(X(x,p)\) (in fact, the Wigner function) for fermions with spin \(1/2\) was proposed, which takes into account (19). If the acceleration is directed along the \(z\) axis, then in the fluid rest frame
\[X(x,p)=\left\{\exp\left(\frac{\varepsilon_{p}I_{4}}{T}-\frac{a \Sigma_{0}}{T}\right)+I_{4}\right\}^{-1}\,, \tag{20}\]
where only the spin part of the boost \(K^{3}_{spin}=\Sigma_{0}\) contributes and \(\varepsilon_{p}=|{\bf p}|\).
The boost operator has unusual properties. In particular, unlike the spin angular momentum \(\Sigma_{3}^{\dagger}=\Sigma_{3}\), the spin part of the boost \(\Sigma_{0}\) is an anti-Hermitian operator
\[\Sigma_{0}^{\dagger}=-\Sigma_{0}\,. \tag{21}\]
This anti-Hermiticity is compensated by the anticommutativity of \(\Sigma_{0}\) and \(\gamma_{0}\), and the mean values will still be real, e.g., \(\langle\bar{\psi}(\Sigma_{0})^{n}\psi\rangle^{*}=\langle\bar{\psi}(\Sigma_{0})^{n}\psi\rangle\). It would seem that the anti-Hermitian effect is then negligible, but this is not entirely true when infinite series with the boost are summed. Due to this, \(X(x,0)\) becomes singular at the points (7), since \(\det\left[X(x,0)^{-1}\right]=\det\left[I_{4}\left(\cos\frac{|a|}{2T}+1\right)+2\Sigma_{0}\sin\frac{|a|}{2T}\right]=\left(2+2\cos\frac{|a|}{2T}\right)^{2}\) vanishes exactly at \(T=T_{U}/(2k+1)\). This points in advance to the phase transitions discussed in Section II.
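The vanishing of this determinant is straightforward to check in an explicit Dirac representation, where \(\Sigma_{0}\) has eigenvalues \(\pm i/2\); a minimal numerical sketch of our own (the choice of representation is ours):

```python
import numpy as np
from scipy.linalg import expm

# Dirac representation: gamma^0 = diag(1,1,-1,-1), gamma^3 = [[0, s3], [-s3, 0]]
s3 = np.diag([1.0, -1.0])
Z = np.zeros((2, 2))
g0 = np.block([[np.eye(2), Z], [Z, -np.eye(2)]])
g3 = np.block([[Z, s3], [-s3, Z]])
S0 = 0.25j * (g0 @ g3 - g3 @ g0)             # spin boost; eigenvalues are +-i/2

for a_over_T in [np.pi, 2*np.pi, 4*np.pi, 6*np.pi]:
    Xinv = expm(-a_over_T * S0) + np.eye(4)  # X(x,0)^{-1} from (20) at p = 0
    print(a_over_T/np.pi, np.linalg.det(Xinv).real,
          (2 + 2*np.cos(a_over_T/2))**2)     # det vanishes at |a|/T = 2*pi, 6*pi, ...
```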
The validity of this expectation was demonstrated in [14] (see also [7; 24]), where, using the ansatz (20) together with additional heuristic modifications, the following formula was obtained for the energy density
\[\varepsilon=2\int\frac{d^{3}p}{(2\pi)^{3}}\Big(\frac{|{\bf p}|+i|a|}{1+e^{\frac{|{\bf p}|}{T}+\frac{i|a|}{2T}}}+\frac{|{\bf p}|-i|a|}{1+e^{\frac{|{\bf p}|}{T}-\frac{i|a|}{2T}}}\Big)+4\int\frac{d^{3}p}{(2\pi)^{3}}\frac{|{\bf p}|}{e^{\frac{2\pi|{\bf p}|}{|a|}}-1}\,, \tag{22}\]
where the key point is that acceleration appears as a kind of imaginary chemical potential \(\pm i|a|/2\) (which follows from (20)). Equation (22) reproduces the same critical points (7) as the geometrical approach. In particular, computing (22) in the region \(|a|/6\pi<T<|a|/2\pi\), we obtain
\[\varepsilon=\frac{127\pi^{2}T^{4}}{60}-\frac{11T^{2}|a|^{2}}{24}-\frac{17|a|^{4}}{960\pi^{2}}-\pi T^{3}|a|+\frac{T|a|^{3}}{4\pi}\,. \tag{23}\]
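Equation (22) can also be checked numerically against (23) below \(T_{U}\). In the sketch below (our own; the values \(|a|=1\), \(T=0.1\) are illustrative), the angular integration replaces \(d^{3}p/(2\pi)^{3}\) by \(p^{2}dp/(2\pi^{2})\), and the Bose term of (22) is evaluated in closed form as \(|a|^{4}/(120\pi^{2})\).

```python
import numpy as np
from scipy.integrate import quad

a, T = 1.0, 0.10                 # T_U = 1/(2*pi) ~ 0.159, so T_U/3 < T < T_U

def fermi(p):                    # 2*(X + conj X) with d^3p/(2 pi)^3 -> p^2 dp/(2 pi^2)
    X = (p + 1j*a) / (1.0 + np.exp(p/T + 1j*a/(2*T)))
    return (2.0/np.pi**2) * p**2 * X.real

eps_num = quad(fermi, 0.0, 100*T)[0] + a**4/(120*np.pi**2)  # Fermi + Bose parts of (22)
eps_closed = (127*np.pi**2*T**4/60 - 11*T**2*a**2/24
              - 17*a**4/(960*np.pi**2) - np.pi*T**3*a + T*a**3/(4*np.pi))  # (23)
print(eps_num, eps_closed)       # the two values agree to quadrature accuracy
```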
Comparing (23) with the lower formula in (14), we see that the statistical method also reproduces the change of the energy density to a large extent (though not completely); above \(T_{U}\) the result is the same as in (14). The behavior of the energy density in the two approaches is shown in Figure 1 on the right-hand side.
An analysis of the modes in the Rindler space also confirms the role of the anti-Hermiticity of the spin boost \(\Sigma_{0}\). Indeed, the modes (5) have imaginary eigenvalues \(\pm i/2\) with respect to the spin boost (6). And it is the boost eigenvalue \(is_{1}/2\) that enters the order of the Bessel function \(\beta_{s_{1}}=\nu(n+\frac{1}{2})-\frac{s_{1}}{2}\), due to which the singular points (7) appear. Also, the (non-Euclidean) Dirac equation in Rindler coordinates
\[\left[i\gamma_{N}^{0}\partial_{0}+i\gamma_{N}^{1}\partial_{1}+i\gamma_{N}^{2}\partial_{2}+\gamma_{N}^{3}\left(i\partial_{3}+i\frac{|a|}{2}\right)\right]\psi=0\,, \tag{24}\]
confirms the similarity between acceleration and an imaginary chemical potential that follows from (22) (compare with [25; 26; 27]).
Summing up, we can now generalize the duality of the geometric and thermodynamic approaches, previously established only for \(T>T_{U}\)[8; 14], to the instability at the Unruh temperature.
## IV Discussion
The phase transition that we are considering has the features of both a classical and a quantum phase transition [11; 12]. On the one hand, it occurs at a finite temperature, which makes it similar to the classical transition. On the other hand, the critical temperature, being proportional to the Planck constant, is extremely small (for \(|a|\sim 10\ \mathrm{m/s^{2}}\) one obtains \(T_{c}\sim 10^{-20}\ \mathrm{K}\)), and the transition is associated with the behavior of several individual modes, which is similar to a quantum phase transition, where the crossing of the lowest energy levels (or avoided level crossing) plays a role. It is also interesting to note the similarity between the quantum relaxation time, characterizing the transition between classical and quantum regimes, and the Unruh temperature
\[t_{r}=\alpha\frac{\hbar}{k_{B}T}\ \leftrightarrow\ \ T_{U}=\frac{\hbar|a|}{2\pi k_{B}c}\,. \tag{25}\]
Comparing \(T_{U}\) and \(t_{r}\), we see that the time scale \(t_{U}=c/|a|\) near \(T=T_{U}\) also becomes Planckian with \(\alpha=1/2\pi\).
Apparently, this phase transition should be common for theories with half-integer spin, since it is associated with a non-zero phase shift, accumulated when going around a closed loop (in particular, the points (7) correspond to the absence of the gravitational Aharonov-Bohm effect [28; 29]). It exists not only for massless, but also for massive fermions, since the eigenmodes in the massive case have a form similar to (5) [20], and the same critical points (7), in which they become singular on the horizon.
In the case of scalar theories, one should expect that similar behavior near the Unruh temperature can occur only when there is a mass or interaction [30; 31]. We also note that for fermions with imaginary rotation [32], a similar effect was observed, which may be due to the anti-Hermiticity of the spin operator \(i\cdot\Sigma_{3}\), as in the case of a cosmic string (17).
Generally speaking, when deriving (5) from (3), we neglected the contribution of the singular curvature [20; 21]. Consistently taking this curvature into account is related to the self-adjoint extension [33; 34; 35; 36] and could affect the result. However, as a rule, a self-adjoint extension leads to logarithmic corrections, and it seems unlikely that it will cancel the jump associated with changing the polynomial form of the stress-energy tensor.
The results obtained may be relevant to the physics of heavy ions. Indeed, in collisions of ions, quark-antiquark pairs are formed, which are affected by chromoelectric forces and move with acceleration. In [2; 3], an explanation of thermalization using the Unruh effect was proposed: due to equilibration with a thermal bath, a system acquires temperature, even in the case when there is no collective dynamics.
We can establish an evident duality between hadronic and cosmic strings: a hadronic string is stretched between a quark and an antiquark, which as a result move with acceleration and can therefore be described by Rindler coordinates (1). At the same time, the same coordinates correspond to the cosmic string, as we discussed in Subsection II.5. Thus, there are two dual pictures associated with the hadronic string and the cosmic string, which can be recalculated into each other. In particular, it would be interesting to investigate the consequences of the discussed phase transition for the hadronic string (see Discussion in [14]).
## V Conclusion
We have shown that in an accelerated fermionic gas there is a critical point when the temperature is equal to the Unruh temperature. At this point, the two lowest Matsubara modes become singular at the Rindler event horizon and abruptly change. The stress-energy tensor also changes; in particular, below the critical point nonperturbative terms odd in acceleration appear. There is a jump in the heat capacity, signaling a novel second-order phase transition that has features of both classical and quantum phase transitions. With a further decrease in temperature, there is an infinite series of similar critical points. This novel phase transition has a direct analogue for cosmic strings, when the density of the cosmic string becomes negative.
This picture confirms an earlier heuristic result based on a statistical approach, where the instability arises due to the anti-Hermiticity of the spin part of the boost operator. We also revealed the role of the boost anti-Hermiticity in the approach with the modes on the Rindler horizon.
In heavy ion collisions, a hadronic string with accelerated quarks is formed, which can be related to the cosmic string due to the same form of the metrics.
**Acknowledgements**
The authors are thankful to M. Bordag, M. S. Dvornikov, D. V. Fursaev and Yu. N. Obukhov for stimulating discussions and comments. The work was supported by Russian Science Foundation Grant No. 21-12-00237; the work of VIZ is partially supported by grant No. 0657-2020-0015 of the Ministry of Science and Higher Education of Russia.
|
2308.15193 | Rational torsion points on abelian surfaces with quaternionic
multiplication | Let $A$ be an abelian surface over $\mathbb{Q}$ whose geometric endomorphism
ring is a maximal order in a non-split quaternion algebra. Inspired by Mazur's
theorem for elliptic curves, we show that the torsion subgroup of
$A(\mathbb{Q})$ is $12$-torsion and has order at most $18$. Under the
additional assumption that $A$ is of $\mathrm{GL}_2$-type, we give a complete
classification of the possible torsion subgroups of $A(\mathbb{Q})$. | Jef Laga, Ciaran Schembri, Ari Shnidman, John Voight | 2023-08-29T10:21:32Z | http://arxiv.org/abs/2308.15193v1 | # Rational torsion points on abelian surfaces with quaternionic multiplication
###### Abstract.
Let \(A\) be an abelian surface over \(\mathbb{Q}\) whose geometric endomorphism ring is a maximal order in a non-split quaternion algebra. Inspired by Mazur's theorem for elliptic curves, we show that the torsion subgroup of \(A(\mathbb{Q})\) is \(12\)-torsion and has order at most \(18\). Under the additional assumption that \(A\) is of \(\operatorname{GL}_{2}\)-type, we give a complete classification of the possible torsion subgroups of \(A(\mathbb{Q})\).
###### Contents
* 1 Introduction
* 2 Quaternionic arithmetic
* 3 Galois actions, polarizations and endomorphisms
* 4 PQM surfaces over local and finite fields
* 5 Proof of Theorem 1.4: PQM surfaces of \(\operatorname{GL}_{2}\)-type
* 6 Proof of Theorem 1.1: reduction to \(\operatorname{GL}_{2}\)-type
* 7 Proof of Theorems 1.2 and 1.3: eliminating groups of order \(2^{i}3^{j}\)
* 8 Proof of Theorem 1.5: PQM Jacobians
## 1. Introduction
### Motivation
Let \(E\) be an elliptic curve over \(\mathbb{Q}\). In [10], Mazur famously showed that if a prime \(\ell\) divides the order of the torsion subgroup \(E(\mathbb{Q})_{\operatorname{tors}}\) of \(E(\mathbb{Q})\) then \(\ell\leq 7\). Combining with previous work of Kubert [12], Mazur deduced that \(\#E(\mathbb{Q})_{\operatorname{tors}}\leq 16\) and that \(E(\mathbb{Q})_{\operatorname{tors}}\) is isomorphic to one of fifteen finite abelian groups, each of which gives rise to a genus \(0\) modular curve with a well-known rational parametrization.
It is not known whether there is a uniform bound on the size of the rational torsion subgroup of abelian varieties of fixed dimension \(g\geq 2\) over a fixed number field. In fact, there is not even a single integer \(N\) for which it is known that there is no abelian surface over \(\mathbb{Q}\) with a torsion point of order \(N\). Indeed, determining rational points on Siegel modular threefolds with level structure seems challenging in general.
### Results
In this paper we study the torsion subgroup of abelian surfaces \(A\) over \(\mathbb{Q}\) whose geometric endomorphism ring is large. Namely, we suppose that the geometric endomorphism ring \(\operatorname{End}(A_{\overline{\mathbb{Q}}})\) is a maximal order \(\mathcal{O}\) in a division quaternion algebra over \(\mathbb{Q}\); we refer to these as \(\mathcal{O}\)-PQM surfaces ("potential quaternionic multiplication"). Such abelian surfaces are geometrically simple, so their torsion subgroup cannot be studied using torsion subgroups of elliptic curves. On the other hand, they give rise to rational points on certain
Shimura curves, much as elliptic curves over \(\mathbb{Q}\) give rise to rational points on modular curves. Thus \(\mathcal{O}\)-PQM surfaces are a natural place to explore torsion questions in higher dimension.
Our main results show that the torsion behaviour of \(\mathcal{O}\)-PQM surfaces is rather constrained.
**Theorem 1.1**.: _Let \(A\) be an \(\mathcal{O}\)-PQM abelian surface over \(\mathbb{Q}\) with a rational point of order \(\ell\), where \(\ell\) is a prime number. Then \(\ell=2\) or \(\ell=3\)._
**Theorem 1.2**.: _Each \(\mathcal{O}\)-PQM abelian surface \(A\) over \(\mathbb{Q}\) has \(\#A(\mathbb{Q})_{\mathrm{tors}}\leq 18\)._
The fact that the rational torsion on \(\mathcal{O}\)-PQM surfaces is uniformly bounded is neither new nor difficult to prove. Indeed, since \(\mathcal{O}\)-PQM surfaces have everywhere potentially good reduction (Lemma 4.1.2), local methods quickly show that \(\ell\mid\#A(\mathbb{Q})_{\mathrm{tors}}\) implies \(\ell\leq 19\) and that \(\#A(\mathbb{Q})_{\mathrm{tors}}\leq 72\) [1, Theorem 1.4]. The goal of this paper is instead to prove results which are as precise as possible.
Theorems 1.1 and 1.2 are optimal since it is known that each of the seven groups
\[\begin{gathered}\{1\},\,\mathbb{Z}/2\mathbb{Z},\,\mathbb{Z}/3 \mathbb{Z},\,(\mathbb{Z}/2\mathbb{Z})^{2}\\ \mathbb{Z}/6\mathbb{Z},\,(\mathbb{Z}/3\mathbb{Z})^{2},\,\mathbb{Z}/2 \mathbb{Z}\times(\mathbb{Z}/3\mathbb{Z})^{2}\end{gathered} \tag{1.2.1}\]
is isomorphic to \(A(\mathbb{Q})_{\mathrm{tors}}\) for some \(\mathcal{O}\)-PQM surface \(A/\mathbb{Q}\), with the largest group having order \(18\). Indeed, each of these groups arises as \(A(\mathbb{Q})_{\mathrm{tors}}\) for infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of such surfaces by [13, Theorem 1.1].
Our methods give the following more precise constraints on the group structure of \(A(\mathbb{Q})_{\mathrm{tors}}\).
**Theorem 1.3**.: _Let \(A\) be an \(\mathcal{O}\)-PQM abelian surface over \(\mathbb{Q}\). Then \(A(\mathbb{Q})_{\mathrm{tors}}\) is isomorphic either to one of the groups in (1.2.1) or to one of the following groups:_
\[\begin{gathered}\mathbb{Z}/4\mathbb{Z},\mathbb{Z}/2\mathbb{Z} \times\mathbb{Z}/4\mathbb{Z},(\mathbb{Z}/2\mathbb{Z})^{3},(\mathbb{Z}/2 \mathbb{Z})^{2}\times\mathbb{Z}/3\mathbb{Z},\\ \mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z},(\mathbb{Z}/2 \mathbb{Z})^{2}\times\mathbb{Z}/4\mathbb{Z},(\mathbb{Z}/4\mathbb{Z})^{2}.\end{gathered} \tag{1.2.2}\]
We leave open the question of whether any of the groups of (1.2.2) arise as \(A(\mathbb{Q})_{\mathrm{tors}}\) for some \(\mathcal{O}\)-PQM surface or not.
Theorem 1.3 can be interpreted as a non-existence result for non-special rational points on certain types of Shimura curves with level structure. Since the discriminant of \(\mathrm{End}(A_{\overline{\mathbb{Q}}})\) and level are unconstrained, the result covers infinitely many distinct such curves. However, as we explain below, our proof of Theorem 1.3 does not make direct use of the arithmetic of Shimura curves.
Whereas the theorems above consider general \(\mathcal{O}\)-PQM abelian surfaces, one is sometimes interested in surfaces with additional structure. For example, recall that \(A\) is of \(\mathrm{GL}_{2}\)-type if the endomorphism ring \(\mathrm{End}(A)\) is a quadratic ring. Modularity results (see Theorem 5.1.1) imply that an abelian variety \(A\) of \(\mathrm{GL}_{2}\)-type over \(\mathbb{Q}\) is a quotient of the modular Jacobian \(J_{1}(N)\) for some \(N\). More precisely, the isogeny class of \(A\) arises from a cuspidal newform of weight \(2\) and level \(N\), where \(A\) has conductor \(N^{2}\). Specializing our methods to this setting, we obtain the following complete classification.
**Theorem 1.4**.: _Let \(A\) be an \(\mathcal{O}\)-PQM surface over \(\mathbb{Q}\) of \(\mathrm{GL}_{2}\)-type. Then \(A(\mathbb{Q})_{\mathrm{tors}}\) is isomorphic to one of the following groups:_
\[\{1\},\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/3\mathbb{Z},(\mathbb{Z}/2\mathbb{Z})^{ 2},(\mathbb{Z}/3\mathbb{Z})^{2}.\]
_Every one of these groups arises as \(A(\mathbb{Q})_{\mathrm{tors}}\) for infinitely many \(\overline{\mathbb{Q}}\)-isomorphism classes of \(\mathcal{O}\)-PQM surfaces \(A\) over \(\mathbb{Q}\) of \(\mathrm{GL}_{2}\)-type._
_Remark 1.2.3_.: The proof shows that if the maximality assumption on \(\mathcal{O}\) is omitted, then a similar classification holds except we do not know whether the group \((\mathbb{Z}/2\mathbb{Z})^{3}\) arises or not.
Another natural class of abelian surfaces is Jacobians of genus two curves. Recall that for geometrically simple abelian surfaces, being a Jacobian is equivalent to carrying a principal polarization. Thus, the following result gives a near-classification for rational torsion subgroups of genus two Jacobians over \(\mathbb{Q}\) in the \(\mathcal{O}\)-PQM locus of the Siegel modular 3-fold \(\mathcal{A}_{2}\) parameterizing principally polarized abelian surfaces.
**Theorem 1.5**.: _Let \(J\) be an \(\mathcal{O}\)-PQM surface over \(\mathbb{Q}\) which is the Jacobian of a genus two curve over \(\mathbb{Q}\). Then \(J(\mathbb{Q})_{\mathrm{tors}}\) is isomorphic to one of the following groups:_
\[\{1\},\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/3\mathbb{Z},(\mathbb{Z}/2 \mathbb{Z})^{2},\mathbb{Z}/6\mathbb{Z},(\mathbb{Z}/3\mathbb{Z})^{2},\] \[\mathbb{Z}/4\mathbb{Z},\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/4 \mathbb{Z},(\mathbb{Z}/2\mathbb{Z})^{2}\times\mathbb{Z}/3\mathbb{Z},\mathbb{ Z}/4\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z},(\mathbb{Z}/4\mathbb{Z})^{2}\]
_In particular, \(\#J(\mathbb{Q})_{\mathrm{tors}}\leq 16\)._
The first six groups in the list above can be realized as \(J(\mathbb{Q})_{\mathrm{tors}}\); see Table 2. We do not know whether they can be realized infinitely often by \(\mathcal{O}\)-PQM Jacobians over \(\mathbb{Q}\).
### Methods
We first describe the proof of Theorem 1.4, which is almost entirely local in nature. Let \(A\) be an \(\mathcal{O}\)-PQM surface over \(\mathbb{Q}\) of \(\mathrm{GL}_{2}\)-type. We show that \(A\) has totally additive reduction at every prime \(p\) of bad reduction, meaning that the identity component of the special fiber of the Neron model at \(p\) is unipotent. It is well known that in this case the prime-to-\(p\) torsion subgroup of \(A(\mathbb{Q}_{p})\) embeds in the Neron component group of \(A\) at \(p\), and that this component group is controlled by the smallest field over which \(A\) acquires good reduction. Our proof of Theorem 1.4 therefore involves an analysis of this field extension; in particular, we show that its degree is coprime to \(\ell\) for any prime \(\ell\geq 5\). Applying these local arguments requires the existence of suitable primes of bad reduction, and breaks down when \(A\) has conductor of the form \(2^{n}\), \(3^{n}\), or \(6^{4}\). We handle these cases separately by invoking the modularity theorem. It turns out there is a single isogeny class whose conductor is of this form, namely the isogeny class of conductor \(3^{10}\), corresponding to a Galois orbit of newforms of level \(3^{5}=243\), with LMFDB label 243.2.a.d.
To prove Theorem 1.1, we need to exclude the existence of an \(\mathcal{O}\)-PQM surface \(A\) over \(\mathbb{Q}\) such that \(A[\ell](\mathbb{Q})\) is nontrivial for some prime \(\ell\geq 5\). By studying the interaction between the Galois action on the torsion points of \(A\) and the Galois action on \(\mathrm{End}(A_{\overline{\mathbb{Q}}})\), we show that such an \(A\) must necessarily be of \(\mathrm{GL}_{2}\)-type, so we may conclude using Theorem 1.4. The methods of this 'reduction to \(\mathrm{GL}_{2}\)-type' argument are surprisingly elementary. Aside from some calculations in the quaternion order \(\mathcal{O}\), the key observation is that in the non-\(\mathrm{GL}_{2}\)-type case, the geometric endomorphism algebra \(\mathrm{End}^{0}(A_{\overline{\mathbb{Q}}})\) contains a (unique) Galois-stable imaginary quadratic subfield, which is naturally determined by the (unique) polarization defined over \(\mathbb{Q}\).
To prove Theorems 1.2 and 1.3, we must constrain the remaining possibilities for \(A(\mathbb{Q})_{\mathrm{tors}}\), which is a group of order \(2^{i}3^{j}\) by Theorem 1.1. Our arguments here are ad hoc, relying on a careful analysis of the reduction of \(A\) modulo various primes via Honda-Tate theory (with the aid of the LMFDB [13]) to constrain the possible torsion groups, reduction types, and Galois action on the endomorphism ring. The proof of Theorem 1.5 is similar, but using the relationship between endomorphisms, polarizations, and level structures.
### Previous work
Rational torsion on \(\mathcal{O}\)-PQM surfaces was previously considered in the Ph.D. thesis of Clark [10, Chapter 5], but see the author's caveat emptor, indicating that the proofs of the main results of that chapter are incomplete.
### Future directions
Our methods use the maximality assumption on \(\operatorname{End}(A_{\overline{\mathbb{Q}}})\) in various places. It would be interesting and desirable to relax this condition, especially since groups of order \(12\) and \(18\) can indeed arise in genus two Jacobians with non-maximal PQM; see, for example, the curve \(y^{2}=24x^{5}+36x^{4}-4x^{3}-12x^{2}+1\) and [11, Remark 7.17]. It would also be interesting to systematically analyze rational points on (Atkin-Lehner quotients of) Shimura curves with level structure, for example to determine whether the remaining groups (1.2.2) arise or not. We hope to address this in future work.
### Organization
Sections 2-4 are preliminary, and the remaining sections are devoted to proving the main theorems of the introduction. As explained in SS1.3, we start by proving Theorem 1.4 because the other theorems depend on it.
Those who wish to take the shortest route to Theorem 1.4 (minus eliminating \((\mathbb{Z}/2\mathbb{Z})^{3}\)) only need to read Sections 3.2, 4 and 5. Eliminating the last group \((\mathbb{Z}/2\mathbb{Z})^{3}\) in Proposition 5.3.8 requires more algebraic preliminaries from Section 2 and 3.
### Acknowledgements
We would like to thank Davide Lombardo for interesting discussions related to this project. Schembri and Voight were supported by a Simons Collaboration Grant (550029, to JV). Part of this project was carried out while Laga visited Shnidman at the Hebrew University of Jerusalem. Shnidman was funded by the European Research Council (ERC, CurveArithmetic, 101078157).
### Notation
We fix the following notation for the remainder of this paper:
* \(B\): an indefinite (so \(B\otimes\mathbb{R}\simeq\operatorname{Mat}_{2}(\mathbb{R})\)) quaternion algebra over \(\mathbb{Q}\) of discriminant \(\operatorname{disc}(B)\neq 1\);
* \(\operatorname{trd}(b)\), \(\operatorname{nrd}(b)\) and \(\bar{b}\): reduced trace, reduced norm, and canonical involution of an element \(b\in B\) respectively;
* \(\mathcal{O}\): a choice of maximal order of \(B\);
* \(\bar{F}\): a choice of algebraic closure of a field \(F\);
* \(\operatorname{Gal}_{F}\): the absolute Galois group of \(F\);
* \(\operatorname{End}(A)\): the endomorphism ring of an abelian variety \(A\) defined over \(F\);
* \(\operatorname{End}^{0}(A)=\operatorname{End}(A)\otimes\mathbb{Q}\): the endomorphism algebra of \(A\);
* \(\operatorname{NS}(A)\): the Neron-Severi group of \(A\);
* \(A_{K}\): base change of \(A/F\) along a field extension \(K/F\);
* \(\left(\frac{m,n}{F}\right)\): the quaternion algebra over \(F\) with basis \(\{1,i,j,ij\}\) such that \(i^{2}=m,j^{2}=n\) and \(ij=-ji\);
* \(D_{n}:\) the dihedral group of order \(2n\).
We say an abelian surface \(A\) over a field \(F\) is an \(\mathcal{O}\)-PQM surface if there is an isomorphism \(\operatorname{End}(A_{\bar{F}})\simeq\mathcal{O}\). \(\mathcal{O}\)-PQM surfaces over \(\mathbb{Q}\) are the central object of interest in this paper, but some of our results apply to abelian surfaces whose geometric endomorphism ring is a possibly non-maximal order in a non-split quaternion algebra. We call such surfaces simply PQM surfaces.
We emphasize that this is a restrictive definition of 'PQM': we require that \(\operatorname{End}(A_{\bar{F}})\) does not merely contain such a quaternion order, but is equal to it. In particular, under our terminology, a PQM surface \(A\) is geometrically simple.
Concerning actions: we will view Galois actions as _right_ actions. We will view \(\operatorname{End}(A)\) as acting on \(A\) on the _left_. If a group \(G\) acts on a set \(X\) on the right, we write \(X^{G}\) for the set of \(G\)-fixed points.
## 2. Quaternionic arithmetic
This section collects some algebraic calculations in the quaternion order \(\mathcal{O}\). It can be safely skipped on a first pass; the reader can return to it when these calculations are used.
### The normalizer of a maximal order
We recall the following characterization of the normalizer \(N_{B^{\times}}(\mathcal{O})\) of \(\mathcal{O}\) in \(B^{\times}\) (with respect to the conjugation action).
**Lemma 2.1.1**.: _An element of \(B^{\times}/\mathbb{Q}^{\times}\) lies in \(N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\) if and only if it can be represented by an element of \(\mathcal{O}\) of reduced norm dividing \(\operatorname{disc}(B)\)._
Proof.: An element \(b\in B\) lies in \(N_{B^{\times}}(\mathcal{O})\) if and only if it lies in the local normalizer \(N_{(B\otimes\mathbb{Q}_{p})^{\times}}(\mathcal{O}\otimes\mathbb{Z}_{p})\) for all primes \(p\). If \(p\) does not divide \(\operatorname{disc}(B)\), then this normalizer group equals \(\mathbb{Q}_{p}^{\times}(\mathcal{O}\otimes\mathbb{Z}_{p})^{\times}\)[20, (23.2.4)]. If \(p\) divides \(\operatorname{disc}(B)\), this group equals \((B\otimes\mathbb{Q}_{p})^{\times}\) ((23.2.8) in op. cit.). If \(b\) has norm dividing \(\operatorname{disc}(B)\), then this description shows that \(b\) lies in all local normalizer groups. Conversely, if \(b\) normalizes \(\mathcal{O}\) then this description shows that there exists a finite adele \((\lambda_{p})_{p}\) such that \(\lambda_{p}b\in(\mathcal{O}\otimes\mathbb{Z}_{p})^{\times}\) for all \(p\nmid\operatorname{disc}(B)\) and such that \(\operatorname{nrd}(\lambda_{p}b)\) has \(p\)-adic valuation \(\leq 1\) for all \(p\mid\operatorname{disc}(B)\). Since \(\mathbb{Z}\) has class number one, there exists \(\lambda\in\mathbb{Q}^{\times}\) such that \(\lambda\lambda_{p}^{-1}\in\mathbb{Z}_{p}^{\times}\) for all \(p\) and so \(\lambda b\in\mathcal{O}\) has norm dividing \(\operatorname{disc}(B)\), as desired.
We recall for future reference that the quotient of \(N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\) by the subgroup \(\mathcal{O}^{\times}/\{\pm 1\}\) is by definition the Atkin-Lehner group \(W\) of \(\mathcal{O}\), an elementary abelian \(2\)-group whose elements can be identified with positive divisors of \(\operatorname{disc}(B)\).
### Dihedral actions on \(\mathcal{O}\)
For reasons that will become clear in SS3.2, we are interested in subgroups \(G\subset\operatorname{Aut}(\mathcal{O})\) isomorphic to \(D_{n}\) for some \(n\in\{1,2,3,4,6\}\). In this section we describe these subgroups very explicitly.
By the Skolem-Noether theorem, every ring automorphism of \(\mathcal{O}\) is of the form \(x\mapsto b^{-1}xb\) for some \(b\in B^{\times}\) normalising \(\mathcal{O}\), and \(b\) is uniquely determined up to \(\mathbb{Q}^{\times}\)-multiples. Therefore \(\operatorname{Aut}(\mathcal{O})\simeq N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\).
If \(b\in B^{\times}\), write \([b]\) for its class in \(B^{\times}/\mathbb{Q}^{\times}\).
**Lemma 2.2.1**.: _Every element of \(N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\) of order \(2\) is represented by an element \(b\in\mathcal{O}\) such that \(b^{2}=m\neq 1\) is an integer dividing \(\operatorname{disc}(B)\). Moreover \(\mathcal{O}^{\langle b\rangle}=\{x\in\mathcal{O}\mid b^{-1}xb=x\}\) is isomorphic to an order in \(\mathbb{Q}(\sqrt{m})\) containing \(\mathbb{Z}[\sqrt{m}]\)._
Proof.: By Lemma 2.1.1, we may choose a representative \(b\in N_{B^{\times}}(\mathcal{O})\) lying in \(\mathcal{O}\) whose norm \(\operatorname{nrd}(b)\) divides \(\operatorname{disc}(B)\). Since the element has order \(2\), \(m:=b^{2}=-\operatorname{nrd}(b)\) is an integer. We have \(m\neq 1\) since otherwise \(b^{2}=1\), hence \(b=\pm 1\in\mathbb{Q}^{\times}\), which is trivial in \(N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\). This implies \(\mathcal{O}^{\langle b\rangle}=\{x\in\mathcal{O}\mid b^{-1}xb=x\}\) is an order in \(B^{\langle b\rangle}=\mathbb{Q}(b)\) containing \(\mathbb{Z}[b]\simeq\mathbb{Z}[\sqrt{m}]\), as claimed.
**Lemma 2.2.2**.: _Let \(G\subset N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\) be a subgroup isomorphic to \(D_{2}=C_{2}\times C_{2}\). Then there exist elements \(i,j,k\in\mathcal{O}\) such that \(B\) has basis \(\{1,i,j,k\}\), such that \(i^{2}=m\), \(j^{2}=n\) and \(k^{2}=t\) all divide \(\operatorname{disc}(B)\), such that \(ij=-ji\) and \(ij\in\mathbb{Q}^{\times}k\), and such that \(G=\{1,[i],[j],[k]\}\). Moreover, \(t\) equals \(-mn\) up to squares._
Proof.: By Lemma 2.2.1, we can pick representatives \(i,j,k\in\mathcal{O}\) of the nontrivial elements of \(G\) that each square to an integer dividing \(\operatorname{disc}(B)\). Since \(G\) is commutative, \(ji=\lambda ij\) for some \(\lambda\in\mathbb{Q}^{\times}\). Comparing norms shows that \(\lambda=\pm 1\). If \(\lambda=1\), then \(ij=ji\) but this would imply that \(B\) is commutative, contradiction. Therefore \(ij=-ji\). Finally, since \([i][j]=[k]\), \(k\in\mathbb{Q}^{\times}ij\). Taking norms, we see that \(t\) equals \(-mn\) up to squares.
**Lemma 2.2.3**.: _Let \(G\subset N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\) be a subgroup isomorphic to \(D_{4}\). Then there exist \(i,j\in\mathcal{O}\) such that \(B\) has basis \(\{1,i,j,ij\}\), such that \(i^{2}=-1\), \(j^{2}=m\) divides \(\operatorname{disc}(B)\) and \(ij=-ji\), and such that \(G=\langle[1+i],[j]\rangle\). Moreover, \(2\mid\operatorname{disc}(B)\)._
Proof.: The fact that such \(i,j\in B\) exist follows from [20, SS32.5 and SS32.6] (itself based on results of [10]). By \(\mathbb{Q}^{\times}\)-scaling \(j\) we may assume that \(j^{2}=m\) is a squarefree integer. Since \(1+i,j\in N_{B^{\times}}(\mathcal{O})\), Lemma 2.1.1 shows that \(i,j\in\mathcal{O}\) and \(m\mid\operatorname{disc}(B)\) and \(\operatorname{nrd}(1+i)=2\mid\operatorname{disc}(B)\).
**Lemma 2.2.4**.: _Let \(G\subset N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times}\) be a subgroup isomorphic to \(D_{3}\) or \(D_{6}\). Then there exist elements \(\omega,j\in\mathcal{O}\) such that \(B\) has basis \(\{1,\omega,j,\omega j\}\), such that \(\omega^{3}=1\), \(j^{2}=m\mid\operatorname{disc}(B)\) and \(\omega j=j\bar{\omega}=j(-1-\omega)\), and such that \(G=\langle[1+\omega],[j]\rangle\) if \(G\simeq D_{3}\) and \(G=\langle[1-\omega],[j]\rangle\) if \(G\simeq D_{6}\). Moreover, if \(G\simeq D_{6}\) then \(3\mid\operatorname{disc}(B)\)._
Proof.: Identical to that of Lemma 2.2.3, again using [20, SS32.5 and SS32.6] and Lemma 2.1.1.
### Fixed point subgroups modulo \(N\)
For reasons similar to those of SS2.2, we study the fixed points of \(G\)-actions on \(\mathcal{O}/N\mathcal{O}\) for subgroups \(G\subset\operatorname{Aut}(\mathcal{O})\) isomorphic to \(D_{n}\) for some \(n\in\{1,2,3,4,6\}\) and integers \(N\geq 1\) of interest.
**Theorem 2.3.1**.: _Let \(G\) be a subgroup of \(\operatorname{Aut}(\mathcal{O})\) isomorphic to \(D_{n}\) for some \(n\in\{1,2,3,4,6\}\)._
1. _Suppose that_ \(N\) _is coprime to_ \(2\) _and_ \(3\)_. Then_ \((\mathcal{O}/N\mathcal{O})^{G}\) _is isomorphic to_ \((\mathbb{Z}/N\mathbb{Z})^{2}\) _if_ \(G=D_{1}\) _and isomorphic to_ \(\mathbb{Z}/N\mathbb{Z}\) _if_ \(G=D_{2},D_{3},D_{4}\) _or_ \(D_{6}\)_._
2. _The group_ \((\mathcal{O}/3\mathcal{O})^{G}\) _is isomorphic to_ \((\mathbb{Z}/3\mathbb{Z})^{2}\) _if_ \(G=D_{1}\)_; isomorphic to_ \(\mathbb{Z}/3\mathbb{Z}\) _if_ \(G=D_{2},D_{4},D_{6}\)_; and isomorphic to_ \(\mathbb{Z}/3\mathbb{Z}\) _or_ \((\mathbb{Z}/3\mathbb{Z})^{2}\) _if_ \(G=D_{3}\)_._
3. _We have_ \[(\mathcal{O}/2\mathcal{O})^{G}\simeq\begin{cases}(\mathbb{Z}/2\mathbb{Z})^{2}, (\mathbb{Z}/2\mathbb{Z})^{3}\text{ or }(\mathbb{Z}/2\mathbb{Z})^{4}&\text{ if }G=D_{1},\\ (\mathbb{Z}/2\mathbb{Z})^{2}\text{ or }(\mathbb{Z}/2\mathbb{Z})^{3}&\text{ if }G=D_{2},\\ (\mathbb{Z}/2\mathbb{Z})^{2}&\text{ if }G=D_{4},\\ \mathbb{Z}/2\mathbb{Z}&\text{ if }G=D_{3}\text{ or }D_{6}.\end{cases}\]
Proof.: The reduction map \(r_{N}\colon\mathcal{O}^{G}\otimes\mathbb{Z}/N\mathbb{Z}\to(\mathcal{O}/N \mathcal{O})^{G}\) is injective and its cokernel is isomorphic to the \(N\)-torsion of the group cohomology \(H^{1}(G,\mathcal{O})\). Indeed, this can be seen by taking \(G\)-fixed points of the exact sequence \(0\to\mathcal{O}\to\mathcal{O}\to\mathcal{O}/N\to 0\). The group \(\mathcal{O}^{G}\) is isomorphic to \(\mathbb{Z}^{2}\) if \(G=D_{1}\) and to \(\mathbb{Z}\) if \(G=D_{2},D_{3},D_{4}\) or \(D_{6}\). Since the finite abelian group \(H^{1}(G,\mathcal{O})\) is killed by the order of \(G\), Part (a) immediately follows. To prove (b) and (c), it
therefore suffices to prove that \(H^{1}(G,\mathcal{O})[6]\) is a subgroup of \((\mathbb{Z}/2\mathbb{Z})^{2}\) if \(G=D_{1}\); isomorphic to \((\mathbb{Z}/2\mathbb{Z})\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\) if \(G=D_{2}\); a subgroup of \(\mathbb{Z}/3\mathbb{Z}\) if \(G=D_{3}\); isomorphic to \((\mathbb{Z}/2\mathbb{Z})\) if \(G=D_{4}\); and trivial if \(G=D_{6}\). Since \(H^{1}(G,\mathcal{O}\otimes\mathbb{Z}_{p})\simeq H^{1}(G,\mathcal{O})\otimes \mathbb{Z}_{p}\) for all primes \(p\) and since \(\operatorname{Aut}(\mathcal{O}\otimes\mathbb{Z}_{p})\) has only finitely many subgroups isomorphic to \(G\) up to conjugacy, this is in principle a finite computation; we give a more detailed proof below.
Case \(G=D_{1}.\) Since \(G=D_{1}=C_{2}\) has order \(2\), \(H^{1}(G,\mathcal{O})\) is \(2\)-torsion and is isomorphic to the cokernel of \(r_{2}\colon\mathcal{O}^{G}\otimes\mathbb{Z}/2\mathbb{Z}\to(\mathcal{O}/2 \mathcal{O})^{G}\). By Lemma 2.2.1, \(\mathcal{O}^{G}\simeq\mathbb{Z}^{2}\) and so this cokernel is either \(0,\mathbb{Z}/2\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\). It follows that \(H^{1}(G,\mathcal{O})\simeq 0,\mathbb{Z}/2\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\).
Case \(G=D_{2}.\) By Lemma 2.2.2, there exist \(i,j,k\in\mathcal{O}\) such that \(i^{2}=m\), \(j^{2}=n\) and \(k^{2}=t\) are all integers dividing \(\operatorname{disc}(B)\), such that \(ij=-ji\) and \(k\in\mathbb{Q}^{\times}ij\) and such that \(G=\{1,[i],[j],[k]\}\). Let \(S_{i}=\mathcal{O}\cap\mathbb{Q}(i)\), \(S_{j}=\mathcal{O}\cap\mathbb{Q}(j)\), \(S_{k}=\mathcal{O}\cap\mathbb{Q}(k)\). Then \(S_{i}\) is an order in \(\mathbb{Q}(i)\) containing \(\mathbb{Z}[i]\), and similarly for \(S_{j}\) and \(S_{k}\). Since \(-mn\) equals \(t\) up to squares, upon reordering \(\{i,j,k\}\) we may assume that \(\mathbb{Z}[i]\) is maximal at \(2\). Therefore \(\mathbb{Z}[\sqrt{m}]\otimes(\mathbb{Z}/2\mathbb{Z})=S_{i}\otimes(\mathbb{Z}/2\mathbb{Z})\subset(\mathcal{O}/2\mathcal{O})\) is a subring on which \(G\) acts trivially. It follows that \((\mathbb{Z}/2\mathbb{Z})^{2}\subset(\mathcal{O}/2\mathcal{O})^{G}\). We will now show that \(G\) acts nontrivially on \((\mathcal{O}/2\mathcal{O})\), so assume by contradiction that this action is trivial. By the classification of involutions on finite free \(\mathbb{Z}\)-modules, every such involution is a direct sum of involutions of the form \(x\mapsto x\), \(x\mapsto-x\) and \((x,y)\mapsto(y,x)\). If \(G=\langle[i],[j]\rangle\) acts trivially on \(\mathcal{O}/2\mathcal{O}\), then both \([i],[j]\in\operatorname{Aut}(\mathcal{O})\) are direct sums of involutions of the first two kinds. It follows that \(\mathcal{O}\) is a direct sum of the eigenspaces corresponding to the eigenvalues of \([i]\) and \([j]\). It follows that \(\mathcal{O}=\mathbb{Z}1\oplus\mathbb{Z}i\oplus\mathbb{Z}j\oplus\mathbb{Z}k\). This implies that the discriminant of \(\mathcal{O}\) is divisible by \(4\), contradicting the fact that \(\mathcal{O}\) is maximal at \(2\). We conclude that \((\mathcal{O}/2\mathcal{O})^{G}\) is \((\mathbb{Z}/2\mathbb{Z})^{2}\) or \((\mathbb{Z}/2\mathbb{Z})^{3}\) and since \(\#G\) is coprime to \(3\), this proves that \(H^{1}(G,\mathcal{O})[6]\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\).
Case \(G=D_{4}.\) Let \(i,j\in\mathcal{O}\) be elements satisfying the conclusion of Lemma 2.2.3, so \(G=\langle[1+i],[j]\rangle\). Since \(\mathbb{Z}[i]\) is maximal at \(2\), the map \(\mathbb{Z}[i]\otimes\mathbb{Z}/2\mathbb{Z}\to\mathcal{O}/2\mathcal{O}\) is injective. Since \(G\) acts trivially on the image of this map, \((\mathcal{O}/2\mathcal{O})^{G}\) contains \((\mathbb{Z}/2\mathbb{Z})^{2}\). We need to show that \((\mathcal{O}/2\mathcal{O})^{G}=(\mathbb{Z}/2\mathbb{Z})^{2}\). To prove this, it is enough to show that \((\mathcal{O}/2\mathcal{O})^{\langle 1+i\rangle}=(\mathbb{Z}/2\mathbb{Z})^{2}\). Since \(\mathcal{O}\) is ramified at \(2\), there exists a unique conjugacy class of embeddings \(\mathbb{Z}_{2}[i]\hookrightarrow\mathcal{O}_{\mathbb{Z}_{2}}\)[11, Proposition 30.5.3]. Therefore it is enough to verify that \((\mathcal{O}/2\mathcal{O})^{\langle 1+i\rangle}=(\mathbb{Z}/2\mathbb{Z})^{2}\) in a single example, for which this can be checked explicitly. Indeed, one may take \(B=\left(\frac{-1,6}{\mathbb{Q}}\right)\), which has maximal order with \(\mathbb{Z}\)-basis \(\{1,(1+i+ij)/2,(1-i+ij)/2,(j+ij)/2\}\). Since \(\#G\) is coprime to \(3\), we conclude that \(H^{1}(G,\mathcal{O})[6]=\mathbb{Z}/2\mathbb{Z}\).
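The explicit check in this example is easy to reproduce in exact arithmetic. Below is a minimal Python/sympy sketch of our own: it encodes the multiplication \(i^{2}=-1\), \(j^{2}=6\), \(ij=-ji\) (writing \(k=ij\)), takes the maximal order basis above, and counts the fixed points of conjugation by \(1+i\) on \(\mathcal{O}/2\mathcal{O}\).

```python
from sympy import Matrix, Rational

# B = (-1, 6 / QQ): basis 1, i, j, k with k = ij; then i^2 = -1, j^2 = 6,
# k^2 = 6, ij = -ji, ik = -j, ki = j, jk = -6i, kj = 6i.
def mult(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return [a1*a2 - b1*b2 + 6*c1*c2 + 6*d1*d2,
            a1*b2 + b1*a2 - 6*c1*d2 + 6*d1*c2,
            a1*c2 + c1*a2 - b1*d2 + d1*b2,
            a1*d2 + d1*a2 + b1*c2 - c1*b2]

h = Rational(1, 2)
# columns = the maximal order basis 1, (1+i+ij)/2, (1-i+ij)/2, (j+ij)/2
E = Matrix([[1, 0, 0, 0], [h, h, 0, h], [h, -h, 0, h], [0, 0, h, h]]).T

u, ubar = [1, 1, 0, 0], [1, -1, 0, 0]                           # u = 1+i, nrd(u) = 2
conj = lambda x: [h*t for t in mult(mult(ubar, x), u)]          # x -> u^{-1} x u
M = E.inv() * Matrix([conj(list(col)) for col in E.T.tolist()]).T * E
assert all(e.is_integer for e in M)                             # 1+i normalizes O

fixed = [v for v in range(16)                                   # brute force on O/2O
         if all((sum(int(M[r, s])*((v >> s) & 1) for s in range(4))
                 - ((v >> r) & 1)) % 2 == 0 for r in range(4))]
print(len(fixed))  # 4, so (O/2O)^<1+i> = (Z/2Z)^2
```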
Case \(G=D_{3},D_{6}.\) Let \(\omega,j\in\mathcal{O}\) be elements satisfying the conclusion of Lemma 2.2.4. Let \(C_{n}\leq D_{n}\) be the cyclic normal subgroup of order \(n\) for \(n\in\{3,6\}\). The low terms of the Lyndon-Hochschild-Serre spectral sequence give rise to the exact sequence
\[0\to H^{1}(C_{2},\mathcal{O}^{C_{n}})\to H^{1}(G,\mathcal{O})\to H^{1}(C_{n}, \mathcal{O})^{C_{2}}\to H^{2}(C_{2},\mathcal{O}^{C_{n}}). \tag{2.3.2}\]
The subring \(\mathcal{O}^{C_{n}}\) equals \(\mathcal{O}^{\langle 1\pm\omega\rangle}=\mathbb{Z}[\omega]\) and \(C_{2}=D_{n}/C_{n}\) acts on \(\mathcal{O}^{C_{n}}\) via conjugation \(\omega\mapsto\bar{\omega}\). A cyclic group cohomology calculation shows that \(H^{i}(C_{2},\mathbb{Z}[\omega])\) is trivial for all \(i\geq 1\). Therefore \(H^{1}(G,\mathcal{O})\simeq H^{1}(C_{n},\mathcal{O})^{C_{2}}\). Assume \(G=D_{6}\). Using the analogous exact sequence to (2.3.2) for the subgroup \(C_{3}\leq C_{6}\), we get \(H^{1}(C_{6},\mathcal{O})\simeq H^{1}(C_{3},\mathcal{O})^{C_{2}}\). Since \(C_{2}\) acts trivially on \(C_{3}=\{1,g,g^{2}\}\) and acts as \(-1\) on \(\{x\in\mathcal{O}\mid x+gx+g^{2}x=0\}\), it will act as \(-1\) on \(H^{1}(C_{3},\mathcal{O})\simeq(\mathbb{Z}/3\mathbb{Z})^{r}\), so \(H^{1}(C_{3},\mathcal{O})^{C_{2}}=0\) and so \(H^{1}(G,\mathcal{O})\simeq H^{1}(C_{6},\mathcal{O})^{C_{2}}\subset H^{1}(C_{6}, \mathcal{O})\simeq H^{1}(C_{3},\mathcal{O})^{C_{2}}\) is zero too in this case. It remains to consider the case \(G=D_{3}\). Then \(H^{1}(G,\mathcal{O})\simeq H^{1}(C_{3},\mathcal{O})^{C_{2}}\). Let \(g\in C_{3}\) be a generator, given by conjugating by \(1+\omega\). Let
\(L=\{x\in\mathcal{O}\mid x+gx+g^{2}x=0\}\). Using the basis \(\{1,\omega,j,\omega j\}\) of \(B\), we see that \(L=\mathcal{O}\cap\mathbb{Q}(\omega)\cdot j\). Using the explicit description of group cohomology of cyclic groups, \(H^{1}(C_{3},\mathcal{O})\) is isomorphic to \(L/(1-g)\mathcal{O}\). Since \((1-g)\mathcal{O}\) contains \((1-g)L=(1-\omega)L\), \(H^{1}(C_{3},\mathcal{O})\) is a quotient of \(L/(1-\omega)L\). Since \((1-\omega)^{2}L=3L\) and \(L/3L\simeq(\mathbb{Z}/3\mathbb{Z})^{2}\), \(L/(1-\omega)L\) is of order \(3\). This shows that \(H^{1}(C_{3},\mathcal{O})=0\) or \(\mathbb{Z}/3\mathbb{Z}\), so \(H^{1}(D_{3},\mathcal{O})=0\) or \(\mathbb{Z}/3\mathbb{Z}\), as claimed.
_Remark 2.3.3_.: A calculation with the quaternion algebra package in Magma shows that all the possibilities in Theorem 2.3.1 do indeed occur.
The next three lemmas give some more precise information about subgroups \(G\subset\operatorname{Aut}(\mathcal{O})\) for which \((\mathcal{O}/2\mathcal{O})^{G}\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\). In these lemmas, we will use the fact that if \(2\mid\operatorname{disc}(B)\), there exists a unique ring homomorphism \(\mathcal{O}/2\mathcal{O}\to\mathbb{F}_{4}\), see [25, Theorem 13.3.11].
**Lemma 2.3.4**.: _Let \(b\in\mathcal{O}\cap N_{B^{\times}}(\mathcal{O})\) be an element with \(b^{2}=m\mid\operatorname{disc}(B)\) and \(m\neq 1\). Write \(F\subset\mathcal{O}/2\mathcal{O}\) for the subset centralized by the reduction of \(b\) in \(\mathcal{O}/2\mathcal{O}\). Then \(F\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\) if and only if \(2\mid\operatorname{disc}(B)\) and \(m\equiv 3\mod 4\). In that case \(F\) equals the subset of elements of \(\mathcal{O}/2\mathcal{O}\) whose image under the ring homomorphism \(\mathcal{O}/2\mathcal{O}\to\mathbb{F}_{4}\) lands in \(\mathbb{F}_{2}\)._
Proof.: Suppose \(F\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\). We first show that \(2\mid\operatorname{disc}(B)\). If not, then \(m\) is odd by Lemma 2.2.1, \(\mathcal{O}/2\mathcal{O}\simeq\operatorname{Mat}_{2}(\mathbb{F}_{2})\) and \(F\) is the fixed points of conjugating by an element of order dividing \(2\) in \(\operatorname{GL}_{2}(\mathbb{F}_{2})\). The identity fixes all of \(\mathcal{O}/2\mathcal{O}\simeq(\mathbb{Z}/2\mathbb{Z})^{4}\), and there is only one involution in \(\operatorname{GL}_{2}(\mathbb{F}_{2})\) up to conjugacy, whose centralizer in \(\operatorname{Mat}_{2}(\mathbb{F}_{2})\) one computes to be \((\mathbb{Z}/2\mathbb{Z})^{2}\); neither case yields \((\mathbb{Z}/2\mathbb{Z})^{3}\), which shows that \(2\mid\operatorname{disc}(B)\). We now show that \(2\nmid m\). If \(2\mid m\), then since \(m\) is squarefree \(b\) is a \(2\)-adic uniformizer of \(\mathcal{O}\otimes\mathbb{Z}_{2}\). Then there exists an unramified quadratic subring \(S\subset\mathcal{O}\otimes\mathbb{Z}_{2}\) isomorphic to \(\mathbb{Z}_{2}\left[\frac{-1+\sqrt{-3}}{2}\right]\) such that \(\mathcal{O}\otimes\mathbb{Z}_{2}=S+S\cdot b\) [25, Theorem 13.3.11]. This shows that conjugation by \(b\) acts via \(x+yb\mapsto\bar{x}+\bar{y}b\). This map has \(4\) fixed points, hence we obtain a contradiction and \(m\) is odd. It follows that \(F\) is given by the fixed points of conjugating by an element of \((\mathcal{O}/2\mathcal{O})^{\times}\). This element is trivial if and only if \(b\in 1+2\mathcal{O}\). Since \(\mathcal{O}\otimes\mathbb{Z}_{2}\) consists of all integral elements of \(B\otimes\mathbb{Q}_{2}\) [25, Proposition 13.3.4] and since \(b\in\mathcal{O}\), this is equivalent to \((b-1)/2\) being integral at \(2\), that is to say to \(m\equiv 1\mod 4\). Since \(F\neq\mathcal{O}/2\mathcal{O}\), the element is nontrivial, so \(m\equiv 3\mod 4\); this proves the forward direction of the lemma. For the other direction, note that \((\mathcal{O}/2\mathcal{O})^{\times}\) (where \(\mathcal{O}\) is ramified at \(2\)) has a unique involution up to conjugacy, which can be checked to have \((\mathbb{Z}/2\mathbb{Z})^{3}\) fixed points in the presentation (6.1.1).
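In the split case the centralizer computation invoked above is a finite check. An illustrative brute force (ours, assuming nothing beyond \(\mathcal{O}/2\mathcal{O}\simeq\operatorname{Mat}_{2}(\mathbb{F}_{2})\)):

```python
from itertools import product

def mul(m, n):  # product in Mat_2(F_2)
    return tuple(tuple(sum(m[i][k]*n[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

mats = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)]
I2 = ((1, 0), (0, 1))

for g in mats:
    invertible = (g[0][0]*g[1][1] - g[0][1]*g[1][0]) % 2 == 1
    if invertible and g != I2 and mul(g, g) == I2:     # g is an involution
        fixed = [m for m in mats if mul(g, m) == mul(m, g)]
        print(g, len(fixed))   # each of the three involutions fixes 4 matrices
```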
**Lemma 2.3.5**.: _Let \(b\in\mathcal{O}\cap N_{B^{\times}}(\mathcal{O})\) be an element with \(b^{2}=m\mid\operatorname{disc}(B)\) and \(m\neq 1\). Suppose that the conjugation action of \(b\) on \(\mathcal{O}/2\mathcal{O}\) has fixed points \(\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\). Then there exists no \(x\in\mathcal{O}/4\mathcal{O}\) with \(x\equiv 1\mod 2\mathcal{O}\) and \(b^{-1}xbx=-1\)._
Proof.: Suppose that \(x\in\mathcal{O}/4\mathcal{O}\) is such an element. Let \(y=bx\). Since \(mb^{-1}=b\), multiplying the equation \(b^{-1}xbx=-1\) by \(m\) shows that \(y^{2}=-m\) in \(\mathcal{O}/4\mathcal{O}\). By Lemma 2.3.4, \(2\mid\operatorname{disc}(B)\) and \(m\equiv 3\mod 4\), so \(y^{2}=1\) in \(\mathcal{O}/4\mathcal{O}\). Since \(x\equiv 1\mod 2\mathcal{O}\), \(y=bx\equiv b\mod 2\mathcal{O}\). We may therefore write \(y=b+2z\) for some \(z\in\mathcal{O}/4\mathcal{O}\). We compute, in \(\mathcal{O}/4\mathcal{O}\), that
\[y^{2}=(b+2z)(b+2z)=b^{2}+2(bz+zb)+4z^{2}=m+2(bz+zb)=3+2(bz+zb).\]
Since \(y^{2}=1\), this shows that \(2(bz+zb)=2\). Write \(\bar{b}\) and \(\bar{z}\) for the mod \(2\) reductions of \(b\) and \(z\). Then the above identity implies that
\[\bar{b}\bar{z}+\bar{z}\bar{b}=1. \tag{2.3.6}\]
Since \(2\) is ramified in \(B\) and \(\mathcal{O}\) is maximal, there exists a surjective ring homomorphism \(\lambda\colon\mathcal{O}/2\mathcal{O}\to\mathbb{F}_{4}\). Applying \(\lambda\) to (2.3.6) shows that \(\lambda(\bar{b})\lambda(\bar{z})+\lambda(\bar{z})\lambda(\bar{b})=\lambda(1)=1\). Since \(\mathbb{F}_{4}\) is commutative, the left hand side of this equation also equals \(2\lambda(\bar{b})\lambda(\bar{z})=0\), which is a contradiction.
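The expansion of \((b+2z)^{2}\) used above can be double-checked symbolically; here is a short sanity check (ours) using sympy's noncommutative symbols:

```python
from sympy import Symbol, expand

b = Symbol('b', commutative=False)
z = Symbol('z', commutative=False)
# expand keeps the order of noncommuting factors, so b*z and z*b stay distinct:
print(expand((b + 2*z)*(b + 2*z)))   # b**2 + 2*b*z + 2*z*b + 4*z**2
```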
Recall from Lemma 2.2.2 that a subgroup \(G\leq N_{B^{\times}}(\mathcal{O})\) isomorphic to \(C_{2}\times C_{2}\) can be generated by elements \(i,j\in\mathcal{O}\) with \(ij=-ji\), \(i^{2}=m\), \(j^{2}=n\) and \(m,n\mid\operatorname{disc}(B)\).
**Lemma 2.3.7**.: _Let \(G\subset N_{B^{\times}}(\mathcal{O})\) be a subgroup isomorphic to \(C_{2}\times C_{2}\). Then \((\mathcal{O}/2\mathcal{O})^{G}\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\) if and only if (in the above notation) \(2\mid\operatorname{disc}(B)\) and \(m,n\equiv 3\mod 4\)._
Proof.: Suppose first that \((\mathcal{O}/2\mathcal{O})^{G}\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\). Then the conjugation involutions \([i]\) and \([j]\) each have \(2^{3}\) or \(2^{4}\) fixed points on \(\mathcal{O}/2\mathcal{O}\). At least one of them, say \([j]\), has \(2^{3}\) fixed points. By Lemma 2.3.4, \(2\mid\operatorname{disc}(B)\) and \(n\equiv 3\mod 4\). If \(2\mid m\), then \(i\) is a \(2\)-adic uniformizer and the action of \(i\) on \(\mathcal{O}/2\mathcal{O}\) would have \(2^{2}\) fixed points (by an argument similar to the proof of Lemma 2.3.4). So \(m\) is odd. If \(m\equiv 1\mod 4\), then the \(2\)-adic Hilbert symbol of \((m,n)\) is trivial, contradicting the fact that \(2\mid\operatorname{disc}(B)\) and \(B\simeq\left(\frac{m,n}{\mathbb{Q}}\right)\). We conclude that \(m\equiv 3\mod 4\). The converse follows from Lemma 2.3.4.
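The Hilbert symbol step is elementary to make explicit. For \(2\)-adic units (odd integers) \(a,b\) one has \((a,b)_{2}=(-1)^{\varepsilon(a)\varepsilon(b)}\) with \(\varepsilon(x)=(x-1)/2\bmod 2\) (a standard formula, e.g. from Serre's A Course in Arithmetic); the following sketch (ours) shows why \(m\equiv 1\bmod 4\) forces the symbol to be trivial:

```python
def hilbert2_odd(a, b):
    # 2-adic Hilbert symbol for 2-adic units (odd integers):
    # (a, b)_2 = (-1)^(eps(a)*eps(b)) with eps(x) = (x - 1)/2 mod 2.
    assert a % 2 != 0 and b % 2 != 0
    eps = lambda x: ((x - 1) // 2) % 2
    return (-1) ** (eps(a) * eps(b))

print(hilbert2_odd(5, -1))    # +1: m = 5 = 1 (mod 4) makes the symbol trivial
print(hilbert2_odd(-1, -1))   # -1: m = n = 3 (mod 4) ramifies (m,n | Q) at 2
```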
## 3. Galois actions, polarizations and endomorphisms
This section collects some preliminaries concerning the arithmetic of PQM surfaces. In particular, we study the Galois action on the endomorphism algebra, the set of polarizations, the torsion points and the interaction between these. The most important subsection is §3.2, where the endomorphism field of a PQM surface is introduced.
### Abelian surfaces of \(\operatorname{GL}_{2}\)-type
Recall that an abelian surface \(A\) over a number field \(F\) is said to be of \(\operatorname{GL}_{2}\)-type if \(\operatorname{End}^{0}(A)\) is a quadratic field extension of \(\mathbb{Q}\). We will show that if \(A\) is geometrically simple and \(F\) admits a real place, then this field must be real quadratic. (The geometrically simple hypothesis is necessary; for example, the simple modular abelian surface \(J_{1}(13)\) satisfies \(\operatorname{End}^{0}(J_{1}(13))\simeq\mathbb{Q}(\sqrt{-3})\).) This is well known over \(\mathbb{Q}\) (see [10, Lemma 2.3]), which suffices for our purposes, but we also give an argument, possibly of independent interest, that works over any field contained in \(\mathbb{R}\). (We thank Davide Lombardo for suggesting it.)
**Lemma 3.1.1**.: _Let \(A/\mathbb{R}\) be an abelian surface. Then \(\operatorname{rk}\,\operatorname{NS}(A)\geq\operatorname{rk}\,\operatorname{ NS}(A_{\mathbb{C}})-1\)._
Proof.: There exists a two-dimensional \(\mathbb{R}\)-vector space \(W\), a lattice \(\Lambda\subset W_{\mathbb{C}}:=W\otimes\mathbb{C}\) stable under the automorphism \(\sigma\) induced by complex conjugation on the second factor, and a complex analytic isomorphism \(A(\mathbb{C})\simeq W_{\mathbb{C}}/\Lambda\) that intertwines complex conjugation on \(A(\mathbb{C})\) with \(\sigma\). Under this isomorphism, \(\operatorname{NS}(A_{\mathbb{C}})\) can be identified with the set of \(\mathbb{Z}\)-bilinear alternating forms \(E\colon\Lambda\times\Lambda\to\mathbb{Z}\) with the property that the \(\mathbb{R}\)-linear extension \(E_{\mathbb{R}}\) of \(E\) to \(W_{\mathbb{C}}\) satisfies \(E_{\mathbb{R}}(iv,iw)=E_{\mathbb{R}}(v,w)\) for all \(v,w\in\Lambda\otimes\mathbb{R}=W_{\mathbb{C}}\). By [11, Chapter IV, Theorem (3.4)] such an \(E\) lies in \(\operatorname{NS}(A)\) if and only if the associated Hermitian form \(E_{\mathbb{R}}(iv,w)+iE_{\mathbb{R}}(v,w)\) is \(\mathbb{R}\)-valued on \(W\times W\), that is to say \(E_{\mathbb{R}}(W,W)=0\). Since the intersection \(\Lambda^{\prime}=\Lambda\cap W\) is a lattice in \(W\), the condition \(E_{\mathbb{R}}(W,W)=0\) is equivalent to \(E(\Lambda^{\prime},\Lambda^{\prime})=0\). In conclusion, \(\operatorname{NS}(A)=\ker(\operatorname{NS}(A_{\mathbb{C}})\to\operatorname{Hom}(\wedge^{2}(\Lambda^{\prime}),\mathbb{Z}))\), where the map sends \(E\) to its restriction to \(\Lambda^{\prime}\times\Lambda^{\prime}\). Since the target of this map is isomorphic to \(\mathbb{Z}\), the lemma is proved.
**Proposition 3.1.2**.: _Let \(A/\mathbb{R}\) be a geometrically simple abelian surface. Then \(\operatorname{End}(A)\) is isomorphic to \(\mathbb{Z}\) or an order in a real quadratic field._
Proof.: By the classification of endomorphism algebras of complex abelian surfaces [1, Proposition 5.5.7, Exercise 9.10(1) and Exercise 9.10(4)], \(\operatorname{End}^{0}(A_{\mathbb{C}})\) is isomorphic to either \(\mathbb{Q}\), a real quadratic field, a non-split indefinite quaternion algebra or a quartic CM field. The proposition is clear in the first two cases, so we may assume that we are in one of the latter two cases.
Since \(\operatorname{End}^{0}(A)\) acts on the \(\mathbb{Q}\)-homology of \(A(\mathbb{R})^{\circ}\simeq S^{1}\times S^{1}\), there is a (nonzero, hence injective) map \(\operatorname{End}^{0}(A)\hookrightarrow\operatorname{Mat}_{2}(\mathbb{Q})\). Since \(\operatorname{End}^{0}(A_{\mathbb{C}})\) does not embed in \(\operatorname{Mat}_{2}(\mathbb{Q})\), \(\operatorname{End}^{0}(A)\neq\operatorname{End}^{0}(A_{\mathbb{C}})\) and so \(\operatorname{End}^{0}(A)\) is at most two-dimensional. It remains to exclude that \(\operatorname{End}^{0}(A)\) is an imaginary quadratic field, so assume for contradiction that this is the case. If \(\operatorname{End}^{0}(A_{\mathbb{C}})\) is a quaternion algebra, Lemma 3.1.1 shows that \(\operatorname{rk}(\operatorname{NS}(A))\geq 3-1=2\), contradicting the fact that \(\operatorname{End}^{0}(A)\) is imaginary quadratic. If \(\operatorname{End}^{0}(A_{\mathbb{C}})\) is a quartic CM field \(F\), this CM field has at least two quadratic subfields (namely its unique real quadratic subfield and \(\operatorname{End}^{0}(A)\)) so it must be a biquadratic extension of \(\mathbb{Q}\). A counting argument then shows that every CM type of \(F\) is imprimitive. This implies [1, Theorem 3.5] that \(A_{\mathbb{C}}\) is not simple. We again obtain a contradiction and have completed all cases of the proof.
### The endomorphism field of a \(\operatorname{PQM}\) surface
Let \(F\) be a field of characteristic zero and \(A/F\) a \(\operatorname{PQM}\) surface. The absolute Galois group \(\operatorname{Gal}_{F}\) acts on \(\operatorname{End}(A_{\bar{F}})\) on the right by ring automorphisms via \(\phi^{\sigma}(a)=\phi\left(a^{\sigma^{-1}}\right)^{\sigma}\) for \(\sigma\in\operatorname{Gal}_{F}\), \(\phi\in\operatorname{End}(A_{\bar{F}})\) and \(a\in A(\bar{F})\). The kernel of this action cuts out a Galois extension \(L/F\) over which all the endomorphisms of \(A_{\bar{F}}\) are defined. Following [10] we call \(L\) the endomorphism field of \(A\). This determines an injective map \(\rho_{\operatorname{End}}\colon\operatorname{Gal}(L/F)\to\operatorname{Aut}(\operatorname{End}(A_{\bar{F}}))\). We recall the results of [10] studying this map which are relevant for our purposes. Write \(C_{n}\) (resp. \(D_{n}\)) for the cyclic (resp. dihedral) group of order \(n\) (resp. \(2n\)). Note the isomorphisms \(D_{1}\simeq C_{2}\) and \(D_{2}\simeq C_{2}\times C_{2}\).
**Proposition 3.2.1**.: _Let \(A/F\) be a \(\operatorname{PQM}\) surface with endomorphism field \(L\) and let \(G=\operatorname{Gal}(L/F)\). Then \(G\simeq C_{n}\) or \(D_{n}\) for some \(n\in\{1,2,3,4,6\}\). If \(F\) admits an embedding into \(\mathbb{R}\), then \(G\simeq D_{n}\) for some \(n\in\{1,2,3,4,6\}\)._
Proof.: The classification of finite subgroups of \(B^{\times}/\mathbb{Q}^{\times}\) shows that \(G\) is isomorphic to \(C_{n}\) or \(D_{n}\) for some \(n\in\{1,2,3,4,6\}\)[10, Proposition 2.1]. It therefore suffices to exclude that \(G\) is isomorphic to \(C_{1},C_{3},C_{4}\) or \(C_{6}\) if there exists an embedding \(\iota\colon F\hookrightarrow\mathbb{R}\). If \(G\) is isomorphic to one of these groups, then \(\operatorname{End}^{0}(A)\) is isomorphic to \(B\) (if \(G\) is trivial) or an imaginary quadratic field [10, Theorem 3.4(C)]. This contradicts Proposition 3.1.2.
**Lemma 3.2.2**.: _Let \(A\) be a \(\operatorname{PQM}\) surface over a number field \(F\) admitting a real place. Then \(A\) is of \(\operatorname{GL}_{2}\)-type if and only if the endomorphism field \(L/F\) is a quadratic extension._
Proof.: By Proposition 3.1.2, \(A\) is of \(\operatorname{GL}_{2}\)-type if and only if \(\operatorname{End}(A)\neq\mathbb{Z}\). By [10, Theorem 3.4(C)], \(\operatorname{End}(A)\neq\mathbb{Z}\) if and only if \(L\) is a cyclic extension of \(F\). By Proposition 3.2.1, \(L/F\) is cyclic if and only if it is a quadratic extension.
Assume now that \(A\) is an \(\mathcal{O}\)-\(\operatorname{PQM}\) surface and fix an isomorphism \(\operatorname{End}(A_{\bar{F}})\simeq\mathcal{O}\). By the Skolem-Noether theorem, every ring automorphism of \(\mathcal{O}\) is of the form \(x\mapsto b^{-1}xb\) for some \(b\in B^{\times}\) normalising \(\mathcal{O}\), and \(b\) is uniquely determined up to \(\mathbb{Q}^{\times}\)-multiples. Therefore
\(\operatorname{Aut}(\mathcal{O})\simeq N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{ \times}\subset B^{\times}/\mathbb{Q}^{\times}\), hence the map \(\operatorname{Gal}(L/F)\to\operatorname{Aut}(\operatorname{End}(A_{\bar{F}}))\) can be viewed as an injective homomorphism
\[\rho_{\operatorname{End}}\colon\operatorname{Gal}(L/F)\to\operatorname{Aut}( \mathcal{O})\simeq N_{B^{\times}}(\mathcal{O})/\mathbb{Q}^{\times} \tag{3.2.3}\]
whose image is isomorphic to \(C_{n}\) or \(D_{n}\) for some \(n\in\{1,2,3,4,6\}\) by Proposition 3.2.1.
_Remark 3.2.4_.: The existence of a polarization of a certain type puts restrictions on the Galois group of the endomorphism field, see [1, Theorem 3.4]. In particular, that theorem shows that if an \(\mathcal{O}\)-PQM surface \(A\) is principally polarized over \(F\) then this Galois group is \(\{1\}\), \(C_{2}\) or \(C_{2}\times C_{2}\).
For future reference we record the following result of Silverberg [15, Proposition 2.2].
**Proposition 3.2.5** (Silverberg).: _Let \(N\geq 3\) be an integer and suppose that the \(\operatorname{Gal}_{F}\)-action on \(A[N]\) is trivial. Then \(L=F\)._
We also record the useful fact that the endomorphism field is preserved by quadratic twist.
**Lemma 3.2.6**.: _Let \(A/F\) be a \(\operatorname{PQM}\) surface and \(M/F\) a quadratic extension. Let \(A^{M}\) be the quadratic twist of \(A\) along \(M/F\). Then under the identification \(\operatorname{End}(A_{\bar{F}})=\operatorname{End}((A^{M})_{\bar{F}})\), \(\rho_{\operatorname{End},A}=\rho_{\operatorname{End},A^{M}}\)._
Proof.: Under the identification \(\operatorname{End}((A^{M})_{\bar{F}})=\operatorname{End}(A_{\bar{F}})\), the twisted Galois action differs from the original one by conjugation by the quadratic twisting cocycle, which takes values in \(\{\pm 1\}\); since \(-1\) is central in \(\operatorname{End}(A_{\bar{F}})\), this conjugation is trivial and the two actions coincide.
### Polarizations and positive involutions
Let \(A\) be an abelian surface over a field \(F\) of characteristic zero. Recall that a polarization is an ample class \(L\) in \(\operatorname{NS}(A)\). Such a class gives rise to an isogeny \(\lambda_{L}\colon A\to A^{\vee}\), and we frequently identify \(L\) with this isogeny. There exist unique positive integers \(d_{1}\mid d_{2}\) such that \(\ker(\lambda_{L})(\bar{F})\simeq(\mathbb{Z}/d_{1})^{2}\times(\mathbb{Z}/d_{2})^{2}\); the pair \((d_{1},d_{2})\) is called the type of the polarization and the integer \(\deg(L)=d_{1}d_{2}\) is called its degree. We say two polarizations \(L\) and \(L^{\prime}\) are \(\mathbb{Q}^{\times}\)-equivalent if there exist nonzero integers \(m,n\) such that \(mL=nL^{\prime}\), and we call a \(\mathbb{Q}^{\times}\)-equivalence class of polarizations a \(\mathbb{Q}^{\times}\)-polarization. Every \(\mathbb{Q}^{\times}\)-polarization contains a unique polarization of type \((1,d)\) for some \(d\geq 1\).
Recall that a positive involution of \(B\) is a \(\mathbb{Q}\)-linear involution \(\iota\colon B\to B\) satisfying \(\iota(ab)=\iota(b)\iota(a)\) and \(\operatorname{trd}(\iota(a)a)\in\mathbb{Q}_{\geq 0}\) for all \(a,b\in B\). By the Skolem-Noether theorem, every such involution is of the form \(b\mapsto\mu^{-1}\bar{b}\mu\), where \(\bar{b}=\operatorname{trd}(b)-b\) denotes the canonical involution and \(\mu\in B^{\times}\) is an element with \(\mu^{2}\in\mathbb{Q}_{<0}\). Two such elements \(\mu,\mu^{\prime}\in B^{\times}\) give rise to the same involution if and only if \(\mu\) is a \(\mathbb{Q}^{\times}\)-multiple of \(\mu^{\prime}\).
To combine these two notions, suppose that \(\operatorname{End}(A)=\operatorname{End}(A_{\bar{F}})\simeq\mathcal{O}\); let us fix such an isomorphism to identify \(\operatorname{End}(A)\) with \(\mathcal{O}\). Given a polarization \(L\) of \(A\), the Rosati involution on \(\operatorname{End}^{0}(A)\), defined by \(f\mapsto\lambda_{L}^{-1}\circ f^{\vee}\circ\lambda_{L}\), corresponds to a positive involution \(\iota_{L}\) of \(B\).
**Proposition 3.3.1**.: _The assignment \(L\mapsto\iota_{L}\) induces a bijection between the set of \(\mathbb{Q}^{\times}\)-polarizations of \(A\) and the set of positive involutions of \(B\). In addition, if \(L\) is a polarization and \(\mu\in B^{\times}\) is an element such that \(\iota_{L}\) is of the form \(b\mapsto\mu^{-1}\bar{b}\mu\), then_
\[\deg(L)\equiv\operatorname{disc}(B)\cdot\operatorname{nrd}(\mu)\mod\mathbb{Q} ^{\times 2}. \tag{3.3.2}\]
Proof.: This can be deduced from [1, Theorem 3.1], but can also be proved purely algebraically as follows. Choose an element \(\nu\in\mathcal{O}\) with \(\nu^{2}=-\operatorname{disc}(B)\). Then it is well
known [20, Lemma 43.6.23] that \(A\) has a unique principal polarization \(M\) such that \(\iota_{M}(b)=\nu^{-1}\bar{b}\nu\) for all \(b\in B\). To determine all polarizations of \(A\), consider the maps
\[(\operatorname{NS}(A)\otimes\mathbb{Q})\setminus\{0\}\xrightarrow{\alpha}\{x \in B^{\times}\mid\nu^{-1}\bar{x}\nu=x\}\xrightarrow{\beta}\{\mu\in B^{ \times}\mid\bar{\mu}=-\mu\},\]
where \(\alpha(L)=\lambda_{M}^{-1}\circ\lambda_{L}\) and \(\beta(x)=\nu x\). Since \(L\mapsto\lambda_{L}\) induces a bijection \(\operatorname{NS}(A)\otimes\mathbb{Q}\to\{f\in\operatorname{Hom}(A,A^{\vee}) \mid f^{\vee}=f\}\), \(\alpha\) is a bijection. Moreover, \(\beta\) is a bijection by a direct computation. In addition, one can also compute that the Rosati involution associated to a Neron-Severi class \(L\) is given by conjugation by \(\beta(\alpha(L))\). Both \((\operatorname{NS}(A)\otimes\mathbb{Q})\setminus\{0\}\) and \(\{\mu\in B^{\times}\mid\bar{\mu}=-\mu\}\) have evident \(\mathbb{Q}^{\times}\)-actions, and their quotients are given by the set of \(\mathbb{Q}^{\times}\)-polarizations and the set of positive involutions on \(B\) respectively. Combining these observations shows that \(L\mapsto\iota_{L}\) is indeed a bijection between the set of \(\mathbb{Q}^{\times}\)-polarizations and the set of positive involutions. To check (3.3.2), we compute that for \(\alpha(L)=x\) and \(\mu=\nu x\): \(\deg(L)=\operatorname{nrd}(x)=\operatorname{nrd}(\mu)/\operatorname{nrd}(\nu )\equiv\operatorname{disc}(B)\cdot\operatorname{nrd}(\mu)\mod\mathbb{Q}^{ \times 2}\).
_Remark 3.3.3_.: If we want to avoid choosing an isomorphism \(\operatorname{End}(A)\simeq\mathcal{O}\), we may rephrase Proposition 3.3.1 as saying that there is a bijection between \(\mathbb{Q}^{\times}\)-polarizations on \(A\) and positive involutions on the quaternion algebra \(\operatorname{End}^{0}(A)\).
Now suppose that \(A/F\) is an abelian surface with \(\operatorname{End}(A_{\bar{F}})\simeq\mathcal{O}\). Recall from §3.2 that \(\operatorname{Gal}_{F}\) acts on \(\operatorname{End}(A_{\bar{F}})\) by ring automorphisms. If \(L\) is a polarization on \(A_{\bar{F}}\), the Rosati involution associated to \(L\) is of the form \(b\mapsto\mu^{-1}\bar{b}\mu\) for some \(\mu\in\operatorname{End}^{0}(A_{\bar{F}})\), uniquely determined up to \(\mathbb{Q}^{\times}\)-multiple. Therefore the imaginary quadratic field \(\mathbb{Q}(\mu)\subset\operatorname{End}^{0}(A_{\bar{F}})\) is independent of the choice of \(\mu\).
**Corollary 3.3.4**.: _The map \(L\mapsto\mathbb{Q}(\mu)\) constructed above induces a bijection between \(\mathbb{Q}^{\times}\)-polarizations of \(A_{\bar{F}}\) and imaginary quadratic fields contained in \(\operatorname{End}^{0}(A_{\bar{F}})\). A polarization descends to \(A\) if and only if the corresponding imaginary quadratic field is \(\operatorname{Gal}_{F}\)-stable._
Proof.: The bijection part immediately follows from Proposition 3.3.1, together with the fact that the set of positive involutions on \(\operatorname{End}^{0}(A_{\bar{F}})\) is in bijection with the set of imaginary quadratic subfields of \(\operatorname{End}^{0}(A_{\bar{F}})\).
Since taking the Rosati involution is \(\operatorname{Gal}_{F}\)-equivariant, this bijection preserves the Galois action on both sides. This induces a bijection on the \(\operatorname{Gal}_{F}\)-fixed points on both sides, justifying the last sentence of the corollary.
### The distinguished quadratic subring
If \(A/\mathbb{Q}\) is an \(\mathcal{O}\)-PQM surface of \(\operatorname{GL}_{2}\)-type, then the torsion groups \(A[n]\) are modules over \(S/nS\), where \(S\) is the real quadratic ring \(\operatorname{End}(A)\). If \(A\) is not of \(\operatorname{GL}_{2}\)-type, then \(\operatorname{End}(A)=\mathbb{Z}\), and so it may seem that there is no structure to exploit. However, we have seen in Corollary 3.3.4 that any polarization of \(A\) determines a \(\operatorname{Gal}_{\mathbb{Q}}\)-stable imaginary quadratic subring \(S\subset\operatorname{End}(A_{\overline{\mathbb{Q}}})\).
**Definition 3.4.1**.: Let \(A/\mathbb{Q}\) be an \(\mathcal{O}\)-PQM surface. If \(A\) is of \(\operatorname{GL}_{2}\)-type, let \(M=\operatorname{End}^{0}(A)\). Otherwise, let \(M\subset\operatorname{End}^{0}(A_{\overline{\mathbb{Q}}})\) be the imaginary quadratic field corresponding to the unique primitive polarization on \(A\) via Corollary 3.3.4. We call \(M\subset\operatorname{End}^{0}(A_{\overline{\mathbb{Q}}})\) the distinguished quadratic subfield and \(S=M\cap\operatorname{End}(A_{\overline{\mathbb{Q}}})\) the distinguished quadratic subring of \(A\).
The next proposition describes the distinguished quadratic subring more explicitly.
**Proposition 3.4.2**.: _Let \(A/\mathbb{Q}\) be an \(\mathcal{O}\)-PQM surface and let \(S\) be its distinguished quadratic subring, seen as a subring of \(\mathcal{O}\) using an isomorphism \(\mathcal{O}\simeq\operatorname{End}(A_{\overline{\mathbb{Q}}})\). Let \(G\) be the Galois group of the endomorphism field of \(A\) (as in §3.2)._
(a) \(S\) _is isomorphic to an order of_ \(\mathbb{Q}(\sqrt{m})\) _containing_ \(\mathbb{Z}[\sqrt{m}]\) _for some_ \(m\in\mathbb{Z}_{\geq 2}\) _dividing_ \(\operatorname{disc}(B)\) _if_ \(G=C_{2}\)_; to an order of_ \(\mathbb{Q}(\sqrt{-m})\) _containing_ \(\mathbb{Z}[\sqrt{-m}]\) _for some_ \(m\in\mathbb{Z}_{\geq 2}\) _dividing_ \(\operatorname{disc}(B)\) _if_ \(G=D_{2}\)_; to_ \(\mathbb{Z}[i]\) _with_ \(i^{2}=-1\) _if_ \(G=D_{4}\)_; and to_ \(\mathbb{Z}[\omega]\) _with_ \(\omega^{3}=1\) _if_ \(G=D_{3}\) _or_ \(D_{6}\)_._
(b) \(S\) _is an order in a quadratic field, maximal away from_ \(2\) _and unramified away from_ \(6\operatorname{disc}(B)\)_._
Proof.: The description of \(S\) in the \(C_{2}\) case follows from Lemma 2.2.1. If \(G\not\simeq C_{2}\) (in other words, if \(A\) is not of \(\operatorname{GL}_{2}\)-type by Lemma 3.2.2), then Corollary 3.3.4 shows that \(S\) is the unique imaginary quadratic subring of \(\operatorname{End}(A_{\overline{\mathbb{Q}}})\) that is \(\operatorname{Gal}_{\mathbb{Q}}\)-stable and that is optimally embedded, i.e. \((S\otimes\mathbb{Q})\cap\mathcal{O}=S\). So to prove (a) it suffices to find a subring of \(\mathcal{O}\) satisfying the stated conditions. This follows from the explicit description of the \(G\)-action given in §2.2. Part (b) immediately follows from the first part.
### The enhanced Galois representation
Let \(A\) be an \(\mathcal{O}\)-PQM surface over a field \(F\) of characteristic zero, and fix an isomorphism \(\mathcal{O}\simeq\operatorname{End}(A_{\bar{F}})\) so that \(\mathcal{O}\) acts on \(A_{\bar{F}}\) on the left. In §3.2 we described how \(\operatorname{Gal}_{F}\) acts on the endomorphism ring \(\mathcal{O}\); this action is encoded by the homomorphism \(\rho_{\operatorname{End}}\colon\operatorname{Gal}_{F}\to\operatorname{Aut}(\mathcal{O})\) of (3.2.3). On the other hand, \(\operatorname{Gal}_{F}\) acts on the torsion points of \(A_{\bar{F}}\). In this section we formalize the interaction of these two \(\operatorname{Gal}_{F}\)-actions using a homomorphism that we call the enhanced Galois representation. This basic definition might be of independent interest and will be used in the proof of Theorem 1.4, more specifically to exclude \((\mathbb{Z}/2\mathbb{Z})^{3}\) in the \(\operatorname{GL}_{2}\)-type case in Proposition 5.3.8.
Let \(I\subset\mathcal{O}\) be a \(\operatorname{Gal}_{F}\)-stable two-sided ideal, for example \(I=N\cdot\mathcal{O}\) for some integer \(N\geq 1\). The subgroup \(A[I](\bar{F})\subset A(\bar{F})\) of points killed by \(I\) is a \(\operatorname{Gal}_{F}\)-module. Let \(\operatorname{GL}(A[I])\) be the group of \(\mathbb{Z}\)-module automorphisms of \(A[I](\bar{F})\), seen as acting on \(A[I](\bar{F})\) on the right. The \(\operatorname{Gal}_{F}\)-action on \(A[I]\) is encoded in a homomorphism \(\rho_{I}\colon\operatorname{Gal}_{F}\to\operatorname{GL}(A[I])\). The left \(\mathcal{O}\)-action on \(A_{\bar{F}}\) induces an \(\mathcal{O}/I\)-action on \(A[I](\bar{F})\) such that
\[(a\cdot P)^{\sigma}=a^{\sigma}\cdot P^{\sigma} \tag{3.5.1}\]
for all \(P\in A[I](\bar{F}),\ a\in\mathcal{O}\) and \(\sigma\in\operatorname{Gal}_{F}.\) Let \(\operatorname{Aut}^{\circ}(A[I])\) be the subgroup of pairs \((\gamma,\varphi)\in\operatorname{Aut}(\mathcal{O})\times\operatorname{GL}(A[I])\) such that \((a\cdot P)^{\varphi}=a^{\gamma}\cdot P^{\varphi}\) for all \(a\in\mathcal{O}\) and \(P\in A[I](\bar{F})\). The compatibility (3.5.1) implies that the product homomorphism \(\rho_{\operatorname{End}}\times\rho_{I}\colon\operatorname{Gal}_{F}\to \operatorname{Aut}(\mathcal{O})\times\operatorname{GL}(A[I])\) lands in \(\operatorname{Aut}^{\circ}(A[I])\), so we obtain a homomorphism
\[\rho_{I}^{\circ}\colon\operatorname{Gal}_{F}\to\operatorname{Aut}^{\circ}(A[I ]). \tag{3.5.2}\]
We now identify \(\operatorname{Aut}^{\circ}(A[I])\) with an explicit semidirect product. Consider the group \(\operatorname{Aut}(\mathcal{O})\ltimes(\mathcal{O}/I)^{\times}\), where \(\operatorname{Aut}(\mathcal{O})\) acts on \((\mathcal{O}/I)^{\times}\) via restricting the standard right \(\operatorname{Aut}(\mathcal{O})\)-action on \(\mathcal{O}/I\) to \((\mathcal{O}/I)^{\times}\). Multiplication in this group is given by \((\gamma_{1},x_{1})\cdot(\gamma_{2},x_{2})=(\gamma_{1}\gamma_{2},x_{1}^{\gamma_{2}}x_{2})\). The \(\mathcal{O}/I\)-module \(A[I](\bar{F})\) is free of rank \(1\) [11]. Let \(Q\in A[I](\bar{F})\) be an \(\mathcal{O}/I\)-module generator. For every \((\gamma,x)\in\operatorname{Aut}(\mathcal{O})\ltimes(\mathcal{O}/I)^{\times}\), let \(\varphi_{(\gamma,x)}\) be the element of \(\operatorname{GL}(A[I])\) sending \(a\cdot Q\) to \(a^{\gamma}x\cdot Q\) for all \(a\in\mathcal{O}/I\).
**Lemma 3.5.3**.: _The map \((\gamma,x)\mapsto(\gamma,\varphi_{(\gamma,x)})\) induces an isomorphism \(\operatorname{Aut}(\mathcal{O})\ltimes(\mathcal{O}/I)^{\times}\xrightarrow{ \sim}\operatorname{Aut}^{\circ}(A[I])\)._
Proof.: This is a formal verification. The inverse of this isomorphism is given by sending \((\gamma,\varphi)\) to \((\gamma,x)\), where \(x\in(\mathcal{O}/I)^{\times}\) is the unique element with \(Q^{\varphi}=x\cdot Q\).
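The semidirect-product law of Lemma 3.5.3 can be exercised on a toy example. In the sketch below (ours; it stands in for \(\operatorname{Aut}(\mathcal{O})\) and \((\mathcal{O}/I)^{\times}\) with \(\operatorname{GL}_{2}(\mathbb{F}_{2})\) acting on itself by conjugation, which is only a model), the point is to see the law \((\gamma_{1},x_{1})\cdot(\gamma_{2},x_{2})=(\gamma_{1}\gamma_{2},x_{1}^{\gamma_{2}}x_{2})\) in action:

```python
import random
from itertools import product

def mul(m, n):  # product in Mat_2(F_2)
    return tuple(tuple(sum(m[i][k]*n[k][j] for k in range(2)) % 2
                       for j in range(2)) for i in range(2))

def inv(m):     # every matrix in GL_2(F_2) has det 1, so inverse = adjugate
    return ((m[1][1], m[0][1]), (m[1][0], m[0][0]))

def act(x, g):  # right action x^g = g^{-1} x g
    return mul(mul(inv(g), x), g)

def sdp(p, q):  # (g1, x1) * (g2, x2) = (g1 g2, x1^{g2} x2)
    (g1, x1), (g2, x2) = p, q
    return (mul(g1, g2), mul(act(x1, g2), x2))

gl2 = [((a, b), (c, d)) for a, b, c, d in product(range(2), repeat=4)
       if (a*d - b*c) % 2 == 1]

for _ in range(100):
    p, q, r = [(random.choice(gl2), random.choice(gl2)) for _ in range(3)]
    assert sdp(sdp(p, q), r) == sdp(p, sdp(q, r))
print("semidirect product law is associative on 100 random triples")
```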
Using Lemma 3.5.3, we may view the homomorphism (3.5.2) as a homomorphism
\[\rho_{I}^{\circ}\colon\operatorname{Gal}_{F}\to\operatorname{Aut}(\mathcal{O} )\ltimes(\mathcal{O}/I)^{\times}. \tag{3.5.4}\]
**Definition 3.5.5**.: The homomorphism (3.5.2) or, after a choice of \(\mathcal{O}/I\)-module generator of \(A[I](\bar{F})\), the homomorphism (3.5.4), is called the enhanced Galois representation associated to \(A\) and \(I\).
Since \(\operatorname{Aut}^{\circ}(A[I])\) is a subgroup of \(\operatorname{Aut}(\mathcal{O})\times\operatorname{GL}(A[I])\), it comes equipped with projection homomorphisms \(\pi_{1}\colon\operatorname{Aut}^{\circ}(A[I])\to\operatorname{Aut}(\mathcal{ O})\) and \(\pi_{2}\colon\operatorname{Aut}^{\circ}(A[I])\to\operatorname{GL}(A[I])\) satisfying \(\rho_{\operatorname{End}}=\pi_{1}\circ\rho_{I}^{\circ}\) and \(\rho_{I}=\pi_{2}\circ\rho_{I}^{\circ}\).
_Remark 3.5.6_.: Suppose that \(\rho_{\operatorname{End}}\) is trivial, in other words \(\operatorname{End}(A)=\operatorname{End}(A_{\bar{F}})\simeq\mathcal{O}\). Then the homomorphism (3.5.4) lands in the subgroup \(\{1\}\ltimes(\mathcal{O}/I)^{\times}\) and hence simplifies to a homomorphism \(\operatorname{Gal}_{F}\to(\mathcal{O}/I)^{\times}\). This recovers the well known description [10] of the Galois representation \(\rho_{I}\) in this case.
We show that usually, \(\rho_{I}^{\circ}\) does not contain more information than \(\rho_{I}\) itself, using the following well known lemma.
**Lemma 3.5.7**.: _Let \(G\) be a finite subgroup of \(\operatorname{GL}_{n}(\mathbb{Z})\) for some \(n\geq 1\) and let \(\operatorname{red}_{N}\colon G\to\operatorname{GL}_{n}(\mathbb{Z}/N\mathbb{Z})\) be the restriction of the reduction map. Then \(\operatorname{red}_{N}\) is injective if \(N\geq 3\), and every element of the kernel of \(\operatorname{red}_{2}\) has order \(1\) or \(2\)._
Proof.: This is a classical result of Minkowski [14]; see [11, Theorem 4.1] for an accessible reference.
**Proposition 3.5.8**.: _Suppose that \(I=N\cdot\mathcal{O}\) for some integer \(N\geq 3\). Then \(\pi_{2}\) is injective on the image of \(\rho_{I}^{\circ}\). Consequently, the image of \(\rho_{I}^{\circ}\) is isomorphic to the image of \(\rho_{I}\)._
Proof.: Choose an \(\mathcal{O}/N\)-module generator \(Q\in A[N](\bar{F})\). If \((\gamma,\varphi)\in\ker(\pi_{2})\), then \(\varphi=\operatorname{Id}\) and \(a\cdot Q=a^{\gamma}\cdot Q\) for all \(a\in\mathcal{O}/N\). So \(a=a^{\gamma}\) for all \(a\in\mathcal{O}/N\). Therefore \(\gamma\in\ker(\operatorname{Aut}(\mathcal{O})\to\operatorname{Aut}(\mathcal{O}/N))\). By Lemma 3.5.7, this kernel does not contain any nontrivial element of finite order. However, the image of \(\rho_{\operatorname{End}}\) is finite (Proposition 3.2.1). We conclude that \(\ker(\pi_{2})\cap\operatorname{image}(\rho_{I}^{\circ})=\{1\}\).
_Remark 3.5.9_.: We can also define \(\ell\)-adic versions of the enhanced Galois representation: for every prime \(\ell\) this is a group homomorphism \(\operatorname{Gal}_{F}\to\operatorname{Aut}(\mathcal{O})\ltimes(\mathcal{O} \otimes\mathbb{Z}_{\ell})^{\times}\) encoding both the \(\operatorname{Gal}_{F}\)-action on \(\mathcal{O}\) and on the \(\ell\)-adic Tate module of \(A\).
## 4. PQM surfaces over local and finite fields
We collect some results about PQM surfaces \(A\) over local and finite fields, especially the possible reduction types. The most important facts for our purposes are: a PQM surface \(A/\mathbb{Q}\) of \(\operatorname{GL}_{2}\)-type has totally additive reduction at every bad prime (Corollary 4.1.4); the prime-to-\(p\) torsion in the totally additive case is controlled by the Neron component group (Lemma 4.3.1); and the latter in turn is controlled by the smallest field extension over which \(A\) acquires good reduction (Proposition 4.2.1).
For the remainder of this section, let \(R\) be a henselian discrete valuation ring with fraction field \(F\) of characteristic zero and perfect residue field \(k\) of characteristic \(p\geq 0\).
### Neron models of PQM surfaces
We first recall some notions in the theory of Neron models. Let \(A/F\) be an abelian variety with Neron model \(\mathcal{A}/R\). The special fiber \(\mathcal{A}_{k}\) fits into an exact sequence
\[0\to\mathcal{A}_{k}^{\circ}\to\mathcal{A}_{k}\to\Phi\to 0\]
where \(\Phi\) is the component group of \(\mathcal{A}_{k}\), a finite etale \(k\)-group scheme. The identity component \(\mathcal{A}_{k}^{0}\) fits into an exact sequence
\[0\to U\times T\to\mathcal{A}_{k}^{0}\to B\to 0 \tag{4.1.1}\]
where \(U\) is a unipotent group, \(T\) is a torus and \(B\) is an abelian variety over \(k\). The dimensions of \(U,T\), and \(B\), which we denote by \(u,t\), and \(b\), are called the unipotent, toric and abelian ranks of \(A\), respectively. We have \(u+t+b=\dim A\), and \(A\) has bad reduction if and only if \(b<\dim A\). Similarly, \(A\) has potentially good reduction over \(F\) if and only if its toric rank is \(0\) over every finite extension of \(F\).
**Lemma 4.1.2**.: _Suppose that \(A/F\) is an abelian surface such that \(\operatorname{End}^{0}(A_{\bar{F}})\) contains a non-split quaternion algebra. Then there exists a finite extension \(F^{\prime}/F\) such that \(A_{F^{\prime}}\) has good reduction. If \(k\) is finite, we may take \(F^{\prime}\) to be a totally ramified extension of \(F\)._
Proof.: The fact that \(A\) has potentially good reduction is well known, see e.g. [23, p. 536]. It follows from the fact that a non-split quaternion algebra does not embed in \(\operatorname{Mat}_{2}(\mathbb{Q})\), and hence does not embed in \(\operatorname{End}(T)\otimes\mathbb{Q}\) for any torus \(T/k\) of dimension \(1\) or \(2\).
The last sentence of the lemma can be justified by taking a lift in \(\operatorname{Gal}_{F}\) of the Frobenius in \(\operatorname{Gal}_{k}\), in a manner analogous to [23, p. 498].
**Proposition 4.1.3**.: _Suppose that \(A/F\) is an abelian surface such that \(\operatorname{End}^{0}(A_{\bar{F}})\) contains a non-split quaternion algebra. Suppose that \(A\) has bad reduction. Then:_
(a) \(t=0\)_._
(b) _If_ \(\operatorname{End}^{0}(A)\) _contains a real quadratic field, then_ \(u=\dim A=2\)_._
(c) _If_ \(u=1\)_, then_ \(A_{K}\) _has good reduction over any field extension_ \(K/F\) _such that_ \(\operatorname{End}^{0}(A_{K})\) _contains a real quadratic field._
Proof.: \((a)\) follows from the fact that \(A\) has potentially good reduction and the fact that the toric rank cannot decrease under extension of the base field [23, Proposition 2.4]. For \((b)\), we only need to exclude the possibility that \(u=b=1\), so suppose by contradiction that it holds. Let \(E\subset\operatorname{End}^{0}(A)\) be a real quadratic subfield. Reducing endomorphisms in (4.1.1) gives a (nonzero, hence injective) map \(E\hookrightarrow\operatorname{End}^{0}(B)\). By assumption, \(B\) is an elliptic curve. However, this contradicts the fact that the endomorphism algebra of an elliptic curve (over any field) does not contain a real quadratic field. Finally, \((c)\) follows from \((b)\), since the abelian rank cannot decrease after base change [23, Proposition 2.4].
When \(u=\dim A\) one says that \(A\) has totally additive reduction.
**Corollary 4.1.4**.: _Let \(A/\mathbb{Q}\) be a PQM surface and \(p\) a prime of bad reduction. Suppose that \(A\) is of \(\operatorname{GL}_{2}\)-type. Then \(A\) has totally additive reduction at \(p\)._
Proof.: This follows from Proposition 4.1.3(b) and the fact that \(\operatorname{End}(A)\) is real quadratic by Proposition 3.1.2.
_Remark 4.1.5_.: One can show that if \(p\geq 5\) then the Prym variety of \(y^{3}=x^{4}+x^{2}+p\) (which has PQM by [13]) has unipotent rank \(1\) over \(\mathbb{Q}_{p}\). So the \(\operatorname{GL}_{2}\)-type hypothesis cannot be dropped in general in Corollary 4.1.4.
Finally, we state Raynaud's criterion for \(A/F\) to have semistable reduction, which in the case of a PQM surface is necessarily good by Proposition 4.1.3.
**Lemma 4.1.6**.: _Let \(A/F\) be a PQM surface, \(n\) an integer not divisible by the residue characteristic \(p\) and suppose that all points in \(A[n]\) are defined over an unramified extension of \(F\). Then_
(a) _if_ \(n=2\) _then_ \(A\) _has good reduction over every ramified quadratic extension of_ \(F\)_;_
(b) _if_ \(n\geq 3\) _then_ \(A\) _has good reduction over_ \(F\)_._
Proof.: See [11, §7].
### The good reduction field and component group of a PQM surface
Let \(A/F\) be an abelian variety with potentially good reduction. If \(k\) is algebraically closed, there exists a smallest field extension \(M/F\) such that \(A_{M}\) has good reduction, called the good reduction field of \(A\). This is a Galois extension, equal to \(F(A[N])\) for every \(N\geq 3\) coprime to \(p\) [12, §2, Corollary 3]. It is relevant for us because it controls the size of the component group, by the following result [1, Theorem 1].
**Proposition 4.2.1**.: _Suppose that \(k\) is algebraically closed. Let \(A/F\) be an abelian variety with potentially good reduction and good reduction field \(M/F\). Then the Neron component group \(\Phi\) is killed by \([M:F]\)._
The next lemma constrains the good reduction field of a PQM surface.
**Lemma 4.2.2**.: _Suppose that \(k\) is algebraically closed. Let \(A/F\) be a PQM surface with good reduction field \(M/F\). Then \([M:F]\) divides \(24^{2}\). In particular, \([M:F]\) is coprime to any prime \(\ell>3\)._
Proof.: Let \(L\) be the endomorphism field of \(A/F\) (§3.2). By the Neron-Ogg-Shafarevich criterion, all prime-to-\(p\) torsion is defined over \(M\), hence \(L\subset M\) by a result of Silverberg (Proposition 3.2.5). By Proposition 3.2.1, \([L:F]\) divides \(24\). By [19, Proposition 4.2] and its proof (whose notation does not agree with ours), we have \([M:L]\mid 24\). We conclude that \([M:F]=[M:L][L:F]\) divides \(24^{2}\).
**Lemma 4.2.3**.: _Let \(A/F\) be a PQM surface and let \(\ell\geq 5\). Then the order of \(\Phi\) is not divisible by \(\ell\)._
Proof.: Since formation of Neron models commutes with unramified base change, it is enough to prove the lemma in the case where \(F\) has algebraically closed residue field. This then follows from Proposition 4.2.1 and Lemma 4.2.2.
We record the following technical lemma that will allow us to sometimes 'quadratic twist away' bad primes. This will be useful in the proof of Proposition 5.2.1.
**Lemma 4.2.4**.: _Suppose that \(p\neq 2\). Let \(A/F\) be an abelian variety with totally additive reduction. Suppose that \(A_{M}\) has good reduction for some quadratic extension \(M/F\). Then the quadratic twist \(A^{M}\) of \(A\) by \(M\) has good reduction._
Proof.: Let \(I_{F}\) and \(I_{M}\) denote the inertia group of \(\operatorname{Gal}_{F}\) and \(\operatorname{Gal}_{M}\) respectively. Fix a prime \(\ell\neq p\). By the Neron-Ogg-Shafarevich criterion, the \(I_{F}\)-action on the \(\ell\)-adic Tate module \(T_{\ell}A\) factors through a faithful \(I_{F}/I_{M}\)-action, so acts via an element \(\sigma\in\operatorname{GL}(T_{\ell}A)\) of order \(2\). Since \(A\) has totally additive reduction, \((T_{\ell}A)^{I_{F}}=0\) and so \(\sigma=-1\). Let \(\chi_{M}\colon\operatorname{Gal}_{F}\to\{\pm 1\}\) be the character corresponding to the extension \(M/F\). Then \(T_{\ell}(A^{M})\simeq T_{\ell}A\otimes\chi_{M}\) as \(\operatorname{Gal}_{F}\)-modules. Therefore \(I_{F}\) acts trivially on \(T_{\ell}(A^{M})\) and \(A^{M}\) has good reduction.
### Component groups and torsion
The relevance of the component group is the following well-known fact, see for example [12, Remark 1.3]. If \(G\) is an abelian group, write \(G^{(p)}\) for its subgroup of elements of finite order prime to \(p\).
**Lemma 4.3.1**.: _If \(A/F\) is an abelian variety with totally additive reduction (i.e. \(u=\dim A\)), then \(A(F)^{(p)}_{\operatorname{tors}}\) is isomorphic to a subgroup of \(\Phi(k)^{(p)}\), where \(\Phi\) denotes the component group of \(\mathcal{A}_{k}\)._
Lorenzini has studied the component groups of general abelian surfaces with potentially good reduction and totally additive reduction, which leads to the following severe constraint on their torsion subgroups [12, Corollary 3.25].
**Theorem 4.3.2** (Lorenzini).: _Let \(A/F\) be an abelian surface with totally additive and potentially good reduction. Then \(A(F)^{(p)}_{\operatorname{tors}}\) is a subgroup of one of the following groups:_
\[\mathbb{Z}/5\mathbb{Z},\,(\mathbb{Z}/3\mathbb{Z})^{2},\,(\mathbb{Z}/2\mathbb{Z} )^{4},\,\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/4\mathbb{Z},\,\mathbb{Z}/2 \mathbb{Z}\times\mathbb{Z}/6\mathbb{Z}.\]
We can say more if \(A\) has totally additive reduction over any proper subextension of the good reduction field. The following slight variant of [12, Corollary 3.24] will be very useful in classifying torsion in the \(\operatorname{GL}_{2}\)-type case.
**Proposition 4.3.3**.: _Suppose that the residue field of \(F\) is algebraically closed. Let \(A/F\) be an abelian variety with bad and potentially good reduction. Let \(M/F\) be the good reduction field of \(A\). Suppose that \(A_{F^{\prime}}\) has totally additive reduction for every \(F\subset F^{\prime}\subsetneq M\). Suppose that the prime-to-\(p\) torsion subgroup \(A(F)^{(p)}_{\operatorname{tors}}\) of \(A(F)\) is nontrivial. Then there exists a prime number \(\ell\neq p\) such that \([M:F]\) is a power of \(\ell\) and \(A(F)^{(p)}_{\operatorname{tors}}\simeq(\mathbb{Z}/\ell\mathbb{Z})^{k}\) for some \(k\geq 1\)._
Proof.: Let \(G:=\operatorname{Gal}(M/F)\). For every \(F\subset F^{\prime}\subsetneq M\), \(A(F)^{(p)}_{\operatorname{tors}}\subset A(F^{\prime})^{(p)}_{\operatorname{tors}}\) is isomorphic to a subgroup of the component group of \(A_{F^{\prime}}\) by Lemma 4.3.1, which is killed by \([M:F^{\prime}]\) by Proposition 4.2.1. By Galois theory, \(A(F)^{(p)}_{\operatorname{tors}}\) is therefore killed by \(\#H\) for every nontrivial subgroup \(H\leq G\). The group \(A(F)^{(p)}_{\operatorname{tors}}\) is nontrivial by assumption; let \(\ell\) be a prime dividing its order. We claim that this \(\ell\) satisfies the conclusions of the proposition. Indeed, by definition of \(A(F)^{(p)}_{\operatorname{tors}}\) we have \(\ell\neq p\). Moreover if \(\#G\) is divisible by another prime \(\ell^{\prime}\), then by taking \(H\) a Sylow-\(\ell^{\prime}\) subgroup of \(G\) we get a contradiction, so \(\#G=[M:F]\) is a power of \(\ell\). By taking \(H\) to be an order \(\ell\) subgroup of \(G\), we see that \(A(F)^{(p)}_{\operatorname{tors}}\) is killed by \(\ell\), as desired.
In the general case (not necessarily totally additive reduction), we have the following well known result when \(F\) is a finite extension of \(\mathbb{Q}_{p}\), which follows from formal group law considerations [13, §2.5 and Proposition 3.1].
**Lemma 4.3.4**.: _Suppose that \(F/\mathbb{Q}_{p}\) is a finite extension of ramification degree \(e\). Let \(A/F\) be an abelian variety with Neron model \(\mathcal{A}/R\). Let \(\operatorname{red}\colon A(F)=\mathcal{A}(R)\to\mathcal{A}(k)\) be the reduction map._
(a) _The restriction of_ \(\operatorname{red}\) _to the prime-to-\(p\) part of_ \(A(F)_{\operatorname{tors}}\) _is injective._
(b) _If in addition_ \(e<p-1\)_, then_ \(\operatorname{red}\) _is injective on_ \(A(F)_{\operatorname{tors}}\)_._
### The conductor of a PQM surface
Recall that the conductor \(\mathfrak{f}(A)\) of an abelian variety \(A/\mathbb{Q}\) is a positive integer divisible exactly by the primes of bad reduction of \(A\); see [1] for a precise definition and more information. We may write \(\mathfrak{f}(A)=\prod_{p}p^{\mathfrak{f}_{p}(A)}\), where \(\mathfrak{f}_{p}(A)\) denotes the conductor exponent at a prime \(p\).
**Lemma 4.4.1**.: _Let \(A/\mathbb{Q}\) be a PQM surface of \(\operatorname{GL}_{2}\)-type. Let \(p\) be a prime such that \(A\) has bad reduction at \(p\) but acquires good reduction over a tame extension of \(\mathbb{Q}_{p}\). Then \(\mathfrak{f}_{p}(A)=4\)._
Proof.: In that case \(\mathfrak{f}_{p}(A)\) equals the tame conductor exponent at \(p\), which is \(2\times(\text{unipotent rank})+(\text{toric rank})\). This equals \(2\times 2+0=4\) by Proposition 4.1.3.
**Proposition 4.4.2**.: _Let \(A/\mathbb{Q}\) be a PQM surface of \(\operatorname{GL}_{2}\)-type. Then the conductor of \(A\) is of the form \(2^{2i}3^{2j}N^{4}\), where \(0\leq i\leq 10\), \(0\leq j\leq 5\), and \(N\) is squarefree and coprime to \(6\)._
Proof.: By Lemmas 4.2.2 and 4.4.1, \(\mathfrak{f}_{p}(A)=4\) for every bad prime \(p\geq 5\). The bounds \(\mathfrak{f}_{2}(A)\leq 20\) and \(\mathfrak{f}_{3}(A)\leq 10\) follow from a general result of Brumer-Kramer [1, Theorem 6.2]. The fact that \(\mathfrak{f}_{2}(A)\) and \(\mathfrak{f}_{3}(A)\) are even follows from the fact that \(\operatorname{End}^{0}(A)\) is a real quadratic field (Proposition 3.1.2) and [1, (4.7.2)].
### Finite fields
Let \(k=\mathbb{F}_{q}\) be a finite field of order \(q=p^{r}\). We will use the following two statements, whose proofs can be found in [13, §2].
**Lemma 4.5.1**.: _Let \(A/k\) be an abelian surface such that \(\operatorname{End}^{0}(A)\) contains the quaternion algebra \(B\). Then the characteristic polynomial of Frobenius is of the form \((T^{2}+aT+q)^{2}\) for some integer \(a\in\mathbb{Z}\) satisfying \(|a|\leq 2\sqrt{q}\)._
**Proposition 4.5.2**.: _Let \(A/k\) be an abelian surface such that \(\operatorname{End}^{0}(A)\) contains the quaternion algebra \(B\). If \(r\) is odd or \(p\nmid\operatorname{disc}(B)\), then \(A\) is isogenous to the square of an elliptic curve over \(k\). If \(r\) is even and \(p\mid\operatorname{disc}(B)\), \(A_{\bar{k}}\) is isogenous to the square of a supersingular elliptic curve over \(\bar{k}\)._
## 5. Proof of Theorem 1.4: PQM surfaces of \(\operatorname{GL}_{2}\)-type
Before proving Theorems 1.1-1.3, it is useful to first prove Theorem 1.4, which classifies the torsion subgroups of \(\mathcal{O}\)-PQM abelian surfaces \(A\) over \(\mathbb{Q}\) that are of \(\operatorname{GL}_{2}\)-type. At a certain point in the argument we make use of the modularity of abelian surfaces of \(\operatorname{GL}_{2}\)-type, which we recall in §5.1, where we also classify PQM surfaces of \(\operatorname{GL}_{2}\)-type with good reduction outside \(2\) or \(3\). In §5.2, we deduce that a general \(\mathcal{O}\)-PQM surface cannot have a full level \(2\)-structure over \(\mathbb{Q}\). In §5.3, we prove Theorem 1.4.
### Abelian surfaces of \(\operatorname{GL}_{2}\)-type and modular forms
**Theorem 5.1.1**.: _Let \(A\) be an abelian surface such that \(\operatorname{End}^{0}(A)\) is a real quadratic field. Then the conductor of \(A\) is of the form \(N^{2}\) for some positive integer \(N\), and there exists
a unique Galois orbit of newforms \([f_{A}]\subset S_{2}(\Gamma_{0}(N))\), with coefficient field \(K\simeq\operatorname{End}^{0}(A)\), whose local \(L\)-factors agree with those of \(A\) at each prime \(p\):_
\[L_{p}(A,T)=\prod_{\tau\colon K\hookrightarrow\mathbb{C}}L_{p}(\tau(f_{A}),T)\in 1 +T\mathbb{Z}[T]. \tag{5.1.2}\]
_Moreover, we have \([f_{A}]=[f_{A^{\prime}}]\) if and only if \(A\) is isogenous to \(A^{\prime}\) (over \(\mathbb{Q}\))._
Proof.: As explained by Ribet [14, Theorem (4.4)], the fact that \(A\) is of \(\operatorname{GL}_{2}\)-type over \(\mathbb{Q}\) implies that \(A\) is modular assuming Serre's modularity conjecture [11, §4.7, Theorem 5], which was proven by Khare-Wintenberger [13]. Thus the equality of \(L\)-series (5.1.2) holds for some newform \(f_{A}\). Since \(\operatorname{End}^{0}(A)\) is real, the character of \(f_{A}\) is trivial [14, Lemma (4.5.1)]. It follows from a theorem of Carayol [15, Théorème (A)] (local-global compatibility) that \(A\) has conductor equal to \(N^{2}\), where \(N\) is the level of \(f_{A}\). Finally, the fact that the Galois orbit of \(f_{A}\) characterizes \(A\) up to isogeny follows from the theorem of Faltings.
Recall that if \(f\in S_{2}(\Gamma_{0}(N))\) is a newform and \(\psi\) a primitive Dirichlet character, there exists a unique newform \(g=f\otimes\psi\), the twist of \(f\) by \(\psi\), whose \(q\)-expansion satisfies \(a_{n}(g)=a_{n}(f)\psi(n)\) for all \(n\) coprime to \(N\) and the conductor of \(\psi\). If \(f=g\), then \(g\) is called a self-twist. If \(f\) and \(g\) are Galois conjugate, \(g\) is called an inner twist.
**Proposition 5.1.3**.: _Let \(A\) be an abelian surface over \(\mathbb{Q}\) such that \(\operatorname{End}^{0}(A)\simeq\mathbb{Q}(\sqrt{m})\) with \(m\geq 2\). Then \(A\) has \(\operatorname{PQM}\) if and only if all of the following conditions hold:_
(i) \(f_{A}\) _has no self-twists, equivalently_ \(f_{A}\) _is not CM;_
(ii) \(f_{A}\) _has a nontrivial inner twist by a Dirichlet character associated to a quadratic field_ \(\mathbb{Q}(\sqrt{d})\)_; and_
(iii) _the quaternion algebra_ \(B_{d,m}:=\left(\frac{d,m}{\mathbb{Q}}\right)\) _is a division algebra._
_If all conditions_ (i)-(iii) _hold, then in fact \(\operatorname{End}^{0}(A_{\overline{\mathbb{Q}}})\simeq B_{d,m}\)._
Proof.: See Cremona [12, §2].
This reduces the enumeration of isogeny classes of \(\operatorname{GL}_{2}\)-type PQM surfaces \(A\) over \(\mathbb{Q}\) with fixed conductor to a computation in a space of modular forms.
**Corollary 5.1.4**.: _There are no \(\operatorname{PQM}\) surfaces \(A\) over \(\mathbb{Q}\) of \(\operatorname{GL}_{2}\)-type with good reduction outside \(\{2\}\)._
Proof.: By Proposition 4.4.2, it is enough to check that there is no eigenform corresponding to a PQM surface of level \(2^{k}\) for any \(k\leq 10\). This information is contained in the LMFDB [15] or [10, Table 1].
**Corollary 5.1.5**.: _There is exactly one isogeny class of \(\operatorname{PQM}\) surfaces \(A\) over \(\mathbb{Q}\) of \(\operatorname{GL}_{2}\)-type with good reduction outside \(\{3\}\): it has conductor \(3^{10}\), and any abelian surface \(A\) in the isogeny class satisfies \(A(\mathbb{Q})_{\operatorname{tors}}\leq\mathbb{Z}/3\mathbb{Z}\)._
Proof.: The fact that there is exactly one such isogeny class again follows from Proposition 4.4.2 and information in the LMFDB or [10, Table 1]. The corresponding Galois orbit of weight two newforms has LMFDB label 243.2.a.d. From \(L_{2}(1)=3\) and \(L_{13}(1)=225\) we conclude that \(\#A(\mathbb{Q})_{\operatorname{tors}}\mid 3\) for every \(A\) in this isogeny class. (In fact, the corresponding
optimal quotient of \(J_{0}(243)\) has \(\mathbb{Z}/3\mathbb{Z}\) torsion subgroup by considering the image of the cuspidal subgroup of \(J_{0}(243)\).)
_Remark 5.1.6_.: The isogeny class of Corollary 5.1.5 has minimal conductor among all PQM surfaces \(A\) of \(\operatorname{GL}_{2}\)-type. It would be interesting to produce an explicit model over \(\mathbb{Q}\); see also [13, Question 2].
**Proposition 5.1.7**.: _There are exactly \(44\) isogeny classes of \(\operatorname{PQM}\) surfaces over \(\mathbb{Q}\) of \(\operatorname{GL}_{2}\)-type with good reduction outside \(\{2,3\}\)._
Proof.: Again we use Propositions 4.4.2 and 5.1.3 to reduce the question to computing the number of Galois orbits of newforms in \(S_{2}(\Gamma_{0}(N))\), where \(N\mid 2^{10}3^{5}\), with quadratic Hecke coefficient field, having an inner twist but no self-twist. However, here we need to do a new computation in a large-dimensional space. The code is available at [https://github.com/ciaran-schembri/QM/](https://github.com/ciaran-schembri/QM/); we provide a few details to explain how we proceeded, referring to the book by Stein [14] on modular symbols and more broadly [1] for a survey of methods to compute modular forms.
We work with modular symbols, and we loop over all possible (imaginary) quadratic characters \(\psi\) supported at \(2,3\), corresponding to inner twist. For each character \(\psi\), of conductor \(d\):
* For a list of split primes \(p\geq 5\), we inductively compute the kernels of \(T_{p}-a\) where \(|a|\leq 2\sqrt{p}\).
* For a list of inert primes \(p\geq 5\), we further inductively compute the kernels of \(T_{p}^{2}-db^{2}\) where \(db^{2}\leq 4p\).
The first bound holds since \(\psi(p)=1\) gives \(\tau(a_{p}(f))=a_{p}(f)\psi(p)=a_{p}(f)\), hence \(a_{p}(f)\in\mathbb{Z}\), and the Ramanujan-Petersson bound applies; the second bound holds since \(\psi(p)=-1\) gives \(\tau(a_{p}(f))=-a_{p}(f)\), so \(a_{p}(f)=b\sqrt{d}\) with again \(\sqrt{d}\,|b|\leq 2\sqrt{p}\). It is essential to compute the split primes first, and only compute the induced action of \(T_{p}\) on the kernels computed in the first step.
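Concretely, the candidate eigenvalue sets at each prime are easy to enumerate. The helper below is our sketch, not the repository's code; `d` stands for the positive integer entering the inner-twist relation \(a_{p}=b\sqrt{d}\), and the sample primes are illustrative:

```python
from math import isqrt

def split_candidates(p):
    # psi(p) = +1: a_p is an integer with |a_p| <= 2*sqrt(p)
    m = isqrt(4 * p)
    return list(range(-m, m + 1))

def inert_candidates(p, d):
    # psi(p) = -1: a_p = b*sqrt(d), so the integers b satisfy d*b^2 <= 4*p
    m = isqrt(4 * p // d)
    return list(range(-m, m + 1))

print(split_candidates(5))      # [-4, ..., 4] since 2*sqrt(5) ~ 4.47
print(inert_candidates(5, 3))   # b in [-2, ..., 2] since 3*b^2 <= 20
```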
To simplify the linear algebra, we work modulo a large prime number \(q\), checking that each Hecke matrix \(T_{p}\) (having entries in \(\mathbb{Q}\)) has no denominator divisible by \(q\). The corresponding decomposition gives us an 'upper bound': if we had the desired eigenspace for \(T_{p}\), it reduces modulo \(q\), but a priori some of these spaces could accidentally coincide or the dimension could go down (corresponding to a prime of norm \(q\) in the Hecke field). To certify the 'lower bound', we compute a small linear combination of Hecke operators supported at split primes and use the computed eigenvalues to recompute the kernel over \(\mathbb{Q}\) working with divisors \(N^{\prime}\mid N\), and when we find it we compute the dimension of the oldspace for the form at level \(N^{\prime}\) inside level \(N\) and confirm that it matches the dimension computed modulo \(q\).
In fact, we find that \(N\mid 2^{8}3^{5}\) or \(N\mid 2^{10}3^{4}\). (Indeed, a careful analysis of the possible endomorphism algebra can be used to show this a priori.)
To certify that the form is not PCM, we find a nonzero coefficient at an inert prime. That the form has the correct inner twist by \(\psi\) is immediate: the form would again appear somewhere in our list, so once we have identified the newforms uniquely with coefficients, the inner twist must match, Sherlock Holmes-style. We similarly discard the forms with PCM.
Finally, we compute the split PQM forms by identifying the quaternion algebra above using Proposition 5.1.3.
The complete data is available online ([https://github.com/ciaran-schembri/QM-Mazur](https://github.com/ciaran-schembri/QM-Mazur)); we give a summary in Table 1, listing forms in a fixed level, up to (quadratic) twist.
For example, Table 1 says that up to twist there are \(3\) newforms of level \(N=20736=2^{8}3^{4}\), each having \(4\) Galois newform orbits for a total of \(12\) newform orbits.
**Corollary 5.1.8**.: _If \(A\) is a \(\mathrm{PQM}\) abelian surface of \(\mathrm{GL}_{2}\)-type over \(\mathbb{Q}\) with good reduction outside \(\{2,3\}\) and \(\#A(\mathbb{Q})_{\mathrm{tors}}\) nontrivial, then \(A\) corresponds to either \(\mathtt{243.2.a.d}\) or \(\mathtt{972.2.a.e}\). In particular, \(\#A(\mathbb{Q})_{\mathrm{tors}}\leq 9\)._
Proof.: Direct calculation as in Corollary 5.1.5.
### Full level \(2\)-structure
Before imposing the \(\mathrm{GL}_{2}\)-type assumption in the next subsection, we show that \(\mathcal{O}\)-PQM surfaces cannot have full level \(2\)-structure over \(\mathbb{Q}\).
**Proposition 5.2.1**.: _Let \(A/\mathbb{Q}\) be an \(\mathcal{O}\)-PQM surface. Then \(A(\mathbb{Q})[2]\not\simeq(\mathbb{Z}/2\mathbb{Z})^{4}\)._
Proof.: Suppose \(A(\mathbb{Q})[2]\simeq(\mathbb{Z}/2\mathbb{Z})^{4}\). Since \(A[2]\) is free of rank one as an \(\mathcal{O}/2\mathcal{O}\)-module and contains a \(\mathbb{Q}\)-rational generator, we have \(A[2]\simeq\mathcal{O}/2\mathcal{O}\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules. By Theorem 2.3.1 and Proposition 3.2.1, this implies that the endomorphism field \(L/\mathbb{Q}\) is quadratic, so that \(A\) is of \(\operatorname{GL}_{2}\)-type by Lemma 3.2.2.
Let \(K\) be a quadratic field ramified at all primes \(p\geq 3\) of bad reduction of \(A\) and unramified at all primes \(p\geq 3\) of good reduction. Corollary 4.1.4 and Lemmas 4.1.6(a), 4.2.4 and 3.2.6 show that the quadratic twist of \(A\) by \(K\) is an \(\mathcal{O}\)-PQM surface of \(\mathrm{GL}_{2}\)-type with good reduction outside \(\{2\}\). But by Corollary 5.1.4, no such surface exists.
| \(N\) | \(\psi\) | \(\operatorname{disc}B\) | num | LMFDB labels |
| --- | --- | --- | --- | --- |
| \(243=3^{5}\) | \(-3\) | 6 | 1 | 243.2.a.d |
| \(972=2^{2}3^{5}\) | \(-3\) | 6 | 1 | 972.2.a.e |
| \(2592=2^{5}3^{4}\) | \(-4\) | 6 | 2 | 2592.2.a.l, 2592.2.a.p |
| \(2592=2^{5}3^{4}\) | \(-4\) | 6 | 2 | 2592.2.a.m, 2592.2.a.r |
| \(3888=2^{4}3^{5}\) | \(-3\) | 6 | 2 | 3888.2.a.b, 3888.2.a.t |
| \(5184=2^{6}3^{4}\) | \(-4\) | 6 | 2 | 5184.2.a.bl, 5184.2.a.bx |
| \(5184=2^{6}3^{4}\) | \(-4\) | 6 | 2 | 5184.2.a.bk, 5184.2.a.bv |
| \(15552=2^{6}3^{5}\) | \(-3\) | 6 | 2 | |
| \(20736=2^{8}3^{4}\) | \(-4\) | 6 | 4 | |
| \(20736=2^{8}3^{4}\) | \(-4\) | 22 | 4 | |
| \(20736=2^{8}3^{4}\) | \(-8\) | 10 | 4 | |
| \(62208=2^{8}3^{5}\) | \(-3\) | 6 | 4 | |
| \(62208=2^{8}3^{5}\) | \(-3\) | 6 | 4 | |
| \(82944=2^{10}3^{4}\) | \(-24\) | 6 | 4 | |
| \(82944=2^{10}3^{4}\) | \(-24\) | 6 | 4 | |
| \(82944=2^{10}3^{4}\) | \(-24\) | 6 | 4 | |

Table 1. Twist classes of modular forms corresponding to PQM abelian surfaces over \(\mathbb{Q}\) of \(\operatorname{GL}_{2}\)-type with good reduction outside \(\{2,3\}\)
### Torsion classification in the \(\operatorname{GL}_{2}\)-type case
Now we assume \(A/\mathbb{Q}\) is a PQM surface of \(\operatorname{GL}_{2}\)-type. By Lemma 3.2.2, there exists a quadratic extension \(L/\mathbb{Q}\) (the endomorphism field) such that \(\operatorname{End}(A_{L})=\operatorname{End}(A_{\overline{\mathbb{Q}}})\).
**Lemma 5.3.1**.: _If \(\ell\) is a prime such that \(A[\ell](\mathbb{Q})\neq 0\), then \(\ell\leq 7\)._
Proof.: By Lemma 4.1.2, there exists a finite extension \(L^{\prime}/L\) that is totally ramified at \(2\) and such that \(A_{L^{\prime}}\) has good reduction. Let \(\mathfrak{q}\) be a prime in \(L^{\prime}\) above \(2\) and let \(k\) be its residue field. Since \(L/\mathbb{Q}\) is quadratic, \(k\) is isomorphic to \(\mathbb{F}_{2}\) or \(\mathbb{F}_{4}\). Therefore the reduction of \(A_{L^{\prime}}\) at \(\mathfrak{q}\) is an abelian surface \(B\) over \(k\) such that \(\operatorname{End}^{0}(B)\) contains \(\operatorname{End}^{0}(A_{L})\). By Lemma 4.3.4, \(B[\ell](k)\neq 0\), and so \(\ell\) divides \(\#B(\mathbb{F}_{4})\). On the other hand, Lemma 4.5.1 shows that the \(L\)-polynomial of \(B_{\mathbb{F}_{4}}\) is of the form \((T^{2}+aT+4)^{2}\) with \(a\in\mathbb{Z}\) satisfying \(|a|\leq 2\sqrt{4}=4\). Therefore \(\ell\) divides \((1+a+4)^{2}\), hence \(\ell\) divides \(1+a+4\leq 9\); since \(\ell\) is prime, \(\ell\leq 7\).
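The last step is elementary enough to check mechanically; the following Python snippet (our own sanity check, not part of the paper's Magma computations) enumerates the finitely many possibilities for \(a\):

```python
# Check the bound in Lemma 5.3.1: the L-polynomial of B over F_4 is
# (T^2 + a*T + 4)^2 with |a| <= 4, so #B(F_4) = (1 + a + 4)^2, and any
# prime ell dividing #B(F_4) must divide 1 + a + 4 <= 9.

def prime_factors(n):
    """Trial division; fine for the tiny numbers involved here."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

possible_ells = set()
for a in range(-4, 5):                      # |a| <= 2*sqrt(4) = 4
    possible_ells |= prime_factors((1 + a + 4) ** 2)

print(sorted(possible_ells))                # [2, 3, 5, 7]: every ell is <= 7
```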
**Lemma 5.3.2**.: _If \(\ell\geq 5\) is a prime such that \(A[\ell](\mathbb{Q})\neq 0\), then \(A/\mathbb{Q}\) has good reduction away from \(\ell\)._
Proof.: Let \(p\) be a prime of bad reduction of \(A\). Since \(A\) is of \(\operatorname{GL}_{2}\)-type, the algebra \(\operatorname{End}^{0}(A)\) is a quadratic field; it is real quadratic by Proposition 3.1.2. Proposition 4.1.3(c) implies that \(A\) has totally additive reduction at \(p\). By Lemmas 4.2.3 and 4.3.1, we must have \(p=\ell\). We conclude that \(A\) has good reduction outside \(\{\ell\}\).
**Proposition 5.3.3**.: _If \(\ell\) is a prime such that \(A[\ell](\mathbb{Q})\neq 0\), then \(\ell\in\{2,3\}\)._
Proof.: Suppose that \(\ell\geq 5\). By Proposition 3.1.2, the quadratic extension \(L/\mathbb{Q}\) is imaginary quadratic. Moreover, by a result of Silverberg [20, Theorem 4.2], the surface \(A\) has bad reduction at all primes ramifying in \(L\). By Lemma 5.3.2, \(L\) is therefore only ramified at \(\ell\). If \(\ell=5\), this is already a contradiction since there are no imaginary quadratic fields ramified only at \(5\). If \(\ell=7\), then we conclude that \(L=\mathbb{Q}(\sqrt{-7})\). Since \(2\) splits in \(L\), this means that the residue field in the proof of Lemma 5.3.1 is equal to \(\mathbb{F}_{2}\). Continuing with the proof there, we deduce the stronger inequality \(|a|\leq 2\sqrt{2}\), and we find that \(\ell\) divides \(1+a+2<6\), which is a contradiction.
_Remark 5.3.4_.: We can also deduce Proposition 5.3.3 from Lemma 5.3.2 by invoking modularity (Proposition 5.1.3), the fact that such an abelian surface must have conductor \(\ell^{4}\) (Proposition 4.4.2) and the fact that there are no PQM eigenforms in \(S_{2}(\Gamma_{0}(25))\) or \(S_{2}(\Gamma_{0}(49))\). We also note that Schoof has proven that there are no abelian varieties with everywhere good reduction over \(\mathbb{Q}(\zeta_{\ell})\) for various small \(\ell\), including \(5\) and \(7\)[20].
**Proposition 5.3.5**.: _Either \(A(\mathbb{Q})_{\operatorname{tors}}\subset(\mathbb{Z}/2\mathbb{Z})^{3}\) or \(A(\mathbb{Q})_{\operatorname{tors}}\subset(\mathbb{Z}/3\mathbb{Z})^{2}\)._
Proof.: By Proposition 5.3.3, \(A(\mathbb{Q})_{\operatorname{tors}}\) is a group of order \(2^{i}3^{j}\). We may assume that \(A(\mathbb{Q})_{\operatorname{tors}}\neq 0\); let \(\ell\in\{2,3\}\) be such that \(A[\ell](\mathbb{Q})\neq 0\).
Suppose there exists a prime \(p\geq 5\) of bad reduction. Then \(A\) has totally additive reduction over every finite extension \(F/\mathbb{Q}_{p}\) over which it has bad reduction by Proposition 4.1.3. Therefore the assumptions of Proposition 4.3.3 apply for \(F=\mathbb{Q}_{p}^{\operatorname{nr}}\) (the maximal unramified extension of \(\mathbb{Q}_{p}\)), and so \(A(\mathbb{Q})_{\operatorname{tors}}=A(\mathbb{Q})_{\operatorname{tors}}^{(p)} \subset A(\mathbb{Q}_{p}^{\operatorname{nr}})_{\operatorname{tors}}^{(p)}\simeq (\mathbb{Z}/\ell\mathbb{Z})^{k}\) for some \(1\leq k\leq 4\). If \(\ell=2\), then \(k\leq 3\) by Proposition 5.2.1. If \(\ell=3\), then \(k\leq 2\), since \(A(\mathbb{Q})_{\operatorname{tors}}^{(2)}\hookrightarrow A_{2}(\mathbb{F}_{2})\) for some abelian surface \(A_{2}/\mathbb{F}_{2}\) (using Lemmas 4.1.2 and 4.3.4) and \(\#A_{2}(\mathbb{F}_{2})\leq 25\) for all such surfaces. We conclude \(A(\mathbb{Q})_{\operatorname{tors}}\subset(\mathbb{Z}/2\mathbb{Z})^{3}\) or \(A(\mathbb{Q})_{\operatorname{tors}}\subset(\mathbb{Z}/3\mathbb{Z})^{2}\), as desired.
It remains to consider the case that \(A\) has good reduction outside \(\{2,3\}\). A computation with modular forms of level dividing \(2^{10}\cdot 3^{5}\) shows that \(\#A(\mathbb{Q})_{\mathrm{tors}}\mid 9\) for such surfaces by Corollary 5.1.8, but we give an argument that only involves computing modular forms of much smaller level. We may assume \(A\) has bad reduction at both of these primes by Corollaries 5.1.4 and 5.1.5. If \(A[2](\mathbb{Q})=0\), then Proposition 4.3.3 shows again that \(A(\mathbb{Q})_{\mathrm{tors}}=A(\mathbb{Q})_{\mathrm{tors}}^{(2)}\subset A( \mathbb{Q}_{2}^{\mathrm{nr}})_{\mathrm{tors}}^{(2)}\subset(\mathbb{Z}/3 \mathbb{Z})^{2}\). Similarly \(A(\mathbb{Q})_{\mathrm{tors}}\subset(\mathbb{Z}/2\mathbb{Z})^{3}\) if \(A[3](\mathbb{Q})=0\). Thus, it remains to rule out the possibility that \(A(\mathbb{Q})\) contains a point of order \(6\). In that case, Proposition 4.3.3 shows that the extensions \(M_{2}/\mathbb{Q}_{2}^{\mathrm{nr}}\) and \(M_{3}/\mathbb{Q}_{3}^{\mathrm{nr}}\) over which \(A\) attains good reduction have degrees that are powers of \(3\) and \(2\) respectively, and hence are tamely ramified. Hence \(A\) has conductor \(2^{4}3^{4}\) by Lemma 4.4.1 and corresponds to an eigenform of level \(2^{2}3^{2}=36\), by Theorem 5.1.1. However, there are no PQM eigenforms of level \(36\)[12, Table 1].
Next we constrain the torsion even further and show that \((\mathbb{Z}/2\mathbb{Z})^{3}\) does not occur. For this, we combine a cute fact from linear algebra with a purely local proposition that makes use of the enhanced Galois representation of §3.5.
**Lemma 5.3.6**.: _Let \(k\) be a field and \(V\subset\mathcal{O}_{k}:=\mathcal{O}\otimes_{\mathbb{Z}}k\) a \(3\)-dimensional \(k\)-subspace. Then \(V\) contains an \(\mathcal{O}_{k}\)-module generator of \(\mathcal{O}_{k}\)._
Proof.: If \(\mathcal{O}_{k}\) is a division algebra, every nonzero element of \(V\) is an \(\mathcal{O}_{k}\)-generator. If the characteristic of \(k\) divides \(\mathrm{disc}(B)\), the lemma follows from Lemma 6.1.3 and the fact that the ideal \(J\) described there is \(2\)-dimensional. It suffices to consider the case when \(\mathcal{O}_{k}\simeq\mathrm{Mat}_{2}(k)\) and to prove that in this case \(V\) contains an invertible matrix. (This is well known; we give a quick proof here.) Suppose otherwise. If \(k\) admits a quadratic field extension \(k^{\prime}\), then embedding \(k^{\prime}\subset\mathrm{Mat}_{2}(k)\), we compute \(\dim(V+k^{\prime})=\dim V+\dim k^{\prime}-\dim(V\cap k^{\prime})=3+2-0=5\), which is a contradiction since \(\dim_{k}\mathrm{Mat}_{2}(k)=4\). In general, the subspace \(V\) is defined over a subfield \(k^{\prime\prime}\) of \(k\) which is finitely generated over its prime field. The previous argument then applies over \(k^{\prime\prime}\).
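The matrix case can also be confirmed exhaustively for a small field; the Python sketch below (a verification of ours, for \(k=\mathbb{F}_{2}\) only) checks that every \(3\)-dimensional subspace of \(\operatorname{Mat}_{2}(\mathbb{F}_{2})\) contains an invertible matrix:

```python
# Brute-force check of the matrix case of Lemma 5.3.6 over k = F_2: every
# 3-dimensional subspace of Mat_2(F_2) contains an invertible matrix.
from itertools import combinations, product

ELEMS = list(product(range(2), repeat=4))      # (a,b,c,d) = [[a,b],[c,d]]
det = lambda m: (m[0] * m[3] - m[1] * m[2]) % 2
add = lambda x, y: tuple((u + v) % 2 for u, v in zip(x, y))

def span(gens):
    s = {(0, 0, 0, 0)}
    for g in gens:
        s |= {add(g, v) for v in s}
    return s

for gens in combinations(ELEMS, 3):
    V = span(gens)
    if len(V) == 8:                            # a genuine 3-dimensional subspace
        assert any(det(m) == 1 for m in V)
print("every 3-dim subspace of Mat_2(F_2) contains an invertible matrix")
```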
Recall that \(\mathbb{Q}_{p}^{\mathrm{nr}}\) denotes the maximal unramified extension of \(\mathbb{Q}_{p}\).
**Proposition 5.3.7**.: _Let \(p\) be an odd prime, \(F\) a finite extension of \(\mathbb{Q}_{p}^{\mathrm{nr}}\) and \(A/F\) an \(\mathcal{O}\)-PQM surface with \((\mathbb{Z}/2\mathbb{Z})^{3}\subset A[2](F)\). Then \(A\) acquires good reduction over every quadratic extension of \(F\)._
Proof.: If \(A[2](F)\simeq(\mathbb{Z}/2\mathbb{Z})^{4}\), this immediately follows from Raynaud's criterion (Lemma 4.1.6(a)), so assume that \(A[2](F)\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\). By Lemma 5.3.6, there exists an \(F\)-rational \(\mathcal{O}/2\mathcal{O}\)-generator \(P\in A[2](F)\), and hence \(A[2]\simeq\mathcal{O}/2\mathcal{O}\) as \(\mathrm{Gal}_{F}\)-modules.
Let \(L/F\) be the endomorphism field of \(A\) and let \(M/F\) be the smallest field over which \(A\) acquires good reduction. By the Neron-Ogg-Shafarevich criterion, \(M=F(A[4])\). By Proposition 3.2.5, \(L\subset M\). Since \(A[2]\simeq\mathcal{O}/2\mathcal{O}\) as \(\mathrm{Gal}_{F}\)-modules, \(F(A[2])\subset L\). We therefore have a chain of inclusions \(F\subset F(A[2])\subset L\subset M=F(A[4])\). Since \(A[2](F)\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\), \(F(A[2])/F\) is a \((2,2,\ldots,2)\)-extension. The same is true for \(F(A[4])/F(A[2])\). Since \(p\) is odd and the residue field is algebraically closed, both these extensions are cyclic, so at most quadratic. Therefore \(F(A[2])/F\) is a quadratic extension. If \(L\neq F(A[2])\), then \(L/F\) would be cyclic of order \(4\), and there would be an order \(4\) element \(g\in\mathrm{Aut}(\mathcal{O})\) whose fixed points on \(\mathcal{O}/2\mathcal{O}\) form \((\mathbb{Z}/2\mathbb{Z})^{3}\). A calculation similar to the proof of the \(D_{4}\) case in Theorem 2.3.1
shows that this is not possible. We conclude that \(L=F(A[2])\) and that \(M/L\) is at most quadratic.
To prove the proposition, it suffices to prove that \(M/F\) is quadratic, so assume by contradiction that this is not the case. Then \(M/L\) and \(L/F\) are both quadratic and \(\operatorname{Gal}(M/F)=\{1,g,g^{2},g^{3}\}\) is cyclic of order \(4\).
Consider the mod \(4\) Galois representation \(\rho\colon\operatorname{Gal}_{F}\to\operatorname{GL}(A[4])\), which factors through \(\operatorname{Gal}_{F}\to\operatorname{Gal}(M/F)\). Let \(Q\in A[4](M)\) be a lift of the \(\mathcal{O}/2\mathcal{O}\)-generator \(P\in A[2](F)\). Then \(Q\) is an \(\mathcal{O}/4\mathcal{O}\)-generator for \(A[4]\), and hence by the enhanced Galois representation construction, we know that \(\rho\simeq\rho_{4}^{\circ}\) lands in \(\operatorname{Gal}(L/F)\ltimes(\mathcal{O}/4\mathcal{O})^{\times}\) (see §3.5 and Proposition 3.5.8). The situation can be summarized as follows:

\[\begin{array}{ccc}\operatorname{Gal}_{L}&\longrightarrow&(\mathcal{O}/4\mathcal{O})^{\times}\\ \big\downarrow&&\big\downarrow\\ \operatorname{Gal}_{F}&\longrightarrow&\operatorname{Gal}(L/F)\ltimes(\mathcal{O}/4\mathcal{O})^{\times}\\ \big\|&&\big\downarrow\\ \operatorname{Gal}_{F}&\longrightarrow&\operatorname{Gal}(L/F)\ltimes(\mathcal{O}/2\mathcal{O})^{\times}\end{array}\]
The horizontal maps are the enhanced Galois representations for \(L\) mod \(4\), \(F\) mod \(4\) and \(F\) mod \(2\) respectively. Write \(\operatorname{Gal}(L/F)=\{1,\sigma\}\). Since \(P\) is \(F\)-rational, the bottom map sends \(\sigma\) to \((\sigma,1)\). By commutativity of the bottom square, \(\rho_{4}^{\circ}(g)=(\sigma,x)\), where \(x\in(\mathcal{O}/4\mathcal{O})\) satisfies \(x\equiv 1\mod 2\mathcal{O}\). Since \(A_{L}\) has bad and hence totally additive reduction by Proposition 4.1.3, the nontrivial element of \(\operatorname{Gal}(M/L)\) maps to \(-1\) in \((\mathcal{O}/4\mathcal{O})^{\times}\). (In fact, the generator of \(\operatorname{Gal}(M/L)\) even maps to \(-1\) in \(\operatorname{GL}(T_{2}A)\) by an argument identical to the proof of Lemma 4.2.4.) By the commutativity of the top diagram, \((\sigma,x)^{2}=(1,-1)\). The involution \(\sigma\) acts on \((\mathcal{O}/4\mathcal{O})^{\times}\) by conjugating by an element \(b\in\mathcal{O}\cap N_{B^{\times}}(\mathcal{O})\) whose fixed points on \(\mathcal{O}/2\mathcal{O}\) are \((\mathbb{Z}/2\mathbb{Z})^{3}\). Therefore \((\sigma,x)^{2}=(1,-1)\) is equivalent to \(b^{-1}xbx=-1\). By Lemma 2.3.5, no such \(x\) exists, obtaining the desired contradiction.
**Proposition 5.3.8**.: _Let \(A/\mathbb{Q}\) be an \(\mathcal{O}\)-PQM surface of \(\operatorname{GL}_{2}\)-type. Then \((\mathbb{Z}/2\mathbb{Z})^{3}\not\subset A(\mathbb{Q})[2]\)._
Proof.: Let \(K\) be a quadratic field ramified at all primes \(p\geq 3\) of bad reduction of \(A\) and unramified at all primes \(p\geq 3\) of good reduction. Corollary 4.1.4, Proposition 5.3.7 and Lemmas 4.2.4 and 3.2.6 show that the quadratic twist of \(A\) by \(K\) is an \(\mathcal{O}\)-PQM surface of \(\operatorname{GL}_{2}\)-type with good reduction outside \(\{2\}\). But no such \(\mathcal{O}\)-PQM surface exists by Corollary 5.1.4.
We are finally ready to prove our classification result for torsion subgroups of \(\mathcal{O}\)-PQM surfaces of \(\operatorname{GL}_{2}\)-type.
Proof of Theorem 1.4.: By Propositions 5.3.5 and 5.3.8, we have ruled out all groups aside from those listed in the theorem. It remains to exhibit infinitely many abelian surfaces \(A/\mathbb{Q}\) of \(\operatorname{GL}_{2}\)-type with torsion subgroups isomorphic to each of the groups
\[\{0\},\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/3\mathbb{Z},(\mathbb{Z}/2\mathbb{Z}) ^{2},(\mathbb{Z}/3\mathbb{Z})^{2}.\]
Let \(\mathcal{O}_{6}\) be the maximal quaternion order of reduced discriminant \(6\) (unique up to isomorphism). In [13, §9], one-parameter families of \(\operatorname{GL}_{2}\)-type \(\mathcal{O}_{6}\)-PQM surfaces with generic
torsion subgroups \(\{0\}\), \(\mathbb{Z}/2\mathbb{Z}\), \(\mathbb{Z}/3\mathbb{Z}\) and \((\mathbb{Z}/3\mathbb{Z})^{2}\) are given among Prym surfaces of bielliptic Picard curves. In Proposition 5.3.9 below, we give a one-parameter family of \(\mathrm{GL}_{2}\)-type \(\mathcal{O}_{6}\)-PQM Jacobians \(J\) with \((\mathbb{Z}/2\mathbb{Z})^{2}\subset J(\mathbb{Q})_{\mathrm{tors}}\).
To state the next result, we define the rational functions
\[j(T) =\frac{(-64T^{20}+256T^{16}-384T^{12}+256T^{8}-64T^{4})}{(T^{24}+ 42T^{20}+591T^{16}+2828T^{12}+591T^{8}+42T^{4}+1)};\] \[J_{2}(T) =12(j+1);\] \[J_{4}(T) =6(j^{2}+j+1);\] \[J_{6}(T) =4(j^{3}-2j^{2}+1);\] \[J_{8}(T) =(J_{2}J_{6}-J_{4}^{2})/4;\] \[J_{10}(T) =j^{3}.\]
**Proposition 5.3.9**.: _For all but finitely many \(t\in\mathbb{Q}\), there exists a genus two curve \(C_{t}/\mathbb{Q}\) with Igusa invariants \((J_{2}(t):J_{4}(t):J_{6}(t):J_{8}(t):J_{10}(t))\), whose Jacobian \(J_{t}/\mathbb{Q}\) is an \(\mathcal{O}_{6}\)-PQM surface of \(\mathrm{GL}_{2}\)-type and satisfies \(J_{t}(\mathbb{Q})_{\mathrm{tors}}\supset(\mathbb{Z}/2\mathbb{Z})^{2}\)._
Proof.: In [1, p.742], the authors have an expression for Igusa-Clebsch invariants (which we have translated to Igusa invariants) of genus \(2\) curves defining \(\mathcal{O}\)-PQM surfaces for every value of a parameter \(j\) (which is a coordinate on the full Atkin-Lehner quotient of the discriminant \(6\) Shimura curve). The field of moduli for \(k_{R_{3}}\), in their notation, is \(\mathbb{Q}(\sqrt{-27-16j^{-1}})\) and the obstruction for these genus \(2\) curves to be defined over \(\mathbb{Q}\) is given by the Mestre obstruction \(\left(\frac{-6j,-2(27j+16)}{\mathbb{Q}}\right)\). A short computation for the family \(j(T)\) shows that \(-27-16j^{-1}\) is a square in \(\mathbb{Q}(T)^{\times}\), and hence \(k_{R_{3}}=\mathbb{Q}\) for all non-singular specializations. Furthermore, one checks that the Mestre obstruction also vanishes for all such \(t\). Thus, the Igusa invariants in the statement of the proposition give an infinite family of \(\mathcal{O}\)-PQM Jacobians \(J/\mathbb{Q}\) of \(\mathrm{GL}_{2}\)-type with \(\mathrm{End}^{0}(J)\simeq\mathbb{Q}(\sqrt{3}).\) (Only finitely many \(j\in\mathbb{Q}\) correspond to CM points [1, SS5, Table 1], so \(J\) is geometrically simple for all but finitely many \(t\in\mathbb{Q}\).)
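The squareness of \(-27-16j^{-1}\) is easy to test with exact rational arithmetic; the following Python snippet (a sanity check of ours, at a few sample values of \(t\) rather than the symbolic computation over \(\mathbb{Q}(T)\)) illustrates this:

```python
# Check, for sample rational t, that -27 - 16/j(t) is a square in Q, so the
# field of moduli Q(sqrt(-27 - 16 j^{-1})) is Q itself at these specializations.
from fractions import Fraction
from math import isqrt

def j(t):
    t4 = t ** 4
    num = -64 * t4 * (t4 - 1) ** 4          # equals the numerator of j(T)
    den = (t ** 24 + 42 * t ** 20 + 591 * t ** 16 + 2828 * t ** 12
           + 591 * t ** 8 + 42 * t ** 4 + 1)
    return Fraction(num, den)

def is_rational_square(q):
    if q < 0:
        return False
    n, d = q.numerator, q.denominator        # Fraction is already reduced
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

for t in [Fraction(2), Fraction(3), Fraction(1, 2), Fraction(5, 3)]:
    assert is_rational_square(-27 - 16 / j(t)), t
print("-27 - 16/j(t) is a rational square at all sample t")
```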
Using Magma, one can write down an explicit sextic polynomial \(f_{T}(x)\) such that \(C_{t}\) has model \(y^{2}=f_{t}(x)\). The coefficients of \(f_{T}(x)\) are too large to include here, but we have posted them online. We find that there is a factorization
\[f_{T}(x)=q_{1,T}(x)q_{2,T}(x)q_{3,T}(x)\]
where each \(q_{i,T}\) is a quadratic polynomial in \(\mathbb{Q}(T)[x]\). From this we see that for all but finitely many \(t\), the group \((\mathbb{Z}/2\mathbb{Z})^{2}\) is a subgroup of \(J_{t}(\mathbb{Q})_{\mathrm{tors}}\). Indeed, \(J_{t}=\operatorname{Pic}^{0}(C_{t})\) and for each \(i\in\{1,2,3\}\), the divisor class \((\alpha,0)-(\alpha^{\prime},0)\), where \(q_{i,t}(x)=(x-\alpha)(x-\alpha^{\prime})\), is defined over \(\mathbb{Q}\) and has order \(2\). In future work, we will explain how the special family \(j(T)\) was found using the arithmetic of Shimura curves.
## 6. Proof of Theorem 1.1: reduction to \(\mathrm{GL}_{2}\)-type
In this section, we prove Theorem 1.1. By Theorem 1.4, it is enough to prove:
**Theorem 6.0.1**.: _Let \(A/\mathbb{Q}\) be an \(\mathcal{O}\)-PQM surface, and let \(\ell\geq 5\) be a prime number such that \(A[\ell](\mathbb{Q})\neq 0\). Then \(A\) is of \(\mathrm{GL}_{2}\)-type._
Theorem 6.0.1 follows from combining Propositions 6.2.5 and 6.2.7 below. The proofs consist mostly of careful semi-linear algebra over non-commutative rings, combined with a small drop of global arithmetic input.
### Linear algebra
Let \(\ell\) be a prime and \(\mathcal{O}_{\ell}:=\mathcal{O}\otimes\mathbb{F}_{\ell}\). If \(\ell\nmid\operatorname{disc}(B)\), then \(\mathcal{O}_{\ell}\simeq\operatorname{Mat}_{2}(\mathbb{F}_{\ell})\), since \(\mathcal{O}\) is maximal. If \(\ell\mid\operatorname{disc}(B)\), then \(\mathcal{O}_{\ell}\) is isomorphic to the nonsemisimple algebra [11, §4]
\[\left\{\begin{pmatrix}\alpha&\beta\\ 0&\alpha^{\ell}\end{pmatrix}\mid\alpha,\beta\in\mathbb{F}_{\ell^{2}}\right\} \subset\operatorname{Mat}_{2}(\mathbb{F}_{\ell^{2}}). \tag{6.1.1}\]
In both cases, we will describe all left ideals of \(\mathcal{O}_{\ell}\). Equivalently, given a left \(\mathcal{O}_{\ell}\)-module \(M\), free of rank one, we will describe all its (left) \(\mathcal{O}_{\ell}\)-submodules.
First we suppose that \(\ell\nmid\operatorname{disc}(B)\); fix an isomorphism \(\mathcal{O}_{\ell}\simeq\operatorname{Mat}_{2}(\mathbb{F}_{\ell})\) and a free rank one left \(\mathcal{O}_{\ell}\)-module \(M\). Let \(e_{1},e_{2},w\) be the elements of \(\mathcal{O}_{\ell}\) corresponding to the matrices
\[\begin{pmatrix}1&0\\ 0&0\end{pmatrix},\begin{pmatrix}0&0\\ 0&1\end{pmatrix},\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\]
respectively. Then \(e_{1}\), \(e_{2}\) are idempotents satisfying \(e_{1}e_{2}=0\), \(e_{1}+e_{2}=1\) and \(e_{1}w=we_{2}\). Set \(M_{i}=\ker(e_{i}\colon M\to M)\subset M\). Then \(M=M_{1}\oplus M_{2}\) and \(w\) induces mutually inverse bijections \(M_{1}\to M_{2}\) and \(M_{2}\to M_{1}\). Given an \(\mathcal{O}_{\ell}\)-submodule \(N\subset M\), define \(N_{i}:=N\cap M_{i}\). Since \(N\) is \(\mathcal{O}_{\ell}\)-stable, \(N=N_{1}\oplus N_{2}\) and \(w(N_{1})=N_{2}\).
**Lemma 6.1.2** (Unramified case).: _The map \(N\mapsto(N_{1},N_{2})\) induces a bijection between left \(\mathcal{O}_{\ell}\)-submodules of \(M\) and pairs of \(\mathbb{F}_{\ell}\)-subspaces \((N_{1}\subset M_{1},N_{2}\subset M_{2})\) satisfying \(w(N_{1})=N_{2}\)._
Proof.: This is elementary, using the fact that \(\mathcal{O}_{\ell}\) is generated (as a ring) by \(e_{1},e_{2}\) and \(w\).
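For small \(\ell\) the classification can also be confirmed by brute force; the Python sketch below (an illustration of ours, for \(\ell=2\)) enumerates all left ideals of \(\operatorname{Mat}_{2}(\mathbb{F}_{2})\) and finds exactly \(0\), three \(2\)-dimensional ideals (one per line in \(\mathbb{F}_{2}^{2}\)), and the whole ring:

```python
# List all left ideals of Mat_2(F_2) by closing generating sets under
# addition and left multiplication.  Matching Lemma 6.1.2, the proper
# nonzero left ideals are 2-dimensional, one for each of the 3 lines in F_2^2.
from itertools import combinations, product

ELEMS = list(product(range(2), repeat=4))   # (a,b,c,d) = [[a,b],[c,d]]

def mul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return ((a*e + b*g) % 2, (a*f + b*h) % 2, (c*e + d*g) % 2, (c*f + d*h) % 2)

def add(x, y):
    return tuple((u + v) % 2 for u, v in zip(x, y))

def left_ideal_closure(gens):
    ideal = {(0, 0, 0, 0)}
    frontier = list(gens)
    while frontier:
        x = frontier.pop()
        for z in {add(x, y) for y in ideal} | {mul(r, x) for r in ELEMS}:
            if z not in ideal:
                ideal.add(z)
                frontier.append(z)
    return frozenset(ideal)

ideals = {left_ideal_closure(gens) for k in range(3)
          for gens in combinations(ELEMS, k)}
print(sorted(len(i) for i in ideals))       # [1, 4, 4, 4, 16]
```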
Next suppose that \(\ell\) divides \(\operatorname{disc}(B)\) and fix an isomorphism between \(\mathcal{O}_{\ell}\) and the ring described in (6.1.1). The set of strictly upper triangular matrices is a two-sided ideal \(J\subset\mathcal{O}_{\ell}\) that satisfies \(\mathcal{O}_{\ell}/J\simeq\mathbb{F}_{\ell^{2}}\). The following lemma is easily verified [11, §4].
**Lemma 6.1.3** (Ramified case).: _The only proper left ideal of \(\mathcal{O}_{\ell}\) is \(J\). Consequently, the only proper \(\mathcal{O}_{\ell}\)-submodule of \(M\) is \(M[J]=\{m\in M\mid j\cdot m=0\text{ for all }j\in J\}\)._
### The subgroup generated by a torsion point
Let \(A/\mathbb{Q}\) be an \(\mathcal{O}\)-PQM surface and \(\ell\) be a prime number. Let \(\mathcal{O}_{\ell}:=\mathcal{O}\otimes\mathbb{F}_{\ell}\) and \(M:=A[\ell](\bar{\mathbb{Q}})\). Then \(M\) is a free \(\mathcal{O}_{\ell}\)-module of rank one, and \(\operatorname{Gal}_{\mathbb{Q}}\) acts on \(\mathcal{O}_{\ell}\) by ring automorphisms (as studied in §3.2) and on \(M\) by \(\mathbb{F}_{\ell}\)-linear automorphisms. These actions satisfy \((a\cdot m)^{\sigma}=a^{\sigma}\cdot m^{\sigma}\) for all \(\sigma\in\operatorname{Gal}_{\mathbb{Q}},a\in\mathcal{O}_{\ell}\) and \(m\in M\).
**Lemma 6.2.1**.: _Suppose that the \(\operatorname{Gal}_{\mathbb{Q}}\)-modules \(\mathcal{O}_{\ell}\) and \(M\) are isomorphic. Then \(\ell\leq 3\)._
Proof.: This follows by comparing determinants. On one hand, the \(\operatorname{Gal}_{\mathbb{Q}}\)-action on \(\mathcal{O}_{\ell}\) has determinant \(1\). Indeed, the determinant of left/right multiplication by \(b\in B\) acting on \(B\) is the square of the reduced norm, so conjugation has determinant \(1\). On the other hand, the determinant of the \(\operatorname{Gal}_{\mathbb{Q}}\)-action on \(M\) is the square of the mod \(\ell\) cyclotomic character \(\bar{\chi}_{\ell}\). This implies that \(\bar{\chi}_{\ell}^{2}=1\), so \(\mathbb{Q}(\zeta_{\ell}+\zeta_{\ell}^{-1})=\mathbb{Q}\), so \(\ell\leq 3\).
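Both determinant computations can be checked numerically; in the sketch below (ours, assuming numpy is available), left multiplication and conjugation on \(\operatorname{Mat}_{2}\) are realized as Kronecker products in a row-major basis:

```python
# On the 4-dimensional space Mat_2, left multiplication x -> b x has
# determinant det(b)^2 = Nrd(b)^2, while conjugation x -> b x b^{-1} has
# determinant 1, matching the two sides compared in Lemma 6.2.1.
import numpy as np

b = np.array([[2.0, 1.0], [3.0, 4.0]])      # any invertible 2x2 matrix

left_mult = np.kron(b, np.eye(2))           # vec(b X) = (b kron I) vec(X)
conj = np.kron(b, np.linalg.inv(b).T)       # vec(b X b^{-1}) = (b kron (b^{-1})^T) vec(X)

print(np.linalg.det(left_mult), np.linalg.det(b) ** 2)  # both 25.0 (up to rounding)
print(round(np.linalg.det(conj), 10))                   # 1.0
```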
_Remark 6.2.2_.: When \(\ell=3\), we know of no examples of \(\mathcal{O}\)-PQM surfaces over \(\mathbb{Q}\) with \(\mathcal{O}_{\ell}\simeq M\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules. Such examples do exist for \(\ell=2\); see [13, Corollary 7.5].
**Lemma 6.2.3**.: _If \(m\in M^{\operatorname{Gal}_{\mathbb{Q}}}\) is nonzero and \(\ell\geq 5\), then \(\mathcal{O}_{\ell}\cdot m\subset M\) has order \(\ell^{2}\)._
Proof.: By Lemmas 6.1.2 and 6.1.3, it suffices to show that \(\mathcal{O}_{\ell}\cdot m\neq M\). But if \(\mathcal{O}_{\ell}\cdot m=M\), then \(\mathcal{O}_{\ell}\to M,x\mapsto x\cdot m\) is an isomorphism, contradicting Lemma 6.2.1.
To analyze the case \(\ell\mid\operatorname{disc}(B)\), we use the following theorem attributed to Ohta.
**Theorem 6.2.4**.: _Let \(F\) be a number field and let \(A/F\) be an abelian variety with \(\operatorname{End}(A)\simeq\mathcal{O}\). Suppose \(\mathcal{O}\) is ramified at a prime \(\ell\) and let \(J\subset\mathcal{O}\) be the maximal ideal above \(\ell\). Then the composition of the Galois representation \(\operatorname{Gal}_{F}\to\operatorname{Aut}_{\mathbb{F}_{\ell^{2}}}A[J]\simeq \mathbb{F}_{\ell^{2}}^{\times}\) with the norm \(\mathbb{F}_{\ell^{2}}^{\times}\to\mathbb{F}_{\ell}^{\times}\) is equal to the mod \(\ell\) cyclotomic character \(\operatorname{Gal}_{F}\to\operatorname{Aut}(\mu_{\ell})\simeq\mathbb{F}_{ \ell}^{\times}\)._
Proof.: See [11, Proposition 4.6].
**Proposition 6.2.5**.: _If \(\ell\mid\operatorname{disc}(B)\) and \(M^{\operatorname{Gal}_{\mathbb{Q}}}\neq 0\), then \(\ell\leq 3\)._
Proof.: Choose a nonzero \(m\in M^{\operatorname{Gal}_{\mathbb{Q}}}\) and suppose that \(\ell\geq 5\). By the previous lemma, \(\mathcal{O}_{\ell}\cdot m\) is a proper submodule of \(M\). Therefore \(\mathcal{O}_{\ell}\cdot m=M[J]\) by Lemma 6.1.3. Let \(L/\mathbb{Q}\) be the endomorphism field of \(A\). Then the \(\operatorname{Gal}_{\mathbb{Q}}\)-action on \(M\) restricts to a \(\operatorname{Gal}_{L}\)-action on \(M[J]\) through elements of \(\mathbb{F}_{\ell^{2}}^{\times}\) (after choosing an isomorphism \(\mathcal{O}_{\ell}/J\simeq\mathbb{F}_{\ell^{2}}\)), giving a homomorphism \(\epsilon\colon\operatorname{Gal}_{L}\to\mathbb{F}_{\ell^{2}}^{\times}\). Since \(m\) is \(\operatorname{Gal}_{\mathbb{Q}}\)-invariant, the \(\operatorname{Gal}_{L}\)-action on \(M[J]\) is trivial, so \(\epsilon\) is trivial. On the other hand, the composition \(N_{\mathbb{F}_{\ell^{2}}/\mathbb{F}_{\ell}}\circ\epsilon\colon\operatorname{ Gal}_{L}\to\mathbb{F}_{\ell}^{\times}\) equals the mod \(\ell\) cyclotomic character \(\bar{\chi}_{\ell}\), by Theorem 6.2.4. It follows that \(\bar{\chi}_{\ell}|_{\operatorname{Gal}_{L}}=1\), or in other words \(\mathbb{Q}(\zeta_{\ell})\subset L\). Thus \(\operatorname{Gal}(L/\mathbb{Q})\) surjects onto \(\operatorname{Gal}(\mathbb{Q}(\zeta_{\ell})/\mathbb{Q})\simeq(\mathbb{Z}/ \ell\mathbb{Z})^{\times}\simeq\mathbb{Z}/(\ell-1)\mathbb{Z}\). Since \(\operatorname{Gal}(L/\mathbb{Q})\) is dihedral (Proposition 3.2.1), every nontrivial cyclic quotient of \(\operatorname{Gal}(L/\mathbb{Q})\) has order \(2\), and we conclude that \(\ell\leq 3\).
We now treat the unramified case, using the following key linear-algebraic lemma, which we call the 'torus trick'.
**Lemma 6.2.6**.: _Suppose that \(\ell\nmid\operatorname{disc}(B)\). Let \(S\subset\mathcal{O}_{\ell}\) be a \(2\)-dimensional semisimple commutative \(\operatorname{Gal}_{\mathbb{Q}}\)-stable subalgebra such that \(S\cdot m=\mathcal{O}_{\ell}\cdot m\) for some nonzero \(m\in M^{\operatorname{Gal}_{\mathbb{Q}}}\). Then every \(\sigma\in\operatorname{Gal}_{\mathbb{Q}}\) acting trivially on \(S\) also acts trivially on \(\mathcal{O}_{\ell}\)._
Proof.: Let \(\sigma\in\operatorname{Gal}_{\mathbb{Q}}\) be an element acting trivially on \(S\) and let \(m\in M^{\operatorname{Gal}_{\mathbb{Q}}}\setminus\{0\}\) be an element such that \(S\cdot m=\mathcal{O}_{\ell}\cdot m\). Let \(k=\bar{\mathbb{F}}_{\ell}\). It suffices to prove that \(\sigma\) acts trivially on \(\mathcal{O}_{k}:=\mathcal{O}_{\ell}\otimes_{\mathbb{F}_{\ell}}k\). The assumptions imply that \(S_{k}\simeq k\times k\), and we may fix an isomorphism \(\mathcal{O}_{k}\simeq\operatorname{Mat}_{2}(k)\) of \(k\)-algebras such that \(S_{k}\) is identified with the subalgebra of diagonal matrices of \(\operatorname{Mat}_{2}(k)\). Lemma 6.1.2 and the fact that \(S_{k}\) is \(2\)-dimensional show that \(\dim_{k}(S_{k}\cdot m)=\dim_{k}(\mathcal{O}_{k}\cdot m)=2\). Let \(I=\{x\in\mathcal{O}_{k}\mid x\cdot m=0\}\) be the annihilator of \(m\), an ideal of \(\mathcal{O}_{k}\) of dimension \(2\). Using the analogue of Lemma 6.1.2 over \(k\), such an ideal is necessarily of the form
\[\left\{\begin{pmatrix}ax&bx\\ ay&by\end{pmatrix}\mid x,y\in k\right\}\]
for some \(a,b\in k\) which are not both zero. The assumption that \(S\cdot m=\mathcal{O}_{\ell}\cdot m\) implies that \(S_{k}\cap I=0\). It follows that \(a\) and \(b\) must be nonzero and \(\mathcal{O}_{k}=S_{k}\oplus I\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules. Let \(N\subset\mathcal{O}_{k}\) be the subspace of elements normalising but not centralising \(S_{k}\), together with \(0\); in the chosen coordinates, \(N\) is the space of antidiagonal matrices. Then the above calculation also shows that \(N\cap I=0\). Moreover \(N\) is \(\operatorname{Gal}_{\mathbb{Q}}\)-stable since \(S\) is. The relation \(\mathcal{O}_{k}=S_{k}\oplus I\) shows that \(\sigma(x)-x\in I\) for all \(x\in\mathcal{O}_{k}\). It follows that \(\sigma(x)-x\in I\cap N=0\) for all \(x\in N\). Since \(\mathcal{O}_{k}\) is spanned by \(S_{k}\) and \(N\), the claim follows.
**Proposition 6.2.7**.: _Suppose that \(\ell\nmid\operatorname{disc}(B)\) and \(M^{\operatorname{Gal}_{\mathbb{Q}}}\neq 0\) and \(\ell\geq 5\). Then \(A\) is of \(\operatorname{GL}_{2}\)-type._
Proof.: We apply the torus trick using the distinguished quadratic subring \(S\subset\mathcal{O}\) of \(A\) (Definition 3.4.1). Write \(S_{\ell}=S\otimes_{\mathbb{Z}}\mathbb{F}_{\ell}\). Then \(S_{\ell}\subset\mathcal{O}_{\ell}\) is a commutative semisimple subalgebra since \(S\) is unramified at \(\ell\) by Proposition 3.4.2. Suppose that \(A\) is not of \(\operatorname{GL}_{2}\)-type. Then \(\operatorname{Gal}_{\mathbb{Q}}\) acts nontrivially on \(S\) since \(\operatorname{End}(A)=\mathbb{Z}\); let \(K/\mathbb{Q}\) be the quadratic extension splitting this action. We claim that the \(\operatorname{Gal}_{K}\)-action on \(\mathcal{O}_{\ell}\) is trivial. Indeed, let \(m\in M^{\operatorname{Gal}_{\mathbb{Q}}}\) be a nonzero element. By Lemma 6.2.6 it suffices to prove that \(S_{\ell}\cdot m=\mathcal{O}_{\ell}\cdot m\). But the set \(\{x\in S_{\ell}\mid x\cdot m=0\}\) is a proper \(\operatorname{Gal}_{\mathbb{Q}}\)-invariant ideal of \(S_{\ell}\). Since the only such ideal is \(0\) (using the fact that the \(\operatorname{Gal}_{\mathbb{Q}}\)-action on \(S\) is nontrivial and \(\ell\neq 2\)), the map \(S_{\ell}\to A[\ell]\), \(x\mapsto x\cdot m\), is injective; hence \(S_{\ell}\cdot m\) is \(2\)-dimensional, and since \(\mathcal{O}_{\ell}\cdot m\) is also \(2\)-dimensional by Lemma 6.2.3, the inclusion \(S_{\ell}\cdot m\subset\mathcal{O}_{\ell}\cdot m\) is an equality. This proves that the \(\operatorname{Gal}_{K}\)-action on \(\mathcal{O}_{\ell}\) is trivial. By Lemma 3.5.7, this even implies that the \(\operatorname{Gal}_{K}\)-action on \(\mathcal{O}\) is trivial. We conclude that the quadratic field \(K\) is the endomorphism field of \(A\), hence \(A\) is of \(\operatorname{GL}_{2}\)-type by Lemma 3.2.2, a contradiction.
## 7. Proof of Theorems 1.2 and 1.3: eliminating groups of order \(2^{i}3^{j}\)
Let \(A/\mathbb{Q}\) be an \(\mathcal{O}\)-PQM surface. By Theorem 1.1, we have \(\#A(\mathbb{Q})_{\operatorname{tors}}=2^{i}3^{j}\) for some \(i,j\geq 0\). Since \(A\) has potentially good reduction, local methods show that \(2^{i}3^{j}\leq 72\)[1, Theorem 1.4]. In this section, we will improve this bound and constrain the group structure of \(A(\mathbb{Q})_{\operatorname{tors}}\) as much as possible using the \(\mathcal{O}\)-action on \(A_{\overline{\mathbb{Q}}}\). We may assume \(A\) is not of \(\operatorname{GL}_{2}\)-type since we have already proven Theorem 1.4.
For each prime \(p\), there exists a totally ramified extension \(K/\mathbb{Q}_{p}\) such that \(A_{K}\) has good reduction (Lemma 4.1.2). The special fiber of the Neron model of \(A_{K}\) is an abelian surface over \(\mathbb{F}_{p}\) which we denote by \(A_{p}\). We call \(A_{p}\) the good reduction of \(A\) at \(p\), though it is only uniquely determined up to twists (since a different choice of totally ramified extension \(K^{\prime}\) would give rise to a possibly non-isomorphic twist of \(A_{p}\)).
Lemma 4.3.4 shows that the prime-to-\(p\) subgroup of \(A(\mathbb{Q})_{\operatorname{tors}}\) injects into \(A_{p}(\mathbb{F}_{p})\). Moreover \(\operatorname{End}(A_{\overline{\mathbb{Q}}})\subset\operatorname{End}((A_{p})_{\overline{\mathbb{F}}_{p}})\), hence \(A_{p}\) is \(\bar{\mathbb{F}}_{p}\)-isogenous to the square of an elliptic curve \(E/\bar{\mathbb{F}}_{p}\) by Proposition 4.5.2, so its isogeny class is rather constrained. This leads to the following slight strengthening of [1, Theorem 1.4] in our case:
**Proposition 7.0.1**.: _We have \(\#A(\mathbb{Q})_{\operatorname{tors}}=2^{i}3^{j}\) for some \(i\in\{0,1,2,3,4\}\) and \(j\in\{0,1,2\}\). Moreover, \(\#A(\mathbb{Q})_{\operatorname{tors}}\leq 48\)._
Proof.: By the above remarks, to bound the prime-to-\(2\) (resp. prime-to-\(3\)) torsion, it is enough to bound \(X(\mathbb{F}_{2})[3^{\infty}]\) (resp. \(X(\mathbb{F}_{3})[2^{\infty}]\)), as \(X\) varies over all abelian surfaces over \(\mathbb{F}_{2}\) (resp. \(\mathbb{F}_{3}\)) that are geometrically isogenous to the square of an elliptic curve. For this it is enough to compute \(\max_{X}\gcd(f_{X}(1),3^{100})\) (resp. \(\max_{X}\gcd(f_{X}(1),2^{100})\)), where \(f_{X}\) is the \(L\)-polynomial of \(X\) and the maximum is over all the aforementioned isogeny classes. This computation is easily done with the help of the LMFDB's database of isogeny classes of abelian varieties over finite fields [13], and the conclusion is the first sentence of the proposition.
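Schematically, the first of these computations has the following shape (a Python sketch of ours; the coefficient lists below are placeholders, whereas the actual lists are read off from the LMFDB):

```python
# Bound the 3-primary torsion coming from reduction at p = 2, as in the
# proof: for each candidate isogeny class with L-polynomial f_X, compute
# gcd(f_X(1), 3^100) and take the maximum.  The coefficient lists below are
# hypothetical sample data, standing in for the LMFDB's tables.
from math import gcd

L_POLYS = [
    [1, 2, 4, 4, 4],     # sample: f_X(T) = 1 + 2T + 4T^2 + 4T^3 + 4T^4
    [1, 0, 2, 0, 4],     # sample
]

bound = max(gcd(sum(coeffs), 3 ** 100) for coeffs in L_POLYS)
print(bound)             # sum(coeffs) = f_X(1) = #X(F_2)
```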
The second sentence is equivalent to the claim that \(\#A(\mathbb{Q})_{\operatorname{tors}}\) cannot equal \(144\) nor \(72\). We cannot have \(144\) since \(\#A_{5}(\mathbb{F}_{5})\leq 100\), and we cannot have \(72\) since the only isogeny class of abelian surfaces \(X/\mathbb{F}_{5}\) with \(72\mid\#X(\mathbb{F}_{5})\) (which has LMFDB label \(2.5.f_{q}\)) is not geometrically isogenous to a square of an elliptic curve.
The remainder of the proof of Theorems 1.2 and 1.3 will be similar to (but more difficult than) that of Proposition 7.0.1, using the good reduction model \(A_{p}\) at various primes \(p\) and the \(\mathcal{O}\)-action. In what follows, we will freely use the Honda-Tate computations conveniently recorded in the LMFDB [10], so the careful reader will want to follow along in a web browser. We use the LMFDB's method of labeling isogeny classes, e.g. 2.5.\(d_{e}\) is an isogeny class of abelian surfaces over \(\mathbb{F}_{5}\) with label \(d_{e}\).
### Torsion constraints arising from the endomorphism field
Before analyzing specific groups, we state the following useful proposition, which uses techniques similar to the proof of Theorem 6.0.1, including the torus trick.
**Proposition 7.1.1**.: _Let \(G\) be the Galois group of the endomorphism field \(L/\mathbb{Q}\)._
1. _If_ \(G\simeq D_{3}\) _or_ \(D_{6}\)_, then_ \(A[2](\mathbb{Q})\subset\mathbb{Z}/2\mathbb{Z}\)_. If in addition_ \(A[2](\mathbb{Q})=\mathbb{Z}/2\mathbb{Z}\)_, then_ \(A[2]\simeq\mathcal{O}/2\mathcal{O}\) _as_ \(\operatorname{Gal}_{\mathbb{Q}}\)_-modules or_ \(2\mid\operatorname{disc}(B)\)_._
2. _If_ \(G\simeq D_{2}\) _or_ \(D_{4}\)_, then_ \(A[3](\mathbb{Q})\subset\mathbb{Z}/3\mathbb{Z}\)_. If in addition_ \(A[3](\mathbb{Q})=\mathbb{Z}/3\mathbb{Z}\)_, then_ \(A[3]\simeq\mathcal{O}/3\mathcal{O}\) _as_ \(\operatorname{Gal}_{\mathbb{Q}}\)_-modules or_ \(3\mid\operatorname{disc}(B)\)_._
Proof.:

1. Let \(S\subset\mathcal{O}\) be the distinguished quadratic subring of \(A\) (Definition 3.4.1). By Proposition 3.4.2, \(S\simeq\mathbb{Z}[\omega]\) where \(\omega^{2}+\omega+1=0\). Let \(K/\mathbb{Q}\) be the quadratic field trivializing the Galois action on \(S\), so \(\operatorname{End}(A_{K})=S\). Let \(S_{2}:=S\otimes\mathbb{F}_{2}\) and \(\mathcal{O}_{2}:=\mathcal{O}\otimes\mathbb{F}_{2}\). If \(A[2]\simeq\mathcal{O}_{2}\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules, then \(A[2](\mathbb{Q})\simeq(\mathcal{O}/2\mathcal{O})^{\operatorname{Gal}_{\mathbb{Q}}}\) is isomorphic to \(\mathbb{Z}/2\mathbb{Z}\) by Theorem 2.3.1, so indeed \(A[2](\mathbb{Q})\subset\mathbb{Z}/2\mathbb{Z}\) in this case. We may therefore assume that \(A[2]\not\simeq\mathcal{O}_{2}\) in what follows. It suffices to show that if there exists a nonzero \(m\in A[2](\mathbb{Q})\), then \(A[2](\mathbb{Q})\) has order \(2\). By the classification of \(\mathcal{O}_{2}\)-submodules of \(A[2]\) of §6.1 and the fact that \(\mathcal{O}_{2}\) is not isomorphic to \(A[2]\), the submodule \(\mathcal{O}_{2}\cdot m\subset A[2]\) has order \(4\). Since \(S_{2}\simeq\mathbb{F}_{4}\) has no \(\operatorname{Gal}_{\mathbb{Q}}\)-stable nonzero proper ideals, the map \(S_{2}\to A[2]\), \(x\mapsto x\cdot m\), is injective, hence \(S_{2}\cdot m\subset\mathcal{O}_{2}\cdot m\) has order \(4\) too. Therefore \(S_{2}\cdot m=\mathcal{O}_{2}\cdot m\).

Suppose first that \(2\nmid\operatorname{disc}(B)\). We can then apply Lemma 6.2.6 to conclude that \(\operatorname{Gal}_{K}\) acts trivially on \(\mathcal{O}_{2}\). Since \(\operatorname{Gal}_{K}\) acts on \(\mathcal{O}_{2}\) through \(\operatorname{Gal}(L/K)\simeq C_{3}\) or \(C_{6}\), this contradicts Lemma 3.5.7 and proves the second claim of (a).

It remains to consider the case \(2\mid\operatorname{disc}(B)\). In that case there exists a unique proper nonzero left ideal \(J\) of \(\mathcal{O}_{2}\), and \(A[J]\) is the unique nonzero proper \(\mathcal{O}_{2}\)-submodule of \(A[2]\) (Lemma 6.1.3). It follows that \(S_{2}\cdot m=\mathcal{O}_{2}\cdot m=A[J]\). Since \(A[2]\not\simeq\mathcal{O}_{2}\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules, no element of \(A[2](\mathbb{Q})\) is an \(\mathcal{O}_{2}\)-generator, so \(A[2](\mathbb{Q})=A[J](\mathbb{Q})\). On the other hand, the equality \(S_{2}\cdot m=A[J]\) shows that \(S_{2}\simeq A[J]\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules. Since \(\operatorname{Gal}_{\mathbb{Q}}\) acts nontrivially on \(S_{2}=\mathbb{F}_{4}\), \(A[J](\mathbb{Q})=A[2](\mathbb{Q})\) has order \(2\).

2. The argument is very similar to the proof of (a), using that in the \(D_{4}\) case, the distinguished quadratic subring \(\mathbb{Z}[i]\) is unramified at \(3\). In the \(D_{2}\) case, the distinguished quadratic subring might be ramified at \(3\), but by Lemma 2.2.2 there exist three squarefree integers \(m,n,t\) and embeddings of \(\mathbb{Z}[\sqrt{m}]\), \(\mathbb{Z}[\sqrt{n}]\) and \(\mathbb{Z}[\sqrt{t}]\) into \(\mathcal{O}\) whose images are \(\operatorname{Gal}_{\mathbb{Q}}\)-stable. Since \(t=-mn\) up to squares, at least one of these three subrings is unramified at \(3\), and the argument of (a) can be carried out using this subring.
### Groups of order \(48\)
**Lemma 7.2.1**.: _Let \(E\) be an elliptic curve over the finite field \(\mathbb{F}=\mathbb{F}_{p^{n}}\), and assume either that \(E\) is ordinary or that \(n=1\). Then any abelian surface \(X/\mathbb{F}\) isogenous to \(E^{2}\) is isomorphic to a product of elliptic curves over \(\mathbb{F}\)._
Proof.: Let \(\pi\in\operatorname{End}(E)\) be the Frobenius. Replacing \(E\) by an isogenous elliptic curve, we may assume that \(\operatorname{End}(E)=\mathbb{Z}[\pi]\)[3, §§7.2-7.3]. By [3, Theorem 1.1], the functor \(X\mapsto\operatorname{Hom}(X,E)\) is an equivalence between the category of abelian varieties isogenous to a power of \(E\) and the category of finitely generated torsion-free \(\operatorname{End}(E)\)-modules. Since \(\operatorname{End}(E)\) is an order in a quadratic field, any finitely generated torsion-free \(\operatorname{End}(E)\)-module is a direct sum of rank \(1\) modules [3, Theorem 3.2], so the lemma follows.
**Lemma 7.2.2**.: _If \(G\subset A(\mathbb{Q})_{\operatorname{tors}}\) is a subgroup of order \(16\), then \(G\) is isomorphic to \((\mathbb{Z}/4\mathbb{Z})^{2}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\times\mathbb{Z}/4\mathbb{Z}\)._
Proof.: There is a unique isogeny class of abelian surfaces \(X\) over \(\mathbb{F}_{3}\) with \(16\mid\#X(\mathbb{F}_{3})\), namely that of the square of the elliptic curve \(E/\mathbb{F}_{3}\) with \(\operatorname{End}_{\mathbb{F}_{3}}(E)\simeq\mathbb{Z}[\sqrt{-3}]\) and \(\#E(\mathbb{F}_{3})=4\). By Lemma 7.2.1, \(A_{3}\) is isomorphic to a product of two elliptic curves, both of which have four \(\mathbb{F}_{3}\)-rational points. Since such an elliptic curve has its group of \(\mathbb{F}_{3}\)-points isomorphic to either \(\mathbb{Z}/4\mathbb{Z}\) or \((\mathbb{Z}/2\mathbb{Z})^{2}\), \(A_{3}(\mathbb{F}_{3})\) is isomorphic to \((\mathbb{Z}/4\mathbb{Z})^{2}\) or \((\mathbb{Z}/4\mathbb{Z})\times(\mathbb{Z}/2\mathbb{Z})^{2}\) or \((\mathbb{Z}/2\mathbb{Z})^{4}\). By Proposition 5.2.1, the latter cannot happen. The lemma now follows since \(A(\mathbb{Q})[16]\hookrightarrow A_{3}(\mathbb{F}_{3})\).
**Proposition 7.2.3**.: \(\#A(\mathbb{Q})_{\operatorname{tors}}<48\)_._
Proof.: By Proposition 7.0.1 it is enough to show that \(\#A(\mathbb{Q})_{\operatorname{tors}}\neq 48\). Assume for the sake of contradiction that \(\#A(\mathbb{Q})_{\operatorname{tors}}=48\). The reduction \(A_{5}/\mathbb{F}_{5}\) must then be in the isogeny class \(2.5.d_{e}\). We see that \(\operatorname{End}^{0}((A_{5})_{\mathbb{F}_{5^{n}}})\) contains a quaternion algebra if and only if \(3\) divides \(n\). Therefore the Galois group of the endomorphism field of \(A\) has order divisible by \(3\), so by Proposition 3.2.1 must be \(D_{3}\) or \(D_{6}\). Proposition 7.1.1 then implies \(A[2](\mathbb{Q})\subset\mathbb{Z}/2\mathbb{Z}\), contradicting the fact that \(A[2](\mathbb{Q})\) has size \(\geq 4\) (Lemma 7.2.2).
### Groups of order \(36\)
**Lemma 7.3.1**.: _If \(36\mid\#A(\mathbb{Q})_{\operatorname{tors}}\), then \(A(\mathbb{Q})_{\operatorname{tors}}\simeq(\mathbb{Z}/6\mathbb{Z})^{2}\)._
Proof.: Over \(\mathbb{F}_{5}\) there is exactly one isogeny class of abelian surfaces \(X\) with \(36\mid\#X(\mathbb{F}_{5})\) and whose geometric endomorphism algebra contains a quaternion algebra, namely \(2.5.a_{k}\), which is isogenous to the square of an elliptic curve. Thus the reduction \(A_{5}\) is isomorphic to a product of two elliptic curves (Lemma 7.2.1). Every elliptic curve in this isogeny class has \(E(\mathbb{F}_{5})\simeq\mathbb{Z}/6\mathbb{Z}\), hence \(A_{5}(\mathbb{F}_{5})\simeq(\mathbb{Z}/6\mathbb{Z})^{2}\). Since \(A(\mathbb{Q})_{\operatorname{tors}}\) has order prime to \(5\), it injects into \(A_{5}(\mathbb{F}_{5})\) (Lemma 4.3.4); as its order is divisible by \(36\), we conclude \(A(\mathbb{Q})_{\operatorname{tors}}\simeq(\mathbb{Z}/6\mathbb{Z})^{2}\).
**Proposition 7.3.2**.: \(\#A(\mathbb{Q})_{\operatorname{tors}}<36\)_._
Proof.: By Proposition 7.2.3 and Proposition 7.0.1, it is enough to show that \(A(\mathbb{Q})_{\operatorname{tors}}\) does not have order \(36\). By Lemma 7.3.1 such an \(A\) would have \(A(\mathbb{Q})_{\operatorname{tors}}\simeq(\mathbb{Z}/6\mathbb{Z})^{2}\). By Proposition 7.1.1, the endomorphism field of \(A\) cannot have Galois group \(D_{n}\) for any \(n\in\{2,3,4,6\}\), so \(A\) is of \(\operatorname{GL}_{2}\)-type, which we have also already ruled out.
It follows that \(\#A(\mathbb{Q})_{\operatorname{tors}}\leq 24\). Before we show that this inequality is strict, we rule out the existence of rational points of order \(9\) and \(8\).
### Rational points of order \(9\)
**Proposition 7.4.1**.: \(A(\mathbb{Q})_{\rm tors}\) _contains no elements of order \(9\)._
Proof.: Suppose \(A(\mathbb{Q})\) has a point of order \(9\). Then the reduction \(A_{2}/\mathbb{F}_{2}\) must live in the isogeny class \(2.2.a_{e}\) or \(2.2.b_{b}\). The latter has commutative geometric endomorphism algebra, so cannot be the reduction of an \(\mathcal{O}\)-PQM surface by Proposition 4.5.2. The former is the isogeny class of the square of an elliptic curve \(E\) over \(\mathbb{F}_{2}\) with \(\#E(\mathbb{F}_{2})=3\), so by Lemma 7.2.1 we have \(A_{2}(\mathbb{F}_{2})\simeq(\mathbb{Z}/3\mathbb{Z})^{2}\), which contains no point of order \(9\). Since the prime-to-\(2\) torsion of \(A(\mathbb{Q})\) injects into \(A_{2}(\mathbb{F}_{2})\) (Lemma 4.3.4), this is a contradiction.
### Rational points of order \(8\)
**Proposition 7.5.1**.: \(A(\mathbb{Q})_{\rm tors}\) _contains no elements of order \(8\)._
Proof.: Suppose otherwise. The reduction \(A_{3}/\mathbb{F}_{3}\) must be in the isogeny class \(2.3.a_{c}\), which is simple with endomorphism algebra \(\mathbb{Q}(\zeta_{8})=\mathbb{Q}(\sqrt{2},\sqrt{-2})\). (It cannot be in the isogeny class \(2.3.a_{g}\) by the proof of Lemma 7.2.2.) Since \(\#A_{3}(\mathbb{F}_{3})=8\), we must have \(A_{3}(\mathbb{F}_{3})\simeq\mathbb{Z}/8\mathbb{Z}\). This eliminates the possibility that \(A(\mathbb{Q})\) contains a prime-to-\(3\) subgroup any larger than \(\mathbb{Z}/8\mathbb{Z}\). Note also that \(\#A_{3}(\mathbb{F}_{9})=64\) and that \((A_{3})_{\mathbb{F}_{9}}\) is isomorphic to a product of ordinary elliptic curves over \(\mathbb{F}_{9}\) by Lemma 7.2.1, at least one of which has \(E(\mathbb{F}_{9})\simeq\mathbb{Z}/8\mathbb{Z}\). It follows that the \(\mathbb{F}_{2}\)-dimension of \(A_{3}[2](\mathbb{F}_{9})\) is at most \(3\), and in particular not all \(2\)-torsion points are defined over \(\mathbb{F}_{9}\). On the other hand, all endomorphisms of \((A_{3})_{\bar{\mathbb{F}}_{3}}\) are defined over \(\mathbb{F}_{9}\), so we conclude by Lemmas 6.1.2 and 6.1.3 that the \(\mathcal{O}/2\mathcal{O}\)-module generated by any \(\mathbb{F}_{9}\)-rational point of order \(2\) has order \(4\).
Suppose first that \(2\) divides \(\operatorname{disc}(B)\). Then the aforementioned \(\mathcal{O}/2\mathcal{O}\)-module must be \(A_{3}[J]\), where \(J\) is the ideal in \(\mathcal{O}\) such that \(J^{2}=2\mathcal{O}\) (see §6.1). Let \(t\in J\) be any element not in \(2\mathcal{O}\). Then over \(\mathbb{F}_{9}\) we have an exact sequence
\[0\to A_{3}[J]\to A_{3}[2]\to A_{3}[J]\to 0\]
with the last map being multiplication by \(t\). Let \(P\in A_{3}[4](\mathbb{F}_{9})\) be a point of order \(4\). Without loss of generality we may assume \(Q=tP\) has order \(2\) (if not, just replace \(P\) by \(tP\)) and \(Q\notin A_{3}[J]\). Then we've seen that \(\mathcal{O}\cdot Q\neq A_{3}[2]\), so \(\mathcal{O}\cdot Q=A_{3}[J]\) but this contradicts \(Q\notin A_{3}[J]\).
Now suppose that \(2\) does not divide \(\operatorname{disc}(B)\), so that \(\mathcal{O}/2\mathcal{O}\simeq\operatorname{Mat}_{2}(\mathbb{F}_{2})\). Let \(L/\mathbb{Q}\) be the endomorphism field. If \(\operatorname{Gal}(L/\mathbb{Q})\simeq D_{2}\) then at least one of the quadratic subfields of \(L\) is not inert at \(3\). So \(\operatorname{End}_{\mathbb{F}_{3}}(A_{3})\) must contain a quadratic order \(S\) in \(\mathbb{Z}[i]\) or \(\mathbb{Z}[\sqrt{2}]\) or in \(\mathbb{Z}[\sqrt{-2}]\). But we saw in Lemma 2.2.1 that \(S\) contains \(\mathbb{Z}[\sqrt{m}]\) with \(m\) squarefree. So \(S\) _is_ \(\mathbb{Z}[i]\) or \(\mathbb{Z}[\sqrt{2}]\) or \(\mathbb{Z}[\sqrt{-2}]\). In all cases there exists \(t\in S\) such that \(t^{2}S=2S\), and so we have an endomorphism (defined over \(\mathbb{F}_{3}\)) which behaves like \(\sqrt{2}\) on \(A_{3}[2]\). But we also have a rational point \(P\) of order \(4\). Without loss of generality the orders of \(tP\) and \(t^{2}P\) are both \(2\). But \(t^{2}P\neq tP\), so \(\dim_{\mathbb{F}_{2}}A_{3}[2](\mathbb{F}_{3})>1\), which contradicts \(A_{3}(\mathbb{F}_{3})\simeq\mathbb{Z}/8\mathbb{Z}\). The case \(\operatorname{Gal}(L/\mathbb{Q})=D_{4}\) does not happen when \(\operatorname{disc}(B)\) is odd by Lemma 2.2.3, so we consider the case where \(\operatorname{Gal}(L/\mathbb{Q})\) is \(D_{3}\) or \(D_{6}\). By Proposition 7.1.1(a), \(A[2]\simeq\mathcal{O}/2\mathcal{O}\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules. But then \(A_{3}[2]\simeq\mathcal{O}/2\mathcal{O}\) as \(\operatorname{Gal}_{\mathbb{F}_{3}}\)-modules, contradicting the fact that \(A_{3}[2](\mathbb{F}_{3})\) contains no \(\mathcal{O}/2\mathcal{O}\)-generator.
We are left to consider the case \({\rm Gal}(L/\mathbb{Q})=D_{1}=C_{2}\), i.e. the \({\rm GL}_{2}\)-type case, which we have already treated in Proposition 5.3.5.
### Groups of order \(24\)
If \(A(\mathbb{Q})_{\rm tors}\) has order \(24\), then by Proposition 7.5.1, the group structure is either \((\mathbb{Z}/2\mathbb{Z})^{3}\times\mathbb{Z}/3\mathbb{Z}\) or \(\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}\). We show below that in fact neither can occur. First we gather some facts common to both cases.
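This dichotomy is just the classification of abelian groups of order \(24\) with no element of order \(8\); the short Python check below (ours, purely combinatorial) makes it explicit:

```python
# The abelian groups of order 24 = 8 * 3 are determined by a partition of 3
# (the exponents of the cyclic 2-power factors).  Ruling out an element of
# order 8 (Proposition 7.5.1) leaves exactly the two structures above.

def partitions(n, maxpart=None):
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

for p in partitions(3):
    factors = [2 ** a for a in p] + [3]
    if all(c < 8 for c in factors):
        print(" x ".join(f"Z/{c}" for c in factors))
# output: Z/4 x Z/2 x Z/3   and   Z/2 x Z/2 x Z/2 x Z/3
```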
**Lemma 7.6.1**.: _Suppose \(\#A(\mathbb{Q})_{\rm tors}=24\), and let \(L/\mathbb{Q}\) be the endomorphism field of \(A\). Then_
1. \({\rm Gal}(L/\mathbb{Q})\) _is isomorphic to_ \(D_{2}\) _or_ \(D_{4}\)_,_
2. \(\mathbb{Q}(\zeta_{3})\subset L\)_, and_
3. _if_ \({\rm Gal}(L/\mathbb{Q})\not\simeq D_{4}\) _then_ \(A\) _has unipotent rank_ \(1\) _over_ \(\mathbb{Q}_{3}\) _(in the terminology of §4.1)._
Proof.: Since \(A\) is not of \({\rm GL}_{2}\)-type, Proposition 7.1.1 implies that \({\rm Gal}(L/\mathbb{Q})\) is isomorphic to \(D_{2}\) or \(D_{4}\), proving \((a)\).
Checking isogeny classes over \(\mathbb{F}_{5}\), we see that the reduction \(A_{5}\) is in the isogeny class \(2.5.a_{ac}\); the isogeny class \(2.5.d_{e}\) is ruled out since it only acquires QM over \(\mathbb{F}_{5^{3}}\), which is not compatible with \((a)\). The fact that \(\#A_{5}(\mathbb{F}_{25})[3^{\infty}]=9\) shows that the point of order \(3\) in \(A(\mathbb{Q})\) is not an \(\mathcal{O}\)-module generator of \(A[3]\) (since the \(\mathcal{O}\)-action on \(A_{5}\) is defined over \(\mathbb{F}_{25}\)). By Proposition 7.1.1, we deduce that the quaternion algebra \(B\) is ramified at \(3\). Since \(A[3](\mathbb{Q})\) has a rational point, it follows from Theorem 6.2.4 that \(\mathbb{Q}(\sqrt{-3})=\mathbb{Q}(\zeta_{3})\subset L\), proving \((b)\).
Since \(3\) ramifies in \(L\), \(A\) has bad reduction over \(\mathbb{Q}_{3}\) by Proposition 3.2.5. If \(A[2](\mathbb{Q})\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\) then \(A\) achieves good reduction over every ramified quadratic extension of \(\mathbb{Q}_{3}\) by Proposition 5.3.7. If \(A/\mathbb{Q}_{3}\) has totally additive reduction, then the quadratic twist of \(A\) by \(\mathbb{Q}(\sqrt{3})\), say, will have good reduction at \(3\) by Lemma 4.2.4. But quadratic twisting does not change the endomorphism field by Lemma 3.2.6, so any quadratic twist of \(A\) must have endomorphism field which contains \(\mathbb{Q}(\sqrt{-3})\) and hence must have bad reduction at \(3\). We conclude that \(A\) must have unipotent rank \(1\) over \(\mathbb{Q}_{3}\) by Proposition 4.1.3.
If \(A[2](\mathbb{Q})\simeq(\mathbb{Z}/2\mathbb{Z})^{2}\) and \({\rm Gal}(L/\mathbb{Q})\not\simeq D_{4}\), then \({\rm Gal}(L/\mathbb{Q})\simeq D_{2}\) and so \(L/\mathbb{Q}\) is a biquadratic field containing \(\mathbb{Q}(\zeta_{3})\). It follows that \(A\) has all of its endomorphisms defined over \(\mathbb{Q}_{3}^{\rm nr}(\zeta_{3})\). If \(A\) still has bad reduction over \(\mathbb{Q}_{3}(\zeta_{3})\), then it must have totally additive bad reduction (since it has QM after enlarging the residue field) by Proposition 4.1.3, and we obtain a contradiction with Proposition 4.3.3 and the fact that \(A\) has a point of order \(4\). Thus, \(A\) attains good reduction over \(\mathbb{Q}_{3}(\zeta_{3})\), and arguing as above, we conclude that \(A\) has unipotent rank \(1\) over \(\mathbb{Q}_{3}\).
**Proposition 7.6.2**.: \(A(\mathbb{Q})_{\rm tors}\not\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\times\mathbb{Z}/ 3\mathbb{Z}\)_._
Proof.: Assume otherwise. Theorem 2.3.1 and Lemma 5.3.6 show that the endomorphism field \(L/\mathbb{Q}\) has Galois group \({\rm Gal}(L/\mathbb{Q})\simeq D_{2}\).
First assume there exists a prime \(p>3\) of bad reduction for \(A\). By Theorem 4.3.2, \(A\) must have unipotent rank \(1\) over \(\mathbb{Q}_{p}\), and hence \(p\) must ramify in \(L\) by Proposition 4.1.3. Next, recall that there are three \({\rm Gal}_{\mathbb{Q}}\)-stable quadratic subfields of \(B\), one of which is imaginary. Let \(L_{1}\), \(L_{2}\), and \(L_{3}\) be the corresponding quadratic subfields of \(L\), labeled so that \(B^{{\rm Gal}_{L_{1}}}\) is imaginary quadratic. Since \(L\) is biquadratic, exactly one of the \(L_{i}\) must be unramified over \(\mathbb{Q}_{p}\). Since \(A\) has unipotent rank \(1\), it must be \(L_{1}\) (by Proposition 4.1.3). But by Lemma 7.6.1(b) we have \(\mathbb{Q}(\zeta_{3})\subset L\) and \(\mathbb{Q}(\zeta_{3})\) is also unramified at \(p\), so \(L_{1}=\mathbb{Q}(\sqrt{-3})\). Now, \(A/\mathbb{Q}_{3}\) has unipotent rank \(1\) by Lemma 7.6.1(c). As above, Proposition 4.1.3 implies that the unique sub-extension \(L_{i}\) unramified at \(3\) must be \(L_{1}\). This contradicts \(L_{1}=\mathbb{Q}(\sqrt{-3})\).
Thus, it remains to consider the possibility that \(A\) has good reduction outside \(\{2,3\}\). This forces the endomorphism field to be unramified outside \(\{2,3\}\). Moreover, by Lemma 7.6.1(c), \(A\) has unipotent
rank \(1\) reduction over \(\mathbb{Q}_{3}\), so \(L\) must contain an imaginary quadratic subfield that is unramified at \(3\). Hence \(L\) is isomorphic to \(\mathbb{Q}(\sqrt{-3},i)\) or \(\mathbb{Q}(\sqrt{-3},\sqrt{-2})\). We also know that \(B^{\operatorname{Gal}_{\mathbb{Q}(\sqrt{-3})}}\) is a real quadratic field, and \(L_{1}\) is either \(\mathbb{Q}(i)\) or \(\mathbb{Q}(\sqrt{-2})\).
Over \(\mathbb{F}_{7}\), there are two possible isogeny classes: \(2.7.a_{ac}\) and \(2.7.i_{be}\). Since \(7\) is inert in \(L_{1}\), \(L\) does not split completely at \(7\). The isogeny class is therefore not \(2.7.i_{be}\), since all its endomorphisms are defined over \(\mathbb{F}_{7}\); hence the isogeny class is \(2.7.a_{ac}\). Thus \(\operatorname{End}^{0}(A_{7})\simeq\mathbb{Q}(\sqrt{-3})\times\mathbb{Q}(\sqrt{-3})\). Since \(7\) splits in \(\mathbb{Q}(\sqrt{-3})\), we see that \(B^{\operatorname{Gal}_{\mathbb{Q}(\sqrt{-3})}}=\mathbb{Q}(\sqrt{-3})\), which shows that \(L_{1}=\mathbb{Q}(\sqrt{-3})\), contradicting what was said above.
**Proposition 7.6.3**.: _\(A(\mathbb{Q})_{\operatorname{tors}}\not\simeq\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/4\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}\)._
Proof.: Let \(G\) denote the Galois group of the endomorphism field \(L/\mathbb{Q}\); by Lemma 7.6.1(a), \(G\simeq D_{2}\) or \(D_{4}\). First suppose \(G\simeq D_{4}\), so that the distinguished subring \(S\) of Definition 3.4.1 is isomorphic to \(\mathbb{Z}[i]\). Then \(2\mid\operatorname{disc}(B)\) by Lemma 2.2.3. Since \(B\) is ramified at \(2\) and \(3\) and \(A(\mathbb{Q})\) contains points of order \(4\) and \(3\), we see that \(L\) contains both \(\mathbb{Q}(i)\) and \(\mathbb{Q}(\zeta_{3})\), by Theorem 6.2.4. Over one of these two quadratic subfields, the \(\operatorname{Gal}_{\mathbb{Q}}\)-action on \(S=\mathbb{Z}[i]\) trivializes. Indeed, the \(\operatorname{Gal}_{\mathbb{Q}}\)-action on \(\mathbb{Z}[i]\) cannot be trivialized by the third quadratic subfield \(\mathbb{Q}(\sqrt{3})\) of \(L\), by Proposition 3.1.2. Looking over \(\mathbb{F}_{5}\) we see that \(\mathbb{Q}(i)\) could only trivialize a ring isomorphic to \(\mathbb{Z}[\sqrt{3}]\). Looking over \(\mathbb{F}_{7}\) we see that \(\mathbb{Q}(\zeta_{3})\) could only trivialize a ring isomorphic to \(\mathbb{Z}[\sqrt{-3}]\). So neither trivializes \(\mathbb{Z}[i]\), and we have a contradiction.
So we may now assume that \(G\simeq D_{2}\). Arguing as above, we may also assume that \(L\) does not contain \(\mathbb{Q}(i)\). We know \(A\) has unipotent rank \(1\) reduction over \(\mathbb{Q}_{3}\) by Lemma 7.6.1(c). It also has unipotent rank \(1\) reduction at all bad primes \(p>3\), by Theorem 4.3.2. By Proposition 4.1.3, the imaginary quadratic subfield \(L_{1}\subset L\) that trivializes the distinguished imaginary quadratic subring of \(\mathcal{O}\) is unramified outside \(\{2\}\). Since \(L_{1}\neq\mathbb{Q}(i)\), we must have \(L_{1}=\mathbb{Q}(\sqrt{-2})\), but this field does not embed in \(B\) (which is ramified at \(3\)), giving a contradiction.
As a corollary, we are now able to finish the proofs of Theorems 1.2 and 1.3.
Proof of Theorem 1.2.: Propositions 7.5.1, 7.6.2, and 7.6.3 show that \(\#A(\mathbb{Q})_{\operatorname{tors}}<24\). Hence \(\#A(\mathbb{Q})_{\operatorname{tors}}\leq 18\).
By the results of this section and the previous one, the group \(A(\mathbb{Q})_{\operatorname{tors}}\) has order \(2^{i}3^{j}\leq 18\) and does not contain any subgroup of the form \(\mathbb{Z}/8\mathbb{Z}\), \(\mathbb{Z}/9\mathbb{Z}\), or \((\mathbb{Z}/2\mathbb{Z})^{4}\). We deduce the following result, which is equivalent to Theorem 1.3.
**Theorem 7.6.4**.: _Let \(A/\mathbb{Q}\) be an abelian surface such that \(\operatorname{End}(A_{\overline{\mathbb{Q}}})\) is a maximal order in a non-split quaternion algebra. Then \(A(\mathbb{Q})_{\operatorname{tors}}=A[12](\mathbb{Q})\) and \(\#A(\mathbb{Q})_{\operatorname{tors}}\leq 18\). Moreover, \(A(\mathbb{Q})_{\operatorname{tors}}\) does not contain a subgroup isomorphic to \((\mathbb{Z}/2\mathbb{Z})^{4}\). In other words, \(A(\mathbb{Q})_{\operatorname{tors}}\) is isomorphic to one of the groups_
\[\{1\},\ \mathbb{Z}/2\mathbb{Z},\ \mathbb{Z}/3\mathbb{Z},\ \mathbb{Z}/4\mathbb{Z},\ (\mathbb{Z}/2\mathbb{Z})^{2},\ \mathbb{Z}/6\mathbb{Z},\ (\mathbb{Z}/2\mathbb{Z})^{3},\ \mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/4\mathbb{Z},\ (\mathbb{Z}/3\mathbb{Z})^{2},\]

\[\mathbb{Z}/12\mathbb{Z},\ \mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/6\mathbb{Z},\ (\mathbb{Z}/2\mathbb{Z})^{2}\times\mathbb{Z}/4\mathbb{Z},\ (\mathbb{Z}/4\mathbb{Z})^{2},\ \mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}/6\mathbb{Z}.\]
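As a consistency check (ours, independent of the proofs above), one can enumerate the abelian groups of order \(2^{i}3^{j}\leq 18\) and apply the exclusions \(\mathbb{Z}/8\mathbb{Z}\), \(\mathbb{Z}/9\mathbb{Z}\) and \((\mathbb{Z}/2\mathbb{Z})^{4}\); exactly the \(14\) groups above remain:

```python
# Enumerate abelian groups of order 2^i * 3^j <= 18, excluding any group
# with an element of order 8 or 9 and any group containing (Z/2)^4.
# Exactly the 14 groups of Theorem 7.6.4 survive.

def partitions(n, maxpart=None):
    maxpart = n if maxpart is None else maxpart
    if n == 0:
        yield ()
        return
    for k in range(min(n, maxpart), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def val(n, p):
    """Largest e with p^e dividing n, together with the cofactor."""
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e, n

groups = []
for order in range(1, 19):
    i, rest = val(order, 2)
    j, rest = val(rest, 3)
    if rest != 1:                             # order is not of the form 2^i * 3^j
        continue
    for p2 in partitions(i):
        for p3 in partitions(j):
            cyc = [2 ** a for a in p2] + [3 ** b for b in p3]
            if any(c in (8, 16) for c in cyc):    # element of order 8
                continue
            if 9 in cyc:                           # element of order 9
                continue
            if cyc.count(2) >= 4:                  # contains (Z/2)^4
                continue
            groups.append(cyc)

print(len(groups))                                 # 14
```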
Not all of the groups above are known to be realized as \(A(\mathbb{Q})_{\operatorname{tors}}\) for some \(\mathcal{O}\)-PQM surface \(A/\mathbb{Q}\). However, all groups that have been realized (including the largest one of order \(18\)) have been realized in the family of bielliptic Picard Prym surfaces [13]. It would be interesting to systematically analyze rational points on Shimura curves of small discriminant and with small level structure, to try to find more examples. It would also be interesting to see which groups can be realized by Jacobians, which is the topic we turn to next.
## 8. Proof of Theorem 1.5: PQM Jacobians
In this section, we consider \(\mathcal{O}\)-PQM surfaces \(A/\mathbb{Q}\) equipped with a principal polarization. Since \(A\) is geometrically simple, there exists an isomorphism of polarized surfaces \(A\simeq\operatorname{Jac}(C)\), where \(C\) is a smooth projective genus two curve over \(\mathbb{Q}\) [13, Theorem 3.1]. To emphasize this, we use the letter \(J\) instead of \(A\). The goal of this section is to prove some additional constraints on the torsion group \(J(\mathbb{Q})_{\operatorname{tors}}\), i.e., we prove Theorem 1.5.
**Lemma 8.0.1**.: _Let \(M\) be the imaginary quadratic subfield of \(\operatorname{End}^{0}(A_{\bar{\mathbb{Q}}})\) corresponding to a principal polarization on \(J\) under Corollary 3.3.4. Then \(M\simeq\mathbb{Q}(\sqrt{-D})\), where \(D=\operatorname{disc}(B)\)._
Proof.: This is a direct consequence of the relation (3.3.2) of Proposition 3.3.1.
**Lemma 8.0.2**.: _The endomorphism field \(L/\mathbb{Q}\) has Galois group \(D_{1}=C_{2}\) or \(D_{2}=C_{2}\times C_{2}\)._
Proof.: See [14, Theorem 3.4 A(1)].
**Proposition 8.0.3**.: \(\#J(\mathbb{Q})_{\operatorname{tors}}<18\)_._
Proof.: By Theorem 1.3, we need only exclude \((\mathbb{Z}/2\mathbb{Z})\times(\mathbb{Z}/3\mathbb{Z})^{2}\). By Proposition 7.1.1(b) and Lemma 8.0.2, the endomorphism field of \(J\) would be a \(C_{2}\)-extension. In other words, \(J\) would be of \(\operatorname{GL}_{2}\)-type, but this contradicts Theorem 1.4.
Finally, we rule out the group \((\mathbb{Z}/2\mathbb{Z})^{3}\) from appearing in \(J[2](\mathbb{Q})\). We have already proven this when \(J\) is of \(\operatorname{GL}_{2}\)-type (Proposition 5.3.8), so it remains to consider the case \(\operatorname{Gal}(L/\mathbb{Q})\simeq C_{2}\times C_{2}\). We deduce this from the following more general result.
**Proposition 8.0.4**.: _Suppose that \(A/\mathbb{Q}\) is \(\mathcal{O}\)-PQM, has \(C_{2}\times C_{2}\) endomorphism field and has \(A[2](\mathbb{Q})\simeq(\mathbb{Z}/2\mathbb{Z})^{3}\). Let \(d\) be the degree of the unique primitive polarization of \(A\). Then \(2\mid\operatorname{disc}(B)\) and there exists an integer \(m\equiv 1\mod 4\) such that \(\operatorname{disc}(B)\) and \(dm\) agree up to squares. In particular, \(d\) is even and \(A\) is not a Jacobian._
Proof.: Let \(L/\mathbb{Q}\) be the endomorphism field of \(A\) with Galois group \(G\). By Lemma 5.3.6, there exists a \(\mathbb{Q}\)-rational \(\mathcal{O}/2\mathcal{O}\)-generator \(P\in A[2](\mathbb{Q})\), hence \(A[2]\simeq\mathcal{O}/2\mathcal{O}\) as \(\operatorname{Gal}_{\mathbb{Q}}\)-modules. Therefore the \(G\)-action on \(\mathcal{O}/2\mathcal{O}\) has \((\mathbb{Z}/2\mathbb{Z})^{3}\) fixed points. By Lemma 2.3.7, \(2\mid\operatorname{disc}(B)\) and there exist positive integers \(m,n\) with \(m\equiv 1\mod 4\) and \(n\equiv 3\mod 4\) such that the three \(\operatorname{Gal}_{\mathbb{Q}}\)-stable quadratic subfields of \(B\) are \(\mathbb{Q}(\sqrt{-m}),\mathbb{Q}(\sqrt{n})\) and \(\mathbb{Q}(\sqrt{mn})\). Under Corollary 3.3.4, the unique primitive polarization of \(A\) corresponds to the subfield \(\mathbb{Q}(\sqrt{-m})\), and the relation (3.3.2) of Proposition 3.3.1 shows that \(d\operatorname{disc}(B)\) and \(m\) agree up to squares. In other words, \(\operatorname{disc}(B)\) and \(dm\) agree up to squares. Since \(\operatorname{disc}(B)\) is even and squarefree and \(m\) is odd, \(d\) must be even too.
Proof of Theorem 1.5.: Combine Theorem 1.3 and Propositions 8.0.3 and 8.0.4.
In Table 2 we give some examples of Jacobians with non-trivial torsion subgroups and \(\mathcal{O}_{D}\)-PQM, where \(\mathcal{O}_{D}\) is a maximal quaternion order of discriminant \(D\). These were found by computing the relevant covers of Shimura curves of level \(1\) and their full Atkin-Lehner quotients and then substituting into the Igusa-Clebsch invariants in [11, Appendix B]. The torsion and endomorphism data can be independently verified using Magma. |
2305.04062 | A Blockchain-based Platform for Reliable Inference and Training of
Large-Scale Models | As artificial intelligence (AI) continues to permeate various domains,
concerns surrounding trust and transparency in AI-driven inference and training
processes have emerged, particularly with respect to potential biases and
traceability challenges. Decentralized solutions such as blockchain have been
proposed to tackle these issues, but they often struggle when dealing with
large-scale models, leading to time-consuming inference and inefficient
training verification. To overcome these limitations, we introduce BRAIN, a
Blockchain-based Reliable AI Network, a novel platform specifically designed to
ensure reliable inference and training of large models. BRAIN harnesses a
unique two-phase transaction mechanism, allowing real-time processing via
pipelining by separating request and response transactions. Each
randomly-selected inference committee commits and reveals the inference
results, and once an agreement is reached through a smart contract, the
requested operation is executed using the consensus result. Additionally, BRAIN
carries out training by employing a randomly-selected training committee. They
submit commit and reveal transactions along with their respective scores,
enabling local model aggregation based on the median value of the scores.
Experimental results demonstrate that BRAIN delivers considerably higher
inference throughput at reasonable gas fees. In particular, BRAIN's
tasks-per-second performance is 454.4293 times greater than that of a naive
single-phase implementation. | Sanghyeon Park, Junmo Lee, Soo-Mook Moon | 2023-05-06T14:21:41Z | http://arxiv.org/abs/2305.04062v1 | # A Blockchain-based Platform for Reliable Inference and Training of Large-Scale Models
###### Abstract
As artificial intelligence (AI) continues to permeate various domains, concerns surrounding trust and transparency in AI-driven inference and training processes have emerged, particularly with respect to potential biases and traceability challenges. Decentralized solutions such as blockchain have been proposed to tackle these issues, but they often struggle when dealing with large-scale models, leading to time-consuming inference and inefficient training verification.
To overcome these limitations, we introduce BRAIN, a Blockchain-based Reliable AI Network, a novel platform specifically designed to ensure reliable inference and training of large models. BRAIN harnesses a unique two-phase transaction mechanism, allowing real-time processing via pipelining by separating request and response transactions. Each randomly-selected inference committee commits and reveals the inference results, and once an agreement is reached through a smart contract, the requested operation is executed using the consensus result. Additionally, BRAIN carries out training by employing a randomly-selected training committee. They submit commit and reveal transactions along with their respective scores, enabling local model aggregation based on the median value of the scores.
Experimental results demonstrate that BRAIN delivers considerably higher inference throughput at reasonable gas fees. In particular, BRAIN's tasks-per-second performance is 454.4293 times greater than that of a naive single-phase implementation.
blockchain, large-scale models, verifiable random function, federated learning
## I Introduction
The rapid advancement of artificial intelligence (AI) based on large-scale deep artificial neural networks has transformed various industries by offering numerous powerful AI-based services [1, 7, 39, 23]. However, these networks typically depend on centralized servers for both learning and inference, raising concerns about trust, such as a lack of transparency in the learning process and the potential for manipulated inferences [16, 26]. For example, biases in AI algorithms can disproportionately affect certain racial and ethnic groups, as evidenced by issues in facial recognition systems [8] and criminal justice applications [6]. The lack of transparency in the learning process makes it challenging for users to assess the fairness and ethical implications of these services, undermining trust in AI systems. Similarly, centralized inference processes pose challenges due to potential manipulation, misuse, or difficulty in verifying the authenticity of AI-generated outputs. Examples include the proliferation of fake news, and the misappropriation of AI-generated inferences [45, 53].
This lack of transparency can erode trust in AI services, as users struggle to determine the veracity of the outputs. It can lead to increased costs for users, who may need to resort to separate searches, audits, or third-party evaluations to ensure trust. Service providers may also face the burden of additional marketing efforts to convince users of their system's reliability.
### _Related Work & Challenges_
The centralization of AI training and inference processes has been a significant issue, leading to the exploration of decentralized technologies as a solution. One such technology is blockchain. However, integrating blockchain and AI has proven to be challenging. While some studies have made progress toward a solution, they have only partially succeeded.
[42] have presented services that use blockchain and AI together but have only used blockchain as a database and incentive platform. They have not presented a way to increase trust in AI's training and inference directly. [38] attempted to reduce aggregator costs through the blockchain but still failed to provide trust. Additionally, the high cost of storing model weights on the blockchain does not support large-scale neural networks effectively. Meanwhile, other studies [3, 27, 31] guided the training in the right direction through the
TABLE I: A comparison of BRAIN with existing studies. The columns are Computational Engine (Training, Inference), Trust Machine (Training, Inference), Large Models, and No Hardfork; the rows are DeepBrain [18], Cortex [11], AI News [42], Pchain [3], BRAIN [38], Fedcoin [27], BFLC [25], Blockflow [31], ZEN [17], ZRP-P [50], YOC [21], and Y. Fan [15].

\({}^{\dagger}\) Negative impact on transaction throughput. Moreover, models whose inference time exceeds the block interval cannot be supported.

\({}^{\ddagger}\) Due to the massive costs involved on Ethereum [47]; the cost depends on the model size.

\({}^{\star}\) Due to the tremendous time required.
use of incentives or slashing deposits, but did not provide confidence in inference. [25] presented a methodology for improving neural networks by evaluating and aggregating trained neural networks from committees, but it similarly did not consider inference. Furthermore, in [11], although trust was established by enabling inference to be performed on the blockchain and allowing all participating nodes to validate inference results, it cannot support large neural networks that require a long time for inference due to the need for nodes to wait without processing other transactions. It also does not concern itself with training. With [18], participating nodes' GPUs can be rented and used for training and inference, but the trustworthiness of the results is not guaranteed.
Zero-knowledge proofs (ZKPs) can be used to verify the training and inference, as demonstrated in [15, 17, 21, 50]. They do not require repeated verification; only proof is needed to verify that the operations were performed correctly. However, because of the tremendous proof generation time, ZKPs are limited to small models, such as MobileNet [22].
A summary of these related studies can be found in Table I, where we denote full, partial, and no support for each feature, and mark features that are not applicable with \(\times\). In particular, it can be observed that blockchain has not been fully utilized as a computational engine or a trust machine. Even when trust is provided, large models cannot be supported due to the time constraints or the high costs.
### _Contributions_
Our proposed solution, Blockchain-based Reliable AI Network (BRAIN), effectively addresses the challenges faced in previous studies. To the best of our knowledge, BRAIN is the first blockchain platform capable of handling both inference and training for large neural networks while maintaining compatibility with existing blockchains.
The main contributions of BRAIN are as follows:
* We propose BRAIN, an innovative architecture for large-scale neural networks that addresses latency, redundancy, verification, and compatibility issues in existing chains.
* One of the key components of BRAIN is aggregator-free federated learning. This approach enables asynchronous model updates by utilizing smart contracts, leading to convergence without the need for an aggregator.
* Our simulation results demonstrate BRAIN's performance improvements and reasonable cost, and we provide guidelines for optimal hyperparameter configurations.
* We emphasize the importance of blockchain-powered AI for trust, transparency, and traceability, and we highlight BRAIN's potential impact on various applications.
Our implementation and experiment codes are publicly available for replication and further research at GitHub1.
Footnote 1: [https://github.com/BRAIN-chain](https://github.com/BRAIN-chain)
## II Background
This section provides essential background related to the BRAIN protocol. We briefly discuss **Verifiable Random Functions**, which enable cryptographic sortition, and **Federated Learning**, a training approach with multiple participants.
### _Verifiable Random Functions_
Verifiable Random Functions (VRFs) [30] are functions that output verifiable pseudorandom values. They consist of three functions: keygen, evaluate, and verify.
* \(\mathsf{keygen}(r)\rightarrow(pk,sk)\): generates a public key \(pk\) and a secret key \(sk\) pair for a given random input \(r\).
* \(\mathsf{evaluate}_{sk}(x)\rightarrow(y,\pi)\): returns pseudorandom output \(y\) and a proof \(\pi\) from the secret key \(sk\) and \(x\) as inputs.
* \(\mathsf{verify}_{pk}(x,y,\pi)\in\{\mathsf{true},\mathsf{false}\}\): takes the public key \(pk\), the pseudorandom output \(y\), the proof \(\pi\), and the message \(x\) as inputs, and returns true if \(y\) is actually the output produced by \(sk\) and \(x\). If not, it returns false.
Cryptographic Sortition is a method of selecting members based on cryptographic calculations, instead of predefined rules. Participants can confirm their own election but not others until the verifiable proofs are published. In BRAIN, Cryptographic Sortition is implemented using VRF, similar to Algorand [19]. Since the \(sk\) are not shared with one another, it is not possible to know who was elected to the committee for this round until the results are submitted. This makes it impossible to engage in malicious activities such as bribery.
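To make the sortition check concrete, the sketch below imitates it in Python. The HMAC-based evaluate is only a stand-in for a real VRF (an actual VRF additionally yields a proof that anyone can verify from \(pk\) alone); the message layout follows the \((q_{1}\parallel seed_{r-f}\parallel\lfloor h/E_{I}\rfloor)\) construction used later in Section IV.

```
import hashlib, hmac, os

# Illustrative stand-in for VRF evaluate: deterministic and pseudorandom,
# but NOT publicly verifiable like a real VRF (the proof is a dummy).
def evaluate(sk: bytes, msg: bytes):
    y = hmac.new(sk, msg, hashlib.sha256).digest()  # 256-bit output
    return y, b"dummy-proof"

def sortition(sk: bytes, q1: bytes, seed: bytes, h: int, E_I: int, d_I: int):
    # msg = (q_1 || seed_{r-f} || floor(h / E_I))
    msg = q1 + seed + (h // E_I).to_bytes(8, "big")
    y, proof = evaluate(sk, msg)
    return int.from_bytes(y, "big") <= d_I, y, proof

# d_I = 2**255 elects each node with probability ~50%; 2**253 with ~12.5%.
sk = os.urandom(32)
elected, y, _ = sortition(sk, b"q1", os.urandom(32), h=1000, E_I=8, d_I=2**255)
print(elected)
```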
### _Federated Learning_
Federated learning is a technique that allows clients to train a global model by training local models on their own data. The aggregator then collects updates on the local models and integrates them to create the next global model. One well-known method for achieving this is Federated Averaging (FedAvg) [29], which aggregates updates from the clients through a weighted average based on the amount of data they hold. Furthermore, the FedAsync [49] study demonstrates that learning can converge even in asynchronous environments.
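Concretely, FedAvg computes \(w\leftarrow\sum_{k}(n_{k}/n)\,w_{k}\), the data-size-weighted mean of the client models. A minimal sketch, with models represented as plain lists of floats:

```
# FedAvg sketch: weighted average of client models by local data size.
def fedavg(client_models, client_sizes):
    total = sum(client_sizes)
    return [
        sum(m[i] * n for m, n in zip(client_models, client_sizes)) / total
        for i in range(len(client_models[0]))
    ]

# Two clients holding 10 and 30 samples respectively:
print(fedavg([[1.0, 2.0], [3.0, 4.0]], [10, 30]))  # [2.5, 3.5]
```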
BRAIN is designed to allow participants to perform model updates in parallel, inspired by the method of FedAsync. However, BRAIN employs smart contracts instead of an aggregator for decentralization. In addition, BRAIN utilizes a k-fold cross-validation technique similar to BFLC [25] to evaluate update proposals. This technique uses local datasets held by each participant as a validation set, making the system resistant to attacks such as poisoning attacks [5, 51].
## III BRAIN: Overview
BRAIN is a decentralized, blockchain-based AI platform that provides both training and inference services. This section presents a comprehensive overview of BRAIN, focusing on the roles of **participants** and the significance of **hyperparameters** in maintaining the platform's optimal operation.
### _Types of Participants_
BRAIN's ecosystem consists of various types of participants, each of which can assume multiple roles:
**Users.** Users are individuals who access the inference services provided by BRAIN. They spend assets, such as ETH, to
submit inference requests through transactions and instruct how the resulting outputs should be processed. In Fig. 1, 1 Request represents the initiation of an inference request.
**BRAIN Nodes.** Nodes are participants who have deposited sufficient assets, enabling them to contribute computational resources for inference, training, or both within the BRAIN ecosystem. Depositing assets allows nodes to actively participate in the system, earning fees from users while also risking having their deposits slashed through smart contracts in the event of malicious behavior. In this paper, it is assumed that all nodes have deposited the same amount of assets. This assumption is made without loss of generality, as BRAIN does not limit the number of participants, allowing for flexibility in the distribution of deposits into multiple nodes. A BRAIN node may belong to an Inference Committee, a Training Committee, both committees, or neither.
**Inference Committee.** The Inference committee is composed of BRAIN nodes selected through a VRF and is responsible for handling inference requests. A different configuration of nodes is selected for each request. Committee members receive incentives for participating in the processing of inference.
**Training Committee.** The Training committee, similarly consisting of nodes chosen through a VRF, is responsible for evaluating model update proposals. They receive incentives for their contributions to the evaluation of model updates.
To prevent free-riders who do not perform tasks but merely copy another entity's result, BRAIN adopts a Commit-and-Reveal scheme, denoted as 2-a Commit in Fig. 1, followed by 2-b Reveal. After the revelation, the agreed-upon result is used for the AI service's response as 2-c Execute or to update the model parameters as 2-c Update.
If participants engage in malicious or lazy behavior that harms the protocol, they will be punished to deter such negative behavior. In BRAIN, punishment is carried out by slashing assets deposited. Users who are non-deposit entities may also engage in malicious behavior, such as launching Denial-of-Service attacks with numerous meaningless inference requests [40, 41]. However, these attempts can be thwarted by making attackers spend massive assets, similar to how transaction fees are used in blockchain systems. Furthermore, preventing malicious behavior by block producers is the responsibility of the underlying blockchain protocol, not BRAIN.
### _Hyperparameters_
In BRAIN, predefined values known as hyperparameters play a crucial role in ensuring the platform's optimal operation.
* \(E\) and \(d\): \(E\) denotes the epoch at which the VRF input changes, while \(d\) adjusts the probability of committee selection. \(E\) should be set appropriately to prevent VRF verification failure or slow quorum formation. Adjusting \(d\) can minimize the number of selected nodes, reducing costs but trading off security from redundancy. Inference and training have their own epochs (\(E_{I}\) and \(E_{T}\), respectively) and probabilities (\(d_{I}\) and \(d_{T}\), respectively).
* \(f\): The finality factor ensures the VRF _seed_'s unchangeability from chain forks. The unit of \(f\) is round, which increases by \(1\) each time the 2-a phase is completed.
* \(Q_{C}\) and \(Q_{R}\): They determine the number of nodes needed for quorum in 2-a and 2-b respectively. Adjusting them in relation to \(d\) can balance liveliness and performance.
* \(T_{C}\), \(T_{R}\), \(T_{E}\), and \(T_{U}\): Timeout values control various aspects of BRAIN. \(T_{C}\) in 2-a should be set so that unwanted requests, such as computationally-intensive ones, can be canceled, whereas \(T_{E}\) and \(T_{U}\) in 2-c may be set to infinity since each request has its own \(timeout\) field. \(T_{R}\) in 2-b is used to penalize nodes that do not reveal on time.
* \(R_{C}\), \(R_{E}\), \(R_{U}\), \(R_{R}\), and \(R_{S}\): These symbols represent rewards for successfully completing Commit, Execute, Update, Reveal, and Suggest tasks, respectively. Rewards incentivize nodes to perform their assigned tasks. Essentially, the funding for this is provided through the user's inference request fee. It is recommended to set \(R_{C}\) to \(0\) to encourage nodes to submit the following Reveal.
* \(P_{C}\), \(P_{R}\), and \(P_{S}\): Specifically, \(P_{C}\) is the penalty for a faulty Commit, \(P_{R}\) is for an invalid or no Reveal, and \(P_{S}\) is for a faulty Suggest. Penalties prevent attacks and encourage honest inference and evaluation. They should be set appropriately to promote positive resource usage.
We use these notations throughout the paper. In particular, in section VII, we discuss and examine how hyperparameters affect BRAIN's performance and the stability of its services.
Fig. 1: The figure illustrates the unified process flow for both inference (dotted blue line) and training (dash-dotted green line) in the BRAIN platform. 1 A participant submits a request for either inference or model update evaluation, which is then added to the queue. 2-a. Each member of the respective committee (inference or training) submits a commitment. 2-b. Each committee member reveals their submitted value (inference output or model score). 2-c. In the case of inference, anyone can execute the requested action using the inference output. In the case of training, BRAIN nodes obtain the median value of evaluation scores and use that value to compute the updated model locally.
### _Design Goal & Features_
As described in section I-A, integrating AI and blockchain poses various challenges, particularly when dealing with large models. We have designed several features to address the corresponding challenges, which are briefly discussed below:
**Minimized Latency with Transaction Pipelining.** We design a system that minimizes AI-related latency so that it does not degrade the overall performance of the blockchain. To achieve this, we implement a two-phase transaction process, represented as 1 and 2 in Fig. 1, that separates the request and response transactions. Phase 2 is further divided into three subphases: 2-a, 2-b, and 2-c. This approach allows pipelining for efficient real-time processing, reducing the strain on the blockchain.
**Scalable Verification with Cryptographic Sortition.** We implement a scalable verification mechanism that balances the need for trust with the computational costs of AI inference and training. To achieve this, BRAIN utilizes cryptographic sortition based on verifiable random functions to select a random subset of nodes, called the committee, to perform the inference or training process. This approach enhances verification efficiency by eliminating redundancy while ensuring that a centralized entity does not manipulate the process.
**Decentralized Training with Aggregator-Free Federated Learning.** We aim to establish a decentralized training process capable of handling large-scale models and preventing manipulation. To achieve this, BRAIN employs aggregator-free federated learning. A randomly selected committee of nodes, the training committee, evaluates and reaches a consensus on the score for newly submitted model updates. These agreed-upon scores are saved in the smart contract and used as weights to calculate the integrated model. Each node performs this calculation locally without the need for a centralized aggregator. This approach enables decentralized and efficient learning that can withstand the presence of malicious nodes.
**Compatibility with Existing Blockchains.** BRAIN is designed to be compatible with existing contract-aware blockchains, eliminating the need for hard forks and enabling smooth integration with existing high-secure networks.
## IV BRAIN: Inference
The inference component of BRAIN is designed to provide trust and security in a decentralized environment. It has several features, including a **two-phase transaction** structure, **cryptographic sortitions**, and a **commit-and-reveal** scheme.
### _Two-Phase Transaction_
One way to enable inference on a blockchain is to add inference operations as commands, as in [11]. This allows inference to be invoked directly on the chain, but real-time processing may not be feasible, since complex neural network inference is much slower than ordinary transactions and may even exceed block generation times.
The BRAIN protocol addresses the issue of delays in processing AI tasks on the blockchain by separating the request and response into two distinct transactions. The first phase involves pushing the inference request into a queue, while the second phase involves popping it from the queue, performing the inference, and disclosing the result in a separate transaction. This two-phase configuration allows the inference request transaction to be processed immediately, enabling block producers to move on to the next transaction in the block without waiting for a response. We use a priority queue implemented through smart contracts to hold and process request transactions. Requests are added to the queue and processed in order of priority, allowing users to prioritize their requests by paying for more assets. This is similar to the concept of transaction fees (gas) in Ethereum [47].
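The queueing discipline is easy to picture off-chain. The sketch below models it in Python (the actual queue is a smart contract): requests are ordered by \(feePrice\), with a counter breaking ties in FIFO order.

```
import heapq, itertools

counter = itertools.count()  # tie-breaker: earlier requests pop first
queue = []                   # min-heap; negate feePrice for max-priority

def push(request, fee_price):
    heapq.heappush(queue, (-fee_price, next(counter), request))

def pop():
    _, _, request = heapq.heappop(queue)
    return request

push({"input": "summarize ...", "timeout": 20}, fee_price=120)
push({"input": "translate ...", "timeout": 20}, fee_price=800)
print(pop()["input"])  # the feePrice=800 request is served first
```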
### _Cryptographic Sortition_
Cryptographic Sortition is a key feature of the BRAIN protocol that ensures both effectiveness and trustworthiness of the inference committee. It reduces costs and improves network performance by limiting the number of nodes that perform redundant operations. An attacker seeking to produce false inference results would need to occupy a majority of the inference committee and take control of the entire process of phase 2. However, this is made difficult as the identities of the selected committee members are kept secret.
Since BRAIN nodes are already participants in the blockchain system, they possess their own \(pk\) and \(sk\) keys. Hence, these keys can be used in the VRF process instead of generating new ones using the keygen function. As shown in Fig. 2, \(msg\), the input to the VRF functions verify and evaluate, consists of several elements used to ensure the reliability and stability of the inference results. The highest-priority inference request \(q_{1}\) is included to indicate that it corresponds to the correct request. The random value \(seed_{r-f}\), stored in the contract, introduces randomness into the message to reduce the chances of manipulating the process, like [19]. Using the \(f\)-th past seed rather than the latest \(seed_{r}\) minimizes the risk of the \(msg\) changing due to forks. \(\lfloor h/E_{I}\rfloor\) increases the diversity of \(msg\) according to pre-defined hyperparameter \(E_{I}\), which adds additional committee members to ensure the liveness of the inference process.
There are two ways to verify a VRF on the contract: 1) implementing the whole verification logic in the contract, which incurs high costs due to the complexity of the elliptic-curve operations; or 2) using the precompiled contract ecrecover that is already built into the EVM, which significantly reduces costs compared with the first method. The security level decreases from 32 bytes (hash) to 20 bytes (address) with the second method, which is generally sufficient. Section VII-B provides a comprehensive analysis of the costs associated with both methods.
### _Commit-and-Reveal_
The commit-and-reveal scheme during the phase 2 is designed to prevent free-riding. Without this scheme, nodes could potentially view the others' inference results in transactions and submit the transaction with the same value without
actually performing the required inference operations. The commit-and-reveal scheme in BRAIN ensures that the result with a random value remains hidden, so no information can be obtained by viewing the hash value. Additionally, the hash input includes an address, essentially making it a commit that other nodes cannot use. Therefore, the nodes of the committee must perform the required operations and prove they have committed the correct result during the subsequent reveal phase, thus holding them accountable for their actions.
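A minimal sketch of the scheme, using Python's SHA3-256 as a stand-in for Ethereum's keccak256 (closely related but distinct hash functions):

```
import hashlib, os

def commit(output: bytes, addr: bytes, r: bytes) -> bytes:
    # H(output || addr || r): binds the result to this committer.
    return hashlib.sha3_256(output + addr + r).digest()

def check_reveal(c: bytes, output: bytes, addr: bytes, r: bytes) -> bool:
    return commit(output, addr, r) == c

addr, r = os.urandom(20), os.urandom(32)
c = commit(b"inference output", addr, r)              # phase 2-a
print(check_reveal(c, b"inference output", addr, r))  # phase 2-b: True
print(check_reveal(c, b"copied output", addr, r))     # free-rider: False
```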
### _Inference Process_
**Phase 1**. As in Fig. 2, phase 1 of the two-phase transaction pushes an inference request \(q\) into the priority queue. The request includes a unique identifier \((net,ver)\) for what neural network model is requested. Along with \(input\), we use a random \(seed\) to address the non-deterministic features of the model. Other AI-related parameters, such as _max_length_ and _temperature_, are likewise included in the request as serialized bytes in \(args\) to coordinate the neural network. Furthermore, the request specifies the address \(target\), the function \(funcsig\), and the native asset amount \(value\) to be called with the inference result. This request is stored in the queue for later processing by the committee. \(timeout\) is specified to prevent requests from being processed too late.
The fee for the request is typically calculated based on the length of both the \(input\) and \(output\), similar to the OpenAI pricing [34]. However, as the length of the \(output\) cannot be determined in advance, users are required to pay sufficient fees upfront. To achieve this, the system employs a fee pricing mechanism using the parameters \(feePrice\) and \(feeLimit\), which is similar to the gas system used by Ethereum before EIP-1559 [9]. The request is canceled if the combined length of the \(input\) and \(output\) exceeds the specified \(feeLimit\). The total amount paid by the user in advance is determined by multiplying \(feeLimit\) by \(feePrice\), but the actual amount paid is calculated as \((input+output)\times feePrice\), and the remainings are refunded in 2-c phase. The priority of the request is determined by the \(feePrice\).
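The settlement arithmetic can be sketched in a few lines of Python (illustrative only; the refund path on cancellation follows the fallback rules of section VI):

```
def settle(input_len, output_len, fee_limit, fee_price):
    prepaid = fee_limit * fee_price          # paid upfront in phase 1
    if input_len + output_len > fee_limit:
        return "canceled", prepaid           # request is canceled
    charge = (input_len + output_len) * fee_price
    return charge, prepaid - charge          # refund issued in phase 2-c

print(settle(100, 250, fee_limit=1000, fee_price=3))  # (1050, 1950)
```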
**Phase 2**. BRAIN is designed to handle inference requests securely. It does so through a VRF-based cryptographic sortition process that elects inference committees and a commit-and-reveal scheme that registers values and reveals them later. As illustrated in Fig. 2, the Commit process is divided into 2-a phases, while the Reveal process is divided into 2-b phases. In addition, the steps of processing a user's request with the inference \(output\) after the revelation are divided into 2-c Execute phases, where execution is combined with the last revelation into a single transaction by the contract.
**Phase 2-a.** Nodes self-verify their election to the inference committee for a particular inference request, using VRF. If the result of evaluate, a 256-bit value \(y\), is less than or equal to the difficulty condition hyperparameter \(d_{I}\), the node is eligible to participate as an inference committee member. Nodes that pass the contract's verify are eligible to register the inference result as members of the inference committee.
Fig. 2: Overview of the process of verifying inference in BRAIN. A user sends an inference request in phase 1. When the request is popped from the priority queue, phase 2 begins. The results are then committed in phase 2-a through an inference committee. In phase 2-b, the original results are revealed. If a quorum is reached, the final result is obtained through a majority in phase 2-c and the requested operation is executed. The reward payment and punishment are made in the PostOp step. In addition, the contract refunds the remaining fee to the user. In the figure, a black square with a double border represents a transaction that is publicly recorded on the chain. The green block numbers indicate the example order of events in the process.
The hash of the result \(output\), combined with the address \(addr\) and a random value \(r\), is registered in the contract through a Commit transaction. To proceed from 2-a to 2-b, at least \(Q_{C}\) Commit transactions must be published.
**Phase 2-b.** The process of revealing the previously committed value is performed. To verify the revealed value matches the previously registered value in the contract, the same components used to construct the hash value -- \(output\), \(addr\), and \(r\) -- are passed to the contract. The contract calculates the hash again using these values, and if the revealed value differs from the committed value, the Reveal transaction is rejected. When nodes with a quorum of at least \(Q_{R}\) reveal their committed values, the process can proceed to phase 2-c.
**Phase 2-c.** Since BRAIN does not have a trusted subject, the final result is derived through a majority of the revealed values. The function \(funcsig\) requested by the user is then executed on the target address \(target\) with the \(value\) and the agreed-upon inference \(output\) through the ExecuteOp step of the Execute transaction. In general, the results of all nodes should be consistent because of the fixed \(seed\). If the majority of the inference committee is honest, the correct answer can always be agreed upon, and a node that reveals a result violating it can be deemed to have taken malicious action and be penalized through the PostOp step. In addition, at the same step, the remaining fees are refunded to the user.
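As a sketch, the agreement in phase 2-c amounts to a plurality vote over the revealed values (illustrative Python):

```
from collections import Counter

def consensus(revealed_outputs):
    # The agreed-upon output is the most common revealed value;
    # dissenting reveals can be penalized in the PostOp step.
    winner, votes = Counter(revealed_outputs).most_common(1)[0]
    return winner, votes

print(consensus(["A", "A", "B", "A"]))  # ('A', 3)
```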
Thanks to the separation of subphases **a**, **b**, and **c** in phase 2, inference processing can be pipelined. Phase 2-b and 2-c of the highest priority inference request \(q_{1}\) can be performed concurrently with phase 2-a of the next priority \(q_{2}\), enabling a low-latency service. Therefore, the throughput of inference transactions is only limited by the \(Q_{C}\), \(d\), and \(T_{C}\), which determine the processing speed of the 2-a phase.
## V BRAIN: Training
BRAIN employs federated learning, in which a proposed model is evaluated by a training committee using **model scoring**, and the update proposal is then aggregated through the **Aggregator-Free Federated Learning** methodology to derive a global model. This method enables anyone to propose updates at any time, and since updates are validated and then aggregated, the model can be continuously improved.
### _Model Scoring_
A straightforward method for training verification would be to have the elected committee nodes perform the same learning process using fixed seeds and the same data. While this approach can ensure the reliability of model updates, it is inefficient due to the high cost of training, as a large number of nodes would perform redundant operations.
To address these issues, BRAIN employs techniques found in previous studies [25, 31]. Each node reports a score after evaluating the model on its own dataset, similar to K-fold cross-validation [25]. To ensure fairness in scoring, the node that proposed the update is not allowed to participate in the committee, even if it has been qualified through the VRF. The score is typically based on accuracy. The median value is used to reach a consensus on the global score, as this method can provide a reliable result unless the majority of nodes are malicious. The model scoring method of BRAIN simplifies the role of the training committee in evaluating the model's performance rather than verifying the training process.
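In code, the agreed global score is simply the median of the revealed scores, which a minority of dishonest scorers cannot move arbitrarily (illustrative Python):

```
from statistics import median

revealed_scores = [0.81, 0.79, 0.80, 0.05, 0.82]  # one outlier/attacker
print(median(revealed_scores))  # 0.80: the outlier has no effect
```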
### _Aggregator-Free Federated Learning_
In traditional federated learning, a central aggregator computes and shares a global model that is the weighted average of updates. However, BRAIN has no central aggregator and no servers. Instead, each node can locally aggregate the model using a Weighted Moving Average (WMA) of the scores stored in the contract. Although [38] uses contracts for weight storage, it is limited by massive transaction fees and cannot support large models, as discussed in section I-A. BRAIN can significantly reduce upload costs regardless of the model size by storing only the model's score and not the model itself. This approach is similar to the concept of reputation used by [44], as well as the voting mechanisms employed by [37, 52].
```
Require: window size n ≥ 2, threshold score_th ≤ a_r
Ensure: locally calculated global model M̄

procedure AaflWma(n)
    Initialize r ← 0, M̄_0 ← M_0, a_0 ← 1
    for all new (M_r, a_r) with r ≥ 1 do
        M̄_r ← (1 − α) · M̄_{r−1} + α · M_r,
        where α = a_r / (a_{r−n+1} + ... + a_r)   if r ≥ n − 1
                  a_r / (a_0 + ... + a_r)         otherwise
```
**Algorithm 1** Aggregator-Free Federated Learning
As described in Algorithm 1, the initial global model \(\overline{M}_{0}\) is set equal to \(M_{0}\). For each \(r\)-th global model \(\overline{M_{r}}\), nodes calculate the WMA of the previous \(n\) models, including the current update suggestion \(M_{r}\), using the corresponding scores \(a_{r-n+1},\ldots,a_{r}\) as weights. Since all nodes start from the same model and integrate updates with the same scores, locally derived global models will be consistent with each other, unless a node intentionally omits updates or misuses weights.
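The update rule transcribes directly into code; a minimal Python sketch with models as plain lists of floats:

```
def aafl_wma(updates, scores, n=3):
    # updates[0] and scores[0] correspond to (M_0, a_0 = 1).
    global_model = list(updates[0])
    for r in range(1, len(updates)):
        lo = max(0, r - n + 1)               # window of the last n scores
        alpha = scores[r] / sum(scores[lo:r + 1])
        global_model = [(1 - alpha) * g + alpha * m
                        for g, m in zip(global_model, updates[r])]
    return global_model

updates = [[0.0], [1.0], [2.0], [3.0]]       # one-parameter toy models
scores = [1.0, 0.8, 0.9, 0.7]                # agreed median scores
print(aafl_wma(updates, scores))
```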
The convergence of this technique can be proven by substituting the WMA equation into the form of FedAsync [49], which has been mathematically proven to converge.
Proof.: Because \(\overline{M_{r-1}}=\frac{a_{r-n+1}M_{r-n+1}+\cdots+a_{r-1}M_{r-1}}{a_{r-n+1}+\cdots+a_{r-1}}\), we have

\[\overline{M_{r}}=\frac{a_{r-n+1}M_{r-n+1}+\cdots+a_{r-1}M_{r-1}+a_{r}M_{r}}{a_{r-n+1}+\cdots+a_{r}}=\left(1-\frac{a_{r}}{a_{r-n+1}+\cdots+a_{r}}\right)\overline{M_{r-1}}+\left(\frac{a_{r}}{a_{r-n+1}+\cdots+a_{r}}\right)M_{r}.\]

Taking \(\alpha=\frac{a_{r}}{a_{r-n+1}+\cdots+a_{r}}\), we have

\[\overline{M_{r}}=(1-\alpha)\overline{M_{r-1}}+\alpha M_{r},\]

whose convergence has been proven by [49]. The case \(r<n-1\), where \(\alpha=\frac{a_{r}}{a_{0}+\cdots+a_{r}}\), follows in the same way, without loss of generality.
### _Training Process_
**Phase 1**. As depicted in Fig. 1, during the first phase of the training process, a model update \(M\) is published to the blockchain by a Suggest transaction and pushed to a FIFO queue for processing. A circular queue is utilized to improve storage-cost efficiency, with details provided in section VII-B. The Suggest transaction includes an identifier \((net,ver)\), which specifies the target updated neural network with the version, and a \(timeout\) field that indicates whether the update is outdated or not. The \(ver\) value is calculated as a hash of the neural network weights. Model updates are shared through any network such as IPFS [4] rather than the blockchain, allowing each node to validate the shared model by obtaining a hash of the weights and comparing it to the \(ver\) stored in the contract.
**Phase 2**. During the second phase of the training process in BRAIN, the model update proposal stored in the queue is processed by the training committee. This phase is similar to the inference process, with a few key differences. First, values that are hidden and revealed during phases 2-a and 2-b are \(score\)s rather than inference \(output\)s. Second, the median value is used instead of a majority during phase 2-c. In addition, there is no need for an ExecuteOp step in the training process. During the PostOp step, rewards \(R_{U}\) are given for executing the update. If the agreed-upon \(score\) exceeds a certain threshold, the node that proposed the model update is rewarded with \(R_{S}\). On the other hand, if the threshold is not met, the node is penalized with \(P_{S}\).
Similar to the inference process, evaluation in the training process can be pipelined. This allows for efficient updates, as subphase 2-b of the first update suggestion \(M_{1}\) and phase 2-a of the second suggestion \(M_{2}\) can be started simultaneously.
## VI BRAIN: Implementation Details
In this section, we provide an overview of the implementation details for the BRAIN platform, focusing on the fallback mechanisms employed during the inference process. The platform is designed to handle various scenarios where the quorum is not met or round timeouts occur, thus providing a robust and real-time decentralized AI platform.
In phase 2-a, if the timeout of \(T_{C}\) blocks passes without the quorum \(Q_{C}\) being met, the process falls into the **a-fallback** of Algorithm 2, and the inference request \(q\) is canceled and all assets are returned to the user. The last Commit transaction that satisfies the quorum and meets the necessary requirements performs a \(pop\) operation on the queue, ending phase 2-a and beginning phase 2-b.
If fewer than \(Q_{R}\) nodes respond before the timeout \(T_{R}\), a **b-fallback** is performed. As shown in Algorithm 3, depending on the trade-off between safety and liveness the service intends to provide, one of the following two methods is defined in advance:
1. b-I-fallback. Although the quorum was not reached, only the values revealed so far proceed to phase 2-c. The degree of safety may decrease, but the service's liveness increases.
2. b-II-fallback. The process ends here. The inference request \(q\) is set to have the second-highest priority \(p_{max}-1\) and is pushed back into the priority queue so that another set of inference committees can immediately process it. In this case, safety can be provided at a high level, but the degree of liveness decreases.
```
Require: top() ≠ null

procedure ResultCommit(Q_C, T_C, d_I)
    q_1 ← top()
    for block height h ← h_start, ... do
        for k ∈ {nodes} − K_C in parallel do
            msg ← (q_1 ∥ seed_{r−f} ∥ ⌊h/E_I⌋)
            (y, π) ← evaluate_sk(msg)
            if y ≤ d_I then
                output^k ← Inference(q_1)
                H ← H(output^k ∥ addr^k ∥ r^k)
                call Commit_{q_1}^k(msg, y, π, H)
        if |K_C| ≥ Q_C then break                  ▷ End of 2-a
        else if h − h_start > T_C then break       ▷ a-fallback

function Inference(q)
    return q.net_{q.ver}(q.input, q.seed, q.args)

transaction Commit_q^k(y, π, h, H)
    require verify_pk(msg, y, π), y ≤ d_I, h_start ≤ h ≤ h_now
    Store H_q^k in the BRAIN contract; K_C ← K_C ∪ {k}
    if |K_C| == 1 then update(q, p_max)
    if |K_C| ≥ Q_C then                            ▷ End of 2-a
        pop(q); h_C ← h_now
        (seed_r, π) ← evaluate_sk(seed_{r−1} ∥ r)
    else if h_now − h_start > T_C then             ▷ a-fallback
        pop(q); K_C ← ∅
        seed_r ← H(seed_{r−1} ∥ r)
        Refund (q.feeLimit − |q.input|) × q.feePrice
        Refund q.value ETH
```
**Algorithm 2** 2-a. Commit Phase Pseudocode
```
Require: K_C ≠ ∅, phase 2-b fallback type τ ∈ {τ_I, τ_II}

procedure ResultReveal_q(Q_R, T_R, τ)
    for block height h ← h_C, ... do
        for k ∈ K_C − K_R in parallel do
            call Reveal_q^k(output^k, addr^k, r^k)
        if |K_R| ≥ Q_R then break                  ▷ End of 2-b
        else if h − h_C > T_R then break           ▷ b-fallback

transaction Reveal_q^k(output, addr, r)
    require H(output ∥ addr ∥ r) == H_q^k, addr == addr^k
    Store output in the BRAIN contract; K_R ← K_R ∪ {k}
    if |K_R| ≥ Q_R then call Execute               ▷ End of 2-b
    else if h_now − h_C > T_R then
        if τ == τ_I then h_R ← h_now               ▷ b-I-fallback
        else push(q, p_max − 1); K_C ← ∅; K_R ← ∅  ▷ b-II-fallback
```
**Algorithm 3** 2-b. Reveal Phase Pseudocode
## VII Experiments
To evaluate the performance and effectiveness of BRAIN, we conducted a series of experiments: **inference transaction throughput** and **contract overhead analysis**. The results of these experiments provide insight into the performance and scalability of BRAIN and demonstrate its ability to handle large neural networks.
### _Inference Transaction Throughput_
In this section, we evaluate the performance of the BRAIN platform in providing on-chain inference, focusing on metrics such as tasks-per-second and the number of timeouts.
The key variables examined in this experiment are as follows. We investigate the frequency of request transactions \(freq\), with a default value of \(0.0577=(5752729/99638953)\), derived from the ratio of OpenSea [35] -- the most transaction-intensive service on Ethereum as of Q2 2022 -- to total transactions. We also consider the timeout value for inference requests, \(q.timeout\), with a default value of 20 blocks, which is approximately 4 minutes on Ethereum. Another variable of interest is the difficulty level \(d_{I}\) for being elected as a member of the inference committee. Lowering \(d_{I}\) reduces the probability of an individual being elected. We experimented with a fixed base difficulty of \(2^{255}\), signifying that each member has a 50% (\(=2^{255}/2^{256}\)) chance of being elected. For comparison, the \(2^{253}\) used in the experiment has a 12.5% (\(=2^{253}/2^{256}\)) chance of being elected. We set the total number of BRAIN nodes to 21, inspired by the EOS blockchain's BFT-DPoS consensus mechanism setting [14]. This choice balances decentralization, governance, and scalability while maintaining network security [33]. Thus, the minimum number of nodes required to pass phase 2-a, \(Q_{C}\), has a maximum limit of 21. The default value of \(Q_{C}\) is set to 11, which is just over half of 21. Refer to section VII-A3 for a higher quorum value of 15, similar to EOS.
Other settings were controlled to ensure they did not affect the performance of the experiments. The blockchain specifications used in the BRAIN platform were based on Ethereum, with a block interval of 12.06 seconds and an average of 155 transactions per block as of January 13, 2023. To measure the performance of the platform, we quantified the execution time of non-inference transactions as 1 ms, based on the EVM transaction performance times reported in [24]. An epoch \(E_{I}\) that changes the message used as input for the election of the inference committee was fixed at 8 blocks. The duration of each phase 2 period is considered infinite. Under the widely-accepted assumption in blockchain systems that more than half of the participating nodes are honest [32], BRAIN's inference and learning system, which utilizes majority and median values, always operates correctly. However, while the integrity of the results cannot be altered, malicious attacks that intentionally withhold committed outputs may affect the system's bandwidth. This can be measured by an increase in the quorum ratio proportional to the number of attackers, which in turn results in decreased bandwidth and increased timeouts. These metrics are discussed in the following section VII-A3. In this experiment, we consider \(Q_{R}\) to be always equal to \(Q_{C}\). This is because we can account for the existence of malicious nodes by increasing \(Q_{C}\), and since no malicious nodes are present in our experiment, the assumption that \(Q_{R}=Q_{C}\) represents the worst-case scenario, i.e., the minimum throughput. This provides a conservative estimate of the system's performance.
In our experiments, we used the float16 version of GPT-J6B [46], a 6 billion parameter open-source version of GPT-3 [7]. No optimization techniques were applied during inference. All test datasets for inference were sourced from the SAMSum [20]. A total of 819 datasets were used for evaluation. To determine the priority of each request when pushing to the priority queue, we sampled values along a Pareto distribution [2, 36] to simulate real-world \(feePrice\) allocation. Priorities have a discrete range of 0 to 1000, with larger values indicating higher priority.
#### VII-A1 **Inference Response Latency**
To measure the performance of large neural networks separate from BRAIN, we conducted an experiment measuring the time required for inference on a single system consisting of one RTX 3090 graphics card, running Ubuntu 22.04.1. This experiment was conducted 10 times using a fixed seed to obtain the average time spent. The minimum inference time recorded was 0.0975 seconds, while the maximum time reached was 50.6394 seconds, resulting in an average time of 18.5421 seconds.
With the inference time for each request, we then implemented a simulator to measure the delay of BRAIN on a blockchain. We set the \(freq\) of the inference requests compared with general transactions as a default value of 0.0577, and other hyperparameters also used their default values. The experiment was simulated 100 times to obtain the average value. As a result, BRAIN required a minimum of 2 blocks, a maximum of 23 blocks, and an average of 7.1653 blocks (\(\sigma=3.5919\)) to receive a response after the request. It is obvious that a minimum of 2 blocks is necessary because at least two transactions, Commit and Reveal, are mandated by the commit-and-reveal scheme. They must be located in different blocks since each BRAIN node can send a Reveal transaction only after recognizing the end of phase 2-a.
Since Ethereum's block interval is 12.06 seconds, this corresponds to a minimum of 24.12 seconds and a maximum of 277.38 seconds, with an average of 86.4135 seconds. By comparing this to the time required for inference on a single computer, we find that the time cost of reliably performing AI inference using BRAIN is approximately \(67.8714=(86.4135-18.5421)\) seconds. This value is heavily influenced by the quorum \(Q_{C}\), and inference performance with different values of \(Q_{C}\) can be observed in the following section VII-A3.
#### VII-A2 **Tasks Per Second**
Since the inference operations in BRAIN are off-chain, there is no computational on-chain delay for inference transactions. Consequently, the Transactions-Per-Second (TPS) is measured at the same value (\(1/0.001=1000\)) as usual, given that we assumed the transaction execution
time to be 1 ms. To accurately evaluate the performance of BRAIN's decentralized inference system, we have defined the _tasks-per-second_ metric. This metric counts the number of tasks completed per second, grouping all transactions related to a single request as one task. Each task may consist of multiple transactions and takes the total execution time of those transactions to complete. Transactions unrelated to inference are considered separate tasks individually.
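A sketch of this bookkeeping (illustrative Python): every transaction carries a task id, all transactions serving one inference request share an id, and throughput is the number of distinct tasks over the total execution time.

```
def tasks_per_second(tx_times, tx_task_ids):
    # tx_times: per-transaction execution times in seconds.
    return len(set(tx_task_ids)) / sum(tx_times)

# Request/Commit/Reveal txs of one inference task plus two ordinary txs:
print(tasks_per_second([0.001] * 5, ["q1", "q1", "q1", "t1", "t2"]))  # 600.0
```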
Fig. 3 presents the average and standard deviation of 100 simulations for different \(freq\) of inference requests. The performance of BRAIN's tasks-per-second is illustrated in the line graph marked by a blue circle. As \(freq\) increases, the tasks-per-second decreases since processing an inference request necessitates the aggregation of multiple transactions, including Request and all Commit and Reveal transactions, into a single task. However, due to the BRAIN's pipelining ability, the decrease is not linear and is mitigated.
For comparison, we include results from a naive implementation, which simply waits for a response from a large neural network on-chain without using a two-phase transaction structure. As indicated in the line graph marked by a green triangle in Fig. 3, the naive implementation exhibits significantly lower tasks-per-second, emphasizing the effectiveness of BRAIN. When the activity level is set to \(freq=0.0577\), BRAIN can process an average of 458.0193 (\(\sigma=0.8641\)) tasks-per-second. In contrast, the naive one only achieves an average performance of 1.0079. This demonstrates that BRAIN can process tasks 454.4293 times faster than the naive approach.
Furthermore, as the frequency of inference requests increases, a corresponding increase in the number of timed-out requests is observed, particularly beyond a threshold value of \(freq=0.1\). However, this issue can be mitigated by adjusting hyperparameters such as increasing \(d_{I}\), as illustrated in Fig. 4, or increasing \(q.timeout\) for each request, as depicted in Fig. 5.
#### VII-A3 **Tasks Per Second on Various \(Q_{C}\)**
Fig. 4 illustrates the experimental results of the tasks-per-second and the number of timeouts in relation to changes in the quorum \(Q_{C}\), i.e., the quorum size in phase 2-a. As the number of required nodes per inference increases, the overall transaction requirement generated by the inference committee likewise increases, leading to a decrease in the tasks-per-second. Conversely, a smaller quorum leads to higher performance, but reducing it also threatens the reliability of inference results, so the quorum size must balance the trade-off between security and performance.
As shown in Fig. 4(a), increasing the quorum slightly increases the number of timeouts, and BRAIN can still handle a large quorum in an environment with default values of \(E_{I}=8\) and \(q.timeout=20\). However, as shown in Fig. 4(b), at difficulty \(2^{253}\), the number of timeouts increases significantly at a quorum level of 15, reaching an average of 274.37 timeouts at 21, indicating that the platform breaks down under this hyperparameter setting. Adjusting the difficulty \(d_{I}\) instead of \(Q_{C}\) can decrease the number of timeouts, but this in turn decreases the computational resource efficiency and increases the cost per inference. Therefore, when setting hyperparameters such as \(T_{C}\), \(E_{I}\), \(d_{I}\), \(q.timeout\), and \(Q_{C}\), it is important to determine an appropriate range that BRAIN can handle based on the desired level of performance, security, and timeout tolerance under the expected service activation degree \(freq\).
#### VII-A4 **Number of Timeouts on Various \(q.timeout\)**
The time-out value included in the request corresponds to the validity period, measured in blocks. Fig. 5 demonstrates that shortening the \(q.timeout\) increases the likelihood of more requests exceeding their expiration date, particularly when the value falls below a certain threshold. In the default value environment, timeouts become severe when \(q.timeout\leq 10\).
Meanwhile, Fig. 5(a) and Fig. 5(b) compare experimental results for difficulty levels of \(2^{255}\) and \(2^{253}\), respectively.
Fig. 4: Tasks-per-second and the number of timeouts on various \(Q_{C}\). The above x-axis represents the percentage of \(Q_{C}\) to the total number of nodes. (a) shows the results at 50% VRF election probability and (b) at 12.5%, depending on \(d_{I}\).
Fig. 5: Tasks-per-second and the number of timeouts on various inference request’s \(q.timeout\). (a) shows the results at 50% election probability and (b) at 12.5%, based on \(d_{I}\).
Fig. 3: Graph illustrating the relationship between the frequency of inference requests with the number of tasks per second (line) and the number of timed-out requests (bar).
though decreasing the probability of being selected as a VRF with a fixed quorum might increase the likelihood of timeouts due to longer quorum fulfillment times, the experiments show only a small difference. This indicates that under appropriate settings for \(E\), \(Q_{C}\), and \(freq\), BRAIN can handle changes in \(d_{I}\) within an acceptable range. Conversely, when the \(Q_{C}\) and \(d_{I}\) settings are inappropriate, the number of timeouts increases dramatically, as shown in Fig. 3(b).
### _Contract Overhead Analysis_
We analyze the cost overhead incurred by the various techniques used in BRAIN to ensure authenticity, security, and low latency: specifically, the **verifiable random function**, the **commit-and-reveal** scheme, and the **queue**. We evaluate the associated gas consumption and the corresponding costs on Ethereum [47] and Polygon [43] in Table II, which provides a quantitative analysis of the overhead incurred by these techniques. We used gas prices of 14 gwei (\(14\times 10^{-9}\) ETH) and 51.6 gwei (\(51.6\times 10^{-9}\) MATIC) on Ethereum and Polygon, respectively, as of January 13, 2023.
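The USD figures in Table II follow mechanically from these gas prices. The sketch below reproduces the conversion; the token spot prices (roughly $1,400 per ETH and $0.90 per MATIC on that date) are our own illustrative assumptions and are not stated in the paper.

```python
# Back-of-the-envelope reproduction of the USD column in Table II.
# Token prices are illustrative assumptions (approximate spot prices
# on January 13, 2023), not values reported by the paper.
GWEI = 1e-9  # fraction of the native token per gwei

def tx_cost_usd(gas_used: int, gas_price_gwei: float, token_price_usd: float) -> float:
    """USD cost of a transaction: gas used x gas price x token price."""
    return gas_used * gas_price_gwei * GWEI * token_price_usd

# verify_fast, using its average gas consumption from Table II
print(tx_cost_usd(150715, 14.0, 1400.0))  # Ethereum: ~2.95 USD
print(tx_cost_usd(150715, 51.6, 0.90))    # Polygon:  ~0.007 USD
```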
Regarding the VRF, we observed that using the **ecrecover**-based verify\({}_{fast}\) function significantly reduces the cost on Ethereum, to an average of $2.98 per verification compared with $32.5 for full verification.
Concerning the commit-and-reveal scheme, we measured the cost of Commit and Reveal transactions for 819 actual inference results on the SAMSum [20] test dataset. The difference in gas consumption between applying the hash function to the inference \(output\) during Commit and not applying it was insignificant. However, there was a noticeable difference in gas consumption during Reveal, since using the hashed value in the transaction uniformizes the gas consumption. The overhead cost associated with the commit-and-reveal scheme on Ethereum is considered practical, at around $1 to $2.
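As a rough illustration of the mechanism (a minimal Python sketch, not BRAIN's actual on-chain contract code), committing to a digest of the output together with a nonce hides the inference result until the reveal phase; submitting the fixed-size digest is also what uniformizes the gas consumption noted above.

```python
# Minimal commit-and-reveal sketch (illustrative only).
import hashlib
import os

def commit(output: bytes, nonce: bytes) -> bytes:
    """Phase 2-a: publish only a digest of the inference output."""
    return hashlib.sha256(output + nonce).digest()

def reveal_ok(commitment: bytes, output: bytes, nonce: bytes) -> bool:
    """Phase 2-b: check the revealed output and nonce against the commitment."""
    return commit(output, nonce) == commitment

nonce = os.urandom(32)
c = commit(b"inference result", nonce)
assert reveal_ok(c, b"inference result", nonce)
```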
We measured the gas cost of \(push\) and \(pop\) operations to evaluate the efficiency of the circular queue and the priority queue. The results show that the priority queue uses slightly more gas than the regular queue. Overall, however, the overhead cost on Ethereum is low, ranging from $0.5 to $2.0.
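For reference, the sketch below shows the circular queue idea in plain Python (a hypothetical stand-in for the on-chain version): a bounded ring buffer reuses a fixed set of slots, which is consistent with the nearly constant push/pop gas figures in Table II.

```python
# Bounded circular queue sketch; the on-chain analogue reuses a fixed
# set of storage slots, keeping push/pop gas nearly constant.
class CircularQueue:
    def __init__(self, capacity: int):
        self.buf = [None] * capacity
        self.head = 0
        self.size = 0

    def push(self, item) -> None:
        if self.size == len(self.buf):
            raise OverflowError("queue full")
        self.buf[(self.head + self.size) % len(self.buf)] = item
        self.size += 1

    def pop(self):
        if self.size == 0:
            raise IndexError("queue empty")
        item = self.buf[self.head]
        self.buf[self.head] = None  # release the slot for reuse
        self.head = (self.head + 1) % len(self.buf)
        self.size -= 1
        return item
```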
## VIII Applications
BRAIN offers potential applications across various industries by providing security, traceability, and decentralization.
**Unbiased Generative AI.** Generative AI models have gained significant attention due to their ability to create realistic, high-quality content [1, 7, 23, 39]. Despite their potential for creative expression, they present challenges regarding potential biases from training data. To address this issue, BRAIN offers a decentralized training approach, enabling aggregator-free federated learners to join the training and evaluate the model based on their private dataset, considered the most trustworthy.
**Recommendation System.** AI-driven content recommendations are widely used, including related movie, video, and music suggestions [12, 48]. However, users often can't determine whether these recommendations result from genuine AI inferences or manipulated advertisements. With a recommendation system on BRAIN that leverages the traceability of blockchain, users can verify that the suggested content originates from actual inferences.
**Intelligent NFT.** Non-fungible tokens (NFTs) [13] have emerged as a popular means of representing unique digital assets on the blockchain, often used for avatars and profile pictures [10, 28]. By integrating with BRAIN, NFT creators can embed AI models within their digital assets, allowing for intelligent capabilities like dynamic animations and chat. The decentralized nature of BRAIN ensures that these AI-powered NFTs are not subject to centralized control.
## IX Conclusion
BRAIN is a contract-driven decentralized platform designed to enable the training and inference of large-scale neural networks for a variety of AI-based decentralized services. To ensure reliability and security, BRAIN employs a two-phase transaction structure and a VRF-based committee sortition mechanism. Additionally, we propose an aggregator-free federated learning mechanism that eliminates the need for a centralized server, providing cost efficiency.
Our experiments have showcased the effectiveness of these features in terms of tasks-per-second performance, achieving a 454.4293 times improvement compared to a naive implementation. By fine-tuning the hyperparameters, we successfully strike a balance between performance, security, and timeout tolerance, while effectively controlling the number of timeouts and tasks-per-second. Moreover, we have demonstrated that BRAIN incurs a low additional gas cost overhead.
Note that, at present, BRAIN does not include privacy-preserving features, and this is not the focus of the current paper. In future research, we plan to explore methods for enhancing privacy on the BRAIN platform.
## Acknowledgment
This work was supported in part by Kakao Brain Corporation.
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{min} & \multirow{2}{*}{max} & \multirow{2}{*}{avg} & \multicolumn{2}{c}{USD (avg)} \\ \cline{5-6} & & & & Ethereum & Polygon \\ \hline \hline \multicolumn{6}{c}{Verifiable Random Function} \\ \hline verify & 1543493 & 1862450 & 1643712 & \$ 32.504 & \$ 0.077 \\ \hline verify\({}_{fast}\) & 106360 & 352838 & 150715 & \$ 2.980 & \$ 0.007 \\ \hline \hline \multicolumn{6}{c}{Commit-and-reveal without \& with the Hash Function \(H\)} \\ \hline commit & 44825 & 62072 & 45732 & \$ 0.904 & \$ 0.002 \\ \hline commit\({}_{H}\) & 44861 & 44897 & 44895 & \$ 0.888 & \$ 0.002 \\ \hline reveal & 2781 & 796620 & 87124 & \$ 1.723 & \$ 0.004 \\ \hline reveal\({}_{H}\) & 47355 & 47391 & 47389 & \$ 0.937 & \$ 0.002 \\ \hline \hline \multicolumn{6}{c}{(Circular) Queue \& Priority Queue} \\ \hline push & 51324 & 68424 & 51345 & \$ 1.015 & \$ 0.002 \\ \hline pop & 29013 & 46113 & 29034 & \$ 0.574 & \$ 0.001 \\ \hline push\({}_{prior}\) & 84353 & 137955 & 91699 & \$ 1.813 & \$ 0.004 \\ \hline pop\({}_{prior}\) & 34909 & 116942 & 100861 & \$ 1.995 & \$ 0.005 \\ \hline \end{tabular}
\end{table} TABLE II: Gas Consumption on Ethereum and Polygon |
2310.09928 | Homology and K-theory of dynamical systems. IV. Further structural
results on groupoid homology | We consider the homology theory of \'etale groupoids introduced by Crainic
and Moerdijk, with particular interest to groupoids arising from topological
dynamical systems. We prove a K\"unneth formula for products of groupoids and a
Poincar\'e-duality type result for groupoids which are principal with orbits
homeomorphic to a Euclidean space. We conclude with a few example computations
for systems associated to nilpotent groups such as self-similar actions, and we
generalize previous homological calculations by Burke and Putnam for systems
which are analogues of solenoids arising from algebraic numbers. For the latter
systems, we prove the HK conjecture, even when the resulting groupoid is not
ample. | Valerio Proietti, Makoto Yamashita | 2023-10-15T19:33:48Z | http://arxiv.org/abs/2310.09928v1 | # Homology and K-theory of dynamical systems
###### Abstract.
We consider the homology theory of etale groupoids introduced by Crainic and Moerdijk, with particular interest to groupoids arising from topological dynamical systems. We prove a Kunneth formula for products of groupoids and a Poincare-duality type result for groupoids which are principal with orbits homeomorphic to a Euclidean space. We conclude with a few example computations for systems associated to nilpotent groups such as self-similar actions, and we generalize previous homological calculations by Burke and Putnam for systems which are analogues of solenoids arising from algebraic numbers. For the latter systems, we prove the HK conjecture, even when the resulting groupoid is not ample.
Key words and phrases: groupoid homology, Kunneth formula, Poincare duality, topological dynamics, derived functors. 2020 Mathematics Subject Classification: 37B02; 22A22, 18G10.
###### Contents
* 1 Preliminaries
* 2 Kunneth formula
* 3 Poincare duality
* 4 Expanding maps on compact manifolds
* 5 Number theoretic generalization of solenoid
## Introduction
In this paper we prove two structural results for the homology groups of topological dynamical systems, continuing our work [1, 2]. One is a Kunneth type formula for the product of two systems, while the other reduces the homological computation to the (compactly supported) cohomology of the underlying space under certain conditions.
Previously we have focused on totally disconnected systems, which are represented by _ample groupoids_, i.e., etale groupoids whose unit space is totally disconnected. In this paper we do not work under this restriction, but instead deal with (more general) dynamical systems of _finite topological dimension_. A motivating class of examples is that of _Smale spaces_[10], which capture hyperbolicity on compact metric spaces, such as the dynamics of Anosov diffeomorphisms and, more generally, those on the basic sets of Axiom A diffeomorphisms. This has led to an interesting intersection between the theory of dynamical systems and operator algebras.
We mainly work within the homology theory introduced by M. Crainic and I. Moerdijk [1]. It assigns homology groups with coefficients in equivariant sheaves to any etale groupoid, based on sheaves, derived formalism, and simplicial methods.
In another direction, given a locally compact etale groupoid (and more generally a locally compact groupoid with a continuous Haar system), there is a convolution product on the space of compactly supported continuous functions on the groupoid, which can be completed to a C\({}^{*}\)-algebra [12]. The \(K\)-groups of this C\({}^{*}\)-algebra can be regarded as homological invariants of the original groupoid, with better connections to index theory, classification of C\({}^{*}\)-algebras, and other topics involving operator \(K\)-theory.
Comparison between these two theories has been a major driving force behind our previous works, and it plays an important role in this note too.
Let us summarize the main conceptual results in this note.
**Theorem A** (Theorem 2.1).: _Let \(G\) and \(H\) be etale groupoids, and \(\mathcal{S}\) and \(\mathcal{T}\) be \(G\)- and \(H\)-equivariant sheaves. Then there is a split short exact sequence_
\[0\to\bigoplus_{a+b=k}H_{a}(G,\mathcal{S})\otimes H_{b}(H, \mathcal{T})\to H_{k}(G\times H,\mathcal{S}\boxtimes\mathcal{T})\\ \to\bigoplus_{a+b=k-1}\operatorname{Tor}(H_{a}(G,\mathcal{S}),H_{ b}(H,\mathcal{T}))\to 0.\]
**Theorem B** (Theorem 3.1).: _Let \(\tilde{G}\) be a locally compact groupoid such that \(\tilde{G}^{x}\) is homeomorphic to \(\mathbb{R}^{n}\) for some fixed \(n\), and \(\tilde{G}^{x}_{y}\) is at most a singleton for all \(x\), \(y\) in \(\tilde{G}^{(0)}\). Let us choose a generalized transversal \(T\) in \(\tilde{G}^{(0)}\), so that \(G=\tilde{G}|_{T}\) is an etale groupoid. We then have an isomorphism_
\[H_{k}(G,\underline{\mathbb{Z}})\cong H_{c}^{n-k}(\tilde{G}^{(0)},\underline{\mathbb{Z}}\times_{\mathbb{Z}/2\mathbb{Z}}\underline{o}),\]
_where on the right-hand side cohomology is computed with coefficients in the orbit-wise orientation sheaf over the unit space of \(\tilde{G}\)._
In operator \(K\)-theory, Theorem A has a direct analogue, namely the Kunneth formula for the \(K\)-groups of tensor product C\({}^{*}\)-algebras due to Schochet [10]. Theorem B can also be interpreted as an analogue of Connes's Thom isomorphism for \(K\)-groups [14] (for a groupoid of the form \(\mathbb{R}^{n}\ltimes X\), this result reduces the operator \(K\)-groups of the associated \(C^{*}\)-crossed product to the topological \(K\)-groups of \(X\), up to a degree shift determined by \(n\)).
However, the proofs are completely different from the \(K\)-theoretic ones. While the proof of Theorem A is a standard Eilenberg-Zilber type manipulation of multicomplexes, for Theorem B we make an essential use of the sheaf theoretic idea and the formalism of derived functors.
Theorem B is particularly useful in computing the homology of groupoids coming from dynamical systems on manifolds and related structures. Moreover, the idea behind this theorem is also useful for studying systems of number theoretic origin. We look at a class of Smale spaces \((Y^{(c)},\phi)\) that appear in the theory of algebraic actions due to Schmidt [10]. Given an algebraic number \(c\), the space \(Y^{(c)}\) is given as the Pontryagin dual of the additive group of a certain subring of \(K=\mathbb{Q}(c)\), and \(\phi\) is the natural map induced by the multiplication by \(c\) (see Section 5 for details on this construction).
**Theorem C** (Theorem 5.7).: _Denote by \(G\) the (un)stable etale groupoid associated to \((Y^{(c)},\phi)\). Then the groupoid homology of \(G\) can be presented as the inductive limit of exterior powers of a certain subgroup \(\Gamma\) of the additive group of \(K\),_
\[H_{k}(G,\underline{\mathbb{Z}})=\varinjlim\bigwedge^{k}\Gamma,\]
_with respect to the connecting map \(\theta_{k}\colon\bigwedge^{k}\Gamma\to\bigwedge^{k}\Gamma\) which is the unique extension of \(N\bigwedge^{k}m_{c^{-1}}\), with a natural number \(N\) and \(m_{c^{-1}}\) denoting multiplication by \(c^{-1}\)._
An analogous computation can be carried out for the K-groups of the groupoid C\({}^{*}\)-algebra \(C^{*}G\). Consequently, the K-group is isomorphic to the direct sum of groupoid homology with the same degree parity:
\[K_{i}(C^{*}G)\cong\bigoplus_{k\in\mathbb{Z}}H_{i+2k}(G,\underline{\mathbb{Z}}).\]
This confirms, for the class of groupoids from Theorem C, the "HK conjecture" formulated by Matui [16] in the setting of ample groupoids, which asks if there is an isomorphism between the \(K\)-groups and periodicized homology as above.
Theorem C is interesting for a few reasons. Firstly, the computations given here generalize previous work by Burke and Putnam [1]. Secondly, the groupoids appearing in this context are not necessarily ample, providing us with examples where the HK conjecture holds beyond its original assumptions (the groupoid associated to an irrational flow on the torus provides another example, see Example 3.3 for details). Thirdly, a key intermediate result in this setting (Proposition 5.1) showcases a "variant" of Theorem B above, more akin to the Thom isomorphism in cohomology than to a proper duality.
The paper is organized as follows. In Section 1 we recall a few basic notions and briefly summarize our previous work to set the conventions and background for this note.
In Section 2 we prove the Kunneth formula (Theorem A) for the homology of the product of etale groupoids. In Section 3 we show the Poincare duality-type result (Theorem B), and in the last two sections we present some concrete computations for notable examples of topological dynamical systems arising from expanding maps on compact manifolds and analogues of solenoids in algebraic number fields.
### Acknowledgments
V.P.: this research was supported by: Foreign Young Talents' grant (National Natural Science Foundation of China), CREST Grant Number JPMJCR19T2 (Japan), Marie Sklodowska-Curie Individual Fellowship (project number 101063362).
M.Y.: this research was funded, in part, by The Research Council of Norway [project 300837]. Part of the work for this project was carried out during M.Y.'s stay at Research Institute for Mathematical Sciences, Kyoto University. He thanks Narutaka Ozawa and others at RIMS for their hospitality.
## 1. Preliminaries
We fix conventions in use throughout the paper. We only briefly recall definitions and generally follow the treatment in [10, 11].
### Topological groupoids and homology groups
We mainly work with second countable, locally compact, Hausdorff groupoids. Given such a groupoid \(G\), we denote its base space by \(G^{(0)}\), with structure maps \(s,r\colon G\to G^{(0)}\), and the \(n\)-th nerve space (for \(n\geq 1\)) given by
\[G^{(n)}=\{(g_{1},\ldots,g_{n})\in G^{n}\colon s(g_{i})=r(g_{i+1})\}.\]
We say that \(G\) is _etale_ if \(s\) and \(r\) are local homeomorphisms, and _ample_ if it is etale and its base space is totally disconnected.
We consider the homology of etale groupoids as defined by Crainic and Moerdijk [12]. A \(G\)_-sheaf_ is a sheaf \(F\) on \(G^{(0)}\) endowed with a continuous action of \(G\), modeled by a map of sheaves \(s^{*}F\to r^{*}F\) on \(G\). A \(G\)-sheaf \(F\) is said to be (c-)soft when the underlying sheaf on \(G^{(0)}\) has that property, that is, for any (compact) closed subset \(S\subset G^{(0)}\) and any section \(x\in\Gamma(S,F)\), there is an extension \(\tilde{x}\in\Gamma(G^{(0)},F)\).
Our standing assumption is that the cohomological dimension of any open sets of \(G^{(0)}\) is bounded by some fixed integer \(N\): to be precise, \(H^{k}_{c}(U,F)=0\) for any open subset \(U\subset G^{(0)}\) and any \(k>N\).
This is guaranteed when \(G^{(0)}\) is a subspace of a metrizable space of (Lebesgue) topological dimension \(N\), which covers all the concrete examples we consider. To see this, observe that the topological dimension of any compact subset \(A\subset G^{(0)}\) is bounded by \(N\)[13, Section 3.1], so the Cech cohomology \(\check{H}^{\bullet}(A;F)\) vanishes in degree above \(N\). By paracompactness, \(\check{H}^{\bullet}(A;F)\) agrees with the sheaf cohomology \(H^{\bullet}(A;F)=R^{\bullet}\Gamma_{A}(F)\) for the right derived functor of the functor \(\Gamma_{A}(F)=\Gamma(A,F)\); we can then combine [1, Remarque II.4.14.1 and Theoreme II.4.15.1] to get the claim.
Let \(F\) be a \(G\)-sheaf. Then there is a resolution of \(F\) as above by c-soft \(G\)-sheaves, and the _homology with coefficient \(F\)_, denoted \(H_{\bullet}(G,F)\), is defined as the homology of the total complex
of the double complex \((C^{j}_{i})_{0\leq i,j}\) with terms
\[C^{j}_{i}=\Gamma_{c}(G^{(i)},s^{*}F^{j}),\]
which has homological degree \(i-j\). More generally, when \(F_{\bullet}\) is a homological complex of \(G\)-sheaves bounded from below, take a resolution of each \(F_{j}\) by c-soft \(G\)-sheaves \(F^{k}_{j}\) as above. Then the _hyperhomology with coefficient \(F_{\bullet}\)_, denoted by \(\mathbb{H}_{\bullet}(G,F_{\bullet})\), is the homology of triple complex with terms
\[C^{k}_{i,j}=\Gamma_{c}(G^{(i)},s^{*}F^{k}_{j}),\]
which has homological degree \(i+j-k\).
When two etale groupoids \(G\) and \(H\) are Morita equivalent, there are natural correspondences between the \(G\)-sheaves and \(H\)-sheaves inducing an isomorphism of groupoid homology. In particular, if \(f\colon H\to G\) is a Morita equivalence homomorphism, we have
\[\mathbb{H}_{\bullet}(G,F_{\bullet})\cong\mathbb{H}_{\bullet}(H,f^{*}F_{\bullet})\]
for any complex \(F_{\bullet}\) of \(G\)-sheaves as above.
### Derived functor formalism
We briefly recall the derived functor formalism of groupoid homology from [10, Section 4]. Let \(G\) and \(G^{\prime}\) be etale groupoids, and \(\phi\colon G\to G^{\prime}\) be a continuous groupoid homomorphism. Then, for each \(x\in G^{\prime(0)}\), the _comma groupoid_\(x/\phi\) is defined as the groupoid whose objects are the pairs \((y,g^{\prime})\), where \(y\in G^{(0)}\) and \(g^{\prime}\in G^{\prime\phi(y)}_{x}\), and an arrow from \((y_{1},g^{\prime}_{1})\) to \((y_{2},g^{\prime}_{2})\) is given by \(g\in G^{y_{2}}_{y_{1}}\) such that \(\phi(g)g^{\prime}_{1}=g^{\prime}_{2}\). This is an etale groupoid that comes with a homomorphism \(\pi_{x}\colon x/\phi\to G\).
When \(F\) is a \(G\)-sheaf, we consider a simplicial system of \(G^{\prime}\)-sheaves, denoted by \(B_{\bullet}(\phi,F)\), which at the level of stalks is given by
\[B_{n}(\phi,F)_{x}=\Gamma_{c}((x/\phi)^{(n)},s^{*}\pi_{x}^{*}F).\]
The _left derived functor_\(\mathcal{L}\phi_{!}F_{\bullet}\) for a homological complex of \(G\)-sheaves bounded from below \(F_{\bullet}\) is represented by the total complex of the triple complex of \(G^{\prime}\)-sheaves with terms \(B_{i}(\phi,F^{k}_{j})\) with homological degree \(i+j-k\), where \(F^{\bullet}_{j}\) is a resolution of \(F_{j}\) by c-soft \(G\)-sheaves. This is well defined up to quasi-isomorphism of \(G^{\prime}\)-sheaves. The _\(n\)-th derived functor_, denoted by \(L_{n}\phi_{!}F_{\bullet}\), is the \(G^{\prime}\)-sheaf given as the \(n\)-th homology of \(\mathcal{L}\phi_{!}F_{\bullet}\). By construction the fiber of this sheaf is given by [10, Proposition 4.3]
\[(L_{n}\phi_{!}F_{\bullet})_{x}=H_{n}(x/\phi,\pi_{x}^{*}F_{\bullet}). \tag{1}\]
When \(G^{\prime}\) is the trivial groupoid and \(\phi\) is the unique homomorphism \(G\to G^{\prime}\), this recovers the definition of \(H_{n}(G,F_{\bullet})\).
Besides the pullback functor (the inverse image functor) for sheaves, we will also make use of the _direct image_ functor, simply defined as \(g_{*}F(U)=F(g^{-1}(U))\)[12]. In the setting of equivariant sheaves, \(g_{*}\) can also be defined, and it is still right adjoint to the pullback functor, see [10, Section 2.3] for details. It is worth noting that, if \(g\) is proper, then \(g_{*}\) coincides with the functor \(g_{!}\), sometimes called the direct image functor with compact supports.
### Smale spaces
Many examples of groupoids in this note appear as (reduction of) the stable and unstable groupoid of a _Smale space_. A Smale space is a certain kind of hyperbolic dynamics modeled on a compact metric space \(X\) with a self-homeomorphism \(\phi\). See [11] for precise definition and conventions. In particular, there are two distinguished equivalence relations on \(X\), defined as follows.
* Two points \(x\) and \(y\) are _stably equivalent_ (denoted \(x\sim_{s}y\)) if \[\lim_{n\to\infty}d(\phi^{n}(x),\phi^{n}(y))=0;\]
* similarly, \(x\) and \(y\) are _unstably equivalent_ (denoted \(x\sim_{u}y\)) if \[\lim_{n\to\infty}d(\phi^{-n}(x),\phi^{-n}(y))=0.\]
The equivalence classes of the stable (resp. unstable) equivalence relation are called the _stable sets_ (resp. _unstable sets_).
The graph of the stable (resp. unstable) equivalence relation has a structure of locally compact groupoid with a Haar system [10], that we denote by \(R^{s}(X,\phi)\) (resp. \(R^{u}(X,\phi)\)). Following the construction detailed in [11], we obtain an etale groupoid by restricting \(R^{u}(X,\phi)\) to an appropriate subspace contained in a finite union of stable sets.
## 2. Kunneth formula
Suppose \(G\) and \(H\) are etale groupoids such that groupoid homology is definable, and \(\mathcal{S}\) and \(\mathcal{T}\) are equivariant sheaves of abelian groups over \(G^{(0)}\) and \(H^{(0)}\) respectively. Furthermore, denote by \(p\) and \(q\) the canonical projections from \(G\times H\) to \(G\) and \(H\) respectively. We define the sheaf \(\mathcal{S}\boxtimes\mathcal{T}\) as \(p^{*}\mathcal{S}\otimes q^{*}\mathcal{T}\). Note this is a \(G\times H\)-equivariant sheaf over \(G^{(0)}\times H^{(0)}\).
**Theorem 2.1**.: _Under the above setting, there is a split short exact sequence_
\[0\to\bigoplus_{a+b=k}H_{a}(G,\mathcal{S})\otimes H_{b}(H, \mathcal{T})\to H_{k}(G\times H,\mathcal{S}\boxtimes\mathcal{T})\\ \to\bigoplus_{a+b=k-1}\operatorname{Tor}(H_{a}(G,\mathcal{S}),H_ {b}(H,\mathcal{T}))\to 0.\]
Proof.: Let us take bicomplexes \(A=A_{a,i}\) and \(B=B_{b,j}\) computing \(H_{\bullet}(G,\mathcal{S})\) and \(H_{\bullet}(H,\mathcal{T})\), respectively. These are obtained from c-soft cohomological complexes of sheaves \(\tilde{\mathcal{S}}^{\bullet}\) and \(\tilde{\mathcal{T}}^{\bullet}\), quasi-isomorphic to \(\mathcal{S}\) and \(\mathcal{T}\) concentrated in degree \(0\). To obtain a homological complex we invert the degree, so that \(A_{a,i}=\Gamma_{c}(G^{(a)},\tilde{\mathcal{S}}^{-i})\), for example.
Up to the identifications
\[A_{k,i}\otimes B_{k,j}=\Gamma_{c}(G^{(k)},\tilde{\mathcal{S}}^{-i})\otimes \Gamma_{c}(H^{(k)},\tilde{\mathcal{T}}^{-j})\cong\Gamma_{c}((G\times H)^{(k)},\tilde{\mathcal{S}}^{-i}\boxtimes\tilde{\mathcal{T}}^{-j}),\]
the total complex of the triple complex \((A_{k,i}\otimes B_{k,j})\) computes \(H_{k}(G\times H,\mathcal{S}\boxtimes\mathcal{T})\). The claim follows by a standard argument if we can show that this is quasi-isomorphic to the total complex of the quadruple complex \(A\otimes B=(A_{a,i}\otimes B_{b,j})_{a,b,i,j}\).
Now, observe that \(A\otimes B\) can be regarded as a bisimplicial object in the category of complexes, by totalizing in the \(i\)- and \(j\)-directions. By an Eilenberg-Zilber type theorem [1, Theorem IV.2.4], for fixed \(q\), the total complex of the bisimplicial group \(C_{q}(a,b)=\bigoplus_{q=i+j}A_{a,i}\otimes B_{b,j}\) is chain homotopic to the Moore complex of the simplicial group \(C^{\prime}_{q}(k)=\bigoplus_{q=i+j}A_{k,i}\otimes B_{k,j}\).
Now, take double complexes \(C_{k,q}=\bigoplus_{k=a+b}C_{q}(a,b)\) and \(C^{\prime}_{k,q}=C^{\prime}_{q}(k)\). Since the degree \(k\) is concentrated in \(k\geq 0\) while the degree \(q\) is in \(q\leq 0\), the spectral sequences \(E\) and \(E^{\prime}\) associated with the filtration by \(q\)-degree on \(\operatorname{Tot}C\) and \(\operatorname{Tot}C^{\prime}\) are regular, in the sense that for any \(n\) there is \(s(n)\) such that we have \(E^{r}_{p,q}=0\) for \(p+q=n\), \(p<s(n)\). Then the spectral sequences converge to the total homologies, while we have the isomorphisms \(E^{r}_{p,q}\cong E^{\prime r}_{p,q}\) for \(r\geq 1\) by the above remark. We thus obtain the assertion.
As usual, the morphisms in the short exact sequence above are natural in any conceivable sense, however the splitting is not.
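To illustrate the role of the \(\operatorname{Tor}\) term, consider the hypothetical situation where both \(G\) and \(H\) have homology concentrated in degree zero, with

\[H_{0}(G,\underline{\mathbb{Z}})\cong H_{0}(H,\underline{\mathbb{Z}})\cong\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}.\]

Then Theorem 2.1 gives

\[H_{0}(G\times H,\underline{\mathbb{Z}})\cong\mathbb{Z}\oplus(\mathbb{Z}/2\mathbb{Z})^{3},\qquad H_{1}(G\times H,\underline{\mathbb{Z}})\cong\operatorname{Tor}(\mathbb{Z}/2\mathbb{Z},\mathbb{Z}/2\mathbb{Z})\cong\mathbb{Z}/2\mathbb{Z},\]

so a torsion class appears in degree one even though both factors have no homology there.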
_Remark 2.2_.: Matui's result [15, Theorem 2.4] is a special case of the above result in the situation where the groupoids are totally disconnected and the coefficients are locally constant sheaves \(\underline{\mathbb{Z}}\). (Note that his convention of homology \(H_{n}(G)\) differs from \(H_{n}(G,\underline{\mathbb{Z}})\) unless \(G\) is totally disconnected.)
_Remark 2.3_.: In [11, Theorem 5.1] we have identified Putnam's homology groups for Smale spaces [10, 11] with the etale groupoid homology groups considered in this paper, where the groupoid is the unstable equivalence relation associated to a non-wandering Smale space with totally disconnected stable sets (this implies the associated groupoid is ample). In a companion paper [11] to the present one, we remove the hypothesis on the stable sets and prove the identification of homology groups for a general non-wandering Smale space. Combining this
result with Theorem 2.1 above, we obtain a general Kunneth formula for the product of two Smale spaces and their homology groups as defined by Putnam, generalizing [4, Theorem 5.2] and [11, Theorem 6.5].
## 3. Poincare duality
Suppose we have a locally compact groupoid \(\tilde{G}\) such that \(\tilde{G}^{x}\) is homeomorphic to \(\mathbb{R}^{n}\), and \(\tilde{G}^{x}_{y}\) is at most a singleton for all \(x\), \(y\) in \(\tilde{G}^{(0)}\). Let us choose a generalized transversal \(T\) in \(\tilde{G}^{(0)}\), so that \(G=\tilde{G}|_{T}\) is an etale groupoid. We also assume that groupoid homology for \(G\) is definable.
Under our assumption, the structure map \(s\colon\tilde{G}\to\tilde{G}^{(0)}\) is a model of the universal principal \(\tilde{G}\)-bundle \(E\tilde{G}\to B\tilde{G}\). Then the Baum-Connes conjecture suggests a close relation between \(H_{\bullet}(G,\underline{\mathbb{Z}})\) and the compactly supported cohomology of the space \(\tilde{G}^{(0)}\). Let us make this precise in the framework of groupoid homology.
Now, let us consider a groupoid \(\tilde{G}\) as in the beginning of this section, and take a transversal \(T\) and the associated etale groupoid \(G\) as before. We then consider the orbit-wise orientation sheaf \(\underline{o}\) on \(\tilde{G}^{(0)}\). Formally, its stalk at \(x\) is given by
\[\underline{o}_{x}=\big{(}\bigwedge^{n}T_{x}\tilde{G}^{x}\smallsetminus\{0\}\big{)}/\mathbb{R}_{+}\cong\mathbb{R}^{\times}/\mathbb{R}_{+}\cong\{1,-1\}.\]
This sheaf has a natural action of \(\mathbb{Z}/2\mathbb{Z}\). Note that \(\underline{o}\) admits a global section (equivalently, it is trivializable) if and only if there is a global orientation on the orbits of \(\tilde{G}\).
**Theorem 3.1**.: _Under the above setting, we have an isomorphism_
\[H_{k}(G,\underline{\mathbb{Z}})\cong H_{c}^{n-k}(\tilde{G}^{(0)},\underline{ \mathbb{Z}}\times_{\mathbb{Z}/2\mathbb{Z}}\underline{o}).\]
Proof.: The claim is equivalent to the isomorphism of groupoid homology groups
\[H_{k}(G,\underline{\mathbb{Z}})\cong H_{k-n}(\tilde{G}^{(0)},\underline{ \mathbb{Z}}\times_{\mathbb{Z}/2\mathbb{Z}}\underline{o}),\]
where we treat \(\tilde{G}^{(0)}\) as a trivial groupoid on the right hand side.
Consider the \(G\)-space \(E=\tilde{G}^{T}\). Then we have a morphism of groupoid
\[\phi\colon G\ltimes E\to G\]
induced by the range map \(\tilde{G}^{T}\to T\). We want to apply the constructions in [12, Section 4] to this setting. We have a Morita equivalence between \(G\ltimes E\) and \(G^{(0)}\), induced by the source map \(s\colon E\to\tilde{G}^{(0)}\). By [12, Corollary 4.6], we have
\[H_{p-n}(G\ltimes E,s^{*}(\underline{\mathbb{Z}}\times_{\mathbb{Z}/2\mathbb{Z} }\underline{o}))\cong H_{c}^{n-p}(\tilde{G}^{(0)},\underline{\mathbb{Z}} \times_{\mathbb{Z}/2\mathbb{Z}}\underline{o}). \tag{2}\]
Let us consider the \((G\ltimes E)\)-sheaf \(F=s^{*}(\underline{\mathbb{Z}}\times_{\mathbb{Z}/2\mathbb{Z}}\underline{o})\), and its left derived functors \(L_{k}\phi_{!}F\).
Fix \(x\in T\). By our assumption on \(\tilde{G}\), the object space of \(x/\phi\) can be identified with the disjoint union of \(\tilde{G}^{y}\) for \(y\in G_{x}\). Given objects \(g\in\tilde{G}^{y}\) and \(g^{\prime}\in\tilde{G}^{z}\) in \(x/\phi\), there is an arrow from \(g\) to \(g^{\prime}\) if and only if \(g=g^{\prime\prime}g^{\prime}\) for the unique \(g^{\prime\prime}\in\tilde{G}^{y}_{z}\). In particular, \(x/\phi\) is Morita equivalent to \(\tilde{G}^{x}\cong\mathbb{R}^{n}\).
If we restrict the pullback sheaf \(\pi_{x}^{*}F\) to \(\tilde{G}^{x}\), we get \(s^{*}(\underline{\mathbb{Z}}\times_{\mathbb{Z}/2\mathbb{Z}}\underline{o})\), but this is isomorphic to \(\underline{\mathbb{Z}}\) by the global orientation on \(\mathbb{R}^{n}\). We then have
\[H_{k}(x/\phi,\pi_{x}^{*}F)\cong H_{c}^{-k}(\mathbb{R}^{n},\underline{\mathbb{ Z}})\cong\begin{cases}\mathbb{Z}&(k=-n)\\ 0&(\text{otherwise}).\end{cases}\]
By (1), the \(G\)-sheaf \(L_{k}\phi_{!}F\) on \(G^{(0)}=T\) has stalks isomorphic to \(\mathbb{Z}\) when \(k=-n\), and we have \(L_{k}\phi_{!}F=0\) otherwise. Of course, the same can be said about \(L_{-n}\phi_{!}\underline{\mathbb{Z}}\). However, looking at the action of \(\tilde{G}\), the extra factor \(\underline{o}\) corrects the "sign" of the map \(H_{k}(y/\phi,\pi_{y}^{*}\underline{\mathbb{Z}})\to H_{k}(x/\phi,\pi_{x}^{*}\underline{\mathbb{Z}})\) induced by \(g\in\tilde{G}^{x}_{y}\), and we have the isomorphism of \(G\)-sheaves between \(L_{-n}\phi_{!}F\) and \(\underline{\mathbb{Z}}\).
Now, the Leray-type spectral sequence from [12, Theorem 4.4]
\[E^{2}_{pq}=H_{p}(G,L_{q}\phi_{!}F)\Rightarrow H_{p+q}(G\ltimes E,F)\]
is degenerate at the \(E^{2}\)-sheet for degree reasons. Thus we get that \(H_{p}(G,\underline{\mathbb{Z}})\) is isomorphic to \(H_{p-n}(G\ltimes E,F)\). (A bit more conceptually, \(L_{\bullet}\phi_{!}\underline{\mathbb{Z}}\) is quasi-isomorphic to the degree shift of \(\underline{\mathbb{Z}}\), and groupoid hyperhomology and degree shift of coefficient commute.) Combined with (2), we obtain the assertion.
_Remark 3.2_.: Let \((X,\phi)\) be a non-wandering Smale space whose unstable sets are contractible. In [15, Question 8.3.2], Putnam conjectured that the stable homology \(H^{s}_{\bullet}(X,\phi)\) is isomorphic to \(H^{\bullet}(X)\) up to a degree shift. In view of Remark 2.3, Theorem 3.1 answers this conjecture in the affirmative if the unstable sets are homeomorphic to \(\mathbb{R}^{d}\) and there is a consistent choice of orientation on these spaces. As pointed out in [1], the original conjecture is false without such orientability, and Theorem 3.1 provides the necessary modification for non-orientable cases.
### Examples
#### 3.1.1. Substitution tilings
In the case of groupoids for substitution tilings, the isomorphism of Theorem 3.1 appears in [11, Section 5.2].
#### 3.1.2. Anosov diffeomorphisms
Let \(\phi\) be an Anosov diffeomorphism of a compact manifold \(X\), and let \(n\) be the dimension of the stable sets. By the stable manifold theorem [10] (see also [1, Section 5.6]), the stable set of any point \(x\in X\) is an immersed copy of \(\mathbb{R}^{n}\). Then the monodromy groupoid of the stable foliation on \(X\) satisfies the assumption for \(\tilde{G}\).
When \(X\) agrees with the set of the non-wandering points of \(\phi\), we have a non-wandering Smale space \((X,\phi)\), and the above monodromy groupoid is just \(R^{s}(X,\phi)\).
_Example 3.3_.: As a concrete example, let us consider the Smale spaces associated to hyperbolic toral automorphisms \((X,\phi)=(\mathbb{R}^{2}/\mathbb{Z}^{2},A)\), where \(A\) is a 2-by-2 matrix with integer entries and determinant equal to 1 (see [15, Section 7.4]). This implies the \(\mathbb{R}\)-linear endomorphism associated to \(A\) descends to a map of the 2-torus. Note that \(A\) is called "hyperbolic" when its eigenvalues \(\lambda_{1},\lambda_{2}\) satisfy \(|\lambda_{1}|<1<|\lambda_{2}|\).
The stable and unstable orbits of \(A\) coincide with the lines spanned by the eigenvectors associated to \(\lambda_{1}\) and \(\lambda_{2}\). Denote by \(R^{s}(X,\phi)\) the stable equivalence relation associated to \((X,\phi)\), suitably reduced to a transversal so that \(R^{s}(X,\phi)\) is etale (in this case a transversal is given for example by the \(\lambda_{2}\)-eigenline). Applying Theorem 3.1, we obtain
\[H_{-1}(R^{s}(X,\phi))\cong\mathbb{Z},\qquad H_{0}(R^{s}(X,\phi))\cong\mathbb{ Z}^{2},\qquad H_{1}(R^{s}(X,\phi))\cong\mathbb{Z}. \tag{3}\]
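Explicitly, the stable sets are one-dimensional here, so \(n=1\); the unit space is \(X=\mathbb{R}^{2}/\mathbb{Z}^{2}\); and the stable orbits are consistently oriented by the \(\lambda_{1}\)-eigendirection, so the twist in Theorem 3.1 is trivial. The duality therefore reads

\[H_{k}(R^{s}(X,\phi))\cong H^{1-k}(\mathbb{R}^{2}/\mathbb{Z}^{2},\underline{\mathbb{Z}}),\]

and (3) follows from \(H^{0}\cong\mathbb{Z}\), \(H^{1}\cong\mathbb{Z}^{2}\), and \(H^{2}\cong\mathbb{Z}\) for the 2-torus.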
Even though the HK conjecture is only formulated for ample groupoids, it is worth pointing out that this homology calculation corresponds (after periodicization) to the \(K\)-groups of the stable \(\mathrm{C}^{*}\)-algebra associated to \((X,\phi)\). Indeed, this algebra is the foliation \(\mathrm{C}^{*}\)-algebra of the Kronecker flow along the \(\lambda_{1}\)-eigenline, hence it is Morita equivalent to the rotation algebra with angle the slope of the eigenline. The \(K\)-groups of this algebra are well known to be \(\mathbb{Z}^{2}\) in both even and odd degree.
The Morita equivalence between the two algebras arises from an equivalence of groupoids, as explained in detail in [12, Chapter 3]. Since the homology groups are Morita invariant [11, Section 4], the calculation in (3) is also valid for the topological groupoid \(\mathbb{Z}\ltimes S^{1}\) underlying the irrational rotation algebra with angle the slope of the \(\lambda_{1}\)-eigenline.
_Example 3.4_.: Let \(M=L/\Gamma\) be an infra-nilmanifold, i.e., a quotient of a nilpotent, connected, and simply connected Lie group \(L\) by a torsion-free group \(\Gamma\) of affine automorphisms of \(L\) such that \(\Lambda=L\cap\Gamma\) is a finite index subgroup of \(\Gamma\). Moreover, let \(\psi\) be a hyperbolic affine automorphism of \(M\)[16]. Then \(R^{u}(M,\psi)\) and \(R^{s}(M,\psi)\) satisfy the assumption of Theorem 3.1. Indeed, an unstable set of \((M,\psi)\) can be identified with the subspace of the Lie algebra \(\mathfrak{l}\) of \(L\) spanned by the eigenvectors of the "linear part" of \(\psi\) corresponding to eigenvalues of absolute value bigger than 1, while a stable set can be identified with the span of the other eigenvectors.
_Remark 3.5_.: If we denote by \(S\) the \(C^{*}\)-algebra of the stable equivalence relation, Takai [13] has conjectured that \(K_{i}(S)\) is isomorphic to \(K_{i+n}(X)\), which can be understood as an instance of the Baum-Connes conjecture for foliations (for general formulations of the Baum-Connes
conjecture for groupoids see for example [12, 13]). Theorem 3.1 gives an affirmative answer to the homological version of the Takai conjecture.
#### 3.1.3. Self-similar action
Another class of examples comes from the theory of self-similar actions, which is also closely related to Example 3.4.
Let \(\Gamma\) be a finitely generated group, and \(\phi\) be an injective and surjective contracting virtual endomorphism of \(\Gamma\). Then \(\Gamma\) is virtually nilpotent, and admits a self-similar action \((\Gamma,X)\) where the alphabet set \(X\) is a system of representatives of \(\Gamma/\operatorname{dom}\phi\). Its _limit \(\Gamma\)-space_\(\mathcal{X}_{\Gamma,X}\) can be identified with a nilpotent connected and simply connected Lie group \(L\) on which \(\Gamma\) acts by affine transformations in a proper and cocompact way [21, Section 6.1].
In general, when \((\Gamma,X)\) is a contracting, recurrent, and regular self-similar action, we have the associated Smale space \(\mathcal{S}_{\Gamma,X}\) (the _limit solenoid_ of \((\Gamma,X)\)), and its unstable sets can be identified with \(\mathcal{X}_{\Gamma,X}\), see [21]. Thus, under the above assumption on \((\Gamma,\phi)\), the unstable groupoid \(G=R^{u}(\mathcal{S}_{\Gamma,X})\) satisfies the assumption of Theorem 3.1.
In the next section we look at examples from compact Riemannian manifolds that fall in this setting.
## 4. Expanding maps on compact manifolds
Let us combine the Poincare duality and transfer maps in homology to obtain a more elaborate computation of groupoid homology.
Suppose that \(M\) is an \(n\)-dimensional connected compact Riemannian manifold and \(g\colon M\to M\) is an expanding map. Then \(g\) admits a fixed point \(x\), \(\Gamma=\pi_{1}(M,x)\) is a torsion-free group of polynomial growth (hence virtually nilpotent), and \(\mathbb{R}^{n}\) is a universal cover for \(M\), see [21, Section 6.1].
With the virtual endomorphism \(\phi\) represented by \(g^{-1}\), we are in the setting of Section 3.1.3. Then the Smale space \(\mathcal{S}_{\Gamma,X}\) is given by \(Y=\varprojlim_{g}M\) and the associated self homeomorphism \(\phi\) of \(Y\). Again the groupoid \(\tilde{G}=R^{u}(Y,\phi)\) is Morita equivalent to the etale groupoid \(G=\Gamma\ltimes\Omega\) where \(\Omega=\varprojlim\Gamma/g^{i}(\Gamma)\), and we have
\[H_{k}(G,\underline{\mathbb{Z}})\cong H^{n-k}(Y,\underline{\mathbb{Z}}\times_{ \mathbb{Z}/2\mathbb{Z}}\underline{o}).\]
Let us write this as a group cohomology with coefficient.
Since \(G\) is a transformation groupoid, we also have
\[H_{k}(G,\underline{\mathbb{Z}})\cong H_{k}(\Gamma,C(\Omega,\mathbb{Z}))\]
with respect to the induced action of \(\Gamma\) on \(C(\Omega,\mathbb{Z})\). As \(M\) is a model of the Eilenberg-MacLane space \(K(\Gamma,1)\), \(\Gamma\) is a Poincare duality group [12, Section VIII.10], and we have
\[H_{k}(\Gamma,C(\Omega,\mathbb{Z}))\cong H^{n-k}(\Gamma,C(\Omega,\mathbb{Z}) \otimes D)\]
where \(D\) is an infinite cyclic group with the "sign" representation of \(\Gamma\).
Again the Morita equivalence between \(\Gamma\ltimes(\Gamma/g^{i}(\Gamma))\) and \(g^{i}(\Gamma)\) leads to a presentation of these (co)homology groups as inductive limits of the groups \(H_{k}(g^{i}(\Gamma),\mathbb{Z})\cong H_{k}(\Gamma,\mathbb{Z})\) with connecting maps given by the transfer maps. This corresponds to the isomorphism
\[K_{\bullet}(\Gamma\ltimes C(\Omega))\cong\varinjlim_{i}K_{\bullet}(C^{*}(g^{i}(\Gamma)))\]
that follows from the Baum-Connes conjecture with coefficients for \(\Gamma\), see also [13, 14] for the case of flat manifolds.
### Klein bottle
Let us describe a concrete example in the class of Section 3.1.3 that arises from a non-orientable surface. Consider the group
\[\Gamma=\langle a,b\mid b^{-1}ab=a^{-1}\rangle,\]
and its action on \(\mathbb{R}^{2}\) given by \(a(x,y)=(x+1,y)\), \(b(x,y)=(-x,y+1)\). Let \(K\) be the orbit space of this action, which is a model of the Klein bottle space. Since \(\mathbb{R}^{2}\) is the universal cover of \(K\), we have \(\Gamma\cong\pi_{1}(K,[0])\), and \(\mathbb{R}^{2}\) is a model of \(E\Gamma\).
Let \(g\) be the uniform scaling on \(\mathbb{R}^{2}\) given by \(g(x,y)=(3x,3y)\). This induces an _expanding endomorphism_ of \(K\) fixing the basepoint \([0]\), which we denote again by \(g\). The associated endomorphism of \(\Gamma\) can be written as \(g_{*}(a)=a^{3}\), \(g_{*}(b)=b^{3}\) up to the above identification, and the virtual endomorphism \(\phi\) represented by \(g^{-1}\colon g(\Gamma)\to\Gamma\) satisfies the assumptions of Section 3.1.3.
Then, \(K\) can be identified with the _limit space_\(\mathcal{J}_{\Gamma,X}\) of the associated self similar action. Thus, the associated limit solenoid \(\mathcal{S}_{\Gamma,X}\) is \(Y=\varprojlim_{g}K\), with the induced self-homeomorphism again denoted by \(\phi\). Moreover, the action of \(\Gamma\) on the nilpotent Lie group \(L\), which appears as the limit \(\Gamma\)-space as above, is conjugate to the above action of \(\Gamma\) on \(\mathbb{R}^{2}\) by the universality of \(\mathbb{R}^{2}\) as the total space of the classifying space for principal \(\Gamma\)-bundles.
Let us write the coset spaces for the images of powers of \(g\) as \(\Omega_{i}=\Gamma/g_{*}^{i}(\Gamma)\). We have natural projection maps \(\Omega_{i+1}\to\Omega_{i}\), and their projective limit \(\Omega=\varprojlim_{i}\Omega_{i}\). Then \(Y\) can be identified with the homotopy quotient \(\Omega\times_{\Gamma}\mathbb{R}^{2}\). Thus, the groupoid \(\tilde{G}=R^{u}(Y,\phi)\) can be identified with the transformation groupoid of \(\mathbb{R}^{2}\) acting on \(Y\) by translation. Taking the image \(T\) of \(\Omega\times\{(0,0)\}\) as the transversal, the associated etale groupoid \(G=\tilde{G}|_{T}\) is the transformation groupoid of the canonical action of \(\Gamma\) on \(\Omega\).
Turning to the groupoid homology, at degree \(k=0\) we have
\[H_{0}(G,\underline{\mathbb{Z}})\cong\mathbb{Z}[1/9]=\mathbb{Z}[1/3]\]
by [10, Theorem 4.1 and Proposition 6.3]. More generally, as remarked in [10, Section 6.2], \(H_{k}(G,\underline{\mathbb{Z}})\) is the direct limit of group homology groups \(H_{k}(\Gamma,\mathbb{Z})\) where the connecting map is induced by the Morita equivalence between \(\Gamma\ltimes\Omega_{i}\) and \(g^{i}(\Gamma)\cong\Gamma\). Concretely, these maps are the _transfer maps_ of group homology,
\[H_{k}(\Gamma,\mathbb{Z})\to H_{k}(g^{i}(\Gamma),\mathbb{Z}),\]
see [1, Section III.9].
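In degree zero the transfer is transparent: it is multiplication by the index \([\Gamma\colon g_{*}(\Gamma)]=9\) on \(H_{0}(\Gamma,\mathbb{Z})\cong\mathbb{Z}\), so

\[H_{0}(G,\underline{\mathbb{Z}})\cong\varinjlim\big{(}\mathbb{Z}\xrightarrow{\times 9}\mathbb{Z}\xrightarrow{\times 9}\cdots\big{)}\cong\mathbb{Z}[1/9],\]

recovering the computation above.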
At degree \(k=1\), let us write
\[H_{1}(\Gamma,\mathbb{Z})=\Gamma^{\text{ab}}\cong\mathbb{Z}\oplus\mathbb{Z}/2 \mathbb{Z},\qquad\qquad H_{1}(g(\Gamma),\mathbb{Z})=g(\Gamma)^{\text{ab}} \cong 3\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}.\]
where the images of \(a\) and \(g(a)\) become the generator of the summand \(\mathbb{Z}/2\mathbb{Z}\), and those of \(b\) and \(g(b)\) become generators of \(\mathbb{Z}\) and \(3\mathbb{Z}\) respectively. The transfer map is given by
\[H_{1}(\Gamma,\mathbb{Z})\to H_{1}(g(\Gamma),\mathbb{Z}),\qquad\qquad[a] \mapsto[g(a)],\qquad\qquad[b]\mapsto 3[g(b)],\]
see for example [1, Exercise III.9.2].
Thus, the inductive system \((H_{1}(g^{i}(\Gamma),\mathbb{Z}))_{i}\) can be identified with the constant system \(\mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}\) whose connecting map is given by \((x,y)\mapsto(3x,y)\). In particular we obtain
\[H_{1}(G,\underline{\mathbb{Z}})\cong\mathbb{Z}[1/3]\oplus\mathbb{Z}/2\mathbb{Z}.\]
For \(k\geq 2\), we have \(H_{k}(G,\underline{\mathbb{Z}})=0\). One way to see this is to use the isomorphism

\[H_{k}(G,\underline{\mathbb{Z}})\cong H^{2-k}(Y,\underline{\mathbb{Z}}\times_{\mathbb{Z}/2\mathbb{Z}}\underline{o})\]

of Theorem 3.1: the right hand side vanishes for \(k>2\) for degree reasons, while for \(k=2\) the twisted coefficient sheaf admits no nonzero global section, since the action of \(b\) reverses the orientation of the orbits.

## 5. Number theoretic generalization of solenoid
Let us consider the following class of Smale spaces from [14, Section 7]. Let \(c\in\bar{\mathbb{Q}}\subset\mathbb{C}\) be an algebraic number such that \(\left|gc\right|\neq 1\) for any element \(g\) in the absolute Galois group \(\operatorname{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\). Consider the algebraic number field \(K=\mathbb{Q}(c)\) generated by \(c\). For each place \(v\) of \(K\), let us fix an absolute value function \(\left|a\right|_{v}\) representing \(v\). We denote the set of finite and infinite places of \(K\) by \(P_{f}^{K}\) and \(P_{\infty}^{K}\) respectively.
Now, set
\[P(c)=P_{\infty}^{K}\cup\{v\in P_{f}^{K}\mid|c|_{v}\neq 1\}.\]
(By our assumption each \(v\in P_{\infty}^{K}\) satisfies \(\left|c\right|_{v}\neq 1\).) Then the ring
\[R_{c}=\{a\in K\mid\forall v\in P_{f}^{K}\smallsetminus P(c)\colon|a|_{v}\leq 1\}\]
is a cocompact subring of the direct product of local fields \(\prod_{v\in P(c)}K_{v}\), and the quotient
\[Y^{(c)}=\bigg{(}\prod_{v\in P(c)}K_{v}\bigg{)}/R_{c}\]
with respect to the translation action can be identified with the Pontryagin dual of the additive group of \(R_{c}\).
The periodic points of the self-homeomorphism
\[\phi\colon Y^{(c)}\to Y^{(c)},\quad[a_{v}]_{v\in P(c)}\mapsto[ca_{v}]_{v}\]
form a dense subset of \(Y^{(c)}\)[14, Section 5], and the \(K_{v}\)-direction is contracting (resp. expanding) for \(\phi\) if and only if \(\left|c\right|_{v}<1\) (resp. \(\left|c\right|_{v}>1\)). Thus, the system \((Y^{(c)},\phi)\) is a non-wandering Smale space.
Consider the subsets
\[P^{s}(c)=\{v\in P(c)\mid\left|c\right|_{v}<1\},\quad P^{u}(c)=\{v\in P(c)\mid \left|c\right|_{v}>1\}\]
of \(P(c)\). Then the stable equivalence class of each point of \(Y^{(c)}\) is identified with \(\prod_{v\in P^{s}(c)}K_{v}\). Thus, the unstable groupoid of \((Y^{(c)},\phi)\) is Morita equivalent to the transformation groupoid \(R_{c}\ltimes\prod_{v\in P^{s}(c)}K_{v}\). Similarly, the stable groupoid is Morita equivalent to \(R_{c}\ltimes\prod_{v\in P^{u}(c)}K_{v}\).
### Groupoid homology
Let us compute the homology of the above groupoids. In the following, when omitted, the coefficient sheaf for homology is \(\underline{\mathbb{Z}}\).
**Proposition 5.1**.: _Let \(P_{f}^{s}(c)=P^{s}(c)\cap P_{f}^{K}\), and \(d=\sum_{v\in P^{s}(c)\cap P_{\infty}^{K}}\dim_{\mathbb{R}}K_{v}\). We then have_
\[H_{p}\bigg{(}R_{c}\ltimes\prod_{v\in P^{s}(c)}K_{v}\bigg{)}\cong H_{p+d}\bigg{(} R_{c}\ltimes\prod_{v\in P_{f}^{s}(c)}K_{v}\bigg{)}.\]
Proof.: As in the proof of Theorem 3.1, consider the groupoid homomorphism
\[\pi\colon R_{c}\ltimes\prod_{v\in P^{s}(c)}K_{v}\to R_{c}\ltimes\prod_{v\in P _{f}^{s}(c)}K_{v}\]
induced by the projection of the base space onto the factors labeled by \(P_{f}^{s}(c)\). The fibers of this homomorphism can be identified with \(\mathbb{R}^{d}\), and the action of \(R_{c}\) preserves orientation. We thus have \(L_{-d}\pi_{!}\underline{\mathbb{Z}}\cong\underline{\mathbb{Z}}\) and \(L_{q}\pi_{!}\underline{\mathbb{Z}}=0\) for \(q\neq-d\) on \(\prod_{v\in P_{f}^{s}(c)}K_{v}\). Then the Leray-type spectral sequence (see [1, Theorem 4.4])

\[E_{pq}^{2}=H_{p}\bigg{(}R_{c}\ltimes\prod_{v\in P_{f}^{s}(c)}K_{v},L_{q}\pi_{!}\underline{\mathbb{Z}}\bigg{)}\Rightarrow H_{p+q}\bigg{(}R_{c}\ltimes\prod_{v\in P^{s}(c)}K_{v},\underline{\mathbb{Z}}\bigg{)}\]
is degenerate at the \(E^{2}\)-sheet for degree reasons, and we obtain the claim.
Again we have a similar isomorphism of groupoid homology, up to a degree shift, between \(R_{c}\ltimes\prod_{v\in P^{u}(c)}K_{v}\) and \(R_{c}\ltimes\prod_{v\in P^{u}_{f}(c)}K_{v}\) for \(P^{u}_{f}(c)=P^{u}(c)\cap P^{K}_{f}\).
For each \(v\in P^{s}_{f}(c)\), let \(O_{v}\subset K_{v}\) denote the corresponding local ring, and \(\pi_{v}\in O_{v}\) be a generator of its maximal ideal. Moreover take \(m_{v}\in\mathbb{N}_{>0}\) such that \(\left|c\right|_{v}=\left|\pi_{v}\right|_{v}^{m_{v}}\), and set
\[X^{(n)}=\prod_{v\in P^{s}_{f}(c)}K_{v}/\pi_{v}^{nm_{v}}O_{v}.\]
We have a projective system of the discrete \(R_{c}\)-spaces \(X^{(n)}\) with proper connecting maps, and
\[R_{c}\ltimes\prod_{v\in P^{s}_{f}(c)}K_{v}=\varprojlim_{n}R_{c}\ltimes X^{(n)}. \tag{4}\]
Now, \(X^{(n)}\) is a transitive \(R_{c}\)-set with stabilizer
\[\Gamma_{n}=\{a\in R_{c}\mid\forall v\in P^{s}_{f}(c)\colon\left|a\right|_{v} \leq\left|c\right|_{v}^{n}\}.\]
We thus have \(H_{\bullet}(R_{c}\ltimes X^{(n)})\cong H_{\bullet}(\Gamma_{n})\), and
\[H_{k}\bigg{(}R_{c}\ltimes\prod_{v\in P^{s}_{f}(c)}K_{v}\bigg{)}\cong\varinjlim_{n}H_{k}(R_{c}\ltimes X^{(n)})\]
can be identified with the inductive limit of the homology groups \(H_{k}(\Gamma_{n})\) with respect to the transfer maps \(\theta\colon H_{k}(\Gamma_{n})\to H_{k}(\Gamma_{n+1})\) associated with the finite index inclusion \(\Gamma_{n+1}<\Gamma_{n}\) (see [1, Chapter III, Section 9]).
The transfer map can be made more explicit as follows. Let us denote by \(m_{c^{-1}}\) the isomorphism \(\Gamma_{n+1}\to\Gamma_{n}\) given by multiplication by \(c^{-1}\).
**Proposition 5.2**.: _Let \(N=[\Gamma_{0}\colon\Gamma_{1}]\), and \(\theta^{\prime}\) be the endomorphism of \(\bigwedge^{k}\Gamma_{0}\) uniquely extending \(N\bigwedge^{k}m_{c^{-1}}\colon\bigwedge^{k}\Gamma_{1}\to\bigwedge^{k}\Gamma_{0}\). The groupoid homology \(H_{k}(R_{c}\ltimes\prod_{v\in P^{s}_{f}(c)}K_{v})\) is isomorphic to the inductive limit \(\varinjlim\bigwedge^{k}\Gamma_{0}\) for the connecting maps \(\theta^{\prime}\)._
Proof.: First, since \(\Gamma_{n}\) is a torsion-free commutative group, its integral homology \(H_{\bullet}(\Gamma_{n},\mathbb{Z})\) is naturally isomorphic to the exterior algebra \(\bigwedge^{\bullet}\Gamma_{n}\) generated by \(\Gamma_{n}\), see [1, Theorem V.6.4]. Second, the restriction of \(\theta\) to \(H_{k}(\Gamma_{n+1})\) is the multiplication map by \(N\) (this is easy to see by comparison with singular homology, see for example [1, Section 3G]). Moreover, \(\bigwedge^{k}\Gamma_{n}\) is torsion-free and \(\bigwedge^{k}\Gamma_{n+1}\) is its finite index subgroup. Thus the transfer \(\bigwedge^{k}\Gamma_{n}\to\bigwedge^{k}\Gamma_{n+1}\) is the unique extension of \(N\mathrm{id}\) on \(\bigwedge^{k}\Gamma_{n+1}\). Then the composition of transfer \(\bigwedge^{k}\Gamma_{0}\to\bigwedge^{k}\Gamma_{1}\) and the induced isomorphism \(\bigwedge^{k}m_{c^{-1}}\) gives \(\theta^{\prime}\), hence we obtain the claim.
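As a simple illustration, consider the dyadic case \(c=2\) (taken up again below): there \(K=\mathbb{Q}\), \(R_{c}=\mathbb{Z}[1/2]\), \(\Gamma_{0}=\mathbb{Z}\), and \(\Gamma_{1}=2\mathbb{Z}\), so \(N=2\) and \(m_{c^{-1}}\) is multiplication by \(1/2\). Proposition 5.2 then gives

\[H_{0}\cong\varinjlim(\mathbb{Z},\times 2)\cong\mathbb{Z}[1/2],\qquad H_{1}\cong\varinjlim(\mathbb{Z},\times 1)\cong\mathbb{Z}.\]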
Next let us consider the automorphism of groupoid homology induced by the homeomorphism \(\phi\). As the basepoint we can take \((0)_{v}\in Y^{(c)}\), which is fixed by \(\phi\). This choice of point is convenient because the action of \(\phi\) preserves its stable equivalence class, which can be identified with the multiplication by \(c\) on \(\prod_{v\in P^{s}(c)}K_{v}\), and the groupoid automorphism of \(R_{c}\ltimes\prod_{v\in P^{s}(c)}K_{v}\) again given by multiplication by \(c\) on all factors.
Note that the isomorphism of Proposition 5.1 is compatible with the automorphism induced by \(\phi\) and the analogous one on \(R_{c}\ltimes\prod_{v\in P^{s}_{f}(c)}K_{v}\), up to multiplying certain factors \(K_{v}\cong\mathbb{R}\) by \(-\mathrm{id}\). This is because multiplication by \(c\) on \(K_{v}\), for Archimedean \(v\), is properly homotopic to the identity map \(\mathrm{id}\), or possibly to \(-\mathrm{id}\) when \(K_{v}\cong\mathbb{R}\).
_Remark 5.3_.: In the above argument we used the invariance of groupoid homology under proper homotopy, which can be obtained by considering the following chain of isomorphisms:
\[H_{k}(R_{c}\ltimes\prod K_{v},\underline{\mathbb{Z}})\cong H_{k}(R_{c},C(\prod K _{v},\mathbb{Z}))\cong H_{k}(BR_{c},C(\prod K_{v},\mathbb{Z})),\]
where we have group homology in the middle, and the last group is the homology of the classifying space with coefficient the local system induced by \(\prod K_{v}\). See also [1, Proposition 2.7.5] for a proof of homotopy invariance in the setting of sheaves.
In [10, Theorem 6.1.1], Putnam gave a remarkable analogue of Lefschetz formula for non-wandering Smale spaces \((X,\phi)\), which gives the number of periodic points as
\[|\{x\in X\mid\phi^{n}(x)=x\}|=\sum_{k}(-1)^{k}\operatorname{Tr}_{H^{s}_{k}(X, \phi)\otimes\mathbb{Q}}((\phi^{-n})_{*})\]
using the transformation on his stable homology \(H^{s}_{k}(X,\phi)\) induced by \(\phi^{-n}\).
For the Smale spaces \((Y^{(c)},\phi)\), we can make both sides more explicit.
**Proposition 5.4**.: _For \(n\geq 1\), the fixed points for \(\phi^{n}\) on \(Y^{(c)}\) are bijectively parameterized by_
\[(c^{n}-1)^{-1}\mathcal{O}_{K}/(\mathcal{O}_{K}\cap(c^{n}-1)^{-1}\mathcal{O}_{ K}).\]
Proof.: Since \(Y^{(c)}\) is defined as the quotient space \(\prod K_{v}/R_{c}\), the points of period \(n\) are represented by elements \(x\in\prod K_{v}\) such that \((c^{n}-1)x\) belongs to \(R_{c}\). These points are parameterized by \(((c^{n}-1)^{-1}R_{c})/(R_{c}\cap(c^{n}-1)^{-1}R_{c})\). We can further simplify this quotient and arrive at the claim by writing \(R_{c}\) as an increasing union of copies of \(\mathcal{O}_{K}\), and noticing the compatibility of the quotient with the inductive structure.
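In the dyadic case \(c=2\), where \(\mathcal{O}_{K}=\mathbb{Z}\) and \(Y^{(2)}\) is the classical dyadic solenoid, the proposition gives

\[(2^{n}-1)^{-1}\mathbb{Z}/(\mathbb{Z}\cap(2^{n}-1)^{-1}\mathbb{Z})=(2^{n}-1)^{-1}\mathbb{Z}/\mathbb{Z}\cong\mathbb{Z}/(2^{n}-1)\mathbb{Z},\]

recovering the familiar count of \(2^{n}-1\) points of period \(n\).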
**Proposition 5.5**.: _The automorphism of \(H_{k}(R_{c}\ltimes\prod_{v\in P^{s}_{f}(c)}K_{v})\otimes\mathbb{Q}\) induced by \(\phi^{-1}\) has trace equal to that of \(N\bigwedge^{k}m_{c^{-1}}\) acting on \(\bigwedge^{k}\Gamma_{0}\otimes\mathbb{Q}\)._
Proof.: Given \(a\in\bigwedge^{k}\Gamma_{0}\), let us write \(a_{j}\) for the element of \(\varinjlim\bigwedge^{k}\Gamma_{0}\) represented by \(a\) in the \(j\)-th copy of \(\bigwedge^{k}\Gamma_{0}\). Unpacking the correspondence in Proposition 5.2, the automorphism on \(\varinjlim\bigwedge^{k}\Gamma_{0}\) induced by \(\phi\) is the map \(f(a_{j})=a_{j+1}\) for \(a\in\bigwedge^{k}\Gamma_{0}\). Thus \(\phi^{-1}\) induces the transform
\[\tilde{f}(a_{j})=a_{j-1}=N(\bigwedge^{k}m_{c^{-1}})(a)_{j},\]
hence we obtain the claim.
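Continuing with \(c=2\): by Proposition 5.5 the trace of \((\phi^{-n})_{*}\) on \(H_{k}\otimes\mathbb{Q}\) is the trace of \((N\bigwedge^{k}m_{c^{-1}})^{n}\), namely \(2^{n}\) for \(k=0\) and \((2\cdot\tfrac{1}{2})^{n}=1\) for \(k=1\), so Putnam's formula yields

\[\sum_{k}(-1)^{k}\operatorname{Tr}((\phi^{-n})_{*})=2^{n}-1,\]

in agreement with the parameterization of Proposition 5.4.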
### Comparison with \(K\)-groups
The discussion above has a straightforward counterpart in \(K\)-theory for the corresponding groupoid \(C^{*}\)-algebras. As we are working with the transformation groupoid \(R_{c}\ltimes\prod_{v\in P^{s}(c)}K_{v}\), the algebra of interest is the crossed product \(C^{*}\)-algebra \(C_{0}(\prod_{v\in P^{s}(c)}K_{v})\rtimes R_{c}\). (Of course, the stable case is analogous.)
Let us first see the analogue of Proposition 5.1 in \(K\)-theory.
Let \(\mathbb{C}_{d}\) denote the (complex) Clifford algebra of the Euclidean space \(\mathbb{R}^{d}\), which is a \(\mathbb{Z}_{2}\)-graded \(\mathrm{C}^{*}\)-algebra. Then there is a Dirac morphism, given by a (strictly) equivariant unbounded Fredholm module \(D\) between \(C_{0}(\mathbb{R}^{d})\otimes\mathbb{C}_{d}\) and \(\mathbb{C}\) for the translation action of \(\mathbb{R}^{d}\) on \(C_{0}(\mathbb{R}^{d})\), hence a class
\[[D]\in\operatorname{KK}^{\mathbb{R}^{d}}\bigl{(}C_{0}(\mathbb{R}^{d})\otimes \mathbb{C}_{d},\mathbb{C}\bigr{)}=\operatorname{KK}^{\mathbb{R}^{d}}_{d}\bigl{(} C_{0}(\mathbb{R}^{d}),\mathbb{C}\bigr{)}.\]
This class \([D]\) is invertible by Connes's Thom isomorphism theorem in KK-theory [11]. This can be also interpreted as the strong Baum-Connes conjecture for \(\mathbb{R}^{d}\)[12] (see also [14, Corollary 2.3]).
Now, with \(d\) as in Proposition 5.1, take the group embedding of \(R_{c}\) into \(\mathbb{R}^{d}\) corresponding to the completion at infinite places. Since \(D\) is given by a strictly \(\mathbb{R}^{d}\)-equivariant Fredholm module, we obtain a class
\[[D_{c}]\in\operatorname{KK}^{R_{c}}_{d}(C_{0}(\mathbb{R}^{d}),\mathbb{C}).\]
**Proposition 5.6**.: _The class \([D_{c}]\) induces an isomorphism_
\[K_{\bullet}\biggl{(}C_{0}\biggl{(}\prod_{v\in P^{s}_{f}(c)}K_{v}\biggr{)} \rtimes R_{c}\biggr{)}\cong K_{\bullet+d}\biggl{(}C_{0}\biggl{(}\prod_{v\in P ^{s}(c)}K_{v}\biggr{)}\rtimes R_{c}\biggr{)}.\]
Proof.: Let us set \(A=C_{0}(\prod_{v\in P^{s}(c)}K_{v})\) and \(B=C_{0}(\prod_{v\in P^{s}_{f}(c)}K_{v})\), so that we have \(A\cong B\otimes C_{0}(\mathbb{R}^{d})\). Then \(f=\operatorname{id}_{B}\otimes[D_{c}]\) defines an invertible class in \(\operatorname{KK}^{R_{c}}(A\otimes\mathbb{C}_{d},B)\). Taking descent, we see that the crossed product algebras \((A\otimes\mathbb{C}_{d})\rtimes R_{c}=(A\rtimes R_{c})\otimes\mathbb{C}_{d}\) and \(B\rtimes R_{c}\) are KK-equivalent. Taking into account the degree shift induced by the Clifford algebra, we obtain the claim.
Next let us present an analogue of Proposition 5.2. From the inverse limit presentation of (4), we obtain an inductive limit structure
\[C_{0}\biggl{(}\prod_{v\in P^{s}_{f}(c)}K_{v}\biggr{)}\rtimes R_{c}\cong\varinjlim_{n}C_{0}(X^{(n)})\rtimes R_{c}. \tag{5}\]
For a \(\mathbb{Z}\)-module \(A\), let us define \(\bigwedge^{[\![i]\!]}A\) as the direct sum of all exterior powers of \(A\) whose degree is congruent to \(i\) modulo \(2\). Analogously, we define \(H_{[\![i]\!]}(G,\underline{\mathbb{Z}})\) as the direct sum of homology groups with corresponding degree parity.
**Theorem 5.7**.: _There is an isomorphism_
\[K_{i}\biggl{(}C_{0}\biggl{(}\prod_{v\in P^{s}_{f}(c)}K_{v}\biggr{)}\rtimes R_{c}\biggr{)}\cong\varinjlim\bigwedge^{[\![i]\!]}\Gamma_{0},\]
_with connecting maps given by the unique extension of \(N\bigwedge^{[\![i]\!]}m_{c^{-1}}\colon\bigwedge^{[\![i]\!]}\Gamma_{1}\to \bigwedge^{[\![i]\!]}\Gamma_{0}\) as in Proposition 5.2._
Proof.: On one hand, taking \(K\)-groups is compatible with taking inductive limits of \(C^{*}\)-algebras. On the other, \(X^{(n)}\) is a transitive \(R_{c}\)-set with stabilizer equal to \(\Gamma_{n}\). Combining these and (5), we have
\[K_{i}\biggl{(}C_{0}\biggl{(}\prod_{v\in P^{s}_{f}(c)}K_{v}\biggr{)}\rtimes R_{c}\biggr{)}\cong \varinjlim_{n}K_{i}(C^{*}(\Gamma_{n})).\]
Next, let us identify \(K_{i}(C^{*}(\Gamma_{n}))\) with \(\bigwedge^{[\![i]\!]}\Gamma_{n}\). Consider the subgroup
\[\Gamma_{n}^{k}=\{a\in\Gamma_{n}\mid v\in P^{u}_{f}(c)\colon\,|a|_{v}\leq|c|_{v }^{k}\}<\Gamma_{n}.\]
We then have \(\bigcup_{k}\Gamma_{n}^{k}=\Gamma_{n}\). Each group \(\Gamma_{n}^{k}\) is finitely generated, because it is a finite union of groups isomorphic to the ring of integers \(\mathcal{O}_{K}\), which is free abelian of rank equal to the degree of \(K\). Thus \(\Gamma_{n}^{k}\), being a torsion-free and finitely generated commutative group, is also free abelian. From this we obtain
\[K_{i}(C^{*}(\Gamma_{n}^{k}))\cong\bigwedge^{[\![i]\!]}\Gamma_{n}^{k}.\]
As both sides are compatible with colimit, we obtain
\[K_{i}(C^{*}(\Gamma_{n}))\cong\bigwedge^{[\![i]\!]}\Gamma_{n}.\]
It remains to identify the connecting map
\[K_{i}(C^{*}(\Gamma_{n}))\to K_{i}(C^{*}(\Gamma_{n+1})). \tag{6}\]
Generally, suppose one has a finite index inclusion of discrete groups \(H<G\). Consider the \(G\)-equivariant map \(\mathbb{C}\to C(G/H)\), and the induced map \(C^{*}_{r}(G)\to C(G/H)\rtimes_{r}G\). Then the induced map on the \(K\)-groups of the reduced group \(\mathrm{C}^{*}\)-algebras
\[K_{\bullet}(C^{*}_{r}(G))\to K_{\bullet}(C(G/H)\rtimes_{r}G)\cong K_{\bullet} (C^{*}_{r}(H))\]
can be computed as follows. We fix a system of representatives \((g_{i})_{i=1}^{N}\) of \(G/H\). Then we get an isomorphism
\[C(G/H)\rtimes_{r}G\cong M_{N}(\mathbb{C})\otimes C^{*}_{r}(H)\]
that sends \(\delta_{g_{i}}\in C(G/H)\) to \(e_{i,i}\in M_{N}\) and \(\lambda_{g}\in C^{*}_{r}(G)\) to \(\sum_{i}e_{\sigma_{g}(i),i}\otimes\delta_{h(i,g)}\), where \(\sigma_{g}(i)\) and \(h(i,g)\) are characterized by the relation \(gg_{i}=g_{\sigma_{g}(i)}h(i,g)\).
If we use \(G=\Gamma_{n}\) and \(H=\Gamma_{n+1}\), by commutativity, the image of \(\lambda_{g}\in C^{*}_{r}(\Gamma_{n+1})\) under the map is a diagonal matrix embedding of \(g\). Thus, (6) is an extension of the multiplication by \(N\) on \(K_{\bullet}(C^{*}(\Gamma_{n+1}))\). Taking into account the isomorphisms \(\bigwedge^{[\![i]\!]}\Gamma_{n}\cong\bigwedge^{[\![i]\!]}\Gamma_{0}\) given by composition of \(m_{c^{-1}}\), we obtain the claim.
**Corollary 5.8**.: _The HK conjecture holds for the groupoid \(R_{c}\ltimes\prod_{v\in P^{s}_{f}(c)}K_{v}\)._
_Remark 5.9_.: While the original HK-conjecture was formulated in the setting of ample groupoids, it has an obvious generalization to the setting of etale groupoids. Unstable and stable groupoids of Smale spaces, reduced to transversal subspaces, give rich examples of such groupoids. While there are counterexamples to the conjecture (in the ample case) [10, 11], Corollary 5.8 holds for the stable and unstable groupoids of the Smale space \((Y^{(c)},\phi)\). In this case Propositions 5.1 and 5.6 can be viewed as a reduction step to the ample case.
### Degree \(1\) case
The case of \(K(c)=\mathbb{Q}\), i.e., \(c\in\mathbb{Q}\), is considered by Burke and Putnam [1], where they computed the stable and unstable Putnam homology groups. To simplify the presentation let us consider the case of two prime factors, as follows.
Let \(p<q\) be two prime numbers, and put \(M=pq\), \(c=\frac{q}{p}\), \(R_{c}=\mathbb{Z}[1/M]\). Our Smale space is given by the compact space
\[X=Y^{(c)}=(\mathbb{R}\times\mathbb{Q}_{p}\times\mathbb{Q}_{q})/R_{c}\]
and the self-homeomorphism \(\phi([x,y,z])=[cx,cy,cz]\). Consequently the etale groupoid \(G=R^{u}(X,\phi)|_{X^{s}(x_{0})}\) is the transformation groupoid \(R_{c}\ltimes\mathbb{Q}_{q}\), while \(G^{\prime}=R^{s}(X,\phi)|_{X^{u}(x_{0})}\) is the transformation groupoid \(R_{c}\ltimes(\mathbb{R}\times\mathbb{Q}_{p})\).
Now the subgroup \(\Gamma_{n}<R_{c}\) is given by
\[\Gamma_{n}=\bigg{\{}\frac{aq^{n}}{p^{k}}\mid a\in\mathbb{Z},k\in\mathbb{N} \bigg{\}},\]
hence we have \(N=q\). The exterior algebra \(\bigwedge^{\bullet}\Gamma_{0}\) is
\[\bigwedge^{0}\Gamma_{0} =\mathbb{Z}, \bigwedge^{1}\Gamma_{0} =\mathbb{Z}\bigg{[}\frac{1}{p}\bigg{]}, \bigwedge^{k}\Gamma_{0} =0\quad(k>1).\]
The map \(N\bigwedge^{k}m_{c^{-1}}\) is multiplication by \(q\) for \(k=0\) and multiplication by \(p\) for \(k=1\). We thus obtain
\[H_{0}(R_{c}\ltimes\mathbb{Q}_{q})\cong\mathbb{Z}\bigg{[}\frac{1}{q}\bigg{]}, \qquad H_{1}(R_{c}\ltimes\mathbb{Q}_{q})\cong\mathbb{Z}\bigg{[}\frac{1}{p} \bigg{]},\qquad H_{k}(R_{c}\ltimes\mathbb{Q}_{q})=0\quad(k\neq 0,1). \tag{7}\]
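Concretely, the multiplication factors come from \(c^{-1}=p/q\): on \(\Gamma_{0}\otimes\mathbb{Q}=\mathbb{Q}\) we have
\[N\bigwedge^{0}m_{c^{-1}}=q\cdot 1=q,\qquad N\bigwedge^{1}m_{c^{-1}}=q\cdot\frac{p}{q}=p,\]
and the inductive limits along these maps are the rings displayed in (7).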
This gives a description of the groupoid homology \(H_{k}(G,\underline{\mathbb{Z}})\).
As for the stable equivalence relation, Proposition 5.1 gives
\[H_{k}(R_{c}\ltimes(\mathbb{R}\times\mathbb{Q}_{p}))\cong H_{k+1}(R_{c}\ltimes \mathbb{Q}_{p}).\]
This, together with (7) (switching the role of \(p\) and \(q\)) gives
\[H_{-1}(G^{\prime},\underline{\mathbb{Z}})\cong\mathbb{Z}\bigg{[}\frac{1}{p} \bigg{]}, \qquad H_{0}(G^{\prime},\underline{\mathbb{Z}})\cong\mathbb{Z}\bigg{[}\frac{ 1}{q}\bigg{]}, \qquad H_{k}(G^{\prime},\underline{\mathbb{Z}})=0\quad(k\neq 0,-1).\]
_Remark 5.10_.: In view of Remark 2.3, the computation of Putnam's stable homology for \((Y^{(c)},\phi)\) with \(c\in\mathbb{Q}\) carried out in [1] already gives the formulas (7). The method presented here is arguably more direct and does not require a deep understanding of the dynamics of the system; on the other hand, the method in [1] is intimately tied to Markov partitions and allows one to understand the system in terms of symbolic coding.
### Degree \(2\), imaginary case
Next consider the case of quadratic extensions. To illustrate the situation associated with prime ideals that are not singly generated, we look at the case \(K=\mathbb{Q}(\sqrt{-5})\), so that \(\mathcal{O}_{K}=\mathbb{Z}[\sqrt{-5}]\) is not a principal ideal domain.
To achieve this, let us take
\[c=\frac{1+\sqrt{-5}}{2}.\]
The relevant finite places of \(K\) correspond to the prime ideals
\[\mathfrak{p}_{1} =(2,1+\sqrt{-5}), \mathfrak{p}_{2} =(3,1+\sqrt{-5}).\]
For \(i=1,2\), we write \(v_{i}\) for the corresponding place, \(\left|a\right|_{i}\) for the absolute value, \(K_{i}\) for the completed local field, \(O_{i}<K_{i}\) for the local ring, and \(\pi_{i}\in O_{i}\) for the uniformizer. Thus, \(x\in\mathcal{O}_{K}\) has absolute value \(\left|x\right|_{i}=\left|\pi_{i}\right|_{i}^{m}\) if and only if the principal ideal \((x)<\mathcal{O}_{K}\) decomposes as
\[(x)=\mathfrak{p}_{i}^{m}\mathfrak{a}\]
for some ideal \(\mathfrak{a}<\mathcal{O}_{K}\) such that \(\mathfrak{p}_{i}\not|\,\mathfrak{a}\). Then
\[(2)=\mathfrak{p}_{1}^{2}, (1+\sqrt{-5})=\mathfrak{p}_{1}\mathfrak{p}_{2}\]
imply that we have
\[\left|c\right|_{1} =\left|\pi_{1}\right|_{1}^{-1}, \left|c\right|_{2} =\left|\pi_{2}\right|_{2}.\]
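Writing \(v_{i}\) also for the corresponding additive valuation, this is the computation
\[v_{1}(c)=v_{1}(1+\sqrt{-5})-v_{1}(2)=1-2=-1,\qquad v_{2}(c)=v_{2}(1+\sqrt{-5})-v_{2}(2)=1-0=1,\]
so \(|c|_{1}>1\) while \(|c|_{2}<1\).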
Writing \(v_{\infty}\) for the unique infinite place, we have
\[P^{s}(c) =\{v_{2}\}, P^{u}(c) =\{v_{\infty},v_{1}\}.\]
From this we get
\[R_{c}=\bigg{\{}\frac{x}{2^{k}(1+\sqrt{-5})^{k}}\;\bigg{|}\;x\in\mathcal{O}_{K },k\in\mathbb{N}\bigg{\}}.\]
(We cannot have \(3\) in the denominator as the prime ideal \(\mathfrak{p}_{3}=(3,1-\sqrt{-5})\) satisfies \((3)=\mathfrak{p}_{2}\mathfrak{p}_{3}\), hence \(\frac{1}{3}\) would have absolute value bigger than \(1\) for the corresponding absolute value.) Thus, the Smale space \((Y^{(c)},\phi)\) is given by
\[Y^{(c)}=(\mathbb{C}\times K_{1}\times K_{2})/R_{c},\]
and the diagonal action of \(c\) gives
\[R^{u}(X,\phi)|_{X^{s}(x_{0})} \cong R_{c}\ltimes K_{2}, R^{s}(X,\phi)|_{X^{u}(x_{0})} \cong R_{c}\ltimes(\mathbb{C}\times K_{1})\]
for \(x_{0}=(0,0,0)\).
To compute its groupoid homology for unstable groupoid, the general constructions from above give
\[\Gamma_{0} =\bigg{\{}\frac{x}{2^{k}}\;\bigg{|}\;x\in\mathcal{O}_{K},k\in \mathbb{N}\bigg{\}}, \Gamma_{1} =\bigg{\{}\frac{x}{2^{k}}\;\bigg{|}\;x\in\mathfrak{p}_{2},k\in \mathbb{N}\bigg{\}}.\]
Thus, \(\Gamma_{0}/\Gamma_{1}\cong\mathcal{O}_{K}/\mathfrak{p}_{2}\) and we have \(N=[\Gamma_{0}:\Gamma_{1}]=3\). As for the exterior algebra, we have
\[\bigwedge^{2}\Gamma_{0} =\bigg{\{}\frac{1\wedge\sqrt{-5}}{2^{j}}k\;\bigg{|}\;j\in \mathbb{N},k\in\mathbb{Z}\bigg{\}}, \bigwedge^{k}\Gamma_{0} =0\quad(k>2).\]
Similarly, we have
\[\bigwedge^{2}\Gamma_{1} =\bigg{\{}\frac{3\wedge(1+\sqrt{-5})}{2^{j}}k\;\bigg{|}\;j\in \mathbb{N},k\in\mathbb{Z}\bigg{\}}\]
as \(\mathfrak{p}_{2}\) is free of rank \(2\) as a \(\mathbb{Z}\)-module, with basis \(3\) and \(1+\sqrt{-5}\).
Next let us identify the extensions of
\[N\bigwedge^{k}m_{c^{-1}}\colon\bigwedge^{k}\Gamma_{1}\to\bigwedge^{k}\Gamma_{0}\]
to \(\bigwedge^{k}\Gamma_{0}\), and the inductive limit with respect to this map. When \(k=0\), by \(N=3\) it is the multiplication by \(3\) on \(\mathbb{Z}\), hence the limit is \(\mathbb{Z}[\frac{1}{3}]\). When \(k=1\), it is multiplication by \(6/(1+\sqrt{-5})=1-\sqrt{-5}\), hence the limit is
\[\varinjlim\Gamma_{0}=\mathbb{Z}\bigg{[}\sqrt{-5},\frac{1}{2},\frac{1}{1-\sqrt{ -5}}\bigg{]}.\]
When \(k=2\), this map is
\[\bigwedge^{2}\Gamma_{1}\to\bigwedge^{2}\Gamma_{0}, \frac{3\wedge(1+\sqrt{-5})}{2^{j}}k\mapsto 3\frac{(1-\sqrt{-5}) \wedge 2}{2^{j}}k=6\frac{1\wedge\sqrt{-5}}{2^{j}}k.\]
On the other hand, in \(\bigwedge^{2}\Gamma_{0}\) we have
\[\frac{3\wedge(1+\sqrt{-5})}{2^{j}}k=3\frac{1\wedge\sqrt{-5}}{2^{j}}k.\]
Hence the above map extends to multiplication by \(2\) on \(\bigwedge^{2}\Gamma_{0}\). Thus, the limit is given by
\[\varinjlim\bigwedge^{2}\Gamma_{0}\cong\mathbb{Z}\bigg{[}\frac{1}{2}\bigg{]}.\]
Summarizing, the etale groupoid \(G=R^{u}(X,\phi)|_{X^{s}(x_{0})}\) has the integral homology groups
\[H_{0}(G)\cong\mathbb{Z}\bigg{[}\frac{1}{3}\bigg{]},\ \ H_{1}(G)\cong\mathbb{Z} \bigg{[}\sqrt{-5},\frac{1}{2},\frac{1}{1-\sqrt{-5}}\bigg{]},\ \ H_{2}(G)\cong\mathbb{Z}\bigg{[}\frac{1}{2}\bigg{]},\ \ H_{k}(G)=0\ \ \ (k\neq 0,1,2),\]
on which the induced action of \(\phi^{-1}\) is respectively by multiplication by \(3\), \(1-\sqrt{-5}\), and \(2\).
The homology of \(G^{\prime}=R^{s}(X,\phi)|_{X^{u}(x_{0})}\) can be computed in a similar way as above, combined with Proposition 5.1, and we get
\[H_{-2}(G^{\prime})\cong\mathbb{Z}\bigg{[}\frac{1}{2}\bigg{]},\qquad H_{-1}(G^{ \prime})\cong\mathbb{Z}\bigg{[}\sqrt{-5},\frac{1}{2},\frac{1}{1+\sqrt{-5}} \bigg{]},\qquad H_{0}(G^{\prime})\cong\mathbb{Z}\bigg{[}\frac{1}{6}\bigg{]},\]
on which the induced action of \(\phi\) is respectively by multiplication by \(2\), \(1+\sqrt{-5}\), and \(3\), and \(H_{k}(G^{\prime})=0\) for \(k\neq 0,-1,-2\).
### Degree \(2\), real case
Let us next consider a case with a nontrivial unit. We look at a unit in a real quadratic field.
Concretely, let us take
\[c=\frac{1+\sqrt{5}}{2},\]
hence \(K=\mathbb{Q}(\sqrt{5})\). In this case we find that \(c\) is invertible in \(\mathcal{O}_{K}=\mathbb{Z}[c]\), with \(-c^{-1}\) being the Galois conjugate of \(c\) (the nontrivial automorphism of \(K\) is given by \(a+b\sqrt{5}\mapsto a-b\sqrt{5}\)).
Then the relevant places are the Archimedean places \(v_{\infty}\) and \(v^{\prime}_{\infty}\), for which the corresponding absolute values are the usual one and its twist by the automorphism of \(K\) that sends \(c\) to \(-c^{-1}\). Thus, we have \(R_{c}=\mathcal{O}_{K}\), which is isomorphic to \(\mathbb{Z}^{2}\) as a commutative group. Then its Pontryagin dual \(Y^{(c)}\) can be identified with \(\mathbb{T}^{2}\).
As a Smale space, the corresponding homeomorphism \(\phi\) is induced by the matrix presentation of multiplication by \(c\) on \(\mathcal{O}_{K}\cong\mathbb{Z}^{2}\), that is,
\[\begin{bmatrix}0&1\\ 1&1\end{bmatrix}.\]
This way we obtain the hyperbolic toral automorphism as in Example 3.3.
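As a quick numerical sanity check (a sketch only, using the basis \(\{1,c\}\) of \(\mathcal{O}_{K}\), in which \(c\cdot 1=c\) and \(c\cdot c=1+c\)), this matrix indeed has eigenvalues \(c\) and \(-c^{-1}\):

```python
import numpy as np

# Matrix of multiplication by c = (1 + sqrt(5))/2 on Z[c] in the basis {1, c}.
A = np.array([[0, 1],
              [1, 1]])
c = (1 + np.sqrt(5)) / 2
print(np.sort(np.linalg.eigvals(A)))  # approx [-0.618, 1.618]
print(-1 / c, c)                      # the Galois conjugate pair -c^{-1}, c
```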
In this case, Theorem 3.1 for \(\tilde{G}=R^{u}(X,\phi)\) and \(G=R^{u}(X,\phi)|_{X^{s}(x_{0})}\) gives
\[H_{k}(G)\cong\begin{cases}\mathbb{Z}&(k=-1,1)\\ \mathcal{O}_{K}\cong\mathbb{Z}^{2}&(k=0)\\ 0&(\text{otherwise})\end{cases}\]
with \(\phi^{-1}\) acting by \(\pm 1\) for \(k=\pm 1\), and by \(-c^{-1}\) on \(\mathcal{O}_{K}\) for \(k=0\). We also have an analogous presentation of the homology for \(G^{\prime}=R^{s}(X,\phi)|_{X^{u}(x_{0})}\).
_Remark 5.11_.: In general, the conjugacy classes of hyperbolic matrices in \(\operatorname{SL}_{n}(\mathbb{Z})\) with distinct eigenvalues bijectively correspond to the ideal classes in the integer rings of certain totally real fields, see [1, 1, 2].
|
2304.14158 | Dissipation and the information content of the deviation from
hamiltonian dynamics | We explain a dissipative version of hamiltonian mechanics, based on the
information content of the deviation from hamiltonian dynamics. From this
formulation we deduce minimal dissipation principles, dynamical inclusions, or
constrained evolution with hamiltonian drift reformulations. Among applications
we recover a dynamics generalization of Mielke et al quasistatic
rate-independent processes.
This article gives a clear and unitary presentation of the theory of
hamiltonian inclusions with convex dissipation or symplectic
Brezis-Ekeland-Nayroles principle, presented under various conventions first in
arXiv:0810.1419, then in arXiv:1408.3102 and, for the appearance of
bipotentials in relation to the symplectic duality, in arXiv:1902.04598v1. | Marius Buliga | 2023-04-27T12:58:44Z | http://arxiv.org/abs/2304.14158v2 | # Dissipation and the information content of the deviation from hamiltonian dynamics
###### Abstract
We explain a dissipative version of hamiltonian mechanics, based on the information content of the deviation from hamiltonian dynamics. From this formulation we deduce minimal dissipation principles, dynamical inclusions, or constrained evolution with hamiltonian drift reformulations. Among applications we recover a dynamics generalization of Mielke et al quasistatic rate-independent processes.
This article gives a clear and unitary presentation of the theory of hamiltonian inclusions with convex dissipation or symplectic Brezis-Ekeland-Nayroles principle, presented under various conventions first in [3] arXiv:0810.1419, then in [4] arXiv:1408.3102 and, for the appearance of bipotentials in relation to the symplectic duality, in [2] arXiv:1902.04598v1.
## 1 General notations and hamiltonian dynamics
In hamiltonian mechanics, a physical system is described by a pair \(z=(q,p)\), where \(q\in Q\) is a state vector and \(p\in P\) is a momentum vector. The spaces \(Q\) and \(P\) are topological, locally convex, real vector spaces, with a duality
\[(q,p)\mapsto\langle q,p\rangle\in\mathbb{R}\]
The duality is such that for any linear and continuous \(A:Q\to\mathbb{R}\) there is a unique \(p\in P\) such that for all \(q\in Q\) we have \(A(q)=\langle q,p\rangle\). The same is true for the other side: for any linear and continuous \(B:P\to\mathbb{R}\) there is a unique \(q\in Q\) such that for all \(p\in P\) we have \(B(p)=\langle q,p\rangle\).
On the space \(Q\times P\) there is a symplectic form, which can be seen as a duality of \(Q\times P\) with itself, defined by
\[\omega(z^{\prime},z")=\langle q",p^{\prime}\rangle-\langle q^{\prime},p"\rangle\]
The dynamics of the physical system is described via an energy function \(H=H(q,p,t)\), called the hamiltonian of the system. We suppose that \(H\) is a differentiable function
\[H:Q\times P\times\mathbb{R}\to\mathbb{R}\]
The partial derivatives of \(H\) at a point \(z=(q,p)\in Q\times P\) are defined via the duality between \(Q\) and \(P\), so that the partial derivative of \(H\) with respect to \(p\) is an element of \(Q\), with
\[\langle\frac{\partial H}{\partial p}(q,p,t),p^{\prime}\rangle\,=\,\lim_{ \varepsilon\to 0}\frac{1}{\varepsilon}\left(H\left(q,p+\varepsilon p^{\prime},t \right)-H(q,p,t)\right)\]
for any \(p^{\prime}\in P\), and the partial derivative of \(H\) with respect to \(q\) is an element of \(P\), given by
\[\langle q^{\prime},\frac{\partial H}{\partial q}(q,p,t)\rangle\,=\,\lim_{ \varepsilon\to 0}\frac{1}{\varepsilon}\left(H\left(q+\varepsilon q^{ \prime},p,t\right)-H(q,p,t)\right)\]
for any \(q^{\prime}\in Q\). The derivative of \(H\) with respect to the time \(t\) is a real number
\[\frac{\partial H}{\partial t}(q,p,t)\,=\,\lim_{\varepsilon\to 0}\frac{1}{ \varepsilon}\left(H\left(q,p,t+\varepsilon\right)-H(q,p,t)\right)\]
The symplectic gradient of \(H\), denoted by \(XH(q,p,t)\in Q\times P\), is
\[XH(q,p,t)=\left(\frac{\partial H}{\partial p}(q,p,t),-\frac{\partial H}{ \partial q}(q,p,t)\right) \tag{1}\]
The evolution equation for hamiltonian dynamics is:
\[\dot{c}(t)\,=\,XH(c(t),t)\]
or, in a more clear form
\[\left\{\begin{array}{rcl}\dot{q}&=&\frac{\partial H}{\partial p}(q,p,t)\\ \dot{p}&=&-\frac{\partial H}{\partial q}(q,p,t)\end{array}\right. \tag{2}\]
where \(\dot{q}\), \(\dot{p}\) denote derivatives with respect to time of \(q\), resp. \(p\).
Hamiltonian dynamics is conservative, in the following sense. Consider an evolution curve \(z(t)=(q(t),p(t))\) and compute
\[\frac{d}{dt}H(z(t),t)-\frac{\partial H}{\partial t}(z(t),t)\,=\,\langle\frac {\partial H}{\partial p}(q,p,t),\dot{p}\rangle+\langle\dot{q},\frac{\partial H }{\partial q}(q,p,t)\rangle\]
From (2) we obtain:
\[\langle\frac{\partial H}{\partial p}(q,p,t),\dot{p}\rangle+\langle\dot{q},\frac{\partial H}{\partial q}(q,p,t)\rangle\,=\,\langle\frac{\partial H}{\partial p}(q,p,t),-\frac{\partial H}{\partial q}(q,p,t)\rangle+\langle\frac{\partial H}{\partial p}(q,p,t),\frac{\partial H}{\partial q}(q,p,t)\rangle\,=\,0\]
Therefore
\[\frac{d}{dt}H(z(t),t)-\frac{\partial H}{\partial t}(z(t),t)\,=\,0\]
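A minimal numerical illustration of this conservation law (an illustration only, assuming the textbook hamiltonian \(H=p^{2}/2m+kq^{2}/2\) with \(Q=P=\mathbb{R}\)): integrating (2) with a symplectic Euler step keeps \(H\) nearly constant along the discrete trajectory.

```python
# Symplectic Euler integration of (2) for H(q, p) = p^2/(2m) + k q^2/2;
# the values of m, k and the time step dt are arbitrary illustrative choices.
m, k, dt = 1.0, 1.0, 1e-3
q, p = 1.0, 0.0
H0 = p**2 / (2 * m) + k * q**2 / 2
for _ in range(100_000):
    p -= dt * k * q    # dp/dt = -dH/dq
    q += dt * p / m    # dq/dt =  dH/dp
H = p**2 / (2 * m) + k * q**2 / 2
print(abs(H - H0))     # stays small: the flow conserves the energy
```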
## 2 Likelihoods
In the symplectic space \(Q\times P\) we introduce the maximal likelihood between two vectors as:
\[\pi_{max}(z^{\prime},z")\,=\,e^{\min\{0,\omega(z^{\prime},z")\}} \tag{3}\]
This is a number in \((0,1]\): for example, when \(z^{\prime}\) and \(z"\) are collinear their maximal likelihood is \(1\), and if \(\omega(z^{\prime},z")>0\) then again the maximal likelihood is \(1\).
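For instance, on \(Q=P=\mathbb{R}\) (a toy sketch with scalar \(q\) and \(p\)), the maximal likelihood (3) can be evaluated directly:

```python
import math

# Maximal likelihood (3) on R x R, where omega(z', z'') = <q'', p'> - <q', p''>.
def omega(z1, z2):
    (q1, p1), (q2, p2) = z1, z2
    return q2 * p1 - q1 * p2

def pi_max(z1, z2):
    return math.exp(min(0.0, omega(z1, z2)))

print(pi_max((1.0, 0.0), (2.0, 0.0)))  # collinear vectors: omega = 0, value 1
print(pi_max((1.0, 0.0), (0.0, 1.0)))  # omega = -1, value exp(-1)
```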
**Definition 2.1**: _A likelihood function is a function \(\pi:(Q\times P)^{3}\to[0,1]\) with the properties: for any \(z,z^{\prime},z"\in Q\times P\)_
1. _if either of the maxima_ \(\max\limits_{v\in Q\times P}\pi(z,z^{\prime},v)\,,\,\max\limits_{w\in Q\times P}\pi(z,w,z")\) _exists, then it is equal to_ \(0\) _or_ \(1\)_
2. _the functions_ \(v\in Q\times P\mapsto-\ln\pi(z,z^{\prime},v)\,,\,w\in Q\times P\mapsto-\ln\pi(z,w,z")\) _are convex and lower semicontinuous (lsc)._
_The information content of the likelihood function \(\pi\) is \(I:(Q\times P)^{3}\to[0,+\infty]\),_
\[I(z,z^{\prime},z")=-\ln\pi(z,z^{\prime},z")\]
_with the convention that \(-\ln 0=+\infty\)._
_A likelihood \(\pi:(Q\times P)^{3}\to[0,1]\) is tempered if moreover for any \(z,z^{\prime},z"\in Q\times P\)_
1. \(\pi(z,z^{\prime},z")\leq\pi_{max}(z^{\prime},z")\)__
Clearly, the maximal likelihood is a tempered likelihood function according to definition 2.1, but we shall see that many other interesting likelihoods exist. In order to describe them we need to introduce some convex analysis notions, the classical ones from Moreau [12], adapted to the symplectic space \(Q\times P\).
Let \(d:(Q\times P)^{2}\to\mathbb{R}\) be a duality of \(Q\times P\) with itself.
**Definition 2.2**: _For a function \(f:Q\times P\to\mathbb{R}\cup\{+\infty\}\) and a point \((q,p)\in Q\times P\), the left subgradient at \((q,p)\) of \(f\) with respect to the duality \(d\) is the set \(\partial_{d}^{L}f(q,p)\) of all \((q^{\prime},p^{\prime})\in Q\times P\) with the property that for any \((q",p")\in Q\times P\)_
\[f(q,p)+d((q^{\prime},p^{\prime}),(q",p")-(q,p))\,\leq\,f(q",p")\]
_The left polar of \(f\) with respect to the duality \(d\) is the function \(f_{d}^{*L}:Q\times P\to\mathbb{R}\cup\{+\infty\}\),_
\[f_{d}^{*L}(q",p")\,=\,\sup\big{\{}d((q",p"),(q^{\prime},p^{\prime}))-f(q^{ \prime},p^{\prime})\,:\,(q^{\prime},p^{\prime})\in Q\times P\big{\}}\]
_Likewise, the right subgradient at \((q,p)\) of \(f\) with respect to the duality \(d\) is the set \(\partial_{d}^{R}f(q,p)\) of all \((q^{\prime},p^{\prime})\in Q\times P\) with the property that for any \((q",p")\in Q\times P\)_
\[f(q,p)+d((q",p")-(q,p),(q^{\prime},p^{\prime}))\,\leq\,f(q",p")\]
_The right polar of \(f\) with respect to the duality \(d\) is the function \(f_{d}^{*R}:Q\times P\to\mathbb{R}\cup\{+\infty\}\),_
\[f_{d}^{*R}(q",p")\,=\,\sup\big{\{}d((q^{\prime},p^{\prime}),(q",p"))-f(q^{ \prime},p^{\prime})\,:\,(q^{\prime},p^{\prime})\in Q\times P\big{\}}\]
By unwinding this definition we arrive to the following Fenchel inequalities theorem.
**Theorem 2.3**: _(Fenchel inequalities.) Let \(f:Q\times P\to\mathbb{R}\cup\{+\infty\}\) be convex and lsc. Then for any \((q^{\prime},p^{\prime}),(q",p")\in Q\times P\)_
\[f(q^{\prime},p^{\prime})+f_{d}^{*R}(q",p")\,\geq\,d((q^{\prime},p^{\prime}),(q",p"))\]
_The function \(f_{d}^{*R}\) is convex lsc. The equality_
\[f(q^{\prime},p^{\prime})+f_{d}^{*R}(q",p")\,=\,d((q^{\prime},p^{\prime}),(q",p"))\]
_is equivalent with_
\[(q",p")\in\partial_{d}^{R}f(q^{\prime},p^{\prime})\]
_Likewise, for any \((q^{\prime},p^{\prime}),(q",p")\in Q\times P\)_
\[f_{d}^{*L}(q^{\prime},p^{\prime})+f(q",p")\,\geq\,d((q^{\prime},p^{\prime}),(q",p"))\]
_The function \(f_{d}^{*L}\) is convex lsc. The equality_
\[f_{d}^{*L}(q^{\prime},p^{\prime})+f(q",p")\,=\,d((q^{\prime},p^{\prime}),(q", p"))\]
_is equivalent with_
\[(q^{\prime},p^{\prime})\in\partial_{d}^{L}f(q",p")\]
_Finally, \(g=f_{d}^{*R}\) implies \(f=g_{d}^{*L}\), and \(g=f_{d}^{*L}\) implies \(f=g_{d}^{*R}\)._
This gives us a way to construct likelihoods.
**Theorem 2.4**: _For any function \(f:Q\times P\to\mathbb{R}\cup\{+\infty\}\) which is convex and lsc, and for any duality \(d:\left(Q\times P\right)^{2}\to\mathbb{R}\) of \(Q\times P\) with itself, the following functions_
\[\pi_{f}^{R}(z,z^{\prime},z")\,=\,\exp\left(d(z^{\prime},z")-f(z^{\prime})-f_{ d}^{*R}(z")\right)\]
\[\pi_{f}^{L}(z,z^{\prime},z")\,=\,\exp\left(d(z^{\prime},z")-f_{d}^{*L}(z^{ \prime})-f(z")\right)\]
_are likelihoods in the sense of definition 2.1._
**Proof.** Let \(I_{f}^{R}\) be the information content of the likelihood \(\pi_{f}^{R}\):
\[I_{f}^{R}(z,z^{\prime},z")\,=\,f(z^{\prime})+f_{d}^{*R}(z")-d(z^{\prime},z")\]
From the definition we see that \(I_{f}^{R}(z,z^{\prime},z")\) is convex and lsc in each of the arguments \(z^{\prime}\) and \(z"\). The Fenchel inequality implies that \(I_{f}^{R}(z,z^{\prime},z")\in[0,+\infty]\), equivalently that \(\pi_{f}^{R}(z,z^{\prime},z")\in[0,1]\). If there exists \(\max\limits_{w\in Q\times P}\pi_{f}^{R}(z,w,z")\) and it is different from \(0\), then there exists a \(z^{\prime}\in Q\times P\) such that
\[\min\limits_{w\in Q\times P}\left(f(w)+f_{d}^{*R}(z")-d(w,z")\right)\,=\,f(z^ {\prime})+f_{d}^{*R}(z")-d(z^{\prime},z")\in\mathbb{R}\]
This implies that for any \(w\in Q\times P\) we have
\[f(w)-d(w,z")\,\geq\,f(z^{\prime})-d(z^{\prime},z")\]
therefore \(z"\in\partial_{d}^{R}f(z^{\prime})\), which by Fenchel equality implies \(I_{f}^{R}(z,z^{\prime},z")=0\). Therefore the \(\max\limits_{w\in Q\times P}\pi_{f}^{R}(z,w,z")\) equals \(1\).
Finally, let's denote \(g=f_{d}^{*R}\). Then \(f=g_{d}^{*L}\), so
\[\pi_{f}^{R}(z,z^{\prime},z")\,=\,\pi_{g}^{L}(z,z^{\prime},z")\]
which allows us to end the proof, simply by repeating the same reasoning for \(g\). \(\Box\)
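These polars can also be probed numerically. The sketch below (an illustration only) approximates the right polar of \(f(z)=\|z\|^{2}/2\) on \(\mathbb{R}^{2}\) with respect to the symplectic duality \(d=\omega\) by a brute-force supremum over a grid, and recovers the fact that this \(f\) equals its own symplectic polar:

```python
import numpy as np

# Right polar f*R(z'') = sup_{z'} [ omega(z', z'') - f(z') ] for
# f(z) = |z|^2 / 2 on R^2, approximated by a grid search over z'.
def omega(z1, z2):
    return z2[0] * z1[1] - z1[0] * z2[1]

qs = np.linspace(-5.0, 5.0, 401)
grid = np.array([(a, b) for a in qs for b in qs])
f_vals = 0.5 * (grid ** 2).sum(axis=1)

def polar_R(z2):
    return max(omega(z1, z2) - fv for z1, fv in zip(grid, f_vals))

z2 = (1.0, 2.0)
print(polar_R(z2))                  # approx 2.5
print(0.5 * (z2[0]**2 + z2[1]**2))  # 2.5: f coincides with its polar
```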
For a convex, lsc function \(f:Q\times P\to\mathbb{R}\cup\{+\infty\}\), let us denote by
\[b_{f}(z^{\prime},z")\,=\,f(z^{\prime})+f_{d}^{*R}(z")\]
This is called the separable bipotential associated to \(f\). Bipotentials were introduced in [14] as a convex analysis notion which is well adapted for applications to non-associated constitutive laws. Bipotentials were used in soil mechanics, plasticity, damage or friction. In all these applications the dualities considered were among static variables, but the definition and theoretical results about bipotentials do apply for any duality. For the theory of bipotentials and applications in quasistatic mechanics, see the review paper [6]. In [3] the symplectic duality was used for the first time and the hamiltonian inclusions with dissipation were introduced and applied to dynamic damage mechanics. We showed there that we obtain a generalization in dynamics of Mielke et al theory of rate-independent processes [9], [10], [11]. Later this was continued in [4] where separable bipotentials with respect to the symplectic duality were used. The corresponding subgradients and polars were called "symplectic" and the hamiltonian inclusions with dissipation were reformulated as the Symplectic Brezis-Ekeland-Nayroles principle (SBEN) and used to show that in the quasistatic approximation we can recover classical variational principles of Brezis-Ekeland [1] and Nayroles [13].
In our context, for a space \(Q\times P\) which is in duality \(d:(Q\times P)^{2}\to\mathbb{R}\) with itself, bipotentials have the following definition.
**Definition 2.5**: _A function \(b:\left(Q\times P\right)^{2}\to\mathbb{R}\cup\{+\infty\}\) is a bipotential if:_
* \(b(z^{\prime},z")\geq d(z^{\prime},z")\) _for any_ \(z^{\prime},z"\in Q\times P\)__
* \(b(z^{\prime},z")=d(z^{\prime},z")\) _if and only if_ \(z"\in\partial_{d}^{R}b(\cdot,z")(z^{\prime})\) _if and only if_ \(z^{\prime}\in\partial_{d}^{L}b(z^{\prime},\cdot)(z")\)__
Likelihoods are related to bipotentials. The relation has been noted before, where information contents of likelihoods appear as syncs, definition 2.3 [7]. The same was first observed in relations (51), (52) [8]. The proof of the following theorem is the same as the one of proposition 2.4 [7], only adapted to the notations of the present article.
**Theorem 2.6**: _Let \(\pi:(Q\times P)^{3}\to[0,1]\) be a function and \(d:\left(Q\times P\right)^{2}\to\mathbb{R}\) be a duality of \(Q\times P\) with itself. Denote by \(I=-\ln\pi\) the information content of the function \(\pi\), and by_
\[b_{d}(z,z^{\prime},z")=I(z,z^{\prime},z")+d(z^{\prime},z")\]
_The function \(\pi\) is a likelihood if and only if the function \(b_{d}(z,\cdot,\cdot)\) is a bipotential with respect to the duality \(d\)._
As we can see, while likelihoods are independent of dualities, bipotentials are relative to a duality. We can easily transform a bipotential \(b\) with respect to the duality \(d\) into another bipotential \(b^{\prime}\) with respect to another duality \(d^{\prime}\), by the formula:
\[b^{\prime}(z^{\prime},z")-d^{\prime}(z^{\prime},z")\,=\,b(z^{\prime},z")-d(z^{ \prime},z")\]
For the particular duality \(\omega\), the symplectic form, to any likelihood we associate its symplectic bipotential.
**Definition 2.7**: _Let \(\pi:(Q\times P)^{3}\to[0,1]\) be a likelihood. With \(I(z,z^{\prime},z")=-\ln\pi(z,z^{\prime},z")\) the information content of \(\pi\), the symplectic bipotential associated to this likelihood is_
\[b_{\omega}^{\pi}(z,z^{\prime},z")\,=\,I(z,z^{\prime},z")+\omega(z^{\prime},z")\]
_The minimal symplectic bipotential is the symplectic bipotential of the maximal likelihood \(\pi_{max}\), i.e._
\[b_{\omega}^{min}(z^{\prime},z")\,=\,\max\left\{0,\omega(z^{\prime},z")\right\}\]
A likelihood \(\pi\) is tempered if and only if for any \(z,z^{\prime},z"\in Q\times P\)
\[b_{\omega}^{\pi}(z,z^{\prime},z")\geq 0\]
or equivalently
\[b_{\omega}^{\pi}(z,z^{\prime},z")\geq b_{\omega}^{min}(z^{\prime},z")\]
## 3 Deviation from hamiltonian dynamics
We give here a dissipative modification of hamiltonian mechanics (2), which continues [2], [3], [4], [5]. In this modification we need a hamiltonian and a tempered likelihood function.
The dynamics of a dissipative physical system is obtained by the introduction of new variables, gathered in the gap vector \(\eta=(\eta_{q},\eta_{p})\in Q\times P\). We shall also need new equations, which come from the likelihood function.
**Definition 3.1**: _Given a hamiltonian \(H\) and a tempered likelihood function \(\pi\)_
\[H:Q\times P\times\mathbb{R}\to\mathbb{R}\quad,\quad\pi:\left(Q\times P\right) ^{3}\to[0,1]\]
_the dynamics of a physical system is defined by the modification of the hamiltonian dynamics equations (2)_
\[\left\{\begin{array}{lll}\dot{q}&=&\frac{\partial H}{\partial p}(q,p,t)\,+ \,\eta_{q}\\ \dot{p}&=&-\frac{\partial H}{\partial q}(q,p,t)\,+\,\eta_{p}\end{array}\right. \tag{4}\]
_together with the new equation:_
\[\pi(z,\dot{z},\eta)\,=\,1 \tag{5}\]
_with the notations \(z=(q,p),\eta=(\eta_{q},\eta_{p})\in Q\times P\), \(\dot{z}=(\dot{q},\dot{p})\in Q\times P\)._
Let's put the equation (5) in a more explicit form. For a duality \(d\), we know from theorem 2.6 that
\[b_{d}(z,z^{\prime},z")=I(z,z^{\prime},z")+d(z^{\prime},z")\]
is a bipotential with respect to \(d\), where \(I\) is the information content of the likelihood \(\pi\). The equation (5) is then equivalent with
\[\eta\in\partial_{d}^{R}b_{d}(c(t),\cdot,\eta)(\dot{c}) \tag{6}\]
which is also equivalent with
\[\dot{c}\in\partial_{d}^{L}b_{d}(c(t),\dot{c},\cdot)(\eta) \tag{7}\]
We define for any \(z\in Q\times P\) and \(t\in[0,T]\) the set \(Gap(z,t)\) of all \(z"\in Q\times P\) such that
\[z"\in\partial_{d}^{R}b_{d}(z,\cdot,z")(z"+XH(z,t)) \tag{8}\]
Remark that \(z"\in Gap(z,t)\) if and only if
\[b_{d}(z,z"+XH(z,t),z")=d(z"+XH(z,t),z")\]
We thus get the following equivalent formulations of the problem 3.1.
**Theorem 3.2**: \(t\in[0,T]\mapsto(c(t),\eta(t))\) _is a solution of 3.1 with the initial condition \(c(0)=z_{0}\) if and only if the curve \(t\in[0,T]\mapsto c(t)\) satisfies \(c(0)=z_{0}\) and any of the following are true for any \(t\in[0,T]\) :_
* _(dynamical inclusion)_ \[\dot{c}(t)\in XH(c(t),t)+\partial_{d}^{R}b_{d}(c(t),\cdot,\dot{c}(t)-XH(c(t), t))(\dot{c}(t))\] (9)
* _(dynamical inclusion)_ \[\dot{c}(t)\in\partial_{d}^{L}b_{d}(c(t),\dot{c}(t),\cdot)(\dot{c}(t)-XH(c(t),t))\] (10)
* _(constraint with hamiltonian drift)_ \[\dot{c}(t)\in XH(c(t),t)+Gap(c(t),t)\] (11)
* _(implicit evolution)_ \[b_{d}(c(t),\dot{c}(t),\dot{c}(t)-XH(c(t),t))\,=\,d(\dot{c}(t),\dot{c}(t)-XH( c(t),t))\] (12)
_and \(\eta\) is defined as:_
\[\eta(t)\,=\,\dot{c}(t)-XH(c(t),t) \tag{13}\]
Recall that for the same information content, for different dualities we obtain different bipotentials, therefore
\[b_{d}(z,z^{\prime},z")-d(z^{\prime},z")\,=\,b_{\omega}^{\pi}(z,z^{\prime},z") -\omega(z^{\prime},z")\,=\,I(z,z^{\prime},z")\]
**Definition 3.3**: _The dissipation along a curve \(t\in[0,T]\mapsto c(t)\in Q\times P\) is the functional:_
\[Diss^{\pi}(c,0,T)\,=\,\int_{0}^{T}b_{\omega}^{\pi}(c(t),\dot{c}(t),\dot{c}(t)-XH(c(t),t))\,\mathrm{d}t \tag{14}\]
_Remark that for any curve_
\[Diss^{\pi}(c,0,T)\,\geq\,0\]
_because the likelihood \(\pi\) is tempered._
In the following theorem we give the energy balance and dissipation inequalities.
**Theorem 3.4**: _Let \(t\in[0,T]\mapsto(c(t),\eta(t))\) be a solution of 3.1 with the initial condition \(c(0)=z_{0}\). Then for any \(t\in[0,T]\):_
* _(energy balance)_ \[H(c(t),t)\,=\,H(z_{0},0)+\int_{0}^{t}\frac{\partial H}{\partial t}(c(\tau), \tau)\,\,\mbox{d}\tau\,-\,Diss^{\pi}(c,0,t)\] (15)
* _(dissipation inequalities)_ \[\frac{d}{dt}H(c(t),t)-\frac{\partial H}{\partial t}(c(t),t)\,\leq\,0\] _For any curve_ \(t\in[0,T]\mapsto c^{\prime}(t)\) _which satisfies_ \(c^{\prime}(0)=c(0)\) _we have_ \[Diss^{\pi}(c^{\prime},0,t)+H(c^{\prime}(t),t)\,\geq\,Diss^{\pi}(c,0,t)+H(c(t),t)\] (16)
Proof. From theorem 3.2 (d), the curve \(t\in[0,T]\mapsto(c(t),\eta(t))\) is a solution of 3.1 with the initial condition \(c(0)=z_{0}\) if and only if \(\eta\) is given by (13) and \(c\) satisfies
\[b_{\omega}^{\pi}(c(t),\dot{c}(t),\dot{c}(t)-XH(c(t),t))-\omega(\dot{c}(t),\dot{c}(t)-XH(c(t),t))\,=\,0\]
But the same calculation as in the section about hamiltonian dynamics gives
\[-\omega(\dot{c}(t),\dot{c}(t)-XH(c(t),t))\,=\,-\omega(XH(c(t),t),\dot{c}(t))\,=\,\frac{d}{dt}H(c(t),t)-\frac{\partial H}{\partial t}(c(t),t)\]
Therefore we obtain
\[\frac{d}{dt}H(c(t),t)\,=\,\frac{\partial H}{\partial t}(c(t),t)-b_{\omega}^{\pi}(c(t),\dot{c}(t),\dot{c}(t)-XH(c(t),t))\]
We integrate this equality over \([0,t]\) and we obtain the energy balance (a).
The first dissipation inequality from (b) is a consequence of the positivity of the symplectic bipotential. In order to obtain the second inequality from (b), we introduce the information content gap functional \(G(c,0,t)\) for any curve \(t\in[0,T]\mapsto c(t)\):
\[G(c,0,t)\,=\,\int_{0}^{t}I(c(\tau),\dot{c}(\tau),\dot{c}(\tau)-XH(c(\tau), \tau))\,\,\mbox{d}\tau\]
In more detail:
\[G(c,0,t)=\int_{0}^{t}I\left(c(\tau),\dot{c}(\tau),\dot{q}(\tau)-\frac{\partial H}{\partial p}(c(\tau),\tau),\dot{p}(\tau)+\frac{\partial H}{\partial q}(c(\tau),\tau)\right)\,\mathrm{d}\tau \tag{17}\]
We compute, for an arbitrary curve:
\[G(c,0,t)\,=\,\int_{0}^{t}\left[b_{\omega}^{\pi}(c(\tau),\dot{c}(\tau),\dot{c}(\tau)-XH(c(\tau),\tau))-\omega(\dot{c}(\tau),\dot{c}(\tau)-XH(c(\tau),\tau))\right]\,\mathrm{d}\tau\]
\[G(c,0,t)\,=\,Diss^{\pi}(c,0,t)+H(c(t),t)-H(c(0),0)\]
For a solution \(t\in[0,T]\mapsto c(t)\) of 3.1 and for any curve \(t\in[0,T]\mapsto c^{\prime}(t)\) which satisfies \(c^{\prime}(0)=c(0)\), we have:
\[G(c^{\prime},0,t)\,\geq\,0\,=\,G(c,0,t)\]
therefore we obtain a principle of minimal information content disclosed by the curve \(c\):
\[G(c^{\prime},0,t)\,\geq\,G(c,0,t) \tag{18}\]
Previous computations show that (18) is equivalent with
\[Diss^{\pi}(c^{\prime},0,t)+H(c^{\prime}(t),t)-H(c^{\prime}(0),0)\,\geq\,Diss^{ \pi}(c,0,t)+H(c(t),t)-H(c(0),0)\]
But \(c^{\prime}(0)=c(0)\), therefore \(H(c^{\prime}(0),0)=H(c(0),0)\). The previous inequality becomes (16). \(\Box\)
The inequality (16) can be seen as a principle of minimal dissipation. Alternatively, as in [2], we can see this as a principle of minimal information content (18) disclosed by the deviation from hamiltonian evolution, measured by the information content gap functional (17).
## 4 Applications
Pure dissipative evolution. We pick the minimal symplectic bipotential:
\[b_{\omega}^{\pi}(z,z^{\prime},z")\,=\,b_{\omega}^{min}(z^{\prime},z")\,=\, \max\left\{0,\omega(z^{\prime},z")\right\}\]
see definition 2.7. Then a solution of 3.1 in the sense of theorem 3.2 (d) is a \(t\in[0,T]\mapsto c(t)\) which satisfies the initial condition \(c(0)=z_{0}\) and
\[b_{\omega}^{min}(\dot{c}(t),\dot{c}(t)-XH(c(t),t))\,=\,\omega(\dot{c}(t),\dot {c}(t)-XH(c(t),t))\]
This is just
\[\max\left\{0,\omega(\dot{c}(t),\dot{c}(t)-XH(c(t),t))\right\}\,=\,\omega( \dot{c}(t),\dot{c}(t)-XH(c(t),t))\]
which is equivalent with
\[\omega(\dot{c}(t),\dot{c}(t)-XH(c(t),t))\,\geq\,0\]
Recall that
\[\omega(\dot{c}(t),\dot{c}(t)-XH(c(t),t))\,=\,-\frac{d}{dt}H(c(t),t))+\frac{ \partial H}{\partial t}(c(t),t)\]
therefore any curve \(c\) which satisfies the initial condition and has the property
\[\frac{d}{dt}H(c(t),t)-\frac{\partial H}{\partial t}(c(t),t)\,\leq\,0\]
for any \(t\in[0,T]\) is a solution of 3.1.
Let's compute the gap set: \(z"\in Gap(z,t)\) if and only if
\[b_{\omega}^{\pi}(z"+XH(z,t),z")=\omega(z"+XH(z,t),z")\]
which is just
\[\omega(XH(z,t),z")\,\geq\,0\]
The theorem 3.2 (c) formulation, i.e. constraint with hamiltonian drift, becomes
\[\dot{c}(t)\in XH(c(t),t)+\eta(c(t))\,,\,\,\omega(XH(c(t),t),\eta(c(t)))\,\geq\,0\]
Pure Hamiltonian evolution. Let's pick the information content (2.1) to be:
\[I(z,z^{\prime},z")\,=\,\chi_{0}(z")=\left\{\begin{array}{ll}0&\mbox{if $z"=0$}\\ +\infty&\mbox{otherwise}\end{array}\right.\]
This corresponds to a likelihood function:
\[\pi(z,z^{\prime},z")\,=\,\left\{\begin{array}{ll}1&\mbox{if $z"=0$}\\ 0&\mbox{otherwise}\end{array}\right.\]
The maximization of the likelihood (5) implies that the gap vector \(\eta=0\), therefore the evolution equations (4) reduce to the pure Hamiltonian evolution equations (2).
Dominance. In general, we may compare the sets of solutions for two symplectic bipotentials
\[b_{\omega}^{1}(z,z^{\prime},z")\,\geq\,b_{\omega}^{2}(z,z^{\prime},z")\]
This corresponds to two likelihoods
\[\pi^{1}(z,z^{\prime},z")\,\leq\,\pi^{2}(z,z^{\prime},z")\]
It is then easy to see that any solution of 3.1 for the likelihood \(\pi^{1}\) is also a solution for the same problem, but for the likelihood \(\pi^{2}\). We say that \(\pi^{2}\) dominates \(\pi^{1}\).
Any tempered likelihood \(\pi\) is by definition 2.1 (c) dominated by \(\pi_{max}\), so it is not surprising that any solution of 3.1 for the likelihood \(\pi\) is also a solution of the same problem, but for the likelihood \(\pi_{max}\); i.e., as shown in the previous example, it dissipates.
Smooth dissipation. We consider a symplectic bipotential as in theorem 2.4
\[b_{\omega}(z,z^{\prime},z")\,=\,f(z^{\prime})+f^{*R}_{\omega}(z")\]
where \(f\) is a smooth positive convex function. Mind that we have to choose \(f\) such that \(b_{\omega}\) is nonnegative! Suppose we are in this situation, and let us see what the equations of 3.1 are in this case. We use theorem 3.2. The dynamical inclusion (a) becomes
\[\dot{c}(t)\in XH(c(t),t)+\partial_{\omega}^{R}f(\dot{c}(t))\]
But \(z"\in\partial_{\omega}^{R}f(z^{\prime})\) if and only if for any \(z\in Q\times P\) we have
\[f(z^{\prime})+\omega(z-z^{\prime},z")\,\leq\,f(z)\]
which is equivalent with
\[f(z^{\prime})-\omega(z^{\prime},z")\,\leq\,f(z)-\omega(z,z")\]
Because \(f\) is smooth, this is equivalent with
\[z"\,=\,Xf(z^{\prime})\]
Therefore our equation becomes
\[\dot{c}(t)\,=\,XH(c(t),t)+Xf(\dot{c}(t)) \tag{19}\]
For simplicity let us suppose that \(f(z)\,=\,\phi(q).\) Then
\[Xf(\dot{c}(t))\,=\,(0,-\frac{\partial\phi}{\partial q}(\dot{q}(t)))\]
Let us compute also \(f^{*R}_{\omega}(z")\) in this case, where \(z"=(q",p")\):
\[f^{*R}_{\omega}(q",p")\,=\,\sup\left\{\langle q",p\rangle+\langle q,-p" \rangle-\phi(q)\,:\,z=(q,p)\in Q\times P\right\}\]
\[f^{*R}_{\omega}(q",p")\,=\,\chi_{0}(q")+\phi^{*}(-p")\]
where \(\phi^{*}\) is the usual polar with respect to the duality between \(Q\) and \(P\). The symplectic bipotential is then
\[b_{\omega}((q^{\prime},p^{\prime}),(q",p"))\,=\,\phi(q^{\prime})+\phi^{*}(-p" )+\chi_{0}(q")\]
and \(b_{\omega}((q^{\prime},p^{\prime}),(q",p"))\,\geq\,0\) for all \(z^{\prime},z"\in Q\times P\) is equivalent with
\[\phi(q)+\phi^{*}(p)\,\geq\,0\]
for all \(q\in Q,\,p\in P\). In the familiar case where \(Q\) and \(P\) are the same Hilbert space with norm \(\|\cdot\|\), if we pick \(\phi(q)\,=\,\frac{a}{2}\|q\|^{2}\) (for some \(a>0\)) then \(\phi^{*}(-p)\,=\,\frac{1}{2a}\|p\|^{2}\) and the symplectic bipotential is nonnegative. We discover Rayleigh dissipation:
\[\left\{\begin{array}{rcl}\dot{q}&=&\frac{\partial H}{\partial p}(q,p,t)\\ \dot{p}&=&-\frac{\partial H}{\partial q}(q,p,t)\,-\,\frac{\partial\phi}{ \partial q}(\dot{q})\end{array}\right.\]
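A minimal numerical sketch of this system (an illustration only, with \(H=p^{2}/2+q^{2}/2\), \(\phi(q)=\frac{a}{2}q^{2}\), and arbitrary values of \(a\) and of the time step) shows the expected monotone loss of energy:

```python
# Rayleigh dissipation: H(q, p) = p^2/2 + q^2/2 and phi(q) = (a/2) q^2,
# so the momentum equation acquires the drag term -a * qdot.
a, dt = 0.5, 1e-3
q, p = 1.0, 0.0
H_start = p**2 / 2 + q**2 / 2
for _ in range(20_000):
    qdot = p                   # dq/dt = dH/dp
    p += dt * (-q - a * qdot)  # dp/dt = -dH/dq - dphi/dq(qdot)
    q += dt * p
H_end = p**2 / 2 + q**2 / 2
print(H_start, H_end)          # the energy decreases, as theorem 3.4 predicts
```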
On discontinuous solutions. For nonsmooth symplectic bipotentials we might need to reformulate 3.1 in order to accept discontinuous solutions. Indeed, for example in contact problems (also in plasticity or damage, as shown in [2], [3]) the solution curve \(c(t)=(q(t),p(t))\) may be continuous in the position variable \(q\) but discontinuous in the momentum variable \(p\), or alternatively some of the variables are continuous and some are discontinuous. In order to cover such cases we have to pick a weak formulation of (4), in the sense of distributions. A good choice seems to be to pick \(c\) as a function of bounded variation over \([0,T]\) and \(\eta\) a measure whose singular part with respect to \(\mathrm{d}t\) is concentrated on the jump set of \(c\). We leave these technicalities for another paper, even if they are significant in the case of the next application.
Relation with rate-independent processes. We show that we can cover a dynamical version of the quasistatic rate-independent evolutionary processes of Mielke and collaborators (Mielke, Theil [10]; Mielke, Theil and Levitas [11]; [9]). This was done for the first time in [3], but here we can give a short and useful description.
We pick a symplectic bipotential of the form:
\[b_{\omega}(z,z^{\prime},z")\,=\,f(z^{\prime})+f_{d}^{*R}(z")\]
as in the smooth dissipation example, but now \(f\) is convex, positive and 1-homogeneous: for any positive scalar \(a>0\)
\[f(az^{\prime})\,=\,af(z^{\prime})\]
The function \(f\) is no longer smooth (because not derivable in 0, at least). For the dual
\[f_{\omega}^{*R}(z")\,=\,\sup\left\{\omega(z,z")-f(z)\,:\,z\in Q\times P\right\}\]
notice that for any \(a>0\)
\[f_{\omega}^{*R}(z")\,=\,\sup\left\{\omega(az,z")-f(az)\,:\,az\in Q\times P \right\}\,=\,af_{d}^{*R}(z")\]
therefore \(f_{\omega}^{*R}(z")\in\{0,+\infty\}\), i.e. it is the characteristic function of a set \(C\subset Q\times P\) with \(0\in C\), more precisely:
\[C\,=\,\left\{z"\in Q\times P:\,\omega(z,z")\,\leq\,f(z)\,\forall z\in Q\times P\right\} \tag{20}\]
Therefore
\[b_{\omega}(z,z^{\prime},z")\,=\,f(z^{\prime})+\chi_{C}(z")\]
and the equations for the problem 3.1 are, in terms of gap sets:
\[Gap(z,t)\,=\,\left\{z"\in C:\,f(z"+XH(z,t))=\omega(XH(z,t),z")\right\} \tag{21}\]
\[\dot{c}(t)-XH(c(t),t)\in Gap(c(t),t) \tag{22}\]
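As a toy illustration of the set \(C\) from (20), assume \(f(z)=\|z\|\) on \(\mathbb{R}^{2}\) with the standard symplectic form (an assumption made only for this sketch); then \(\sup_{\|z\|=1}\omega(z,z")=\|z"\|\), so \(C\) is the closed unit ball, as a brute-force check confirms:

```python
import numpy as np

# For f(z) = |z| on R^2, omega(z, z'') <= f(z) for all z reduces by
# 1-homogeneity to a supremum over unit vectors z, i.e. to |z''| <= 1.
rng = np.random.default_rng(0)

def omega(z1, z2):
    return z2[0] * z1[1] - z1[0] * z2[1]

def in_C(z2, trials=10_000):
    zs = rng.normal(size=(trials, 2))
    zs /= np.linalg.norm(zs, axis=1, keepdims=True)  # random unit vectors
    return all(omega(z, z2) <= 1.0 + 1e-9 for z in zs)

print(in_C(np.array([0.3, -0.5])))  # True: norm ~0.58, inside the unit ball
print(in_C(np.array([1.2, 0.0])))   # False (with overwhelming probability)
```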
\(f\) is a dissipation potential and the dissipation functional (14) is
\[Diss^{\pi}(c,0,T)\,=\,\int_{0}^{T}f(\dot{c}(t))+\chi_{C}(\dot{c}(t)-XH(c(t),t) )\,\,\mathrm{d}t\]
Theorem 3.4 applied for this case gives us the energy balance equations, dissipation inequalities and the principle of minimal dissipation, thus it allows us to extend the formulation of Mielke et al rate-independent processes, but this time in dynamics. |
2307.15716 | An express monitoring procedure for low pressure MWPC efficiency value
in heavy ion induced complete fusion nuclear reactions | A simple routine is proposed for monitoring the efficiency value of a low
pressure pentane filled multi-wire proportional chamber (MWPC) in long term
experiments. The proposed algorithm utilizes a two parameter approximation for
the background function. It is based on a linear approximation of the
background in the energy range of 4.8 to 10.0 MeV and an exponential
approximation of the neutron induced tail in the 1.5 to 4.2 MeV region. This
specific energy interval is used to measure the efficiency value. Prior to
discussing the algorithm, a description of the DGFRS-2 setup detection module
is provided. Additionally, an example of its application is presented for the
complete fusion of a heavy ion induced 232Th+48Ca = Ds complete fusion nuclear
reaction. Descriptions of two other two-parameter functional dependencies for
background approximations are also provided. A feature of this algorithm is
that there is no interruption of the main experiment for a calibration test
(reaction). An alternative scenario is considered in brief too. In consist of a
measurement of scattered target like ions in 30 to 45 MeV energy interval to
estimate an efficiency value. | Yu. S. Tsyganov, D. Ibadullayev, A. N. Polyakov, V. B. Zlokazov | 2023-07-26T11:11:52Z | http://arxiv.org/abs/2307.15716v1 | ###### Abstract
_A simple routine is proposed for monitoring the efficiency value of a low-pressure pentane-filled multi-wire proportional chamber (MWPC) in long-term experiments. The proposed algorithm utilizes a two-parameter approximation for the background function. It is based on a linear approximation of the background in the energy range of 4.8 to 10.0 MeV and an exponential approximation of the neutron-induced tail in the 1.5 to 4.2 MeV region. This specific energy interval is used to measure the efficiency value. Prior to discussing the algorithm, a description of the DGFRS-2 setup detection module is provided. Additionally, an example of its application is presented for the heavy-ion-induced \({}^{232}Th+^{48}Ca\to Ds^{*}\) complete fusion nuclear reaction. Descriptions of two other two-parameter functional dependencies for background approximations are also provided. A feature of this algorithm is that there is no interruption of the main experiment for a calibration test (reaction). An alternative scenario is also considered briefly. It consists of a measurement of scattered target-like ions in the \(\sim\)30 to 45 MeV energy interval to estimate the efficiency value._
_Keywords: silicon detector, gaseous MWPC detector, cyclotron, evaporation residue, algorithm, superheavy nuclei._
**An express monitoring procedure for low pressure MWPC efficiency value in heavy ion induced complete fusion nuclear reactions**
Yu.S. Tsyganov\({}^{a,*}\), D. Ibadullayev\({}^{a,b,c}\), A.N. Polyakov\({}^{a}\), V.B. Zlokazov\({}^{a}\)
\({}^{a}\) Joint Institute for Nuclear Research, 141980 Dubna, Russian Federation.
\({}^{b}\) Institute of Nuclear Physics, 050032 Almaty, Kazakhstan.
\({}^{c}\) L.N. Gumilyov Eurasian National University, 010000 Astana, Kazakhstan.
*- Corresponding author: postal address: 141980 Joliot-Curie 6, Dubna, Moscow Region, Russia;
e-mail: [email protected]
## 1 Introduction
Experiments utilizing gas-filled electromagnetic separators and corresponding detection systems have successfully synthesized elements with atomic numbers 113 to 118. These experiments involved the use of \({}^{48}\)Ca ions [1-4]. By employing improved detection systems, rare alpha and spontaneous fission decays of superheavy nuclei (SHN) were isolated from background events in reactions such as \({}^{48}\)Ca + actinide target \(\rightarrow\) SHN + xn, which were carried out at the U-400 cyclotron, FLNR JINR [4-5]. The cross sections for these reactions varied from 0.1 to 10 picobarns. However, it is anticipated that the production cross-sections of SHN will significantly decrease in future experiments utilizing \({}^{50}\)Ti and \({}^{54}\)Cr ion beams. This necessitates stricter requirements for the properties of the separator and detection system. Consequently, the study of rare decay events of superheavy nuclei and the investigation of their detection characteristics are becoming increasingly important. Moreover, the method of active correlations [4, 6-9], used to suppress background, becomes particularly vital when intense beams of heavy ions (up to 5-10 p\(\upmu\)A) are employed. |
2310.16808 | Fingervein Verification using Convolutional Multi-Head Attention Network | Biometric verification systems are deployed in various security-based
access-control applications that require user-friendly and reliable person
verification. Among the different biometric characteristics, fingervein
biometrics have been extensively studied owing to their reliable verification
performance. Furthermore, fingervein patterns reside inside the skin and are
not visible outside; therefore, they possess inherent resistance to
presentation attacks and degradation due to external factors. In this paper, we
introduce a novel fingervein verification technique using a convolutional
multihead attention network called VeinAtnNet. The proposed VeinAtnNet is
designed to achieve light weight with a smaller number of learnable parameters
while extracting discriminant information from both normal and enhanced
fingervein images. The proposed VeinAtnNet was trained on the newly constructed
fingervein dataset with 300 unique fingervein patterns that were captured in
multiple sessions to obtain 92 samples per unique fingervein. Extensive
experiments were performed on the newly collected dataset FV-300 and the
publicly available FV-USM and FV-PolyU fingervein dataset. The performance of
the proposed method was compared with five state-of-the-art fingervein
verification systems, indicating the efficacy of the proposed VeinAtnNet. | Raghavendra Ramachandra, Sushma Venkatesh | 2023-10-25T17:38:16Z | http://arxiv.org/abs/2310.16808v1 | # Fingervein Verification using Convolutional Multi-Head Attention Network
###### Abstract
Biometric verification systems are deployed in various security-based access-control applications that require user-friendly and reliable person verification. Among the different biometric characteristics, fingervein biometrics have been extensively studied owing to their reliable verification performance. Furthermore, fingervein patterns reside inside the skin and are not visible outside; therefore, they possess inherent resistance to presentation attacks and degradation due to external factors. In this paper, we introduce a novel fingervein verification technique using a convolutional multihead attention network called VeinAtnNet. The proposed VeinAtnNet is designed to achieve light weight with a smaller number of learnable parameters while extracting discriminant information from both normal and enhanced fingervein images. The proposed VeinAtnNet was trained on the newly constructed fingervein dataset with 300 unique fingervein patterns that were captured in multiple sessions to obtain 92 samples per unique fingervein. Extensive experiments were performed on the newly collected dataset FV-300 and the publicly available FV-USM and FV-PolyU fingervein datasets. The performance of the proposed method was compared with five state-of-the-art fingervein verification systems, indicating the efficacy of the proposed VeinAtnNet.
## 1 Introduction
Biometric verification systems have enabled a multitude of access-control applications, including border control, smartphone access, banking, and finance applications. Fingervein biometric characteristics are widely deployed in various applications, particularly in the banking sector. Fingervein biometrics represent the vein structure underneath the skin of the finger, which can be captured using near-infrared sensing. The blood flow in the fingervein absorbs near-infrared light and appears dark compared to the neighborhood region, making the fingervein visible (refer Figure 1). The fingervein structure has been shown to be unique [1, 34, 28] between fingers of the same data subject and between data subjects. Compared to other biometric characteristics, fingervein biometrics are known for their accuracy and usefulness, and are less vulnerable to distortion. Furthermore, fingervein biometrics provide a natural way of protecting biometric features, as they reside inside the skin and are thus more challenging to spoof.
Fingervein biometrics have been widely studied in the literature, resulting in various fingervein biometric verification algorithms [33, 9]. Early works are based on extracting fingervein patterns such that the vein region is labeled as one and the background is labeled as zero. Techniques such as Maximum Curvature Points (MCP) [22], Repeated Line Tracking (RLT) [21], Wide Line Detectors (WLD) [10], Mean Curvature (MC) [37] and the Radon transform [26] have been developed for reliable fingervein recognition. As these techniques can extract the structure of the fingervein pattern, the use of a simple comparator based on template matching using correlation can achieve reliable performance. However, these features are sensitive to a small degree of fingervein rotation, noise, and the reflection properties of the skin and NIR illuminator.
Figure 1: Example fingervein images with and without image enhancement for the same identity, collected in the first (top row) and second (bottom row) sessions.
The global feature representations of fingervein patterns such as Local Binary Patterns (LBP) [15], Gabor filters [19], Local Directional Code [35], Wavelet Transform [23], Histogram of Gradients (HoG) [19] and pyramid image features [18] have also been developed for fingervein verification. These features are often used with Support Vector Machines (SVM) or Euclidean distances as comparators. As these techniques are based on global features, they are highly sensitive to variations in finger rotation and illumination.
The representation of a fingervein image as binary codes was developed to improve template security together with reliable verification. Binary coding techniques include Discriminative Binary Codes [16], binary hash codes [38], DoG code [28], ordinal code [28], contour code [28] and competitive codes [28]. Because these techniques generate binary codes for the fingervein, the Hamming distance is used as the comparator. Binary coding techniques exhibit good verification accuracy; however, these features are sensitive to variations in rotation and illumination.
Deep-learning-based fingervein verification has been extensively studied in the literature. Table 1 summarizes the deep-learning-based techniques proposed for fingervein recognition. Early works are based on the serial convolution architecture, which is inspired from existing CNN architectures that are evaluated on the ImageNet dataset. Both shallow serial CNN networks with two convolution layers and a deep CNN network with 12 convolutional layers have been studied in the literature. However, the quantitative results indicate that lightweight serial networks with a smaller number of convolution layers exhibit better performance than deep serial networks. The possible degraded performance of deep serial networks can be attributed to limited data availability. The use of a pre-trained CNN for feature extraction has also been explored in the literature, together with fine-tuning and augmented pre-trained CNN networks. The quantitative results reported indicate a performance similar to that of end-to-end trained deep CNN networks. The Siamese network for fingervein verification was studied using different CNN configurations and U-NET-based architectures. The quantitative performance is similar to that of the serial CNN architecture. Recently, attention modules with lightweight (three to four convolution layers) serial networks have been widely explored. Different types of attention modules, including spatial, channel, and multi-attention modules, were introduced. The quantitative performance of the attention networks are comparable to that of other deep learning-based techniques implemented for fingervein verification.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Authors** & **Year** & **Deep Learning Technique** \\ \hline \hline Huafeng Qin et al., [24] & 2015 & Serial CNN architecture with 3 convolution layers and 2 fully connected layers. \\ \hline Iqan et al., [11] & 2016 & Serial CNN architecture with 3 convolution layers and 1 fully connected layer. \\ \hline Syafeeza Radzi et al., [27] & 2016 & Serial CNN architecture with 2 convolution layers. \\ \hline Huafeng Qin et al., [25] & 2017 & Serial CNN architecture with 4 convolution layers. Patch based training of CNN. \\ \hline Cihui Xie et al., [41] & 2019 & Siamese network with 5 convolutional layers and triplet loss function. \\ \hline Jong Min Song et al.,[36] & 2019 & Serial CNN architecture with 8 convolution layers. Composite fingervein image is generated by converting the 1-channel input image to 3-channel input image. \\ \hline Rig Das et al., [5] & 2018 & Serial CNN architecture with 5 convolution layers. \\ \hline Hyung Gil Hong et al.,[7] & 2017 & Serial CNN architecture with 12 convolution layers and 3 fully connected layers. \\ \hline Su Tang et al., [39] & 2019 & Siamese network with residual CNN architecture. \\ \hline Borui Hou et al., [8] & 2019 & Convolutional autoencoder. \\ \hline Junying Zeng et al., [43] & 2020 & Deformable convolution with U-NET type architecture. \\ \hline Ridvan Salih Kuzu et al., [14] & 2020 & Serial CNN architecture with 6 convolution layers and 2 fully connected layers with LSTM for classification. \\ \hline Hengyi Ren et al., [31] & 2021 & Feature extraction using ResNet with squeeze and excitation \\ & & on the encrypted fingervein images. \\ \hline Ridvan Salih Kuzu et al., [13] & 2021 & \begin{tabular}{l} Custom DenseNet 161 with additive angular penalty and \\ large margin cosine penalty loss function. \\ \end{tabular} \\ \hline Weili Yang et al., [42] & 2022 & Multi-view fingervein with individual CNNs and view pooling. \\ \hline Huafeng Qin et al., [42] & 2022 & U-Net based architecture with attention module. \\ \hline Tingting Chai et al., [4] & 2022 & Serial CNN architecture with 5 convolution layers and one fully connected layer. \\ \hline Ismail et al., [3] & 2022 & Serial CNN architecture with 3 convolution layers and two fully connected layers. \\ \hline Weiye Liu et al., [17] & 2023 & Residual Attention block with inception architecture. \\ \hline Zhongxia Zhang et al., [44] & 2023 & Light weight CNN with spatial and channel attention module. \\ \hline Chunxin Fang et al., [6] & 2023 & Light weight Siamese network with attention module. \\ \hline Bin Wa et al., [20] & 2023 &
\begin{tabular}{l} Serial CNN architecture with 3 convolution layers and \\ bilinear pooling with multiple attention module. \\ \end{tabular} \\ \hline \hline
**This work** & **2024** &
\begin{tabular}{l} **Serial CNN architecture with 3 convolution layers and** \\ **multi-head attention module connected in parallel** \\ **with normal and enhanced fingervein.** \\ \end{tabular} \\ \hline \end{tabular}
\end{table}
Table 1: State-of-the-art fingervein verification using deep learning techniques
Even though deep learning techniques have been widely studied for reliable fingervein verification, the existing techniques exhibit the following drawbacks: (a) Limited data: existing techniques are evaluated on small-scale datasets that have 6-12 samples per data subject. This limits the effectiveness of deep learning and leads to overfitting. (b) Lack of a consistent evaluation protocol: even though most of the existing works have used public datasets, the evaluation protocols are not consistent across existing studies. This limits the comparability of existing techniques for fingervein verification. In this study, we address the above-mentioned limitations by introducing a new large-scale dataset with 75 data subjects, resulting in 300 unique identities (as we collected four fingers per data subject). For each unique fingervein, we collected 92 samples in multiple sessions, with a duration between sessions varying from 1 to 4 days. Furthermore, we propose a novel lightweight CNN architecture based on a convolutional multi-head attention module. The main contributions of this study are as follows:

* A novel fingervein verification technique based on a convolutional multi-head attention network (VeinAttNet) is proposed.

* A new fingervein dataset is introduced with 300 unique identities captured from 75 data subjects, resulting in \(300\times 92=27600\) fingervein images. The dataset is publicly available for research purposes.

* Extensive experiments were performed on both the newly introduced dataset and the publicly available FV-USM and FV-PolyU datasets. The performance of the proposed method was compared with that of five state-of-the-art fingervein verification methods.

The rest of the paper is organised as follows: Section 2 discusses the proposed method for fingervein verification, Section 3 presents the quantitative results of the proposed method and the state-of-the-art techniques, and Section 4 draws the conclusion.
## 2 Proposed Method
Figure 2 shows a block diagram of the proposed VeinAttNet architecture for reliable fingervein verification. The novelty of the proposed approach is that it leverages the convolutional Multi-Head Attention (MHA) framework to achieve accurate and reliable fingervein verification. The use of MHA together with convolutional features leads to a discriminative feature representation that contributes to the robust performance of fingervein verification.

The proposed VeinAttNet is a lightweight architecture with three Consecutive Convolution Layers (CCL) and a Multihead Self-Attention (MSA) mechanism. VeinAttNet is applied independently to the normal and enhanced fingervein images, whose comparison scores from the softmax layer are fused to make the final verification decision. Given the captured fingervein image, preprocessing is performed using Contrast Limited Adaptive Histogram Equalization (CLAHE) [29] to enhance the fingervein pattern. We employed CLAHE as the fingervein enhancement method considering (a) the high quality of the fingervein enhancement achieved when compared to other enhancement techniques, as discussed in [9], and (b) that it is a widely employed enhancement technique in the fingervein literature that has been reported to yield high verification accuracy. Both the normal (without enhancement) and enhanced fingervein images were resized to \(224\times 224\times 3\) pixels. The CCL performs the initial feature learning on the fingervein images, which is further processed to obtain a rich feature representation using MSA. Given the fingervein image \(F_{v}^{R\times C\times D}\), the final output features of MSA can be represented as follows:
\[F_{\rm MSA}={\rm MSA}(F_{\rm CCL}),\quad{\rm where}\;F_{\rm CCL}={\rm CCL}(F_{v}) \tag{1}\]
Figure 2: Block diagram of the proposed method for fingervein verification
where \(F_{\rm MSA}\) denotes the output features of the MSA module and \(F_{\rm CCL}\) denotes the output features of the CCL block. \(F_{\rm MSA}\) is then used with a softmax classifier to make the final decision. In the following, we discuss the building blocks of the proposed VeinAttNet.
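Before detailing the building blocks, the preprocessing step can be made concrete with a minimal sketch. The snippet below assumes OpenCV as the implementation library; the CLAHE clip limit and tile-grid size are illustrative assumptions, not settings reported in this paper.

```python
import cv2

def preprocess_fingervein(path):
    """Return the normal and CLAHE-enhanced inputs, resized to 224x224x3."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    # Assumed CLAHE settings; the paper does not specify clip limit or tile size.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    # Replicate the single channel three times to form 3-channel inputs.
    normal = cv2.resize(cv2.merge([gray, gray, gray]), (224, 224))
    enh = cv2.resize(cv2.merge([enhanced] * 3), (224, 224))
    return normal, enh
```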
### Consecutive Convolution Layers (CCL)
The CCL has three convolution modules that are serially connected. Each convolution module has four different layers: a convolution layer (conv), a group normalization layer (norm), an activation layer (ReLU), and a pooling layer (maxpool). Three convolution layers were used to extract the global features from the fingervein images. The conv-1 layer has a filter size of \(7\times 7\), the conv-2 layer has a \(5\times 5\) filter, conv-3 has a \(3\times 3\) filter, and the number of filters in all three conv layers is set to 32. The gradual decrease in filter size ensures a fine-grained (from global to local) extraction of the fingervein features. The convolution features were normalized using group normalization, which reduces the sensitivity of the network to initialization. In particular, we employed group normalization because it outperforms batch normalization with a small batch size. The normalized features are then fed to the activation unit (ReLU), which can introduce sparsity and improve the network training speed. Finally, a pooling operation was performed to achieve a compact feature representation. In this study, we employed max pooling, which can capture texture information suitable for fingervein verification. The output after the three convolution modules is then passed through a global average pooling layer to obtain a compact feature representation. Finally, the features were flattened before being fed into the MSA module.
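The following PyTorch sketch mirrors the CCL description above (7x7, 5x5 and 3x3 kernels with 32 filters each, group normalization, ReLU, max pooling, and a closing global average pooling); the number of normalization groups, the padding, and the pooling window are assumptions that the text does not specify.

```python
import torch
import torch.nn as nn

class CCL(nn.Module):
    """Three serially connected conv modules, as described in the text."""
    def __init__(self, in_channels=3, filters=32):
        super().__init__()
        def conv_module(cin, k):
            return nn.Sequential(
                nn.Conv2d(cin, filters, kernel_size=k, padding=k // 2),
                nn.GroupNorm(num_groups=4, num_channels=filters),  # assumed groups
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2),
            )
        self.blocks = nn.Sequential(
            conv_module(in_channels, 7),  # conv-1: global features
            conv_module(filters, 5),      # conv-2
            conv_module(filters, 3),      # conv-3: local features
        )
        self.gap = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, x):
        f = self.blocks(x)             # (B, 32, H', W')
        return self.gap(f).flatten(1)  # (B, 32), flattened for the MSA module
```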
### Multihead Self-Attention (MSA)
The features from the CCL module are fed to the MSA module to further refine \(F_{CCL}\) and extract discriminative features suitable for fingervein verification. In this study, we employed multihead attention [40] with four different heads and 64 channels for the keys and queries. MSA runs the attention mechanism several times in parallel, once per head. The independent attention outputs are then concatenated and linearly transformed. MSA can be represented as follows [40]:
\[{\rm MultiHead}(Q,K,V)=[H_{1},H_{2},H_{3},H_{4}]\,W \tag{2}\]
where \(W\) is the learnable parameter and \(Q\), \(K\) and \(V\) represent the queries, keys, and values, respectively. In this study, we employed scaled dot-product attention across heads using \(Q\), \(K\) and \(V\) as follows:
\[{\rm Attention}(Q,K,V)={\rm softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{3}\]
The outcome of the MSA module is passed through a layer normalization layer to improve the generalization of the final features. Finally, the normalized features are passed through the fully connected and softmax layers to obtain the comparison score.
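A minimal sketch of the MSA stage follows, assuming `torch.nn.MultiheadAttention` as the scaled dot-product attention backbone with four heads and 64-dimensional keys and queries; the linear projection of the CCL features to 64 dimensions, the sequence length of one, and the number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MSAHead(nn.Module):
    """Multihead self-attention followed by layer norm, FC and softmax."""
    def __init__(self, in_dim=32, embed_dim=64, heads=4, n_classes=300):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)  # assumed projection to 64-d
        self.msa = nn.MultiheadAttention(embed_dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)
        self.fc = nn.Linear(embed_dim, n_classes)

    def forward(self, f_ccl):
        q = self.proj(f_ccl).unsqueeze(1)   # (B, 1, 64); queries = keys = values
        attn, _ = self.msa(q, q, q)         # four heads run in parallel
        f_msa = self.norm(attn.squeeze(1))  # layer normalization
        return torch.softmax(self.fc(f_msa), dim=-1)  # comparison scores
```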
### Score Level Fusion
The proposed VeinAttNet was employed independently on the normal and enhanced fingervein images. Thus, given a test fingervein image, the proposed method provides two comparison scores corresponding to the normal and enhanced fingerveins. We combine these two comparison scores using the sum rule to make the final verification decision. Let the comparison score from the normal fingervein image be \(C_{n}\) and that from the enhanced fingervein image be \(C_{e}\); the final verification score is then computed as \(V_{s}=(C_{n}+C_{e})\).
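The sum rule itself is a one-line operation; the toy example below, with random softmax vectors standing in for the two network outputs, only illustrates the decision step.

```python
import torch

def fuse_scores(c_n, c_e):
    """Sum-rule fusion of the two softmax score vectors: V_s = C_n + C_e."""
    return c_n + c_e

c_n = torch.softmax(torch.randn(1, 300), dim=-1)  # normal-image scores (toy)
c_e = torch.softmax(torch.randn(1, 300), dim=-1)  # enhanced-image scores (toy)
decision = fuse_scores(c_n, c_e).argmax(dim=-1)   # identity with the highest V_s
```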
### Implementation Details
The proposed network is trained using Adaptive Moment Estimation (ADAM) optimization. In this work, we employ the cross-entropy loss, which can be defined as \(-\frac{1}{N}\sum_{n=1}^{N}\sum_{i=1}^{K}\left[T_{ni}\log(Y_{ni})+(1-T_{ni})\log(1-Y_{ni})\right]\), where \(N\) and \(K\) denote the number of samples and classes, respectively, and \(T_{ni}\) is the target value corresponding to the prediction \(Y_{ni}\). During training, the learning rate was set to 0.0001, the mini-batch size was set to 16, and the number of epochs was set to 150. Furthermore, we performed data augmentation, which included image reflection, translation, rotation, scaling, and random noise with three different variances. This resulted in nine different images for every image used in training the proposed method. Finally, the proposed method is lightweight, with only 58.2 K learnable parameters, whereas the existing SOTA methods employed in this work, namely Bin Wa et al. [20] and Ismail et al. [3], have approximately 17.8 M and 467.1 K learnable parameters, respectively.
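The sketch below gathers the reported training choices (ADAM, learning rate 0.0001, cross-entropy loss, mini-batch size 16, 150 epochs) together with one possible reading of the augmentation list; the transform ranges and noise variances are assumptions, as the text does not specify them.

```python
import torch
import torchvision.transforms as T

# One possible reading of the augmentation list; ranges and variances assumed.
augmentations = [
    T.RandomHorizontalFlip(p=1.0),                    # reflection
    T.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # translation
    T.RandomRotation(degrees=10),                     # rotation
    T.RandomAffine(degrees=0, scale=(0.9, 1.1)),      # scaling
    # Random noise with three different (assumed) variances.
    T.Lambda(lambda x: x + 0.01 ** 0.5 * torch.randn_like(x)),
    T.Lambda(lambda x: x + 0.05 ** 0.5 * torch.randn_like(x)),
    T.Lambda(lambda x: x + 0.10 ** 0.5 * torch.randn_like(x)),
]

def training_setup(model):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.CrossEntropyLoss()
    return optimizer, criterion  # used with mini-batch size 16 for 150 epochs
```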
## 3 Experiments and Results
In this section, we discuss the quantitative results of the proposed and existing fingervein verification algorithms. The quantitative performance is presented using the False Match Rate (FMR) and False Non-Match Rate (FNMR), together with the Equal Error Rate (EER) computed at FMR = FNMR. The performance of the proposed method was compared with that of recently proposed fingervein recognition algorithms based on multiple attention [20] and deep fusion [3], selected on the basis of their verification performance. Furthermore, we compared the performance of the proposed method with well-established fingervein verification techniques, namely MCP [22], RLT [21] and WLD [10]. In the following, we describe the newly collected fingervein dataset, followed by the quantitative results.
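For reference, the EER used throughout this section can be computed from the genuine and impostor score sets as sketched below (a straightforward numpy implementation; the threshold-grid resolution is an arbitrary choice).

```python
import numpy as np

def equal_error_rate(genuine, impostor, n_thresholds=1000):
    """EER in per cent; convention: higher score = better match."""
    thresholds = np.quantile(np.concatenate([genuine, impostor]),
                             np.linspace(0.0, 1.0, n_thresholds))
    fmr = np.array([(impostor >= t).mean() for t in thresholds])  # False Match Rate
    fnmr = np.array([(genuine < t).mean() for t in thresholds])   # False Non-Match Rate
    i = np.argmin(np.abs(fmr - fnmr))                             # FMR = FNMR crossing
    return 100.0 * 0.5 * (fmr[i] + fnmr[i])
```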
### FV-300 Fingervein dataset
In this study, we introduce a new fingervein dataset comprising 300 unique fingerveins corresponding to 75 unique data subjects. The fingervein images were collected using a custom capture system designed around a monochrome CMOS camera with a resolution of \(744\times 480\) pixels and two lighting sources that illuminate the finger from both the back and the side. The design aspects of the fingervein capture device were inspired by [30]. The data collection was carried out under indoor conditions, and for every data subject, two fingers (index and middle) were captured from both the left and right hands, resulting in four unique fingers. For each data subject, we captured 92 fingervein images per individual finger over multiple sessions. The duration between sessions varied from 1 to 4 days. The FV-300 dataset therefore contains 75 data subjects \(\times\) 4 fingers \(\times\) 92 = 27600 fingervein samples. Figure 3 shows example fingervein images from the FV-300 dataset.
### Experimental protocol
To effectively benchmark the performance of the proposed method, we used three fingervein datasets: FV-300, FV-USM [2] and FV-PolyU [12]. To evaluate the performance of the fingervein algorithms on the FV-300 dataset, the fingervein samples corresponding to each finger were divided into three independent sets: the training set has 70 images, the validation set has 12 images, and the testing set has 10 images. This results in 300 \(\times\) 10 = 3000 genuine and 300 \(\times\) 299 \(\times\) 10 = 897000 impostor scores.
The verification performance on the FV-USM [2] dataset was evaluated by training the fingervein verification algorithms on the FV-300 dataset and fine-tuning the trained networks on the FV-USM dataset. The FV-USM [2] dataset comprises 492 unique fingervein identities captured in two sessions with six samples each. Thus, the proposed method (and the existing deep learning methods employed in this work, namely multiple attention [20] and deep fusion [3]) were trained on the FV-300 dataset and fine-tuned using the first-session data (from the FV-USM dataset), which has six samples per subject. Testing was performed using the second-session data (from the FV-USM dataset) with six samples per subject. However, the conventional fingervein state-of-the-art techniques (MCP [22], RLT [21] and WLD [10]) employed in this study do not require a training set for learning. Therefore, we used the first-session data from the FV-USM dataset for enrolment, and the second-session data for testing. This resulted in 492 \(\times\) 6 = 2952 genuine and 492 \(\times\) 491 \(\times\) 6 = 1449432 impostor scores.
The verification performance of the fingervein algorithms (deep-learning-based and the proposed method) on the FV-PolyU dataset was evaluated using a procedure similar to that described for the FV-USM dataset. The fingervein algorithms trained on the FV-300 dataset were fine-tuned using the FV-PolyU dataset. The FV-PolyU dataset [12] employed in this work comprises 156 data subjects, from whom the index and middle fingers were captured in two sessions with six samples each. Thus, the FV-PolyU dataset has 312 unique identities, and the data from the first session were used to fine-tune both the proposed and SOTA deep learning methods (multiple attention [20] and deep fusion [3]) that were trained on the FV-300 dataset. Testing was performed on the second-session data, which resulted in 312 \(\times\) 6 = 1872 genuine and 312 \(\times\) 311 \(\times\) 6 = 582192 impostor scores.
### Results and discussion
Table 2 shows the quantitative performance of the proposed and existing fingervein verification techniques on the FV-300, FV-PolyU and FV-USM datasets, and Figure 4 shows the corresponding DET curves. Existing methods were trained using enhanced fingervein images to achieve their best performance. Based on the results, the following are the important observations:
* Training and testing on the same dataset yields improved verification results for the deep-learning-based techniques. Accordingly, the deep learning techniques performed better on FV-300 than on the FV-USM and FV-PolyU datasets.
Figure 3: Example fingervein images from the FV-300 dataset representing the same identity captured in three different sessions.

* Traditional fingervein techniques (MCP [22], RLT [21] and WLD [10]) that are based on template matching using correlation indicate superior performance on the FV-300 dataset compared to the FV-USM and FV-PolyU datasets. However, the performance of RLT [21] and WLD [10] does not show a significant difference in verification performance across the three fingervein datasets employed in this work.
* Among the three traditional fingervein techniques employed in this work, MCP [22] indicated the best performance on all three datasets. Furthermore, MCP [22] demonstrated improved performance compared to the state-of-the-art deep learning methods employed in this study.
* The proposed method has indicated outstanding verification performance with EER = 0.54% and TAR = 90.36% @ FMR = 0.01% on the FV-300 dataset. The proposed method also indicated the best performance, with an EER of 15.35%, on the FV-USM dataset. Similar performance is also noted on the FV-PolyU dataset, with an EER = 5.52%. However, the verification performance degraded at lower FMRs on both the FV-USM and FV-PolyU datasets.
* Based on the results, it is worth noting that the deep learning techniques depend on the training data and show a limited ability to generalize to another dataset due to the limited number of samples available for fine-tuning. However, compared with existing deep learning methods, the proposed VeinAttNet exhibits superior verification performance on the three fingervein datasets employed in this work.
| **Dataset** | **Algorithm** | **EER (%)** | **TAR (%) @ FMR = 1%** | **TAR (%) @ FMR = 0.1%** | **TAR (%) @ FMR = 0.01%** |
|---|---|---|---|---|---|
| FV-300 | MCP [22] | 4.74 | 88.93 | 64.41 | 44.35 |
| FV-300 | RLT [21] | 31.30 | 19.35 | 9.52 | 4.68 |
| FV-300 | WLD [10] | 13.55 | 77.47 | 74.11 | 73.15 |
| FV-300 | Ismail et al. [3] | 9.14 | 72.92 | 40.95 | 7.34 |
| FV-300 | Bin Wa et al. [20] | 19.94 | 21.25 | 5.31 | 0.95 |
| FV-300 | **Proposed method** | **0.54** | **99.73** | **99.13** | **90.36** |
| FV-USM | MCP [22] | 17.74 | 46.95 | 27.25 | 18.36 |
| FV-USM | RLT [21] | 29.63 | 32.29 | 29.70 | 29.16 |
| FV-USM | WLD [10] | 18.17 | 54.17 | 34.89 | 16.73 |
| FV-USM | Ismail et al. [3] | 41.78 | 4.38 | 1.16 | 0.34 |
| FV-USM | Bin Wa et al. [20] | 37.53 | 5.25 | 1.65 | 0.35 |
| FV-USM | **Proposed method** | **15.35** | **40.45** | **14.25** | **5.11** |
| FV-PolyU | MCP [22] | 14.25 | 52.29 | 34.40 | 20.64 |
| FV-PolyU | RLT [21] | 33.48 | 19.26 | 8.25 | 4.28 |
| FV-PolyU | WLD [10] | 16.53 | 65.29 | 48.16 | 38.53 |
| FV-PolyU | Ismail et al. [3] | 42.12 | 5.96 | 2.29 | 1.37 |
| FV-PolyU | Bin Wa et al. [20] | 40.90 | 3.21 | 0.45 | 0.45 |
| FV-PolyU | **Proposed method** | **5.52** | **74.77** | **30.27** | **19.26** |

Table 2: Quantitative performance of the proposed and state-of-the-art fingervein verification methods (TAR = 100 - FNMR, in %)
Figure 4: DET Curves showing the verification performance of the proposed and state-of-the-art fingervein verification methods
### Ablation Study of the proposed method
In this section, we present an ablation study of the proposed method using the FV-300 dataset. We considered three different cases: Case-1 represents the performance with Conv-1 and MSA together; Case-2 shows the performance with Conv-1, Conv-2, and MSA; and Case-3 indicates the performance of the proposed method with Conv-1, Conv-2, Conv-3, and MSA. Table 4 and Figure 5 show the performance of the proposed method for the different ablation cases. The addition of convolution layers to MSA improves the overall performance of the proposed VeinAttNet for reliable fingervein verification.
We further investigated the role of adding convolution layers to MSA in improving the verification accuracy. To this end, we computed the verification accuracy starting with one conv layer and increasing the depth up to five consecutive conv layers with MSA. Figure 6 shows the verification performance in terms of EER for different depths of convolution layers. It should be noted that the use of three consecutive layers with MSA achieves the best performance, and further increasing the depth by adding convolution layers does not improve it. This further justifies the design choices made for the proposed method, which has indicated the best generalized verification performance compared to the five different state-of-the-art methods.
### Interpretation of the proposed method
To interpret the decisions made by the proposed method, we employed Local Interpretable Model-agnostic Explanations (LIME) [32] to explain the predictions on the probe fingervein images. Because the proposed method is based on both normal and enhanced fingervein images, we present the qualitative and quantitative results for both fingervein image types. Table 3 indicates the quantitative performance of the proposed method with the normal and enhanced images alone. The obtained results indicate similar verification performance in terms of EER and at higher FMR values. However, at lower FMR values, the proposed method exhibits better performance with enhanced fingervein samples. Thus, the availability of the enhanced fingervein pattern provides more discriminative information that improves verification accuracy at low FMR values.
| **Case** | **Conv-1** | **Conv-2** | **Conv-3** | **MSA** | **EER (%)** |
|---|---|---|---|---|---|
| Case-1 | ✓ | ✗ | ✗ | ✓ | 8.29 |
| Case-2 | ✓ | ✓ | ✗ | ✓ | 2.38 |
| Case-3 | ✓ | ✓ | ✓ | ✓ | 0.54 |

Table 4: Ablation study of the proposed method on the FV-300 dataset
Figure 5: DET Curves indicating the performance of the proposed method with different cases of ablation study
Figure 6: EER of the proposed method with different number of convolution layers.
| **Data Type** | **Algorithm** | **EER (%)** | **TAR (%) @ FMR = 1%** | **TAR (%) @ FMR = 0.1%** | **TAR (%) @ FMR = 0.01%** |
|---|---|---|---|---|---|
| Normal fingervein | Proposed method | 1.85 | 97.89 | 83.55 | 54.18 |
| Enhanced fingervein | Proposed method | 1.13 | 98.87 | 90.53 | 60.76 |

Table 3: Verification performance of the proposed method with normal and enhanced fingervein data
Figure 7 shows the qualitative results of the LIME method for visualizing the important regions in the fingervein image that contributed to a successful verification. The LIME explanation is shown on fingervein images from the FV-300 dataset for successful verification predictions at FMR = 0.01%. As shown in Figure 7, the proposed method utilises more image regions with normal fingervein images than with enhanced fingervein images to make the decision. However, with enhanced fingervein images, the decision is based on a smaller number of regions associated with the vein pattern, and particularly on the minutiae points of the fingervein. These observations justify the improved performance of the proposed method with enhanced fingervein images compared with normal fingervein images.
## 4 Conclusion
Fingervein biometrics are widely employed in various secure access-control applications. In this study, we proposed a novel method based on a convolutional multi-head attention module for reliable fingervein verification. The proposed VeinAttNet is based on three consecutive convolution layers and multihead attention with four heads and 64 channels, applied in parallel to the normal and enhanced fingervein samples. The final decision is made using score-level fusion of the normal and enhanced fingervein scores. Extensive experiments were performed on both publicly available and newly collected fingervein datasets. The quantitative performance of the proposed method was benchmarked against five state-of-the-art fingervein verification methods. The obtained results indicate the superior performance of the proposed method on both the publicly available and newly collected fingervein datasets.
|
2301.06382 | Exploring the role of composition and mass-loading on the properties of
hadronic jets | Astrophysical jets are relativistic outflows that remain collimated for
remarkably many orders of magnitude. Despite decades of research, the origin of
cosmic rays (CRs) remains unclear, but jets launched by both supermassive black
holes in the centre of galaxies and stellar-mass black holes harboured in X-ray
binaries (BHXBs) are among the candidate sources for CR acceleration. When CRs
accelerate in astrophysical jets, they initiate particle cascades that form
{\gamma}-rays and neutrinos. In the so-called hadronic scenario, the population
of accelerated CRs requires a significant amount of energy to properly explain
the spectral constraints similarly to a purely leptonic scenario. The amount of
energy required often exceeds the Eddington limit, or even the total energy
available within the jets. The exact energy source for the accelerated protons
is unclear, but due to energy conservation along the jets, it is believed to
come from the jet itself via transfer of energy from the magnetic fields, or
kinetic energy from the outflow. To address this hadronic energy issue and to
self-consistently evolve the energy flux along the flows, we explore a novel
treatment for including hadronic content, in which instabilities along the
jet/wind border play a critical role. We discuss the impact of the different
jet composition on the jet dynamics for a pair dominated and an electron-proton
jet, and consequently the emitted spectrum, accounting for both leptonic and
hadronic processes. Finally, we discuss the implications of this mass-loading
scenario to address the proton energy issue. | Dimitrios Kantzas, Sera Markoff, Matteo Lucchini, Chiara Ceccobello, Koushik Chatterjee | 2023-01-16T12:04:37Z | http://arxiv.org/abs/2301.06382v1 | # Exploring the role of composition and mass-loading on the properties of hadronic jets
###### Abstract
Astrophysical jets are relativistic outflows that remain collimated for remarkably many orders of magnitude. Despite decades of research, the origin of cosmic rays (CRs) remains unclear, but jets launched by both supermassive black holes in the centre of galaxies and stellar-mass black holes harbored in X-ray binaries (BHXBs) are among the candidate sources for CR acceleration. When CRs accelerate in astrophysical jets, they initiate particle cascades that form \(\gamma\)-rays and neutrinos. In the so-called hadronic scenario, the population of accelerated CRs requires a significant amount of energy to properly explain the spectral constraints similarly to a purely leptonic scenario. The amount of energy required often exceeds the Eddington limit, or even the total energy available within the jets. The exact energy source for the accelerated protons is unclear, but due to energy conservation along the jets, it is believed to come from the jet itself via transfer of energy from the magnetic fields, or kinetic energy from the outflow. To address this hadronic energy issue and to self-consistently evolve the energy flux along the flows, we explore a novel treatment for including hadronic content, in which instabilities along the jet/wind border play a critical role. We discuss the impact of the different jet composition on the jet dynamics for a pair dominated and an electron-proton jet, and consequently the emitted spectrum, accounting for both leptonic and hadronic processes. Finally, we discuss the implications of this mass-loading scenario to address the proton energy issue.
keywords: acceleration of particles - stars: jets - galaxies: jets
## 1 Introduction
Accreting black holes can efficiently launch relativistic outflows, known as astrophysical jets, by converting gravitational energy to kinetic energy. Large-scale jets launched by supermassive black holes (SMBH) share some common physical laws with the small-scale jets launched by stellar-mass black holes in X-ray binaries (BHXBs; Heinz and Sunyaev, 2003; Merloni et al., 2003; Falcke et al., 2004), and hence black hole jets appear to be scale invariant in some of their properties. For example, SMBHs with masses of the order of \(\sim 10^{6}-10^{9}\,M_{\odot}\) power jets that remain collimated up to Mpc scales (Waggett et al., 1977), whereas BHXBs with masses of the order of a few solar masses display jets that remain collimated up to sub-pc scales (Mirabel and Rodriguez, 1994). Galactic BHXBs are of particular importance because they transition between different jetted and non-jetted states over humanly accessible timescales, giving us the chance to understand plasma evolution in extreme conditions and better probe jet physics (see, e.g., Markoff et al., 2001, 2003, 2005; Reig et al., 2003; Giannios et al., 2004; Maitra et al., 2009; Vila and Romero, 2010; Zdziarski et al., 2014; Connors et al., 2019; Lucchini et al., 2021).
The exact physical mechanism responsible for jet launching is not clear yet. On one hand, the Blandford-Znajek mechanism (Blandford and Znajek, 1977) describes a way to extract the rotational energy of a spinning black hole and power relativistic jets that can be pair-plasma dominated (see, e.g., Broderick and Tchekhovskoy, 2015; Parfrey et al., 2019). On the other hand, magnetic fields anchored in the accretion disc can launch baryon/proton/ion-dominated jets via the Blandford-Payne mechanism (Blandford and Payne, 1982). The difference in jet composition from the two launching mechanisms would have an important impact on the interpretation of the spectral energy distribution (SED) observed from such black hole systems, as well as on the consideration of relativistic jets as candidate sources of cosmic rays (CRs).
CRs are charged particles that exhibit a large range of energies
going up to ultra-high energies of the order of \(10^{20}\) eV (The Pierre Auger Observatory et al., 2017; Abbasi et al., 2020). The detected CR spectrum shows two very prominent features, known as the "knee" and the "ankle", where the spectrum steepens and hardens, respectively. The "knee" is observed at \(10^{15}\) eV (PeV) and is likely to be the maximum energy that CR protons accelerated in Galactic sources can reach, but the identification of these particular sources remains a mystery despite decades of study. The "ankle", located at \(\sim 10^{18}\) eV (EeV), is where extragalactic sources are thought to start dominating the spectrum. The exact CR composition is not clear and strongly depends on the particle energy. GeV CRs consist primarily of protons (\(\sim 99\) per cent; Shikaze et al., 2007), with electrons and positrons mainly contributing the rest of the spectrum. It is likely that heavier elements/ions accelerated in Galactic sources start dominating the CR spectrum between the "knee" and the "ankle" (Aloisio et al., 2012), beyond which the composition is unclear (Abbasi et al., 2019; Yushkov et al., 2019; Corstanje et al., 2021).
Similar to large-scale jets of active galactic nuclei (AGN), which are among the dominant candidate sources of the extragalactic CRs (Protheroe and Kazanas, 1983), recent studies suggest the small-scale jets of BHXBs as potential CR acceleration sites (Romero et al., 2003; Fender et al., 2005; Cooper et al., 2020). There are currently only a few tens of Galactic BHXBs detected in the Milky Way (Tetarenko et al., 2016), but population-synthesis simulations (see, e.g., Olejak et al., 2020) suggest that a few thousand black holes likely reside in the Galactic disc, in agreement with the recent X-ray observations of the Galactic centre by Hailey et al. (2018) and Mori et al. (2021). Based on such observations, Cooper et al. (2020) proposed that a few thousand BHXBs are capable of contributing to the observed CR spectrum above the "knee".
Whether or not BHXB jets can indeed accelerate CRs up to the "knee", and AGN jets beyond the "ankle", strongly depends on two further issues: (1) can astrophysical jets, in general, accelerate particles to high energies, and (2) are astrophysical jets actually composed of protons and/or heavier elements? On the former, observations of non-thermal emission from radio bands (see, e.g., Lister et al., 2016) up to GeV/TeV \(\gamma\)-rays from both SMBHs (see, e.g., Lister et al., 2009) and BHXBs (see, e.g., Zanin et al., 2016) suggest that both classes of jets can efficiently accelerate particles. Numerous numerical studies, moreover, suggest that jets can indeed be viable sites of particle acceleration, either via shocks (Hillas, 1984) or via magnetic reconnection (Drenkhahn and Spruit, 2002; Guo et al., 2014; Sironi and Spitkovsky, 2014; Matthews et al., 2020).
The jet composition however remains an open question. The two different proposed launching mechanisms mentioned above yield an entirely different jet content at the base that significantly alters not only the jet dynamics, but the emitted spectrum as well (Petropoulou et al., 2019). A pair-dominated jet would allow only for leptonic processes, such as synchrotron and inverse Compton scattering (ICS; Blumenthal and Gould, 1970). A leptonic plus hadronic jet, on the other hand, allows for further non-thermal processes, when inelastic collisions occur between the accelerated protons and the cold flow or radiation (e.g., Mannheim, 1993; Rachen and Biermann, 1993; Mannheim and Schlickeiser, 1994; Rachen and Meszaros, 1998). Such hadronic processes can lead to the production of astrophysical neutrinos, but usually require a much larger jet energy budget than the leptonic ones, sometimes requiring super-Eddington jet powers (Bottcher et al., 2013; Liodakis and Petropoulou, 2020). Such super-Eddington powers challenge the accretion paradigm (Zdziarski and Bottcher, 2015), but they still seem feasible for relativistic AGN jets (Ghisellini et al., 2014).
Several BHXB jets, such as the peculiar case of SS433 or the prototypical Cygnus X-1, show evidence of baryonic jet content (Fabrika, 2004 and Gallo et al., 2005; Heinz, 2006, respectively). Both the compact objects of SS433 and Cygnus X-1 are accompanied by a high mass donor star that may be the source of the heavy composition through its stellar wind. There is evidence of baryon-loaded jets though, even in the case of a low-mass companion, such as the black hole candidate 4U 1630-47, based on iron emission lines (Diaz Trigo et al., 2013). The cases of MAXI J1820+070 (Tetarenko et al., 2021; Zdziarski et al., 2022), MAXI J1836-194 (Lucchini et al., 2021), XTE J1752-223, MAXI J1659-152, and XTE J1650-500 (Cao et al., 2021) on the other hand, favour a jet composition of the order of a few to a few tens of pairs per proton based on energetic arguments.
The composition is also difficult to constrain in extragalactic jets. Circular polarisation measurements indicate that the jets of the blazar 3C 279 are pair-dominated (Liodakis et al., 2021), and energetic arguments for the radio galaxy 3C 120 are consistent with a pair-dominated jet (Zdziarski et al., 2022). Celotti and Fabian (1993), on the other hand, based on very long baseline interferometry and spectral arguments for numerous sources, support an electron-proton plasma. The blazar TXS 0506+056, finally, due to its correlation with the high-energy neutrino IceCube-170922A, supports a baryonic content in its jets as well (Aartsen et al., 2018).
Currently, the state of the art for modelling jet launching and dynamics in a more _a priori_ way is high-resolution simulations that solve the magnetohydrodynamic equations in the general relativistic regime (GRMHD). Such simulations have furthered our understanding of the accretion-launching paradigm and have shown that a Poynting flux dominated outflow can convert a significant amount of its initial magnetic energy into kinetic energy to accelerate the bulk flow (McKinney, 2006; Komissarov et al., 2007; Tchekhovskoy et al., 2008, 2009; Komissarov et al., 2009). The same simulations have established that the accretion disc can significantly impact the spatial evolution of the jets not only at \(r_{g}\)-scale distances (\(r_{g}=GM_{\rm bh}/\rm c^{2}\), where \(M_{\rm bh}\) is the mass of the black hole), but also further out. In particular, Chatterjee et al. (2019, hereafter CLTM19) performed a series of high-resolution GRMHD simulations of strongly magnetised systems to better understand the loading of jets with matter from the wind of the accretion disc. When the jets propagate in a medium, pinch instabilities can occur at the interface between the jet and the ambient medium and give rise to eddies that eventually allow matter to entrain the jet (Eichler, 1993; Spruit et al., 1997; Begelman, 1998; Giannios and Spruit, 2006; CLTM19; Sironi et al., 2021). Such mass entrainment can significantly affect the jet kinematics and hence the non-thermal emission.
Such GRMHD simulations, though, usually make the ideal gas assumption and therefore cannot capture dissipative processes like particle acceleration self-consistently. Kinetic particle-in-cell (PIC) simulations, on the other hand, calculate the trajectories of individual particles from first principles, allowing for a more detailed and comprehensive understanding of relativistic outflows. Both GRMHD and PIC simulations, however, are computationally very expensive, and they cannot easily be compared to observations through statistical methods that explore the full parameter phase space.
In this work, we develop a new treatment for incorporating mass-loading and thus evolving compositions in jets, and apply it to a multi-zone jet model. This treatment is inspired by recent GRMHD simulations such as CLTM19, to explore jet composition and its impact on the total jet power as well as its electromagnetic emission. In particular, we build on the multi-zone jet model developed by
Markoff et al. (2005) that relies on the pioneering ideas of Blandford & Konigl (1979), Hjellming & Johnston (1988), and Falcke & Biermann (1995). After many developments, the latest version of the model is BHJet (Lucchini et al., 2022), a multi-zone jet model that better connects the jet acceleration and jet physical quantities to the radiative output. For the first time, we connect the physically motivated model BHJet with hadronic acceleration, accounting for self-consistent energy conservation. We further present HadJet, a multi-zone, lepto-hadronic, mass-loaded jet model. In this work, we discuss the main physical properties of both models and how HadJet can be used to address the jet-power crisis of lepto-hadronic models.
The paper is structured as follows. In Section 2 we describe the semi-analytical calculations for the magnetically accelerated jet accounting for both leptonic and hadronic acceleration and radiative processes. We present the results of the above jet model in Section 3. In Section 4, we describe the details of the mass-loaded jet model (HadJet) and present the results in Section 5. Finally, in Section 6 we discuss the implication of our new models on the proton power issue and conclude in Section 7.
## 2 Magnetically accelerated steady-state jets
We assume two initially cold, Poynting flux dominated jets of either leptonic or lepto-hadronic content, that accelerate up to some maximum velocity because of magnetic energy dissipation (Vlahakis & Konigl, 2003; McKinney, 2006; Komissarov et al., 2007). At the region where the bulk velocity reaches the maximum value (acceleration region henceforth, denoted by \(z_{\rm acc}\)), we further assume that energy is also dissipated to accelerate particles to non-thermal energies (Blandford & Rees, 1974; Begelman et al., 1984). With our formalism, we cannot capture whether the magnetic energy dissipates immediately to particle acceleration (as in the case of magnetic reconnection) or if magnetic energy dissipates to kinetic energy first and this extra kinetic energy dissipates to particle acceleration through shocks (Bogovalov & Tsinganos, 2005). We assume instead that the total energy of the jet is conserved at the particle acceleration region. From this point outwards along the jets, we assume a constant particle acceleration rate and discuss below how this assumption affects the evolution of both the jet velocity and magnetic field. In Table 1, we define all the parameters and their fiducial values (if applicable) that we use in this section.
### Jet dynamical properties
Based on both semi-analytical and numerical calculations, the bulk jet Lorentz factor \(\gamma\) is expected to scale approximately as \(z^{1/2}\), where \(z\) is the distance along the jet (Beskin & Nokhrina, 2006; McKinney, 2006). We parametrise the jet Lorentz factor following Lucchini et al. (2018) (see also Potter & Cotter, 2012)
\[\gamma(z\leq z_{\rm acc})=\gamma_{0}+(\gamma_{\rm acc}-\gamma_{0})\frac{z^{1/ 2}-z_{0}^{1/2}}{z_{\rm acc}^{1/2}-z_{0}^{1/2}}, \tag{1}\]
where \(\gamma_{0}\) is the initial Lorentz factor at the jet base, \(z_{0}\) is the distance of the jet base from the black hole, and \(\gamma_{\rm acc}\) is the maximum bulk Lorentz factor attained at \(z_{\rm acc}\). We assume that the jets launch initially with the speed of sound, which for a relativistic flow with adiabatic index 4/3 is equal to 0.43 c, or \(\gamma_{0}=1.11\) (Crumley et al., 2017).
The jets are thus set to be initially parabolic while they accelerate and become conical when they achieve \(\gamma_{\rm acc}\)(Komissarov et al., 2009). We express the cross-sectional radius of the jet along the jet axis as
\[r=r_{0}+(z-z_{0})\tan(\theta), \tag{2}\]
where \(r_{0}\) is the radius of the jet base and \(\theta\) is the opening angle of the jets. Based on very long baseline interferometry observations from the Monitoring of Jets in AGN with VLBA Experiments programme (MOJAVE; see, e.g., Pushkarev et al., 2009, 2017), we set the jet opening angle to be
\[\theta=\frac{0.15}{\gamma}. \tag{3}\]
Since the number of particles along the jet is conserved, we express the number density of leptons as
\[n=n_{0}\left(\frac{\gamma\beta}{\gamma_{0}\beta_{0}}\right)^{-1}\left(\frac{r} {r_{0}}\right)^{-2}, \tag{4}\]
where \(\beta\) is the jet velocity normalized to the speed of light and \(n_{0}\) is the initial number density. We calculate \(n_{0}\) by the power \(L_{\rm jet}\) injected at the jet base in the comoving frame
\[L_{\rm jet}=2\beta_{0}\gamma_{0}c\pi r_{0}^{2}\omega_{0} \tag{5}\]
where we account for two identical jets (hence the factor of 2), and \(n_{0}\) depends on \(L_{\rm jet}\) and the initial conditions of the jet base as written out below. We write the jet enthalpy \(\omega\) as (Falcke & Biermann, 1995; Crumley et al., 2017)
\[\omega=\rho c^{2}+U_{j}+P_{j}=\rho c^{2}+U_{\rm p}+P_{\rm p}+U_{\rm e}+P_{\rm e }+U_{B}+P_{B}, \tag{6}\]
where \(U_{j}=U_{\rm p}+U_{\rm e}+U_{B}\) is the total internal jet energy density and \(P_{j}=P_{\rm p}+P_{\rm e}+P_{B}\) is the total jet pressure. In the above equation, \(\rho\) is the jet mass density
\[\rho=n_{\rm p}\,m_{\rm p}+n_{\rm e}\,m_{\rm e}. \tag{7}\]
We express the number of protons in terms of the number of leptons as \(n_{\rm p}=n_{\rm e}/\eta_{e}\), where \(n_{\rm e/p}\) is the number density of leptons/protons, respectively, and \(\eta_{e}\geq 1\) is a free parameter that remains constant unless the jets are mass-loaded (see below).
For an ideal gas, we can write the pressure terms as
\[P_{\rm e,p}=(\Gamma_{\rm e,p}-1)\,U_{\rm e,p}, \tag{8}\]
where \(\Gamma_{\rm e,p}\) is the adiabatic index. For the rest of the paper, we assume a relativistic pair content (\(\Gamma_{\rm e}=4/3\)) at the jet base and a cold proton population (\(\Gamma_{\rm p}=5/3\)) up to the particle acceleration region (see below). For the pair temperatures considered in this work, the flow remains cold even if it is dominated by pairs at the base. For \(U_{B}=P_{B}=B^{2}/8\pi\), we write the jet enthalpy as
\[\omega=\rho c^{2}+\Gamma_{\rm p}U_{\rm p}+\Gamma_{\rm e}U_{\rm e}+\frac{B^{2}} {4\pi}. \tag{9}\]
We define the specific enthalpy of the gas as
\[h=\frac{U_{\rm g}+P_{\rm g}}{\rho c^{2}}=\frac{\Gamma_{\rm p}U_{\rm p}+\Gamma_{\rm e}U_{\rm e}}{\rho c^{2}}, \tag{10}\]
where we used equation (8). We calculate \(U_{\rm e,p}\) by computing the integral
\[U_{\rm e,p}=\int\frac{{\rm d}n_{\rm e,p}}{{\rm d}\varepsilon_{\rm e,p}}\,\varepsilon_{\rm e,p}\,m_{\rm e,p}c^{2}\,{\rm d}\varepsilon_{\rm e,p}, \tag{11}\]
where \(\varepsilon_{\rm e,p}\) is the Lorentz factor of the particles, but we can also
express the internal energy density in terms of the average total energy of the particles
\[U_{\rm e,p}\simeq(\langle\varepsilon_{\rm e,p}\rangle-1)\,n_{\rm e,p}m_{\rm e,p }c^{2}, \tag{12}\]
where \(\langle\varepsilon_{\rm e,p}\rangle\) is the average Lorentz factor of the pairs/protons of the jet segment (see below for calculation). This equation is more convenient than equation (11) for the following discussion, however we note that it might not be accurate enough if a significant fraction of the leptons accelerate to non-thermal energies, in particular in a hard power law with slope \(<2\).
A useful parameter to characterise the jets is the magnetisation. We define the magnetisation of a flow as the Poynting flux over the total energy flux (Nokhrina et al., 2015)
\[\sigma =\frac{B^{2}}{4\pi\left(\rho c^{2}+U_{\rm g}+P_{\rm g}\right)}\Rightarrow \tag{13}\] \[\sigma =\frac{B^{2}}{4\pi\rho c^{2}\left(1+h\right)}.\]
When the flow is cold (\(h\ll 1\)), the above definition reduces to the well-known expression of
\[\sigma_{c}\simeq\frac{B^{2}}{4\pi\rho c^{2}}. \tag{14}\]
Using equations (10) and (13), we rewrite the enthalpy of equation (9) as
\[\omega=\rho c^{2}(1+\sigma)(1+h). \tag{15}\]
We can plug this equation into equation (5) to calculate the particle number density at the jet base
\[n_{0}=\frac{L_{\rm jet}}{2\beta_{0}\gamma_{0}c\pi r_{0}^{2}\left(m_{\rm p}/ \eta_{e}+m_{\rm e}\right)c^{2}(1+\sigma_{\rm c})}. \tag{16}\]
We further use the relativistic Bernoulli's equation to express the conservation of energy flux along the jet axis (Konigl, 1980)
\[\gamma\frac{\omega}{\rho}={\rm constant}, \tag{17}\]
and from equation (15) we rewrite the above equation such as to define:
\[\mu\equiv\gamma\left(1+\sigma\right)(1+h), \tag{18}\]
where \(\mu\) is the normalised total energy flux and is conserved along the jets (unless the jets entrain mass; see below). In a cold jet where the specific enthalpy \(h\) is negligible, equation (18) simplifies to \(\mu\simeq\gamma\left(1+\sigma_{\rm c}\right)\). This well-known expression gives the maximum jet Lorentz factor once the majority of the Poynting flux has been converted to kinetic energy (\(\gamma_{\rm max}\simeq\mu\)). In this work, we keep the \(h\) term in our calculations because it is an estimate of the energy that the accelerated particles carry in each jet segment, and in numerous instances it can dominate both the magnetisation and the jet Lorentz factor.
While the jets accelerate between the launching point and the acceleration region \(z_{\rm acc}\), \(\mu\) remains constant. We write equation (18) at the jet base and equate it to the acceleration region and solve for the initial magnetisation
\[\gamma_{0}(1+\sigma_{0})(1+h_{0})=\gamma_{\rm acc}(1+\sigma_{ \rm acc})(1+h_{\rm acc})\Rightarrow \tag{19}\] \[\sigma_{0}=\frac{\gamma_{\rm acc}}{\gamma_{0}}\left(1+\sigma_{ \rm acc}\right)\left(\frac{1+h_{\rm acc}}{1+h_{0}}\right)-1,\]
and in general for every z below the acceleration region
\[\sigma(z\leq z_{\rm acc})=\frac{\gamma_{0}}{\gamma}\left(1+\sigma_{0}\right) \left(\frac{1+h_{0}}{1+h}\right)-1, \tag{20}\]
or
\[\sigma(z\leq z_{\rm acc})=\frac{\gamma_{\rm acc}}{\gamma}\left(1+\sigma_{\rm acc }\right)\left(\frac{1+h_{\rm acc}}{1+h}\right)-1. \tag{21}\]
With the magnetisation and the specific enthalpy at the acceleration region as free parameters (\(\sigma_{\rm acc}\) and \(h_{\rm acc}\), respectively), we set the initial magnetisation \(\sigma_{0}\) required for the flow to be Poynting flux dominated and to carry enough energy to efficiently accelerate particles to non-thermal energies. In particular, we use \(\sigma_{\rm acc}\) as a free parameter because this is the simplest way to force our semi-analytical model to have dissipated the majority of the magnetisation at the acceleration region, and we set \(h_{\rm acc}\) from equation (10) (see also the discussion on particle acceleration below). The initial specific enthalpy \(h_{0}\) is set by the free parameters at the jet base, and as we discuss below, it is negligible for the standard case of an initially cold jet that we study here (see subsection 3.1).
Above the acceleration region, we assume the toroidal component dominates the poloidal component of the magnetic fields similar to Blandford and Konigl (1979), so
\[B(z>z_{\rm acc})=B_{\rm acc}\left(\frac{z}{z_{\rm acc}}\right)^{-1}, \tag{22}\]
where \(B_{\rm acc}\) is the magnetic field strength at the acceleration region.
Based on equation (13), we generalize the expression of \(\sigma\) for every \(z\) above the acceleration region
\[\sigma(z\geq z_{\rm acc})=\sigma_{\rm acc}\frac{\rho_{\rm acc}(1+h_{\rm acc})} {\rho(1+h)}\left(\frac{z}{z_{\rm acc}}\right)^{-2}. \tag{23}\]
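As a numerical illustration of how equations (1)-(4) and (22) are evaluated along the outflow, the sketch below computes the jet profiles on a logarithmic grid in \(z\); the fiducial values of \(z_{0}\), \(z_{\rm acc}\), \(\gamma_{\rm acc}\), \(r_{0}\), \(n_{0}\) and \(B_{\rm acc}\) are illustrative assumptions rather than fitted parameters.

```python
import numpy as np

# Illustrative fiducial values (assumptions, not fitted parameters):
z0, z_acc = 2.0, 1.0e3   # jet base and acceleration region [r_g]
g0, g_acc = 1.11, 4.0    # initial and terminal bulk Lorentz factors
r0, n0 = 10.0, 1.0e14    # base radius [r_g] and lepton density [cm^-3]
B_acc = 1.0e4            # magnetic field at z_acc [G]

z = np.logspace(np.log10(z0), 7.0, 500)

# Equation (1): gamma grows as z^(1/2) up to z_acc, then saturates.
gamma = np.where(z <= z_acc,
                 g0 + (g_acc - g0) * (np.sqrt(z) - np.sqrt(z0))
                     / (np.sqrt(z_acc) - np.sqrt(z0)),
                 g_acc)
beta = np.sqrt(1.0 - 1.0 / gamma**2)
beta0 = np.sqrt(1.0 - 1.0 / g0**2)

theta = 0.15 / gamma                      # equation (3), opening angle
r = r0 + (z - z0) * np.tan(theta)         # equation (2), with the local angle
n = n0 / ((gamma * beta) / (g0 * beta0) * (r / r0) ** 2)  # equation (4)
B = B_acc * (z_acc / z)                   # equation (22), valid for z > z_acc
```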
### The acceleration region and particle acceleration
We assume that the pairs at the jet base follow a Maxwell-Juttner distribution (MJ; the relativistic regime of the Maxwell-Boltzmann distribution) with a peak energy \(k_{\rm B}T_{\rm e}\) that is a free parameter. The population of protons on the other hand is cold, making the flow cold at the launching point.
By the time the flow reaches the acceleration region, the Poynting-flux-dominated flow has dissipated its magnetic energy, and hence the magnetisation has dropped to a value \(\sigma_{\rm acc}\). In the same region, we assume that a constant fraction \(f_{\rm pl}\sim 0.1\) of particles accelerates into a non-thermal power law between a minimum and a maximum energy. For the leptonic scenario, we assume that only pairs accelerate in a power law from an energy \(\varepsilon_{\rm min}m_{\rm e}c^{2}=k_{\rm B}T_{\rm e}\) to some \(\varepsilon_{\rm max}\) that we calculate self-consistently by equating the acceleration timescale \(4\varepsilon mc^{2}/(3f_{\rm sc}ecB)\), with \(m\) the particle mass, to the escape timescale (Jokipii, 1987; Aharonian, 2004). The acceleration efficiency \(f_{\rm sc}\) depends on the particle acceleration mechanism, but we fix it at a value between 0.01 and 0.1, leading to a maximum electron energy of the order of GeV for the case of a BHXB. For the lepto-hadronic scenario, we assume that protons accelerate as well in a power law from \(\varepsilon_{\rm min}=1\) to some \(\varepsilon_{\rm max}\) that we calculate by equating the acceleration timescale to the (lateral) escape timescale \(r/c\) of the jet segment; for the case of BHXBs, \(\varepsilon_{\rm max}\) may attain values of the order of 100 TeV and above (Pepe et al., 2015; Kantzas et al., 2021, 2022). We constrain the non-thermal particle distributions by assuming that they extend up to the maximum energy and then drop exponentially
\[\frac{{\rm d}n\left(\varepsilon\right)}{{\rm d}\varepsilon}=K\varepsilon^{-p}\,\exp\left(-\varepsilon/\varepsilon_{\rm max}\right), \tag{24}\]
where \(n\) is the particle number density for any species, \(K\) is the normalisation, and the slope \(p\) of the power law depends on the particle acceleration mechanism, but we use it as a free parameter between
1.7 and 2.4, assuming it remains the same between electrons and protons.
Finally, we derive the average Lorentz factor for every species from the equation
\[\langle\varepsilon\rangle=\frac{\int\varepsilon\,\frac{\mathrm{d}n}{\mathrm{d}\varepsilon}\,\mathrm{d}\varepsilon}{\int\frac{\mathrm{d}n}{\mathrm{d}\varepsilon}\,\mathrm{d}\varepsilon}. \tag{25}\]
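As a numerical illustration of equations (24) and (25), the sketch below evaluates \(\langle\varepsilon\rangle\) for a power law with an exponential cutoff by direct integration; the normalisation \(K\) cancels in the ratio, and the chosen values of \(p\), \(\varepsilon_{\rm min}\) and \(\varepsilon_{\rm max}\) lie within the fiducial ranges quoted above.

```python
import numpy as np
from scipy.integrate import quad

def mean_lorentz_factor(p, eps_min, eps_max):
    """<eps> of dn/deps ~ eps^-p exp(-eps/eps_max); K cancels in the ratio."""
    dn = lambda eps: eps ** (-p) * np.exp(-eps / eps_max)   # equation (24)
    num, _ = quad(lambda eps: eps * dn(eps), eps_min, 50.0 * eps_max)
    den, _ = quad(dn, eps_min, 50.0 * eps_max)
    return num / den                                        # equation (25)

# Example: protons with p = 2.2 accelerated between eps_min = 1 and eps_max = 1e2.
print(mean_lorentz_factor(2.2, 1.0, 1.0e2))
```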
### Jet evolution and particle acceleration
Beyond the acceleration region where particles accelerate to non-thermal energies as well, the specific enthalpy can become important because the average Lorentz factors of pairs and/or protons may have significantly increased (see equation 10). We write the bulk Lorentz factor for every jet segment above the acceleration region for an outflow from equation (18):
\[\gamma(z)=\gamma_{\mathrm{acc}}\left(\frac{1+h_{\mathrm{acc}}}{1+h}\right) \left(\frac{1+\sigma_{\mathrm{acc}}}{1+\sigma}\right). \tag{26}\]
### Radiative Processes
We refer interested readers to Lucchini et al. (2022) for further details on the leptonic radiative processes, and to Kantzas et al. (2021) for the hadronic ones. We nevertheless briefly discuss the main processes here for completeness.
#### 2.4.1 Leptonic processes
The main three radiative processes of leptonic nature that we require in our analysis here are: synchrotron radiation, inverse Compton scattering (ICS) and pair production. In particular, the thermal pairs of the MJ distribution and the non-thermal power-law tail above the dissipation region, lose energy due to cyclo-synchrotron radiation (Blumenthal & Gould, 1970; Rybicki & Lightman, 2008). We only account for the average magnetic field strength of the particular jet segment and assume an isotropic distribution of pitch angles that we average over.
We further account for the ICS between the pairs and the radiation fields of the outflow (Blumenthal & Gould, 1970; Rybicki & Lightman, 2008). In particular, in this work we neglect any external photon field and only allow for ICS between the emitting pairs and the synchrotron photons (synchrotron self-Compton; SSC). Plausible external photon fields may be important in the case of AGN jets, but for study cases such as the BHXBs we discuss in this work, we have shown in previous works that the external photon fields are not critical (see, e.g., Lucchini et al., 2021; Kantzas et al., 2021; however, see also Zdziarski et al., 2014 and Zacharias et al., 2022 for cases where the external photon fields may be important to explain the \(\gamma\)-ray spectrum). For simplicity, we also neglect any accretion disc in the following discussion, but we do account for it when examining particular sources, following Lucchini et al. (2022). For the ICS processes, we account for the Klein-Nishina regime when necessary, and allow for multiple scatterings to better capture the evolution of the exponential cutoff. This particular process is the most computationally expensive amongst the leptonic ones; we hence choose to neglect it when its radiative output becomes \(10^{4}\) times smaller than the synchrotron counterpart for the particular segment.
The final process of leptonic nature that we account for is photon-photon annihilation into pair production and its inverse (Coppi & Blandford, 1990). These two processes are usually negligible, so we do not mention them unless we discuss their impact on the particle population or the spectrum (see, e.g., Connors et al., 2019).
#### 2.4.2 Hadronic Processes
We account for both proton-proton (pp) and proton-photon (p\(\gamma\)) processes, in which accelerated protons interact with the cold protons of the flow and with the jet radiation, respectively. In particular, we use the semi-analytical parametrisation of Kelner et al. (2006) for the pp interactions, and that of Kelner & Aharonian (2008) for the p\(\gamma\) interactions. The above analysis provides the resulting distributions of secondary particles (pions that decay into muons, which in turn decay into neutrinos, pairs and \(\gamma\)-rays) and hence cannot account for any synchrotron radiation of muons and/or pions; for the systems we examine here, we find that this is not required. We do, however, consider the cyclo-synchrotron radiation of secondary pairs due to the presence of the magnetic field.
In our particular analysis, we find that the synchrotron photons produced by the primary pairs act as the target for the p\(\gamma\) interactions. Based on this analysis, we can also produce the neutrino counterpart in a self-consistent manner (Kantzas et al. in prep).
## 3 Results for the steady-state jets
We first present the results of the analysis of the model in which we do not yet account for any mass entrainment. With this flavour of the model, we try to better understand and constrain the number of leptons in the jets with respect to the number of protons, \(\eta_{e}\). We further present the jet dynamical properties and the corresponding multiwavelength spectra before comparing them to those obtained when we account for mass-loading.
### Specific enthalpy and particle acceleration
We can express equation (10) as
\[h=\frac{\Gamma_{\mathrm{e}}(\langle\varepsilon_{\mathrm{e}}\rangle-1)+\Gamma _{\mathrm{p}}(\langle\varepsilon_{\mathrm{p}}\rangle-1)\frac{\mathrm{m}_{ \mathrm{p}}/\mathrm{m}_{\mathrm{e}}}{\eta_{e}}}{1+\frac{\mathrm{m}_{\mathrm{ p}}/\mathrm{m}_{\mathrm{e}}}{\eta_{e}}}, \tag{27}\]
where we used equations (7), (12), and \(n_{\mathrm{p}}=n_{\mathrm{e}}/\eta_{e}\).
From the above equation, we see that the specific enthalpy depends merely on the ratio between pairs and protons. Moreover, we see that \(h\) strongly depends on any mechanism (acceleration or cooling) that would significantly change the average Lorentz factor of the particles.
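The dependence explored in Fig. 1 can be reproduced directly from equation (27); in the minimal sketch below, the mean Lorentz factors of the two species are input assumptions chosen for illustration.

```python
import numpy as np

M_RATIO = 1836.15  # proton-to-electron mass ratio m_p / m_e

def specific_enthalpy(eta_e, eps_e_mean, eps_p_mean,
                      gamma_e=4.0 / 3.0, gamma_p=5.0 / 3.0):
    """Equation (27): h as a function of the pair-to-proton ratio eta_e."""
    x = M_RATIO / eta_e
    return (gamma_e * (eps_e_mean - 1.0)
            + gamma_p * (eps_p_mean - 1.0) * x) / (1.0 + x)

eta_e = np.logspace(0.0, 6.0, 200)
# Example: efficient hadronic acceleration, with assumed mean Lorentz factors.
h = specific_enthalpy(eta_e, eps_e_mean=10.0, eps_p_mean=50.0)
```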
In Fig. 1, we plot the specific enthalpy \(h\) as a function of the pair-to-proton ratio \(\eta_{e}\) for various values of \(\langle\varepsilon_{\rm e}\rangle\) and \(\langle\varepsilon_{\rm p}\rangle\). Both \(\langle\varepsilon_{\rm e}\rangle\) and \(\langle\varepsilon_{\rm p}\rangle\) depend on the power-law slope of the accelerated particles, as well as on the minimum and the maximum particle energy. We let \(\eta_{e}\) range between a few and \(10^{6}\), although the largest values are extreme and perhaps not physically expected. A jet with more protons than leptons (\(\eta_{e}<1\)) would be positively charged and hence is unphysical. On the other hand, a very large number of pairs per proton would make it difficult to explain the observed Lorentz factors on parsec scales (Ghisellini & Tavecchio, 2010).
In the top left plot of Fig. 1, where no protons accelerate at all, and in particular in the case of an approximately equal number of pairs and protons (\(\eta_{e}\sim 1\)), we see that the specific enthalpy is significantly smaller than unity (\(h\ll 1\)).
Figure 1: The jet specific enthalpy \(h\) as a function of the jet content \(\eta_{e}=n_{e}/n_{p}\). In all plots, we assume a soft non-thermal power law with \(p=2.2\) to derive the average particle Lorentz factors from equation (25). The color-map corresponds to the average Lorentz factor of electrons, with lighter colors to indicate more efficient acceleration. In the _left_ column and for a less efficient electron acceleration, the minimum Lorentz factor of the pairs is \(\varepsilon_{\rm e,min}=1.5\), whereas in the _right_ column with a more efficient electron acceleration, \(\varepsilon_{\rm e,min}=10\). In the _top_ panels, we assume only leptonic acceleration, in the _middle_, we assume non-efficient hadronic acceleration with \(\varepsilon_{\rm p,min}=1\) and \(\varepsilon_{\rm p,max}=100\), and in the _bottom_ panels, we assume efficient hadronic acceleration with \(\varepsilon_{\rm p,min}=10\) and \(\varepsilon_{\rm p,max}=10^{7}\). The vertical lines correspond to \(\eta_{e}=\rm{m_{p}/m_{e}}\). Overall, the specific enthalpy \(h\) may attain values greater than unity and may hence significantly alter the jet kinematics. See the text for more details.
is significantly smaller than unity (\(h\ll 1\)). This is in agreement with the initial setups of GRMHD simulations where the specific enthalpy is usually neglected (McKinney, 2006; Komissarov et al., 2007). In the other regime, where the flow is dominated by pairs (\(\eta_{e}\gtrsim 10^{3}\)), we see that \(h\sim\Gamma_{e}\langle\varepsilon_{\rm e}\rangle\) (equation 27). In the top right plot of Fig. 1, where we assume \(\varepsilon_{\rm e,min}=10\), we see a similar evolution of \(h\) with \(\eta_{e}\). The main difference is that \(\langle\varepsilon_{\rm e}\rangle\) goes to larger values, hence \(h\) goes to larger values as well. From both plots, we see that for a purely leptonic flow, the specific enthalpy is not negligible and, in fact, it can be as important as the magnetisation and the kinetic energy in the evolution of the jets (as discussed below).
In the middle plots of Fig. 1, where protons accelerate in a similar power law to the accelerated pairs, we see a significantly different evolution of \(h\) for different jet content. In particular, in the case where \(\varepsilon_{\rm e,min}=1\) and \(\varepsilon_{\rm p,min}=1\) (middle left plot), we see that for an equal pair-to-proton jet content (\(\eta_{e}=1\)), \(h\) is driven by the accelerated protons and in fact, \(h\sim\Gamma_{p}\langle\varepsilon_{\rm p}\rangle\) (see equation 27). In the regime of a purely leptonic flow (\(\eta_{e}\gg 1\)), we see that \(h\sim\Gamma_{e}\langle\varepsilon_{\rm e}\rangle\) and depending on whether \(\langle\varepsilon_{\rm e}\rangle>\langle\varepsilon_{\rm p}\rangle\) or \(\langle\varepsilon_{\rm e}\rangle<\langle\varepsilon_{\rm p}\rangle\), \(h\) will increase or decrease, respectively. On the right-hand side of the middle panels of Fig. 1, we get larger values of \(\langle\varepsilon_{\rm e}\rangle\) because of the larger value of \(\varepsilon_{\rm e,min}\) (for the particular \(p=2.2\)), and hence the specific enthalpy may attain significantly larger values, of the order of \(\Gamma_{e}\langle\varepsilon_{\rm e}\rangle\).
In the bottom plots of Fig. 1, where protons accelerate in a power law from \(\varepsilon_{\rm p,min}=10\), we see that a flow of \(\eta_{e}\sim 1\) has a significant fraction of energy in the specific enthalpy because \(h\sim\Gamma_{p}\langle\varepsilon_{\rm p}\rangle\sim 90\). In the purely leptonic regime (\(\eta_{e}\gg 1\)), we see that \(h\) can drop to values smaller than 10 depending on the average Lorentz factor of the pairs. In the case where pairs accelerate in a power law from an energy as high as \(10\,\mathrm{m_{e}c^{2}}\) (right-hand plot of the lowermost panels of Fig. 1), the energy content in the specific enthalpy remains significant for both \(\eta_{e}\sim 1\) and \(\eta_{e}\sim 10^{6}\).
From Fig. 1, we overall see that the specific enthalpy of a flow that accelerates particles can be important in the evolution of the flow (see also discussion below). In the case where only pairs accelerate in the jets and for an equal number of electrons and protons as is commonly assumed in GRMHD (left-hand side of the uppermost panels, and in particular for \(\eta_{e}=1\)), we see that the specific enthalpy is indeed negligible (\(h\ll 1\)). In any other case where both pairs and protons accelerate in the jets, and regardless of the jet content (either pair-dominated or equal pair-to-proton content), the specific enthalpy of the flow might be of the order of a few-to-tens, and hence it is important for the evolution of the flow (see also discussion of CLTM19).
In Appendix A we discuss the evolution of \(h\) in the case of a hard power law of accelerated particles with index \(p=1.7\). Such hard values, resulting from efficient particle acceleration, e.g., in magnetic reconnection regions (Sironi et al., 2015; Ball et al., 2018) or relativistic shocks (see e.g. Bottcher & Baring, 2019), lead to even larger values of \(h\), of the order of thousands. Such large values of \(h\), along with the large bulk Lorentz factors observed in relativistic outflows in AGN and GRBs, would lead to significantly larger values of the total energy flux \(\mu\) compared to those in the literature (Komissarov et al., 2007, 2009; Petropoulou et al., 2022). Furthermore, equation (18), which has broadly been used to provide an estimate for the maximum bulk Lorentz factor when
\begin{table}
\begin{tabular}{l c c l c} \hline \hline Parameter & Units & Fiducial value(s) & Definition & Equation \\ \hline \(z\) & \(r_{\rm g}\) & – & distance from the black hole along the jet axis & – \\ \(z_{0}\) & \(r_{\rm g}\) & 6 & distance of the jet base from the black hole & – \\ \(\gamma\) & – & \(1-3\) & bulk Lorentz factor of the flow & 1 \\ \(\gamma_{0}\) & – & \(1.1\) & bulk Lorentz factor at the jet base & – \\ \(r\) & \(r_{\rm g}\) & – & cross-sectional radius of the flow & 2 \\ \(\theta\) & rad & – & jet opening angle & 3 \\ \(n\) & cm\({}^{-3}\) & – & jet (total) particle number density & 4 \\ \(n_{0}\) & cm\({}^{-3}\) & – & jet number density at the jet base & 16 \\ \(n_{\rm e}\) & cm\({}^{-3}\) & – & jet pair number density & – \\ \(n_{\rm p}\) & cm\({}^{-3}\) & – & jet proton number density & – \\ \(\rho\) & g cm\({}^{-3}\) & – & jet mass density & 7 \\ \(\omega\) & erg cm\({}^{-3}\) & – & total jet enthalpy & 9 \\ \(h\) & – & – & jet specific enthalpy & 10 \\ \(\sigma\) & – & – & magnetisation of the flow & 13 \\ \(\sigma_{0}\) & – & \(1-100\) & magnetisation of the flow at the jet base & 19 \\ \(\mu\) & – & \(1-100\) & normalised total jet energy flux & 18 \\ \(\langle\varepsilon_{\rm e,p}\rangle\) & – & \(1-100\) & particle average Lorentz factor & 25 \\ \hline \(z_{\rm acc}\) & \(r_{\rm g}\) & \(10^{3}\) & location where jet acceleration reaches the max value & free parameter \\ \(\gamma_{\rm acc}\) & – & 3 & maximum Lorentz factor of the flow at \(z_{\rm acc}\) & free parameter \\ \(r_{0}\) & \(r_{\rm g}\) & \(10-10^{2}\) & jet base radius & free parameter \\ \(L_{\rm jet}\) & \(L_{\rm Edd}\) & \(0.002\)-\(0.02\) & injected jet power at the jet base & free parameter \\ \(\eta_{e}\) & – & \(1-10^{6}\) & jet pair-to-proton content & free parameter \\ \(\sigma_{\rm acc}\) & – & \(0.1\) & magnetisation of the flow at the acceleration region & free parameter \\ \(k_{B}T_{e}\) & keV & – & electron peak energy at the jet base & free parameter \\ \hline \end{tabular}
\end{table}
Table 1: The definition of the jet quantities we use in this work with their units, some fiducial values (if applicable), the equation number where we define the parameter or whether it is a free parameter. See Sections 2 and 4 for further information.
the magnetic energy has been converted into kinetic energy, would not hold anymore and a more careful treatment where the specific enthalpy is calculated from first principles is needed.
### Total energy flux evolution for steady state jets
In Fig. 2, we plot the evolution of \(\mu\) along the jets with the different components: magnetisation (\(\sigma\)), bulk Lorentz factor (\(\gamma\)) and specific enthalpy (\(h\)). In the left plots of Fig. 2, we assume a jet content of equal number of leptons and protons (\(\eta_{e}=1\)) and in the right plots we assume a pair-dominated outflow (\(\eta_{e}=10000\)). In the top panels, we assume that only leptons accelerate to non-thermal energies, whereas in the bottom panels, we assume that hadrons accelerate as well in a power law with the same index.
In the top left panel, where we account only for leptonic acceleration with \(\langle\varepsilon_{\rm e}\rangle=6\), we see that the initial magnetisation of the outflow converts to bulk kinetic energy whereas the magnetisation drops to \(\sigma_{\rm acc}=0.1\) (a free parameter). The specific enthalpy starts as negligible at the cold jet base (\(h_{0}\ll 10^{-2}\)) and remains insignificant for the jet evolution above the particle acceleration region \(z_{\rm acc}\). This particular regime, where the specific enthalpy is insignificant and the jet composition is one lepton per proton, is the regime considered by most GRMHD simulations (see also Section 4), and in fact is the only regime that BHJet can probe self-consistently so far (Lucchini et al., 2022). With the current improvement of this work, we can now further explore the jet kinematics in other regimes where the distribution of the internal energy density is important in the evolution of the jet dynamics and the electromagnetic spectrum.
In the top right panel, where we assume a pair-dominated jet (\(\eta_{e}\gg 1\)) that accelerates only leptons, we see that the initial magnetisation converts almost equally to bulk kinetic energy and internal energy (\(h\) is now comparable to \(\gamma\)). The initial specific enthalpy at the jet base is larger compared to the previous case and, based on equation (18), we see that \(\mu\) has also significantly increased (see also Section 3.1).
In the bottom left panel of Fig. 2 where we account for hadronic acceleration with \(\langle\varepsilon_{\rm p}\rangle=4\), we see that the initial magnetisation dissipates almost equally to kinetic and internal energy. The initial specific enthalpy is negligible at the cold jet base but when particles accelerate at the acceleration region, \(h\) increases to values comparable to \(\gamma\). Finally, in the bottom right panel where the jet is pair-dominated, we see that the specific enthalpy at the jet base is of the order of 1 but still much smaller than the initial magnetisation.
Overall, according to the approach we follow here, Fig. 2 shows that \(h\) can be significant for the jet evolution depending on the hadronic acceleration and the jet content. The former, in particular, strongly depends on the jet properties, but we cannot capture this non-linear behaviour of the jet evolution, its effect on the particle acceleration and the consequent feedback of particle acceleration back to the jet evolution without significantly increasing the computational cost of the model. However, we can still investigate the jet properties to gain a better insight on jet physics.
In Appendix B we present a more detailed series of jet evolution for various jet quantities and different average particle Lorentz factors. Overall, we find that for many physical scenarios, the specific enthalpy becomes important for the jet evolution, especially in the case where hadrons accelerate in the jets as well, and for pair-dominated outflows (see also section 3.1).
### Electromagnetic spectrum of steady state jets
We plot in Fig. 3 the multiwavelength spectra that correspond to the four different models of Fig. 2. In particular, in the top panels we plot the purely leptonic scenarios, whereas in the bottom we plot the lepto-hadronic models. For the left plots, we assume one proton per electron (\(\eta_{e}=1\)), whereas on the right plot we examine the extreme case of \(\eta_{e}=10^{4}\).
For all four panels, we assume a quite "warm" MJ distribution of leptons with \(k_{B}T_{e}=1000\,\)keV, an initial jet radius of \(10\,r_{g}\) in which we inject some power equal to \(10^{-2}\,L_{\rm Edd}\) for the leptonic models, and \(10^{-3}\) for the lepto-hadronic ones. The particle acceleration that happens at \(1000\,r_{\rm g}\) leads to a power law of particles with an index of \(2.2\). In all panels, we show the contribution to the spectrum of the jet segments before the dissipation region (yellow-shaded) and above (blue-shaded). For the lepto-hadronic model of the bottom panels, we include the hadronic contribution as green-shaded. Finally, the densely-dashed line shows the synchrotron contribution, whereas the loosely-dashed line corresponds to the ICS.
In the top left panel of Fig. 3, we see that the emission from the thermal electrons dominates in the UV and X-ray bands, whereas the outer jets dominate in the radio bands via synchrotron radiation, and in the GeV with ICS. In the case where we assume an increased ratio of pairs (top right panel), for the same initial conditions we once more see the emission from the thermal pairs dominating the UV/X-ray bands, but the X-ray luminosity is increased because the initial pair number density has increased (see equation 16).
In the lepto-hadronic cases of the bottom panels of Fig. 3, we see that the pair content may significantly affect the SED, and in particular the high-energy part. For the case of one proton per lepton, we see that the GeV-to-TeV spectrum first drops exponentially due to the synchrotron emission of the primary pairs, but later increases due to the hadronic contribution of the p\(\gamma\) interactions. The ICS contribution in this particular case is well below the hadronic contribution (loosely-dashed line). In the pair-dominated jet of the right-hand panel, we see that the increased number of pairs leads to a stronger GeV-to-TeV flux that dominates over the hadronic contribution.
## 4 Mass loaded jets
High-resolution GRMHD simulations of accreting black holes that launch highly collimated jets suggest that a significant portion of the wind from the accretion disc might end up in the jet via entrainment. While the jets accelerate in a dense surrounding medium, they are subject to lateral pressure from the wind of the accretion disc that results in jet-wind collisions, causing the jet to wobble. Pinch instabilities form at the jet-wind interface close to the black hole, almost independently of the initial magnetisation of the jet, as long as it starts out Poynting flux dominated. These instabilities dissipate magnetic energy into heat and increase the specific enthalpy of the jet (see, e.g., Eichler, 1993; Bowman et al., 1996; Spruit et al., 1997; Begelman, 1998; Giannios & Spruit, 2006; Bromberg & Tchekhovskoy, 2015).
Interestingly, two properties of a collimated jet change at distances \(\sim 10^{2}-10^{3}\,r_{g}\): (1) the toroidal component of the magnetic field starts to dominate over the poloidal component, and (2) the jet speed exceeds the local fast magnetosonic wave speed, i.e., becomes superfast, the magnetic analogue of the fluid becoming supersonic (CLTM19). Beyond this region, the jet becomes more susceptible to instabilities forming at the interface between the flow and the
ambient medium. In particular, magnetic pinch instabilities lead to the formation of eddies that trap matter from the wind and drive it inwards through the jet-wind interface, allowing for mass entrainment (Mignone et al., 2013; Gourgouliatos and Komissarov, 2018; Bodo et al., 2021). Without such eddies, significant mass entrainment into the jet from the external medium may not be possible due to the jet's strong magnetic field. Hence, we link the region where the mass loading becomes important because of instabilities explicitly to the region where non-thermal particle acceleration occurs. Following the results of CLTM19, we connect this region to the first particle acceleration region of jets as originally proposed by Markoff et al. (2005) and Polko et al. (2014).
\begin{table}
\begin{tabular}{l c l l} \hline \hline Parameter & Fiducial value(s) & Definition & Status \\ \hline \(\gamma_{0}\) & \(1.11\) & bulk Lorentz factor at the jet base \(z_{0}\) & fixed \\ \(\sigma_{0}\) & \(10-50\) & magnetisation of the flow at the jet base & free \\ \(k_{B}T_{e}\) /keV & \(500\) & electron peak energy at the jet base & free \\ \(\gamma_{\rm acc}\) & \(2-10\) & bulk Lorentz factor at \(z_{\rm acc}\) & free \\ \(h_{\rm acc}\) & \({h_{0}}^{\dagger}\) & jet specific enthalpy at \(z_{\rm acc}\) & fixed \\ \(f_{\rho}\) & \(10\) & jet mass density increase factor & fixed \\ \(z_{\rm diss}/r_{g}\) & \(100\) & region where the mass entrainment initiates\({}^{\ddagger}\) & free \\ \(z_{\rm load,end}/z_{\rm diss}\) & \(100\) & region where the mass entrainment finishes & free \\ \hline \end{tabular}
\end{table}
Table 2: The fixed and the free (fitted) parameters that drive the mass-loading jet dynamics. See Section 4 for further information.
\({}^{\dagger}\)calculated from the temperature of the electrons at the jet base (see equation 10),
\({}^{\ddagger}\)same as \(z_{\rm acc}\).
Figure 2: The jet energy components \(\gamma\) (the bulk Lorentz factor), \(\sigma\) (the magnetisation), and \(h\) (the specific enthalpy) that follow the relation \(\mu=\gamma(1+\sigma)\,(1+h)\) (equation 18). In all plots we use \(z_{0}=6\,r_{g}\), \(z_{\rm acc}=10^{3}\,r_{g}\), \(\gamma_{\rm acc}=3\), \(\sigma_{\rm acc}=0.1\) and \(\langle{\varepsilon_{\rm e}}\rangle=6\) (see Table 1 for definitions). We show a pair/proton flow with \(\eta_{\rm e}=1\) in the _left_ column and a pair-dominated flow with \(\eta_{\rm e}=10000\) in the _right_ column. In the _top_ panels, we only account for leptonic acceleration and in the _bottom_ panels, we consider hadronic acceleration as well with \(\langle{\varepsilon_{\rm p}}\rangle=4\). The specific enthalpy \(h\) leads to different jet dynamical quantities depending on whether hadronic acceleration takes place and on the jet content.
In this work, we parametrise the fiducial model B10 of CLTM19 to derive a semi-analytical formalism that connects the mass loading region to the particle acceleration region, and study its impact on the emitted electromagnetic spectrum by studying both the leptonic and the hadronic processes we discussed above. We consider B10 for our problem because the jet undergoes strong collimation out to very large scales. The other models explored in CLTM19 have too small an accretion disc, such that there is hardly any lateral pressure from the disc wind. The jets therefore become uncollimated and thus conical within \(1000\,r_{g}\), and hence do not properly represent the highly collimated, parabolic, large-scale jets we are targeting. Further, the fast lateral expansion of an uncollimated jet suppresses pinch instabilities (Moll et al., 2008; Granot et al., 2011; Porth and Komissarov, 2015) and thus exhibits little to no mass loading (CLTM19).
CLTM19 confirm that the magnetic energy converts to kinetic energy, accelerating the jets similarly to what was found in previous works (McKinney, 2006; Komissarov et al., 2007, 2009). When matter is entrained by the jets, further magnetic energy is dissipated to heat up the jet, and the inertia of the entrained gas slows down the jet. The mass entrainment leads to a decrease of the total (specific) energy flux \(\mu\) along the jets up to the distance where the mass loading stops. Beyond distances of a few \(10^{4}\,r_{g}\), the CLTM19 jet properties have not achieved steady state, as the slow, mass-loaded jet is still punching through the ambient medium at this point of time in the GRMHD simulation. Indeed, the simulations suggest that the jet slowdown due to mass loading suppresses pinch instabilities further along the jet, and therefore mass loading becomes considerably weaker beyond \(10^{4}\,r_{g}\). As a result, when the simulated jets attain steady state out to \(\gtrsim 10^{5}\,r_{g}\), we expect that \(\mu\) should be conserved for the rest of the jets and there would be jet re-acceleration while both the magnetisation and the specific enthalpy decrease.
Inspired by the simulation results, our semi-analytical "mass-loaded" jet model assumes that the mass loading initiates at a distance \(z_{\rm diss}\sim 100\,r_{g}\) and ends at \(100\,z_{\rm diss}\), with a net increase of \(f_{\rho}=10\) in the jet mass density. Beyond \(100\,z_{\rm diss}\), we assume a
Figure 3: The predicted spectral energy distributions for the four models of Fig. 2. In the _top_ panels, we only account for leptonic acceleration, and in the _bottom_ ones, we consider both leptonic and hadronic. In the two _left_ plots, we assume one proton per electron (\(\eta_{e}=1\)) and in the _right_ ones we assume \(\eta_{e}=10^{4}\). In all four panels, we use \(k_{B}T_{e}=500\,\)keV, and \(z_{\rm diss}=1000\,r_{g}\) for a \(10\,{\rm M}_{\odot}\) BHXB at \(3\,\)kpc. We also assume \(L_{\rm jet}=2\times 10^{-2}\,L_{\rm Edd}\) for the leptonic scenarios and \(L_{\rm jet}=2\times 10^{-3}\,L_{\rm Edd}\) for the hadronic. The aforementioned values lead to \(\langle\varepsilon_{\rm e}\rangle=5\). We highlight the contribution of the jet-segments before the dissipation region (yellow shaded) and that of the jet-segments above the dissipation region (blue shaded). We show the synchrotron emission with densely-dashed green line, and the contribution of the ICS with loosely-dashed blue line. Finally, the green shaded region is the hadronic contribution where we include both neutral pion decay and the synchrotron radiation of the secondary electrons.
constant \(\mu\) and steady jet acceleration. In Fig. 4, we plot the mass density of a mass loaded jet (solid line) and compare it to a non-loaded steady state jet, assuming one proton per lepton. We show the resulting energy components (\(\gamma\), \(\sigma\), and \(h\)) of the B10 model of CLTM19 in Fig. 5 with dashed lines, and below, we discuss the way we parametrise these quantities.
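To make the assumed density increase of Fig. 4 concrete, here is a small numerical sketch of a multiplicative loading factor applied to the unloaded density profile. Only the end points are fixed by the text (unity at \(z_{\rm diss}\) and a factor \(f_{\rho}=10\) at \(100\,z_{\rm diss}\)); the log-linear ramp in between, and the function name, are assumptions for illustration.

```python
import numpy as np

def loading_factor(z, z_diss=100.0, f_rho=10.0, decades=2.0):
    """Multiplicative increase of the jet mass density due to entrainment:
    unity below z_diss, reaching f_rho at 10**decades * z_diss.
    A log-linear ramp in log10(z) is assumed; only the end points are
    constrained by the text (z in the same units as z_diss, e.g. r_g)."""
    x = np.clip(np.log10(np.asarray(z, dtype=float) / z_diss) / decades,
                0.0, 1.0)
    return f_rho**x

z = np.logspace(1, 5, 9)                       # distances in r_g
rho_loaded_over_unloaded = loading_factor(z)   # cf. the solid line of Fig. 4
```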
### Mass loading region
In this section, we present the parametrisation of the values of \(\sigma\), \(\gamma\) and \(h\) of the mass loading region based on the B10 model described above. In particular, we fit a polynomial to the CLTM19 profiles along the jet between \(z_{\rm diss}=100\,r_{\rm g}\) and \(z_{\rm load,\,end}=10^{4}\,r_{\rm g}\). The profiles of the three quantities \(\sigma\), \(\gamma\) and \(h\) are hard to predict in such a complex and non-linear system; we hence decide to fit only these three quantities and derive \(\mu\) from equation (18):
\[\begin{split}\log_{10}(\sigma)&=0.621\,x^{5}-3.005 \,x^{4}+4.599\,x^{3}\\ &\qquad-2.502\,x^{2}+0.242\,x+0.563,\\ \log_{10}(\gamma)&=-0.276\,x^{6}+1.412\,x^{5}-2.207 \,x^{4}+0.853\,x^{3}\\ &\qquad\qquad\qquad\qquad+0.257\,x^{2}-0.075\,x+0.394,\\ \log_{10}(h)&=0.467\,x^{5}-1.903\,x^{4}+1.100\,x^{3} \\ &\qquad+2.482\,x^{2}-1.171\,x-1.826,\end{split} \tag{28}\]
where \(x=\log_{10}\left(z/z_{\rm diss}\right)\) and \(0\leq x\leq\log_{10}\left(z_{\rm load,\,end}/z_{\rm diss}\right)\).
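Equation (28) can be evaluated directly; a minimal sketch with `numpy.polyval` (coefficients ordered highest power first) follows. The last line combines the profiles into \(\mu\) using the relation \(\mu=\gamma(1+\sigma)(1+h)\) quoted in the caption of Fig. 2; the function name is illustrative.

```python
import numpy as np

# coefficients of equation (28), highest power of x first
C_SIGMA = [0.621, -3.005, 4.599, -2.502, 0.242, 0.563]
C_GAMMA = [-0.276, 1.412, -2.207, 0.853, 0.257, -0.075, 0.394]
C_H     = [0.467, -1.903, 1.100, 2.482, -1.171, -1.826]

def loading_region(z, z_diss=100.0):
    """sigma(z), gamma(z), h(z) inside the mass-loading region (eq. 28);
    z and z_diss in units of r_g."""
    x = np.log10(np.asarray(z, dtype=float) / z_diss)
    return (10**np.polyval(C_SIGMA, x),
            10**np.polyval(C_GAMMA, x),
            10**np.polyval(C_H, x))

z = np.logspace(2, 4, 50)            # from z_diss to z_load,end, in r_g
sigma, gamma, h = loading_region(z)
mu = gamma * (1 + sigma) * (1 + h)   # equation (18), cf. the Fig. 2 caption
```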
We connect the jet base to the mass-loading region assuming that the specific enthalpy is constant at its initial value at the jet base, as calculated with equation (10). We assume that the flow is launched at some speed equal to the speed of sound (see equation 1) and reaches a value \(\gamma_{\rm acc}\), which is a free parameter, following a logarithmic dependence. In Table 2, we show the parameters of the mass-loading jet model, indicating whether they are fixed or fitted parameters.
### Jet segments beyond the mass loading region
Once mass loading stops, we assume that the total energy flux is again conserved, i.e., \(\mu\) is constant. We thus fix \(\mu\) at its value at the end of the mass-loading region, and to better constrain the profiles of \(\sigma\) and \(h\) beyond the mass-loading region, we fit a first-order polynomial between \(10^{4}\) and \(10^{5}\,\mathrm{r_{g}}\), with coefficients:
\[\log_{10}(\sigma)=-0.097\,x-0.178, \tag{29}\]
\[\log_{10}(h)=-0.245\,x-0.576, \tag{30}\]
where \(x\) is the same as above. Here we choose to interpolate the profiles of \(\sigma\) and \(h\) from the simulation data, such that the derived bulk Lorentz factor closely follows the expected slow acceleration profile seen in semi-analytical MHD solutions of particle-dominated (\(\sigma\leq 1\)) jets (see e.g., Tchekhovskoy et al., 2009).
Having derived the values of \(\mu\), \(\sigma\) and \(h\), we calculate the bulk Lorentz factor for every jet segment above \(z_{\rm diss}\):
\[\gamma(z\geq z_{\rm diss})=\frac{\mu}{\sigma+h+1}. \tag{31}\]
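A companion sketch for the region beyond the loading zone, combining equations (29), (30) and (31); the value of `mu_end` in the example call is arbitrary and would in practice be taken from the end of the mass-loading region.

```python
import numpy as np

def beyond_loading(z, mu_end, z_diss=100.0):
    """sigma and h from the first-order fits (eqs 29-30), valid for
    1e4 <= z/r_g <= 1e5, and gamma from equation (31) with mu frozen at
    its value at the end of the mass-loading region."""
    x = np.log10(np.asarray(z, dtype=float) / z_diss)
    sigma = 10**(-0.097*x - 0.178)
    h     = 10**(-0.245*x - 0.576)
    gamma = mu_end / (sigma + h + 1.0)   # equation (31)
    return sigma, h, gamma

# illustrative call; mu_end = 5.0 is a placeholder value
sigma, h, gamma = beyond_loading(np.logspace(4, 5, 20), mu_end=5.0)
```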
### Particle acceleration and mass loaded jets
At the location where matter is entrained into the jets, particles start to accelerate to non-thermal energies as well. Based on the definition of \(h\), we solve for the energy density of the protons:
\[U_{p}=\frac{h\rho c^{2}-\Gamma_{e}U_{e}}{\Gamma_{p}}, \tag{32}\]
where we calculate \(U_{e}\) from equation (11) for an MJ+non-thermal power-law distribution of electrons with a fixed ratio of 10 per cent
Figure 4: The mass density profile of a mass loaded jet (solid line) compared to a steady state jet without mass loading (dashed line). Both profiles are normalized to the initial mass density at the jet base. The mass loading initiates at a distance \(z_{\rm diss}\) and at 100 \(z_{\rm diss}\) the mass density has increased by a factor of 10 compared to a non-loading, steady-state jet.
Figure 5: The energy flux components of a mass loaded jet, where \(\mu\) is the ratio between the total energy flux and the rest-mass flux, \(\gamma\) is the bulk Lorentz factor, \(\sigma\) is the magnetisation, and \(h\) is the specific enthalpy. The mass entrainment occurs between \(10^{2}\) and \(10^{4}\,r_{\rm g}\) (vertical lines), but the entrained matter becomes comparable to the mass of the jet at a distance of \(10^{3}\,\mathrm{r_{g}}\) (middle vertical line). Finally, we over-plot with dashed lines the fiducial model B10 of CLTM19 on which we base our analysis (see Section 4).
between the thermal and the non-thermal electrons, and a fixed power-law slope \(p\). We finally derive the normalisation of the non-thermal protons
\[K_{p}=\frac{U_{p}}{{\rm m_{p}}c^{2}\int\varepsilon^{-p+1}\exp(-\varepsilon/\varepsilon_{\rm max})\,{\rm d}\varepsilon}, \tag{35}\]
where
\[\frac{\mathrm{d}n_{p}}{\mathrm{d}\varepsilon}=K_{p}\,\varepsilon^{-p}\exp(-\varepsilon/\varepsilon_{\mathrm{max}}). \tag{36}\]
Following the above approach, we manage to self-consistently connect the mass-loading that leads to an increase of the specific enthalpy \(h\) to the electromagnetic radiation due to the proton acceleration.
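The chain from \(h\) to the normalised proton spectrum (equations 32, 35 and 36) can be sketched as below; the lower integration bound \(\varepsilon_{\rm min}=1\) and the adiabatic indices \(\Gamma_{\rm e,p}=4/3\) are assumptions, as the text does not quote them at this point, and the function names are illustrative.

```python
import numpy as np
from scipy.integrate import quad

MP_C2_ERG = 1.503e-3  # proton rest energy m_p c^2 in erg

def proton_energy_density(h, rho_c2, U_e, Gamma_e=4/3, Gamma_p=4/3):
    """U_p from the definition of h, equation (32); rho_c2 = rho * c^2."""
    return (h * rho_c2 - Gamma_e * U_e) / Gamma_p

def proton_norm(U_p, p, eps_max, eps_min=1.0):
    """K_p of equation (35):
    U_p = K_p m_p c^2 * Int eps^(1-p) exp(-eps/eps_max) d eps."""
    integral, _ = quad(lambda e: e**(1.0 - p) * np.exp(-e / eps_max),
                       eps_min, np.inf)
    return U_p / (MP_C2_ERG * integral)

def dn_deps(eps, K_p, p, eps_max):
    """Non-thermal proton distribution, equation (36)."""
    return K_p * eps**(-p) * np.exp(-eps / eps_max)
```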
## 5 Results for mass-loaded jets
### Total energy flux evolution for mass-loaded jets
In Fig. 6, we present the energy components for two different mass-loaded jets following the prescription of Section 4. We assume that both jets are Poynting flux dominated at the jet base with an initial magnetisation of \(\sigma_{0}=40\) and accelerate to a bulk Lorentz factor of \(\gamma_{\mathrm{acc}}=3\). In the left panel, we assume one electron per proton at the jet base (\(\eta_{e}=1\)), and in the right, we assume a pair-dominated jet of \(\eta_{e}=10000\). In both cases, we set the temperature of the thermal electrons at the jet base at \(k_{B}T_{e}=200\,\mathrm{keV}\). In the particular case of the pair-dominated jets, the specific enthalpy reaches values that are comparable to or even exceeding those of the bulk Lorentz factor and the magnetisation, especially at the loading region (see also equation 27). Despite the initially pair-dominated jet base, the matter entrained into the jets consists of approximately equal numbers of electrons and protons because we assume that the most likely composition of an accretion disc wind is a neutral gas of electrons and protons. The jet composition hence changes from pair-dominated in the regions before the loading to an almost equal number of protons and pairs (Angles-Castillo et al., 2020).
In the right panel of Fig. 6, we see that the increased number of pairs at the jet base leads to an increase of \(h\). In the extreme case where \(\eta_{e}\gg 1000\), the peak of the profile of \(h\) may lead to an artificial and unphysical increase of \(\mu\) in the loading region. In Appendix C we discuss how we constrain the increase of \(h\) to avoid such an artificial "mass loss".
### Electromagnetic spectra of steady state mass-loaded jets
In Fig. 7, we plot in the left panel the predicted SED of the fiducial mass-loaded jet model based on the dynamical quantities that we show in Fig. 5. We further assume a jet base of radius \(10\,r_{g}\), an electron temperature of \(200\,\mathrm{keV}\) at the jet base, an injected jet power of \(10^{-3}\,L_{\mathrm{Edd}}\) and the power-law slope of the accelerated particles \(p=2.2\) for both leptons and protons. Similar to above, we show the contribution of the leptonic emission of the jet segments before the dissipation/loading region, the leptonic contribution from the dissipation/loading region and beyond, and the hadronic contribution that is due to p\(\gamma\) interactions. In the right subplot, we show the spectrum of a non-loaded jet with similar initial conditions. The main differences are in the jet emission from the jet base (yellow-shaded region) and the hadronic contribution. The jet-base emission is higher in the non-loaded case due to the magnetisation profile we assume here that leads to greater values for the first few jet segments up to the acceleration region (see, e.g., Fig. 2).
In Fig. 8 we plot the SEDs that correspond to the two models of Fig. 6, where we account for mass loading at a distance \(100\,r_{g}\) and we assume the injected jet power to be \(10^{-3}\,L_{\mathrm{Edd}}\). We show the different components of the spectrum in Appendix D.
## 6 Discussion
### Steady state jets
In the first part of this work, we present the analytical jet model that includes the specific enthalpy in the jet kinematics and the spatial evolution.
#### 6.1.1 Specific enthalpy, particle acceleration and jet evolution
The specific enthalpy \(h\) is a good estimate of whether a jet is cold or hot, with values of \(h\ll 1+\sigma\) indicating a cold flow, and values of \(h\gtrsim 1+\sigma\) indicating a hot flow. Astrophysical jets launched by black holes are overall considered cold and strongly magnetised. The majority of semi-analytical models that focus on the radiative output rather than the detailed description of the jets neglect the specific enthalpy for simplicity (Markoff et al., 2005; Bosch-Ramon et al., 2006; Vila et al., 2012; Zdziarski et al., 2014). When particles accelerate though, and in particular in the case where these accelerated particles carry a significant fraction of the jet energy, the specific enthalpy increases. As we show in Fig. 1, the specific enthalpy may reach values that easily compare to the bulk Lorentz factor (of the order of 1 to \(\sim 10\)) and/or the jet magnetisation (greater than unity for a magnetised outflow). The exact value of \(h\) strongly depends on three aspects: the matter composition of the jet, the efficiency of the leptonic acceleration, and whether hadrons accelerate as well or not.
#### 6.1.1.1 Leptonic acceleration
In the case where only leptons accelerate inside the jets, we expect the electron average Lorentz factor to increase as the acceleration efficiency increases (top subplots of Fig. 1) and hence the specific enthalpy to increase as well, according to equation (27). The total specific enthalpy, however, depends on the jet composition as well. When a jet has one electron per proton (\(\eta_{e}=1\)), the values of \(h\) are \(\sim 0.01\) regardless of the exact average Lorentz factor of the electrons (as long as the average Lorentz factor of the leptons remains less than \(\mathrm{m_{p}/m_{e}}\approx 1836\)). This is the typical scenario that current GRMHD and semi-analytical jet models consider when studying the exact jet evolution, both in space and time. As we mention above though, based on observations of both extragalactic and Galactic jets, it is very likely that jets are pair-dominated (or at least the scenario of one proton per electron is disfavoured in some cases). Such a jet content leads to an increase in the specific enthalpy compared to the case of \(\eta_{e}=1\) (see equation 27). Hence, the specific enthalpy of a jet that is pair-dominated at launching may contribute significantly to the spatial evolution of the jet, and the more relativistic (or warmer) the distribution of pairs, the larger the impact of \(h\) on the jet evolution. A pair-dominated jet in fact requires a specific enthalpy that can be two to three orders of magnitude larger than in the case of a jet with an equal number of electrons and protons (see, e.g., top plots of Fig. 1). Consequently, to achieve bulk flow acceleration up to the same bulk Lorentz factor, a pair-dominated jet also requires a larger value of magnetisation at the jet base if energy flux is conserved along the jet.
#### 6.1.1.2 Lepto-hadronic acceleration
The energy content of the particles can further increase when jets accelerate both leptons and hadrons to non-thermal energies. In fact, the more efficient the particle acceleration, the larger the specific enthalpy, which may reach values of the order of \(\Gamma_{p}\langle\varepsilon_{\rm p}\rangle\), regardless of the jet content, as long as \(\eta_{e}\leq\rm{m_{p}/m_{e}}\) (Fig. 1). It is hard to predict the exact value of the specific enthalpy in a jet that efficiently accelerates particles, but overall, it may reach values equal to or even exceeding those of the bulk Lorentz factor and/or the magnetisation, which would mean that the outflow becomes particle-dominated instead. We hence suggest that the specific enthalpy should be treated with extra care and should not always be considered negligible.
#### 6.1.2 Specific enthalpy and spectrum
The SED of the steady jets strongly depends not only on the hadronic acceleration (or lack of it), but also on the jet content. The most important difference is in the GeV-to-TeV spectrum. A pair-dominated jet is characterised by the ICS, and any contribution from the hadronic processes is suppressed. In the case of a jet with an equal number of protons and pairs, and accounting for an efficient hadronic acceleration, the hadronic component dominates in the GeV/TeV bands via the neutral pion decay, which has a shape distinguishable from that of the ICS in the Klein-Nishina regime.
The IR-to-X-ray spectrum of BHXBs may be contaminated by different components, such as the companion star and/or the accretion disc. In the case of a pair-dominated jet, though, the X-ray spectrum shows the signature of multiple Compton scatterings due to the increased electron density, which can potentially replicate the role of the theoretical corona (Markoff et al., 2005, 2015; Lucchini et al., 2021;
Figure 6: Similar to Fig. 5, but for: _left_ an initial lepton temperature at the jet base of \(k_{B}T_{e}=200\,\rm{keV}\) and one electron per proton (\(\eta_{e}=1\)), and in the _right_ for a pair-dominated jet (\(\eta_{e}=10000\)). Both scenarios are for an initial magnetisation of \(\sigma_{0}=40\) and \(\gamma_{\rm{acc}}=3\). The increased pair content of the right subplot leads to an increased initial specific enthalpy of the jets.
Figure 7: _Left_: The predicted spectral energy distribution of a mass-loaded jet that corresponds to the dynamical quantities of Fig. 5 for a \(10\,\rm{M_{\odot}}\) BHXB at \(3\,\rm{kpc}\). We assume a jet base of \(200\,\rm{keV}\) and radius of \(10\,r_{\rm{g}}\). We show the contribution of the jet-segments before the mass loading (yellow shaded region), and the contribution of the mass-loaded segments of both leptonic (blue-shaded) and hadronic (green-shaded). The hadronic contribution includes both the neutral pion decay and the synchrotron radiation of the secondary electrons/positrons. _Right_: Similar to the left, but for a non-loaded jet with similar initial conditions.
Cao et al., 2021). Such an X-ray signature can prove a useful tool to distinguish between different jet compositions, especially with the next-generation X-ray telescopes, such as for instance the Imaging X-ray Polarimetry Explorer (IXPE; Weisskopf et al., 2016), the Advanced Telescope for High-energy Astrophysics (Athena; Nandra et al., 2013) and the Advanced X-ray Imaging Satellite (AXIS; Mushotzky et al., 2019).
### Mass-loaded jets - HadJet
The initial jet composition at the jet base significantly alters the specific enthalpy of the jet along its axis, even if we assume that at the mass-loading region the jet converts to a pair-proton outflow. We see, in particular, that a pair-proton jet base with a thermal pair distribution that peaks at some energy of the order of 500 keV, which is a reasonable value for BHXBs, results in an insignificant specific enthalpy compared to the rest of the energy components, namely the magnetisation and the bulk Lorentz factor (see, e.g., the left subplot of Fig. 6). If the jet base, on the other hand, is pair-dominated, then similar to our discussion above, the initial specific enthalpy at the jet base is increased and hence its effect on the jet dynamical evolution might be more important because the energy content carried by particles might be similar to the bulk kinetic energy (see, e.g., the right subplot of Fig. 6).
The initial conditions at the jet base have a significant impact on the electromagnetic spectrum that is our tool to distinguish between the different scenarios. For the two scenarios we study here, where one has a pair-proton jet base and the other a pair-dominated jet base, there are two prominent differences in the multiwavelength SEDs. The most important one is in the GeV/TeV regime, where the larger specific enthalpy of the initially pair-dominated jet base allows for more energy to be transferred to protons. The increased energy available for non-thermal proton acceleration allows for a stronger TeV flux, which is dominated by the neutral pion decay due to p\(\gamma\) interactions. Such a TeV flux, depending on the distance of the BHXB (see, e.g., Kantzas et al., 2022), might be strong enough to be detected by current TeV facilities, such as the Large High Altitude Air Shower Observatory (LHAASO), or future \(\gamma\)-ray facilities, such as the Cherenkov Telescope Array (CTA). The fact that an initially pair-dominated jet can potentially lead to a stronger TeV flux may sound counterintuitive, but in fact it is natural in our treatment due to the assumption that the mass loading is linked with energy dissipation into particle acceleration. The increase of the specific enthalpy depends on the initial conditions of the jet launching, and in this work we base our formalism on one specific setup of GRMHD simulations. A different setup is very likely to lead to less efficient heating of the jets; the specific enthalpy will nevertheless still increase due to energy transfer (see discussion of CLTM19). To explore the full range of possible physical scenarios with GRMHD simulations is currently too computationally expensive. We can however examine semi-analytically how the impact on the jet kinematics depends on the level of dissipation by replacing the heating parameter \(f_{\rm heat}\) (that was used in previous work to estimate the heating of the thermal particles at the particle acceleration region; see, e.g., discussion in Lucchini et al., 2021) with the fraction of the magnetic energy that is additionally allowed to go into energising particles. With such a parameterisation, \(h\) will increase by a factor \(f_{\rm heat}\sigma\) along the jet, whereas the magnetisation will be reduced as \((1-f_{\rm heat})\sigma\). We show in Fig. 9 the impact of this free parameter on the energy components. To avoid a steep increase of \(h\) that looks like a step function, we instead use a function \(\tanh^{2}{(z/z_{\rm diss})}\).
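A minimal sketch of the \(f_{\rm heat}\) parametrisation just described; applying the same \(\tanh^{2}\) switch to both \(\sigma\) and \(h\) is one possible reading of the text, and the example values and function name are purely illustrative.

```python
import numpy as np

def heat_dissipation(sigma, h, z, z_diss, f_heat):
    """Transfer a fraction f_heat of the magnetic energy into specific
    enthalpy, switched on smoothly with tanh^2(z/z_diss) to avoid a
    step-function jump: h grows by f_heat*sigma while the magnetisation
    tends to (1 - f_heat)*sigma far above z_diss."""
    s = np.tanh(np.asarray(z, dtype=float) / z_diss)**2
    return sigma * (1.0 - f_heat * s), h + f_heat * sigma * s

z = np.logspace(1, 5, 100)   # distances in r_g
sigma_new, h_new = heat_dissipation(sigma=10.0, h=0.01, z=z,
                                    z_diss=100.0, f_heat=0.3)
```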
A further spectral difference between a pair-proton and a pair-dominated jet base is in the lower energy regime of the spectrum, and in particular, in the UV-to-X-ray spectrum. For the same initial magnetisation and injected power, the number density of the pairs at the pair-dominated jet base is enhanced (see, e.g., equation 16) resulting in increased Compton scatterings that lead to a significant difference in the \(\sim\)1-100 keV range. The X-ray spectrum in particular shows a hard spectral index (\(\nu F_{\nu}\propto\nu^{-\alpha+1}\), with \(\alpha<1\); see right-hand plot in Fig. 8) that is similar to the expected output of a thermal corona (Sunyaev and Titarchuk, 1980; Haardt and Maraschi, 1993; Titarchuk, 1994; Narayan and Yi, 1994; Magdziarz and Zdziarski, 1995).
### Proton energy crisis
With the energy-conserving, mass-loading jet model we develop here, we are able to constrain the total energy that is allocated to the protons and used to accelerate them to non-thermal energies. In that way, the total
Figure 8: Similar to Fig. 7 but for the mass-loaded jets that correspond to the dynamical quantities of Fig. 6. The overall spectral distribution can significantly change under the assumption of a pair-dominated jet base (\(\eta_{e}=10000\)) in the _right_ plot.
energy carried by the accelerated protons never exceeds the available energy of the jets, which has been a major issue in the past (Bottcher et al., 2013; Zdziarski and Bottcher, 2015; Liodakis and Petropoulou, 2020; Kantzas et al., 2022). In Fig. 10, we plot the specific enthalpy of the protons \(\Gamma_{p}U_{p}/\rho c^{2}\) divided by \(\mu\) as a function of the total jet enthalpy \(h\). This quantity expresses the fraction of the total energy flux of the jet that is used by the accelerated protons, and we show its dependence on \(h\) for different average electron Lorentz factors, as indicated by the colormap. Regardless of the average electron energy \(\langle\varepsilon_{e}\rangle\), the protons can hardly carry more than \(\sim\)10 per cent of the total energy in the jets because higher fractions would require specific enthalpy \(h\) of the order of a few or above (upper-right corner of the plot) resulting in strongly magnetised flows (\(\sigma\gtrsim\gamma h\)). Moreover, for particular values of \(\langle\varepsilon_{e}\rangle\) (see the blue lines for instance that correspond to values of the order of 1 to 7), the protons can only be accelerated at \(z_{\rm diss}\) and beyond if the total specific enthalpy \(h\) is greater than some critical value \(h>h_{\rm crit}\) where
\[h_{\rm crit}=\frac{(\langle\varepsilon_{e}\rangle-1)\Gamma_{e}}{1+\frac{{\rm m_{p}/m_{e}}}{\eta_{e}}}, \tag{37}\]
hence the cutoffs for different \(\langle\varepsilon_{e}\rangle\) at small values of \(h\). In this particular figure, we use \(\eta_{e}=10\), but as we show in Appendix E for smaller (larger) values of \(\eta_{e}\) the only difference is that the cutoffs are located to smaller (larger) values of \(h\).
From Fig. 10, we see that the energy of the accelerated protons never exceeds that of the jet because the specific enthalpy of the non-thermal protons is always less than the total normalised energy flux (\(\Gamma_{p}U_{p}/\rho c^{2}<\mu\)) and hence never violates the energy budget.
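Combining equations (27), (32) and (37), the proton share of the energy flux plotted in Fig. 10 reduces algebraically to \((h-h_{\rm crit})/\mu\); this reduction is our own shorthand and should be checked against the full definitions. A minimal sketch:

```python
import numpy as np

MP_ME = 1836.15  # proton-to-electron mass ratio m_p/m_e

def h_crit(avg_ee, eta_e, Gamma_e=4/3):
    """Minimum total specific enthalpy for U_p > 0, equation (37);
    Gamma_e = 4/3 is an assumed adiabatic index."""
    return Gamma_e * (avg_ee - 1.0) / (1.0 + MP_ME / eta_e)

def proton_share(h, mu, avg_ee, eta_e, Gamma_e=4/3):
    """Gamma_p U_p / (rho c^2 mu): with eqs (27) and (32) this becomes
    (h - h_crit)/mu, clipped at zero below the cutoff (cf. Fig. 10)."""
    return np.clip(h - h_crit(avg_ee, eta_e, Gamma_e), 0.0, None) / mu

# example: eta_e = 10 as used in Fig. 10
fraction = proton_share(h=1.0, mu=10.0, avg_ee=5.0, eta_e=10.0)
```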
## 7 Summary and Conclusions
Relativistic jets are efficient CR accelerators, but we still do not fully understand the particle acceleration mechanism. To fully interpret the jet kinematics, and how they relate to particle acceleration, we need to better understand how to link the observed spectra emitted by jetted sources over more than ten orders of magnitude in photon frequency to the jet physical properties. Currently, uncertainties about the jet composition, as well as a lack of conserved dynamical models, have contributed to a degeneracy between leptonic and lepto-hadronic models.
To break this degeneracy, we have developed a new multi-zone approach that links the jet composition to the jet dynamics. The total energy flux along the jet is conserved, while magnetic energy can be dissipated into both kinetic energy and gas enthalpy via particle acceleration. This new approach makes clear the key role that the specific enthalpy \(h\) can have on the evolution and exchange of energy along the jet. In particular the enthalpy should be explicitly taken into account in models where: i) electrons accelerate to large average energies, ii) protons accelerate in the jets as well, and/or iii) when the jet is pair-dominated, as suggested for numerous Galactic and extragalactic jets launched by black holes.
When protons are accelerated into a non-thermal power law, the energy requirement often exceeds the total energy that can be provided by the jet and/or the accretion energy onto the black hole, potentially violating energy conservation. We have developed a new model HadJet based on our earlier lepto-hadronic work, that now conserves energy and includes a prescription for proton entrainment. Such a mass loading may in fact inhibit proton acceleration. By allowing the jets to entrain protons over a range of distance, as seen to occur in GRMHD simulations via eddies forming at the jet/accretion disc interface (CLTM19), we demonstrate a new method to avoid the "hadronic power" problem in a more self-consistent approach. In a future work, we plan to further explore the impact of mass loading on the multiwavelength emission of both BHXB jets and AGN jets.
## Acknowledgements
We would like to thank the anonymous reviewer for the thorough commenting that significantly improved the manuscript. DK and
Figure 10: The specific enthalpy of the protons \(\Gamma_{p}U_{p}/\rho c^{2}\) divided by \(\mu\) shows the total energy that is allocated to protons with respect to the total available jet energy, as a function of the jet specific enthalpy \(h\). We plot the proton energy density for a number of different electron energy densities that correspond to different values of \(\langle\varepsilon_{e}\rangle\) as shown in the colormap, and we use \(\eta_{e}=10\).
Figure 9: Similar to the left sub-plot of Fig. 6, but with different \(f_{\rm heat}\) parameters as shown in the plot. The \(f_{\rm heat}\) parameter expresses the fraction of the magnetic energy that is allocated to the specific enthalpy to allow a further exploration of dissipation beyond our single GRMHD-based parametrisation.
SM are grateful for support by the Netherlands Organisation for Scientific Research (NWO) VICI grant (no. 639.043.513).
## Data Availability
No new data were generated or analysed in support of this research.
|
2307.14408 | Identifying Spin Properties of Evaporating Black Holes through
Asymmetric Neutrino and Photon Emission | Kerr black holes radiate neutrinos in an asymmetric pattern, preferentially
in the lower hemisphere relative to the black hole's rotation axis, while
antineutrinos are predominantly produced in the upper hemisphere. Leveraging
this asymmetric emission, we explore the potential of high-energy, $E_\nu
\gtrsim 1$ TeV, neutrino and antineutrino detection to reveal crucial
characteristics of an evaporating primordial black hole at the time of its
burst when observed near Earth. We improve upon previous calculations by
carefully accounting for the non-isotropic particle emission, as Earth occupies
a privileged angle relative to the black hole's rotation axis. Additionally, we
investigate the angular dependence of primary and secondary photon spectra and
assess the evaporating black hole's time evolution during the final explosive
stages of its lifetime. Since photon events outnumber neutrinos by about three
orders of magnitude, we find that a neutrino measurement can aid in identifying
the initial angular momentum and the black hole hemisphere facing Earth only
for evaporating black holes within our solar system, at distances $\lesssim
10^{-4}$ pc, and observed during the final 100 s of their lifetime. | Yuber F. Perez-Gonzalez | 2023-07-26T17:59:21Z | http://arxiv.org/abs/2307.14408v2 | # Identifying Spin Properties of Evaporating Black Holes through
###### Abstract
Kerr black holes radiate neutrinos in an asymmetric pattern, preferentially in the lower hemisphere relative to the black hole's rotation axis, while antineutrinos are predominantly produced in the upper hemisphere. Leveraging this asymmetric emission, we explore the potential of high-energy, \(E_{\nu}\gtrsim 1\) TeV, neutrino and antineutrino detection to reveal crucial characteristics of an evaporating primordial black hole at the time of its burst when observed near Earth. We improve upon previous calculations by carefully accounting for the non-isotropic particle emission, as Earth occupies a privileged angle relative to the black hole's rotation axis. Additionally, we investigate the angular dependence of primary and secondary photon spectra and assess the evaporating black hole's time evolution during the final explosive stages of its lifetime. Although photon events outnumber neutrinos by about three orders of magnitude, we find that simultaneous measurements of these particles are indispensable for identifying the initial angular momentum and the black hole hemisphere facing Earth. This is particularly important for evaporating black holes within our solar system, at distances \(\lesssim 5\times 10^{-4}\) pc, and observed during the final 100 s of their lifetimes. Codes used in this work will be publicly available in \(\mathsf{\Omega}\).
Footnote †: preprint: IPPP/23/36
## I Introduction
The Early Universe's high density provides an ideal environment for the formation of black holes (BH) with masses significantly smaller than those observed in Gravitational Waves or direct astrophysical measurements [1; 2; 3]. These _primordial_ black holes (PBHs) could profoundly impact the evolution of our Universe [4; 5; 6]. Given their potentially tiny masses, as small as the Planck mass, it becomes imperative to take into account quantum effects in the evolution of PBHs. Using a semi-classical approximation, Hawking demonstrated that BHs evaporate by emitting a flux of particles with a thermal spectrum [7; 8]. As a consequence of energy conservation arguments, the BH mass decreases at a rate \(\dot{M}\sim M^{-2}\), initially leading to a slow evaporation. However, particle emission accelerates, triggering a runaway effect that culminates in an explosive stage. Such an evaporation process, assuming the Standard Model's (SM) degrees-of-freedom (dofs), enables us to estimate the initial mass of a PBH whose lifetime matches the age of the Universe, yielding a value of \(\sim 10^{15}\) g. As a result, PBHs with masses smaller than this value would have already evaporated, allowing constraints to be derived on the initial PBH abundance if they evaporated during the Big-Bang Nucleosynthesis [9; 10; 11], reionization [12; 13] or the formation of the CMB [9; 10]. Furthermore, due to the universality of Hawking evaporation, PBHs could have produced dofs that do not interact with the SM sector, potentially contributing to the observed Dark Matter (DM) [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32], to the relativistic dofs, parameterized by \(\Delta N_{\rm eff}\) [16; 17; 27; 33; 34; 35], or modify the generation of the matter-antimatter asymmetry [36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]. Conversely, if their initial mass exceeds \(10^{15}\) g, these PBHs could potentially contribute to a portion of the observed DM [48; 49; 50].
momenta [71; 72]. Thus, investigating the angular momentum properties of EPBHs offers an intriguing window into potential BSM physics.
Previous studies have examined the impact of non-zero angular momentum on the photon and neutrino emissions from an EPBH [61; 72]. These works used the BlackHawk code [74; 75], which calculates both the primary particle spectrum directly emitted during evaporation and the secondary spectrum originating from the decay of unstable particles produced by the black hole. However, it is important to note that BlackHawk computes the angle-integrated spectrum, which is not suitable for a Kerr EPBH since rotating EPBHs exhibit a non-isotropic particle emission due to their axisymmetric nature. Consequently, if an EPBH were to burst close to Earth, observatories would measure the particle flux at a unique angle with respect to the rotation axis, leading to distinct observational characteristics, see Fig. 1 for an artistic depiction of the Earth-EPBH system considered in this work. Additionally, in the specific case of neutrinos, it has been observed that due to the parity violation in weak interactions, their emission from an EPBH follows an asymmetric pattern: neutrinos are preferentially emitted in the EPBH's "southern" hemisphere, while antineutrinos are predominantly produced in the "northern" hemisphere [76; 77; 78; 79; 80]. This well-established behavior opens up new avenues for exploring the angular momentum of an EPBH since neutrino emission is intrinsically linked to the black hole's spin and varies with the polar angle relative to its rotation axis.
In this work, we conduct a comprehensive analysis of the angular distribution of neutrino and photon emissions, both primary and secondary, from Kerr EPBHs. Additionally, we investigate the potential of gamma-ray and neutrino observatories to discern crucial EPBH properties, such as its angular momentum and the hemisphere facing Earth. Despite the photon events outnumbering neutrino events by approximately three orders of magnitude, the measurement of neutrinos is instrumental in determining the EPBH hemisphere facing Earth, particularly in the scenario where such an object is in close proximity to our planet.
The structure of this paper is as follows. In Sec. II, we provide a detailed description of the neutrino asymmetric emission and the photon angular distribution. Furthermore, within the same section, we first outline a numerical procedure to compute the angular dependence of the secondary particle emission. Moving forward, in Sec. III, we delve into the time evolution of a Kerr EPBH during its final burst, critically assessing its impact on the neutrino emission asymmetry. Sec. IV is exclusively dedicated to exploring the detection prospects, with specific emphasis on the IceCube [81; 82] and HAWC [57] observatories. Moreover, we thoroughly examine the feasibility of determining the initial characteristics of EPBHs through the combination of neutrino and photon measurements. In Sec. V, we draw our conclusions. We have included two appendices: App. A provides a short review on the Dirac equation in the Kerr spacetime, and App. B contains the effective potentials for scalar and vectors used to obtain absorption probabilities. We consider natural units where \(\hbar=c=k_{\rm B}=1\), and define the Planck mass to be \(M_{p}=1/\sqrt{G}\), with \(G\) the gravitational constant, throughout this manuscript.
## II Angular distribution of particle emission from Kerr black holes
Let us consider the Hawking radiation emitted from a Kerr BH with an instantaneous mass \(M\) and a dimensionless spin parameter \(a_{\star}\equiv{\rm J}/(GM)^{2}\in[0,1)\), where \({\rm J}\) represents the BH angular momentum. To analyse the angular distribution of particles, we adopt the Boyer-Lindquist coordinate system \((t,r,\theta,\phi)\). In these coordinates, the line element for the Kerr spacetime is given by [83]
\[{\rm d}s^{2}=\frac{\Delta}{\Sigma}({\rm d}t-a\sin^{2}\theta{\rm d}\phi)^{2}- \frac{\sin^{2}\theta}{\Sigma}(-a{\rm d}t+(r^{2}+a^{2}){\rm d}\phi)^{2}-\frac{ \Sigma}{\Delta}{\rm d}r^{2}-\Sigma{\rm d}\theta^{2}, \tag{1}\]
where \(\Delta\equiv r^{2}-2GMr+a^{2}\), \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\), with \(a=a_{\star}GM\). It has been demonstrated that the equations
Figure 1: An illustration of the Earth-Evaporating Kerr Black Hole system considered in this work. Earth is placed at an angle \(\theta\) with respect to the rotation axis and at a distance \(d_{L}\). Earthrise picture reproduced from [73].
of motion for bosons and fermions are separable, leading to the well-known Teukolsky master equations [84; 85; 86]. Such a separability will allow us to analyse the angular distribution of particles, and thus the possible determination of the EPBH initial spin at the onset of a burst. Next, we will consider the primary neutrino emission and its dependence on the polar angle \(\theta\), as well as the photon emission. Subsequently, we will examine the secondary spectra resulting from the decay of unstable particles produced by the BH evaporation.
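For quick reference, the metric functions of equation (1) and the outer horizon (the larger root of \(\Delta=0\), i.e. \(r_{+}=GM(1+\sqrt{1-a_{\star}^{2}})\)) in a small numerical sketch; working in units with \(G=1\) and the function names are assumptions made for illustration.

```python
import numpy as np

G = 1.0  # geometrised units; set G = 1 for brevity

def kerr_metric_functions(r, theta, M, a_star):
    """Delta(r) and Sigma(r, theta) of the Kerr line element, equation (1),
    with a = a_star * G * M."""
    a = a_star * G * M
    Delta = r**2 - 2.0*G*M*r + a**2
    Sigma = r**2 + a**2 * np.cos(theta)**2
    return Delta, Sigma

def outer_horizon(M, a_star):
    """Outer root of Delta = 0: r_+ = GM (1 + sqrt(1 - a_star^2))."""
    return G*M*(1.0 + np.sqrt(1.0 - a_star**2))

# example: a near-extremal BH, a_star = 0.99, in units of GM
r_plus = outer_horizon(M=1.0, a_star=0.99)
```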
### Primary Neutrino Emission and Asymmetry
Early studies focused on neutrinos as a prototype for understanding the properties of massless fermions in the Kerr spacetime [76]. This choice was influenced by the prevailing belief at that time that neutrinos were massless. However, we now know that neutrinos are indeed massive particles that exhibit mixing. Taking this into account, we have recently investigated the implications of neutrino mass on the process of Hawking evaporation for Schwarzschild BHs [87]. Here, we extend the discussion of massive neutrinos to Kerr BHs, and examine their asymmetric emission as a function of the polar angle.
The Dirac equation and its separability for massive fermions in the Kerr background have been extensively studied in Refs. [88; 89; 90; 91]. For the sake of completeness, we briefly describe the properties of this equation in App. A. After proposing an ansatz, we obtain equations for the radial functions \(R_{1,2}(r)\) and angular functions \(S_{1,2}(\theta)\) appearing in the spinor components, which depend on the energy \(\omega\) and on \(l,m\), the total and axial angular momentum quantum numbers [91] (for details see App. A)
\[\sqrt{\Delta}(\partial_{r}-iK/\Delta)R_{1} =\left({}_{\frac{1}{2}}A_{lm}+i\mu r\right)R_{2}, \tag{2a}\] \[\sqrt{\Delta}(\partial_{r}+iK/\Delta)R_{2} =\left({}_{\frac{1}{2}}A_{lm}-i\mu r\right)R_{1}, \tag{2b}\]
and
\[\left(\partial_{\theta}-\frac{1}{2}\cot\theta-m\csc\theta+a\omega\sin\theta\right)S_{1}(\theta) =\left(+{}_{\frac{1}{2}}A_{lm}+a\mu\cos\theta\right)S_{2}(\theta), \tag{3a}\] \[\left(\partial_{\theta}-\frac{1}{2}\cot\theta+m\csc\theta-a\omega\sin\theta\right)S_{2}(\theta) =\left(-{}_{\frac{1}{2}}A_{lm}+a\mu\cos\theta\right)S_{1}(\theta), \tag{3b}\]
where \(K\equiv(r^{2}+a^{2})\omega-am\) and \(\mu\) is the fermion mass. The quantities \({}_{\frac{1}{2}}A_{lm}\) represent the angular separation constants, which are simply \({}_{\frac{1}{2}}A_{lm}=\mathcal{P}(l+1/2)\), \(\mathcal{P}=\pm 1\), for a Schwarzschild BH.
The solutions to the radial equations, Eqs. (2), allow us to determine the absorption probabilities \({}_{\frac{1}{2}}\Gamma_{lm}\), which are crucial for calculating the Hawking emission rate as they describe the effects of centrifugal and gravitational potentials on particle production [7; 8; 66; 67; 92]. Numerical methods are required to find these solutions. We employ the general procedure for Kerr-Newman BHs established in Ref. [92], where a transformation of variables is applied to simplify the numerical treatment. Once the solutions have been obtained, we calculate the absorption probabilities \({}_{\frac{1}{2}}\Gamma_{lm}\) using the expressions provided in the appendix of Ref. [92].
To investigate the dependence of neutrino production on the polar angle \(\theta\) from an EPBH, it is also necessary to solve the angular equations, Eqs. (3). Several numerical methods have been proposed in the literature to solve these equations, see e.g. [93; 94; 90; 95]. In this work, we closely follow the approach described in Ref. [90]. The method involves employing a series expansion for the eigenfunctions \({}_{\pm\frac{1}{2}}S_{lm}\equiv S_{1,2}\) using associated Legendre polynomials, leading to a continued fraction equation. By numerically solving this equation, we obtain the angular eigenvalues \({}_{\frac{1}{2}}A_{lm}\). Subsequently, the coefficients of the series expansion for the eigenfunctions are computed using these angular eigenvalues. Through this numerical procedure, we find the angular eigenfunctions up to a normalization factor. To fix such a factor, we impose the normalization condition,
\[\int|_{\pm\frac{1}{2}}S_{lm}(\theta)|^{2}\,\mathrm{d}\Omega=1. \tag{4}\]
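Numerically, the normalization step is a one-dimensional quadrature exploiting the azimuthal symmetry of \(|{}_{\pm\frac{1}{2}}S_{lm}|^{2}\). A minimal sketch in Python, assuming the eigenfunction has already been evaluated on a polar grid (the placeholder profile is illustrative, not a true eigenfunction):

```python
import numpy as np

def normalize(S, theta):
    """Rescale an angular eigenfunction S(theta) so that Eq. (4) holds:
    int |S|^2 dOmega = 2*pi * int |S(theta)|^2 sin(theta) dtheta = 1."""
    norm2 = 2.0 * np.pi * np.trapz(np.abs(S) ** 2 * np.sin(theta), theta)
    return S / np.sqrt(norm2)

# Illustrative usage with a dummy (unnormalized) profile:
theta = np.linspace(0.0, np.pi, 2001)
S = normalize(np.cos(theta / 2.0), theta)   # placeholder shape only
```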
The rates of neutrino emission as a function of time, energy, and solid angle \(\Omega\), are computed after considering the quantization of fermions in a Kerr background. The quantum field theory for massless fermions has been extensively studied in the literature, and interested readers can find detailed discussions in Refs. [76; 78; 96; 97; 98]. However, to the best of our knowledge, currently there is no available treatment for the quantization of massive fermions in the Kerr geometry; for potential issues related to this particular case, see Ref. [97]. Nevertheless, it is worth noting that the neutrino mass \(m_{\nu}\lesssim 1\) eV is negligible compared to the energies observed in neutrino telescopes, \(E_{\nu}\gtrsim 100\text{ GeV}\). Consequently, we can reasonably _assume_ that the quantization procedure used
for massless fermions is applicable with a high level of accuracy to neutrinos originating from an EPBH, a supposition that we will adopt from now on.
The main physical quantity of interest is the expectation value of the current operator
Footnote 1: Note that the commutator in the current definition should be taken only with respect to the quantum operators and not the spinor structure.
\[J^{\mu}=\frac{1}{2}[\overline{\nu},\gamma^{\mu}P_{L}\nu] \tag{5}\]
where \(\nu\) represents the neutrino field. Here, \(P_{L}=\frac{1}{2}(1-\gamma^{5})\) is the left chiral projector defined in terms of the chirality matrix \(\gamma^{5}\), see App. A. As discussed in Ref. [87], neutrinos and antineutrinos are expected to be emitted as mass eigenstates, \(\nu_{1,2,3}\). The neutrino field \(\nu\) will represent generally any of the three fields associated to the mass eigenstates henceforth. The radial component of the current \(J^{r}\) describes the flow of neutrinos minus antineutrinos [78; 97]. The net neutrino minus antineutrino flux, \(\mathcal{A}\equiv N_{\nu}-N_{\overline{\nu}}\), per unit time and solid angle in the radial direction _away_ from the BH corresponds to the expectation value [78]
\[\frac{\mathrm{d}^{2}\mathcal{A}}{r^{2}\mathrm{d}\Omega\mathrm{d}t}=\langle U^ {-}|J^{r}|U^{-}\rangle, \tag{6}\]
where \(|U^{-}\rangle\) represents the past Unruh vacuum [99]. This vacuum state, defined as the state that does not contain any particles incoming from the null past infinity \(\mathcal{J}^{-}\), is the relevant one for our purposes since we are considering the evaporation of a BH originated from a gravitational collapse event.
In terms of the angular functions and absorption probabilities defined previously, the net neutrino emission flux can be expressed as follows [76; 77; 78; 79; 97]
\[\frac{\mathrm{d}^{2}\mathcal{A}}{\mathrm{d}\Omega\mathrm{d}t}=\frac{1}{4\pi}\sum_{l=1/2}\sum_{m=-l}^{l}\int_{0}^{\infty}\mathrm{d}\omega\frac{{}_{\frac{1}{2}}\Gamma_{lm}}{\exp(\varpi/T)+1}\{|_{-\frac{1}{2}}S_{lm}(\theta)|^{2}-|_{+\frac{1}{2}}S_{lm}(\theta)|^{2}\}, \tag{7}\]
where \(\varpi=\omega-m\vartheta\) with \(\vartheta=a_{\star}/(2GM(1+\sqrt{1-a_{\star}^{2}}))\) representing the horizon's angular velocity. From the form of the emission asymmetry, we can interpret the term proportional to \(|_{-\frac{1}{2}}S_{lm}(\theta)|^{2}\) as the contribution coming from neutrinos, while \(|_{+\frac{1}{2}}S_{lm}(\theta)|^{2}\) corresponds to the antineutrino term, with the caveat that such an interpretation is valid only far from the EPBH [96]. In Eq. (7), it is necessary to write explicitly the sum over the angular momentum quantum numbers, as the PBH spin breaks the spherical symmetry, given the explicit dependence of the Hawking rate on \(m\). The Hawking temperature \(T\) is given by
\[T=\frac{1}{4\pi GM}\,\frac{\sqrt{1-a_{\star}^{2}}}{1+\sqrt{1-a_{\star}^{2}}}. \tag{8}\]
Thus, we can define the net particle emission rate within time \(\mathrm{d}t\), solid angle \(\mathrm{d}\Omega\) and energy \([\omega,\omega+\mathrm{d}\omega]\) as
\[\frac{\mathrm{d}^{3}\mathcal{A}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}=\frac{1}{4\pi}\sum_{l=1/2}\sum_{m=-l}^{l}\frac{{}_{\frac{1}{2}}\Gamma_{lm}}{\exp(\varpi/T)+1}\{|_{-\frac{1}{2}}S_{lm}(\theta)|^{2}-|_{+\frac{1}{2}}S_{lm}(\theta)|^{2}\}. \tag{9}\]
Notice that the integration of the rate over the solid angle produces a vanishing net particle emission, as expected. Analogously, let us define the total emission rate, i.e., neutrino plus antineutrino rate as
\[\frac{\mathrm{d}^{3}N_{\nu+\overline{\nu}}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}=\frac{1}{4\pi}\sum_{l=1/2}\sum_{m=-l}^{l}\frac{{}_{\frac{1}{2}}\Gamma_{lm}}{\exp(\varpi/T)+1}\{|_{-\frac{1}{2}}S_{lm}(\theta)|^{2}+|_{+\frac{1}{2}}S_{lm}(\theta)|^{2}\}. \tag{10}\]
This total flux of neutrinos comes directly from the evaporation of the BH, thus constituting what is referred to as the primary spectrum in the literature.
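In practice, once the absorption probabilities \({}_{\frac{1}{2}}\Gamma_{lm}(\omega)\) and normalized eigenfunctions \({}_{\pm\frac{1}{2}}S_{lm}(\theta)\) have been tabulated, Eqs. (9) and (10) reduce to direct mode sums. A minimal sketch, with the mode bookkeeping as an assumed input layout:

```python
import numpy as np

def rates(omega, T, Omega_H, modes):
    """Instantaneous net (Eq. 9) and total (Eq. 10) rates at a fixed polar angle.

    `modes` is an iterable of tuples (l, m, Gamma, S_m, S_p): Gamma is the
    absorption probability on the omega grid, while S_m and S_p are the
    s = -1/2 and s = +1/2 eigenfunctions evaluated at the chosen theta.
    Omega_H is the horizon angular velocity (vartheta in the text)."""
    net = np.zeros_like(omega)
    tot = np.zeros_like(omega)
    for l, m, Gamma, S_m, S_p in modes:
        thermal = Gamma / (np.exp((omega - m * Omega_H) / T) + 1.0)
        net += thermal * (abs(S_m) ** 2 - abs(S_p) ** 2)
        tot += thermal * (abs(S_m) ** 2 + abs(S_p) ** 2)
    return net / (4.0 * np.pi), tot / (4.0 * np.pi)
```

The asymmetry ratio introduced in Eq. (11) below is then the elementwise quotient of the two outputs.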
On the top left of Fig. 2, we present the instantaneous neutrino minus antineutrino net flux \(\mathrm{d}^{3}\mathcal{A}/\mathrm{d}\omega\mathrm{d}\Omega\mathrm{d}t\) as a function of the neutrino energy for a BH having a mass of \(M=10^{9}\) g and spin parameter \(a_{\star}=0.999\). The polar angle at which the observer is placed varies from \(0^{\circ}\) to \(180^{\circ}\). When \(\theta=0\), we observe that the net rate is negative and presents a single peak. This distinctive feature arises because only the angular eigenfunctions with \(l=m=\frac{1}{2}\) contribute at the poles, while the remaining modes vanish there. Moreover, the negative sign indicates that more antineutrinos than neutrinos are emitted. As we increase the polar angle, the contribution from higher \(l\) modes becomes significant, resulting in additional peaks in the net flux. The maximum emission asymmetry occurs at approximately \(\theta\approx 60^{\circ}\), which aligns with previous findings [78]. For polar angles within the range \(60^{\circ}\lesssim\theta\leq 90^{\circ}\), the net flux is reduced, and at \(\theta=\pi/2\), the net flux completely vanishes due to a cancellation between the angular eigenfunctions \({}_{\pm\frac{1}{2}}S_{lm}(\theta)\) at the equator. In the southern hemisphere, \(90^{\circ}\leq\theta\leq 180^{\circ}\), the net flux becomes positive, indicating a higher number of emitted neutrinos compared to antineutrinos. Despite this sign change, the behavior in the southern hemisphere is analogous to that in the northern hemisphere, \(0^{\circ}\leq\theta\leq 90^{\circ}\). The net flux reaches its maximum value around \(\theta\approx 120^{\circ}\), while at \(\theta=180^{\circ}\), only the \(l=m=\frac{1}{2}\) mode contributes.
To quantify the EPBH emission asymmetry with respect to the total instantaneous neutrino flux, we introduce the asymmetry ratio \(\mathcal{R}\) which compares the net neutrino minus antineutrino flux to the total emission rate,
\[\mathcal{R}\equiv\frac{\mathrm{d}^{3}\mathcal{A}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}\bigg{/}\frac{\mathrm{d}^{3}N_{\nu+\overline{\nu}}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}. \tag{11}\]
We present on the bottom left of Fig. 2 the asymmetry ratio \(\mathcal{R}\) for the same values of the polar angle \(\theta\) as in the top panel. The results indicate that at the poles (\(\theta=0^{\circ},180^{\circ}\)), the asymmetry ratio approaches \(-1\) and \(1\), respectively. This implies that at these angles, the flux is predominantly composed of antineutrinos and neutrinos, respectively. For angles different from the poles, the absolute value of the ratio \(|\mathcal{R}|\) remains below 1, and it is constant for neutrino energies \(E_{\nu}\lesssim 5\times 10^{4}\) GeV, where the dominating mode is \(l=m=\frac{1}{2}\). However, at higher energies, the contribution of higher-\(l\) modes becomes evident, resulting in ripples in the ratio. For the specific case of \(\theta=60^{\circ}\), the asymmetry ratio is \(\mathcal{R}\approx-0.5\) for energies \(E_{\nu}\lesssim 5\times 10^{4}\) GeV. This confirms that at this angle, there are approximately 50% more antineutrinos than neutrinos being emitted. Conversely, at \(\theta=120^{\circ}\), we observe the opposite behavior, as expected from the previous discussion. Finally, at the equator, \(\theta=90^{\circ}\), the ratio \(\mathcal{R}\) is precisely 0, indicating that the numbers of emitted neutrinos and antineutrinos are equal at this angle.
To comprehend the dependence of the neutrino-antineutrino net flux on the BH spin parameter, we present in the right panel of Fig. 2 the energy-integrated emission asymmetry, \(\mathrm{d}^{2}\mathcal{A}/\mathrm{d}\Omega\mathrm{d}t\), cf. Eq. (7), for varying spin parameter values, \(a_{\star}=0.999\) (purple full), \(a_{\star}=0.8\) (dashed blue), \(a_{\star}=0.5\) (dotted dark yellow), and \(a_{\star}=0.1\) (dot-dashed red). For a close-to-maximally rotating BH with \(a_{\star}=0.999\), we observe that the net emission is maximal at a value \(|\cos\theta|\approx 0.5\), whereas for other spin parameter values, the emission asymmetry peaks at the poles. This behavior is a consequence of higher-\(l\) modes increasingly dominating for \(a_{\star}\gtrsim 0.8\), which in turn
Figure 2: (Top left) Instantaneous neutrino minus antineutrino net particle flux, \(\mathcal{A}=N_{\nu}-N_{\overline{\nu}}\), as a function of energy for a BH having a mass of \(M=10^{9}\) g and spin parameter \(a_{\star}=0.999\), for different values of the polar angle \(\theta\in[0,\pi]\). (Bottom left) Instantaneous asymmetry ratio \(\mathcal{R}\) of the net neutrino minus antineutrino flux to the total emission rate \(\mathrm{d}^{3}N_{\nu+\overline{\nu}}/\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega\), cf. Eq. (10). (Right) Energy-integrated instantaneous neutrino emission asymmetry as a function of the polar angle \(\theta\) for different values of the spin parameter, \(a_{\star}=0.999\) (purple full), \(a_{\star}=0.8\) (dashed blue), \(a_{\star}=0.5\) (dotted dark yellow), and \(a_{\star}=0.1\) (dot-dashed red).
contribute away from the poles. On the other hand, for \(a_{\star}\lesssim 0.8\), the dominant mode is \(l=m=\frac{1}{2}\), resulting in a maximum emission at the poles. As a result, the net emission asymmetry peaks at the poles for these values of the spin parameter. For all spin parameter values presented, we observe that the net flux is negative for \(\cos\theta>0\), indicating a preference for antineutrino emission over neutrinos in the northern hemisphere. Conversely, for negative values of \(\cos\theta\), the behavior is opposite, with a preference for neutrino emission. At the equator, the net emission asymmetry vanishes, ensuring that the numbers of emitted neutrinos and antineutrinos coincide. This overall pattern demonstrates that antineutrinos are preferentially emitted in the northern hemisphere, whereas neutrinos are predominantly produced in the southern hemisphere.
Neutrino physics holds an unresolved question regarding the fundamental fermionic nature of these particles. Neutrinos can be either their own antiparticles, known as Majorana fermions, or distinct from their antiparticles, referred to as Dirac neutrinos. In Ref. [87], we investigated the impact of each scenario on Hawking radiation from Schwarzschild black holes. Now, the question arises as to the main effect for Kerr black holes. In the case of Dirac neutrinos, additional dofs come into play: right-handed neutrinos and left-handed antineutrinos. Consequently, the black hole would emit these states asymmetrically, with right-handed neutrinos predominantly emitted in the northern hemisphere and left-handed antineutrinos in the southern hemisphere. This raises the possibility that the previously mentioned overall asymmetry might be removed. However, the detection of these additional states via weak interactions is hindered by a helicity factor of \(m_{\nu}/E_{\nu}\sim 10^{-11}\) for the energy range of interest. As a result, the presence of the additional states becomes negligible, and the asymmetry remains intact.
For Majorana neutrinos, only two dofs exist per active mass eigenstate, corresponding to positive and negative helicities. While there is technically no distinct state known as an antineutrino, weak interactions differentiate between the helicity states due to parity violation. Consequently, a positive-helicity state exhibits a different interaction compared to a negative-helicity one in the ultra-relativistic limit, which is of interest to us here. Thus, in such a limit, it is customary to denote the negative-helicity states as "neutrinos" and the positive-helicity ones as "antineutrinos" [100; 101]. As a result, the asymmetric emission is also present for Majorana neutrinos. Consequently, detecting a discrepancy between the numbers of neutrinos and antineutrinos in a future detection of an EPBH could indicate the presence of a nonzero angular momentum.
### Primary Photon Emission
The emission of particles with higher spin, such as photons, is enhanced for rotating BHs [66]. Consequently, we anticipate that photon emission will be strongly influenced by both the EPBH spin and the polar angle \(\theta\). To investigate this dependence, we consider the equations for the radial functions \(\psi_{s}\) and the angular functions \(S(\theta)\) for a massless field with spin \(s\),
\[\frac{\mathrm{d}^{2}\psi_{s}}{\mathrm{d}r_{\star}^{2}}+(\omega^{2}-V_{s}(r_{\star}))\psi_{s} =0, \tag{12a}\] \[\frac{1}{\sin\theta}\frac{\mathrm{d}}{\mathrm{d}\theta}\left(\sin\theta\frac{\mathrm{d}S}{\mathrm{d}\theta}\right)+\left[(c\cos\theta-s)^{2}-(m\csc\theta+s\cot\theta)^{2}-s(s-1)+{}_{s}A_{lm}\right]S =0, \tag{12b}\]
where \(\lambda_{s}\equiv{}_{s}A_{lm}+c^{2}-2mc\), with \(c=a\omega\), and \({}_{s}A_{lm}\) are the angular eigenvalues. The Schrödinger-like equation, Eq. (12a), is obtained through the Chandrasekhar-Detweiler method as presented in Refs. [102; 103; 104]. The potentials \(V_{s}\) depend on the spin of the particle and, for completeness, are provided in App. B. To compute the absorption probabilities \({}_{s}\Gamma_{lm}\) for massless bosons, we solve the wave equation, Eq. (12a), and compute the transmission coefficient for a purely ingoing wave. On the other hand, for the angular eigenvalues and eigenfunctions, we adopt the approach outlined in Refs. [105; 106], where the angular separation constants are obtained by numerically solving a continued fraction equation, similar to the case of massive fermions.
Footnote 2: It is worth noting that we have verified the consistency of our results for the absorption probabilities \({}_{s}\Gamma_{lm}\) with those obtained from the BlackHawk code [74; 75].
Similarly to the neutrino case, we define the total photon emission rate as a function of energy, time, and solid angle as [107; 108]
\[\frac{\mathrm{d}^{3}N_{\gamma}}{\mathrm{d}\omega\mathrm{d}t\mathrm{ d}\Omega} =\frac{1}{4\pi}\sum_{l=s}\sum_{m=-l}^{l}\frac{{}_{1}\Gamma_{lm}}{ \exp(\varpi/T)-1}\times\] \[\left\{|{}_{-1}S_{lm}(\theta)|^{2}+|{}_{+1}S_{lm}(\theta)|^{2} \right\}. \tag{13}\]
Fig. 3 illustrates the instantaneous photon emission rate for various polar angles of an EPBH with a mass of \(M=10^{9}\) g and a spin of \(a_{\star}=0.999\). The overall emission rate behavior is similar to that of neutrino-antineutrino
emission asymmetry. At a polar angle of \(\theta=0\) (black curve), only the \(l=1\) mode contributes to the emission. As the polar angle increases, the contribution from other angular modes becomes noticeable, while the contribution from the \(l=1\) mode diminishes. An important observation is that the emission rate exhibits a symmetry under the transformation \(\theta\rightarrow\pi-\theta\), as the angular modes satisfy [106, 108],
\[{}_{-s}S_{lm}(\theta)=(-1)^{-l-m}{}_{s}S_{lm}(\pi-\theta).\]
Consequently, this symmetry implies that the measurement of photons emitted by an EPBH cannot determine, in principle, the hemisphere pointing towards Earth. Upon careful examination of the total emission rate, Eq. (13), we observe that the two different angular eigenfunctions possess distinct polarisations. Therefore, in principle, measuring the polarisation of the gamma rays could determine the value of \(\theta\). Polarisation is commonly measured through the Compton scattering angle of photons for energies below electron-positron pair production, \(E_{\gamma}\lesssim 1\) MeV [109]. For energies up to \(E_{\gamma}\lesssim 10\) MeV, the event distribution of electron-positron pairs can be analysed [110, 111, 112]. However, at higher energies, these techniques face limitations due to factors like multiple Coulomb scatterings. Some works have proposed ideas for measuring polarisation up to energies of \(E_{\gamma}\sim 30\) GeV [109, 110, 111, 113]. Nonetheless, it remains uncertain if these techniques are applicable to energies of interest here, \(E_{\gamma}\gtrsim 1\) TeV. Furthermore, Earth's magnetic field may influence the polarisation of incoming photons from the EPBH. Thus, in this study, we adopt a conservative approach and assume that future experiments will not measure gamma-ray polarisation. To resolve the ambiguity in measuring \(\theta\), we propose a multimessenger approach that incorporates the detection of neutrinos.
### Secondary spectra
Since the EPBH also emits other SM dofs, most of which are unstable, there is an additional production of neutrinos/photons arising from their subsequent decay. This additional contribution is known as the neutrino/photon secondary spectrum. To obtain this secondary spectrum, we convolve the primary spectrum of each particle species, denoted by \(i\), with the number of daughter neutrinos/photons resulting from their decay as a function of energy and solid angle. The expression for the secondary spectrum is given by
\[\frac{\mathrm{d}^{3}N_{\nu(\gamma)}^{\mathrm{sec}}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}=\int_{0}^{\infty}\mathrm{d}\omega^{\prime}\int\mathrm{d}\Omega^{\prime}\sum_{i}\frac{\mathrm{d}^{3}N_{i}}{\mathrm{d}\omega^{\prime}\mathrm{d}t\mathrm{d}\Omega^{\prime}}\frac{\mathrm{d}^{2}n_{i\rightarrow\nu(\gamma)}}{\mathrm{d}\omega\mathrm{d}\Omega}(\omega,\omega^{\prime}), \tag{14}\]
where \(\mathrm{d}^{2}n_{i\rightarrow\nu(\gamma)}/\mathrm{d}\omega\mathrm{d}\Omega\) denotes the energy and angular distribution of neutrinos (photons) resulting from the decay of the \(i\)-th particle, and \(\mathrm{d}^{3}N_{i}/\mathrm{d}\omega^{\prime}\mathrm{d}t\mathrm{d}\Omega^{\prime}\) represents the primary emission rate of the \(i\)-th particle species defined as
\[\frac{\mathrm{d}^{3}N_{i}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}=\frac{g_{i}}{4\pi}\sum_{l=s_{i}}\sum_{m=-l}^{l}\frac{{}_{s_{i}}\Gamma_{lm}}{\exp(\varpi/T)-(-1)^{2s_{i}}}\{|_{s_{i}}S_{lm}(\theta)|^{2}+|_{-s_{i}}S_{lm}(\theta)|^{2}\}, \tag{15}\]
where \(s_{i}\) is the spin and \(g_{i}\) the internal dofs of the particle species \(i\). To compute the secondary spectrum, we thus need to determine the angular and energy distributions of neutrinos resulting from the decay of each SM particle. In our analysis, we have employed a similar approach to that used by BlackHawk [74, 35], which relies on the PYTHIA event generator [114] to calculate these distributions. We use PYTHIA since it provides the full four-momentum information of the produced particles, which is crucial for our purposes.
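Once the decay kernels \(\mathrm{d}^{2}n_{i\rightarrow\nu(\gamma)}/\mathrm{d}\omega\mathrm{d}\Omega\) are tabulated (see the procedure below), Eq. (14) becomes a discretized double sum over the primary energy and emission direction. A sketch under the assumption of azimuthally averaged kernels referred to the observer's line of sight (the array layout is an assumption):

```python
import numpy as np

def secondary_spectrum(omega_p, mu_p, primary, kernel):
    """Discretized Eq. (14): convolve primary rates with decay kernels.

    omega_p : grid of primary energies omega'
    mu_p    : grid of cos(theta') for the primary emission direction
    primary : array (species, omega', mu'), d^3N_i/domega' dt dOmega'
    kernel  : array (species, omega', mu', omega), d^2n_{i->nu}/domega dOmega
    Returns d^3N^sec/domega dt dOmega on the final-energy grid."""
    dwp = np.gradient(omega_p)             # omega' integration measure
    dOm = 2.0 * np.pi * np.gradient(mu_p)  # azimuthally averaged dOmega'
    return np.einsum("a,b,iab,iabe->e", dwp, dOm, primary, kernel)
```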
The numerical strategy employed to determine \(\mathrm{d}^{2}n_{i\rightarrow\nu(\gamma)}/\mathrm{d}\omega\mathrm{d}\Omega\) can be summarized as follows. First, we set the center-of-mass energy of the collision in PYTHIA to be twice the energy of the primary particle \(\omega^{\prime}\). We generate a large number of events, typically around \(10^{5}\), for a specific channel, \(e^{+}+e^{-}\to i+\tilde{i}\rightarrow\cdots\), where \(i\) represents the primary particle of interest. For each event, we record the energy of the primary particle and calculate the relative angle between each daughter particle and the primary particle. To facilitate the analysis, we perform a three-dimensional rotation that aligns the z-axis with the direction of the primary particle. This rotation is then applied to all the final state particles,
Figure 3: Primary instantaneous photon emission spectrum from a BH having a mass \(M=10^{9}\) g and spin parameter \(a_{\star}=0.999\) for different values of the polar angle, \(\theta=0^{\circ}\) (black), \(15^{\circ}\) (beige), \(30^{\circ}\) (light orange), \(60^{\circ}\) (fuchsia), \(75^{\circ}\) (purple), and \(90^{\circ}\) (blue).
ensuring consistent orientation. Next, we construct a two-dimensional histogram to capture the distribution of energy and relative angle for each value of the center-of-mass energy. Each entry in the histogram represents the frequency of occurrence for a particular combination of energy and angle, normalised by the total number of events. We repeat this process by systematically varying the center-of-mass energy in the range of interest, corresponding to the range of \(1~{}\mathrm{GeV}-10^{6}~{}\mathrm{GeV}\). Finally, to convert the dependence on relative angle to a dependence on the solid angle of the particle distributions, we perform a rebinning of the histograms. This rebinning takes into account the different possible orientations of the polar and azimuthal angles of the decay products for a given angle \(\theta\) of the primary particle.
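A sketch of the alignment-and-binning step just described, assuming an event loop (e.g. through PYTHIA's Python interface) supplies the primary and daughter three-momenta; the helper names are illustrative:

```python
import numpy as np

def align_z(p_primary, p_daughters):
    """Rotate daughter momenta so the z-axis points along the primary
    (Rodrigues rotation). p_primary: (3,), p_daughters: (N, 3)."""
    n = p_primary / np.linalg.norm(p_primary)
    z = np.array([0.0, 0.0, 1.0])
    v = np.cross(n, z)
    s, c = np.linalg.norm(v), np.dot(n, z)
    if s < 1e-12:                       # already (anti-)aligned with z
        return p_daughters if c > 0 else -p_daughters
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    R = np.eye(3) + K + K @ K * ((1.0 - c) / s ** 2)
    return p_daughters @ R.T

def fill_histogram(energies, momenta, e_bins, mu_bins, n_events):
    """2D (energy, cos(theta_rel)) histogram, normalised per event; the
    subsequent rebinning onto the observer solid angle follows the
    prescription described in the text."""
    mu = momenta[:, 2] / np.linalg.norm(momenta, axis=1)
    H, _, _ = np.histogram2d(energies, mu, bins=[e_bins, mu_bins])
    return H / n_events
```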
In Fig. 4, we present the total--summed over all neutrino states--primary and secondary neutrino (full) and antineutrino (dot-dashed) emission rates \(d^{3}N_{\nu(\overline{\nu})}/d\omega d\Omega dt\), obtained by adding and subtracting Eqs. (9) and (10), for \(\theta=15^{\circ}\) (purple), \(\theta=90^{\circ}\) (red), and \(\theta=120^{\circ}\) (green). The primary spectra present the peaks related to the \(l\) modes, as expected from the neutrino-antineutrino emission rate in Fig. 2. We observe that for the polar angle \(\theta=15^{\circ}\), belonging to the upper hemisphere, the primary emission rate of antineutrinos is \(\sim 2\) orders of magnitude larger than that of neutrinos. Meanwhile, the rates at the equator, \(\theta=90^{\circ}\), are equal. For \(\theta=120^{\circ}\), the behaviour is the opposite: the neutrino primary emission rate is larger, although only by a factor of \(\sim 3.5\). Regarding the secondary spectra, there are no significant differences between the neutrino and antineutrino fluxes. This is expected since these neutrinos originate from the decay of other particles that do not exhibit the emission asymmetry that primary neutrinos present [80]. The main distinction arises from the dependence on the polar angle, which is consistent with the behavior observed for primary photons and neutrinos: at angles close to the poles, higher \(l\) modes contribute less compared to the first mode, whereas near the equator, the contribution of higher \(l\) modes becomes maximal.
## III Time evolution
Once a PBH reaches the final stages of its life, particle emission intensifies, culminating in a potentially observable burst for future observatories. These observatories will track the temporal evolution of the PBH, whose initial mass is linked to the observed duration of the burst. Consequently, it becomes essential to understand the underlying time evolution. To derive the evolution equations for the PBH mass and spin, we multiply the emission rate integrated over solid angle,
\[\frac{\mathrm{d}^{2}N_{i}}{\mathrm{d}\omega\mathrm{d}t} =\int\mathrm{d}\Omega\,\frac{\mathrm{d}^{3}N_{i}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}\] \[=\frac{g_{i}}{2\pi}\sum_{l=s_{i}}\sum_{m=-l}^{l}\frac{{}_{s_{i}}\Gamma_{lm}}{\exp(\varpi/T)-(-1)^{2s_{i}}}, \tag{16}\]
by the total energy of a given particle \(\omega\) or by the \(m\) quantum number, and then integrate over the phase space. Defining the evaporation functions for mass and angular momentum, \(\epsilon_{i}(M,a_{\star})\) and \(\gamma_{i}(M,a_{\star})\) per particle \(i\), respectively, as
\[\epsilon_{i}(M,a_{\star}) =\frac{g_{i}}{2\pi}\int_{0}^{\infty}\sum_{l=s_{i}}\sum_{m=-l}^{l}\frac{\omega\,_{s_{i}}\Gamma_{lm}}{\exp(\varpi/T)-(-1)^{2s_{i}}}\,\mathrm{d}\omega\,, \tag{17a}\] \[\gamma_{i}(M,a_{\star}) =\frac{g_{i}}{2\pi}\int_{0}^{\infty}\sum_{l=s_{i}}\sum_{m=-l}^{l}\frac{m\,_{s_{i}}\Gamma_{lm}}{\exp(\varpi/T)-(-1)^{2s_{i}}}\,\mathrm{d}\omega\,, \tag{17b}\]
and summing over _all_ existing species, we obtain the following system of coupled equations for the time evolution [67, 14, 33, 66]
\[\frac{\mathrm{d}M}{\mathrm{d}t} =-\epsilon(M,a_{\star})\frac{M_{p}^{4}}{M^{2}}\,, \tag{18a}\] \[\frac{\mathrm{d}a_{\star}}{\mathrm{d}t} =-a_{\star}[\gamma(M,a_{\star})-2\epsilon(M,a_{\star})]\frac{M_{p }^{4}}{M^{3}}\,. \tag{18b}\]
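A minimal sketch of how Eqs. (18) can be integrated numerically, assuming the evaporation functions of Eqs. (17) are supplied as callables (e.g. interpolations of precomputed greybody tables); units are Planckian for simplicity:

```python
import numpy as np
from scipy.integrate import solve_ivp

Mp = 1.0  # Planck mass in the chosen units

def evolve(M0, a0, eps, gam, t_max):
    """Integrate Eqs. (18) for M(t) and a_*(t) from (M0, a0).

    eps(M, a) and gam(M, a) are the mass and angular-momentum
    evaporation functions of Eqs. (17), supplied by the caller."""
    def rhs(t, y):
        M, a = y
        dM = -eps(M, a) * Mp**4 / M**2
        da = -a * (gam(M, a) - 2.0 * eps(M, a)) * Mp**4 / M**3
        return [dM, da]

    # stop the integration before the mass reaches the Planck scale
    hit_planck = lambda t, y: y[0] - 10.0 * Mp
    hit_planck.terminal = True
    return solve_ivp(rhs, (0.0, t_max), [M0, a0], events=hit_planck,
                     dense_output=True, rtol=1e-8)
```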
Figure 4: Total neutrino (full) and antineutrino (dot-dashed) instantaneous emission rates for three different values of the polar angle, \(\theta=15^{\circ}\) (purple), \(\theta=90^{\circ}\) (red), and \(\theta=120^{\circ}\) (green), and for a BH having a mass of \(M=10^{9}\) g and spin parameter \(a_{\star}=0.999\). Note that the neutrino and antineutrino lines for the secondary spectra, and for the primary spectra at \(\theta=90^{\circ}\), lie on top of each other.
Assuming only the presence of SM dofs, the numerical solution of these equations reveals that a nearly maximally spinning BH loses its angular momentum at a rate significantly faster than its mass. However, if there exists a large sector of scalar particles, such as those in the string axiverse [68], the BH spin does not completely evaporate but tends towards an asymptotic value [69, 70, 71]. This emphasizes the importance of determining the angular momentum of the BH prior to its evaporation as a means to constrain these models. In what follows, and in order to be model-independent, we will assume that the EPBH follows the time evolution dictated by the SM dofs at the onset of the final burst. The dependence on BSM scenarios will be considered elsewhere.
Hence, it becomes crucial to investigate whether the neutrino-antineutrino emission asymmetry could be observed in the final PBH burst once the BH time evolution is taken into account. We integrate the neutrino (antineutrino) particle flux over time, following the evolution of both the mass and the angular momentum of the BH,
\[\frac{\mathrm{d}^{2}N_{\nu(\overline{\nu})}}{\mathrm{d}\omega\mathrm{d} \Omega}=\int_{0}^{\tau}\,\mathrm{d}t\,\frac{\mathrm{d}^{3}N_{\nu(\overline{\nu })}}{\mathrm{d}\omega\mathrm{d}t\mathrm{d}\Omega}(M(t),a_{\star}(t)), \tag{19}\]
where \(\tau\) represents the remaining lifetime of the evaporating black hole, assumed to be equal to the observed burst duration. The parameter \(\tau\) also determines the PBH mass at the onset of the burst, given a specific initial \(a_{\star}^{\mathrm{in}}\). For instance, if the burst duration is \(\tau=100\) s and we consider a nearly maximal spin case, \(a_{\star}^{\mathrm{in}}=0.999\), the initial mass is \(M_{\mathrm{in}}\sim 8.3\times 10^{9}\) g. Conversely, for a non-rotating Schwarzschild BH, \(a_{\star}^{\mathrm{in}}=0\), the initial mass would be smaller, approximately \(M_{\mathrm{in}}\sim 6.3\times 10^{9}\) g, due to the increased lifetime of a non-rotating BH.
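Numerically, Eq. (19) amounts to evaluating the instantaneous spectrum along the \((M(t),a_{\star}(t))\) trajectory and integrating in time. A sketch reusing the `evolve` solution above, where the `rate` callable wrapping Eqs. (9)-(10) is an assumption:

```python
import numpy as np

def time_integrated(omega, theta, sol, rate, tau, n_t=200):
    """Eq. (19): integrate the instantaneous rate over the burst.

    sol  : object returned by `evolve` (with dense_output=True)
    rate : callable rate(omega, theta, M, a) -> d^3N/domega dt dOmega"""
    ts = np.linspace(0.0, tau, n_t)
    M, a = sol.sol(ts)                                 # M(t), a_*(t)
    spec = [rate(omega, theta, Mi, ai) for Mi, ai in zip(M, a)]
    return np.trapz(np.array(spec), ts, axis=0)        # d^2N/domega dOmega
```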
The total time-integrated neutrino and antineutrino fluxes for a burst duration of \(\tau=100\) s and an initial \(a_{\star}^{\mathrm{in}}=0.999\) are presented in Fig. 5. Consistent with previous figures, we present the fluxes for polar angles \(\theta=15^{\circ}\) (purple), \(\theta=90^{\circ}\) (red), and \(\theta=120^{\circ}\) (green), distinguishing between the contributions of the primary and secondary spectra. Although the spin is depleted faster than the mass, we observe an asymmetry between the neutrino and antineutrino fluences depending on the polar angle. For instance, at \(\theta=15^{\circ}\), the antineutrino fluence is greater than the neutrino fluence by a factor of \(\sim 5\) at an energy of \(E_{\nu}\sim 8\) TeV. However, we observe that the peak structure tends to vanish for this value of the polar angle at higher energies, which in turn will modify the number of events in neutrino telescopes. For other values of the polar angle, the behaviour is similar to the one encountered for the instantaneous spectra in Fig. 4; however, the peak structure is less pronounced. Moreover, the secondary spectra do not present any asymmetry between neutrinos and antineutrinos. These fluence profiles suggest potential variations in the number of neutrino vs antineutrino events in neutrino telescopes, which could enable the determination of the EPBH's spin during its final stages. This topic is further explored in the subsequent section.
## IV Detection prospects
In this section we will focus on the possibility of determining the angular momentum and polar angle orientation of an EPBH in both neutrino and photon detectors.
### Neutrino Telescopes
Current and future neutrino telescopes hold the potential to observe the final stages of PBH evaporation by detecting the flux of high-energy neutrinos emitted during this process. Let us consider the observation of \(\mu\)-tracks in the IceCube observatory resulting from an EPBH burst occurring in close proximity to Earth. The exceptional angular resolution of IceCube for high-energy tracks, achieving an impressive precision of \(\lesssim 1^{\circ}\) for energies in the TeV range [81, 82], will play a crucial role in determining the origin of these events and distinguishing them from background signals. Furthermore, IceCube has the capability to differentiate between neutrinos and antineutrinos based on their characteristic inelasticity distributions, particularly for neutrino energies below \(\lesssim 10\) TeV [81]. This energy range is of particular interest to us.
Figure 5: Total time-integrated neutrino (full) and antineutrino (dot-dashed) emission rates for three different values of the polar angle, \(\theta=15^{\circ}\) (purple), \(\theta=90^{\circ}\) (red), and \(\theta=120^{\circ}\) (green), and for a burst duration of \(\tau=100\) s and initial spin of \(a_{\star}^{\mathrm{in}}=0.999\).
To accurately estimate the number of \(\mu\)-track events detected on Earth, we must account for neutrino flavor oscillations that occur during their journey from the source to the detector. As mentioned before, the primary neutrinos and antineutrinos are assumed to be produced as mass eigenstates. Consequently, the fluence of muon neutrinos (antineutrinos) originating from the primary spectra at Earth can be expressed as
\[F^{\rm primary}_{\nu_{\mu}(\overline{\nu}_{\mu})}(\theta)=\frac{1}{d_{L}^{2}} \sum_{i=1,2,3}|U_{\mu i}|^{2}\frac{d^{2}N_{\nu_{i}(\overline{\nu}_{i})}}{d \omega d\Omega}, \tag{20}\]
where \(U\) denotes the Pontecorvo-Maki-Nakagawa-Sakata mixing matrix, \(d^{2}N_{\nu_{i}(\overline{\nu}_{i})}/d\omega d\Omega\) represents the time-integrated fluence of primary neutrinos (antineutrinos) for mass eigenstate \(i\), and \(d_{L}\) corresponds to the distance between the PBH and Earth. Notably, the absence of the usual \(4\pi\) factor in the denominator is due to the lack of spherical symmetry of particle emission from a Kerr EPBH, as emphasized previously. Since the energies at which these primary neutrinos are emitted are significantly higher than the neutrino masses, the primary spectra for all three mass eigenstates are identical.
In contrast, secondary neutrinos are generated through weak interactions, leading them to be produced in flavor eigenstates. Neutrino oscillations decohere over the PBH-Earth distances, since these distances are expected to greatly exceed the standard oscillation lengths. Consequently, the fluence for the secondary component can be expressed as
\[F^{\rm secondary}_{\nu_{\mu}(\overline{\nu}_{\mu})}(\theta)=\frac{1}{d_{L}^{ 2}}\sum_{i=1}^{3}\sum_{\alpha=e}^{\tau}|U_{\mu i}|^{2}|U_{\alpha i}|^{2}\frac{ d^{2}N_{\nu_{\alpha}(\overline{\nu}_{\alpha})}^{\rm sec}}{d\omega d\Omega} \tag{21}\]
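Both pieces combine into the muon-neutrino fluence at the detector. A minimal sketch, with approximate \(|U_{\alpha i}|^{2}\) values inserted by hand as an assumption (they should be taken from a current global fit):

```python
import numpy as np

# Approximate PMNS moduli squared |U_{alpha i}|^2 (rows: e, mu, tau)
U2 = np.array([[0.68, 0.30, 0.02],
               [0.11, 0.35, 0.54],
               [0.21, 0.35, 0.44]])

def numu_fluence(primary_mass, secondary_flavor, dL):
    """Eqs. (20)-(21): nu_mu fluence at Earth.

    primary_mass[i, :]     : fluence of mass eigenstate i (identical here)
    secondary_flavor[a, :] : secondary fluence produced in flavor a"""
    prim = np.tensordot(U2[1], primary_mass, axes=1)  # sum_i |U_mui|^2 F_i
    # decohered transition probability P(a -> mu) = sum_i |U_mui|^2 |U_ai|^2
    P = U2 @ U2[1]
    sec = np.tensordot(P, secondary_flavor, axes=1)
    return (prim + sec) / dL**2
```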
The number of muon neutrino (antineutrino) events in IceCube for a given zenith angle \(\zeta\), corresponding to the location of the EPBH, is given by the following expression
\[N_{\nu_{\mu}(\overline{\nu}_{\mu})}(\theta)=\int_{\omega_{\rm min}}^{\omega_{ \rm max}}F_{\nu_{\mu}(\overline{\nu}_{\mu})}(\theta)\mathscr{A}_{\rm eff}( \omega,\zeta)d\omega, \tag{22}\]
where \(F_{\nu_{\mu}(\overline{\nu}_{\mu})}=F^{\rm primary}_{\nu_{\mu}(\overline{\nu}_{\mu})}+F^{\rm secondary}_{\nu_{\mu}(\overline{\nu}_{\mu})}\) represents the total neutrino fluence at the detector, and \(\mathscr{A}_{\rm eff}(\omega,\zeta)\) denotes the effective area of IceCube [82]. In principle, the effective areas should be different for neutrinos and antineutrinos. However, we use the publicly available effective area [82], which corresponds to the average for neutrinos and antineutrinos. The energy integration is performed over the range between IceCube's threshold energy of \(\omega_{\rm min}=100\) GeV and the maximum energy of the neutrino fluence, which, in turn, depends on the duration of the burst [61]. The main background in IceCube that could affect the measurement of neutrinos from an EPBH corresponds to high-energy atmospheric neutrinos creating observable tracks. Nevertheless, the expected number of events from these neutrinos is of order \(10^{-4}\) for a time interval of 100 s [61], making such a background negligible.
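The event-rate integral of Eq. (22) is then a single quadrature against the detector response; a sketch, assuming the effective area has been interpolated onto the fluence grid at the relevant zenith angle:

```python
import numpy as np

def n_events(omega, fluence, Aeff):
    """Eq. (22): integrate fluence times effective area over energy,
    with `omega` restricted to [omega_min, omega_max]."""
    return np.trapz(fluence * Aeff, omega)
```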
Figure 6 illustrates the variation of neutrino and antineutrino events in IceCube as a function of the zenith angle (top) and the ratio of neutrinos to antineutrinos (bottom) for an EPBH having an initial \(a_{\star}^{\rm in}=0.999\), a burst duration of \(\tau=100\) s, at a distance of \(d_{L}=10^{-4}\) pc. Similar to previous figures, we consider the polar angles \(\theta=15^{\circ}\) (purple), \(\theta=90^{\circ}\) (red), and \(\theta=120^{\circ}\) (green), representing the orientation of the Earth relative to the axis of rotation of the PBH. For the case of \(\theta=15^{\circ}\), it is observed that muon antineutrino events dominate, while neutrino events are only \(\sim 80\%\) of those for antineutrinos across various zenith angles. Conversely, at \(\theta=120^{\circ}\), where the Earth is positioned in the southern hemisphere of the EPBH, neutrino events exceed antineutrino events by \(\sim 12\%\). In line with expectations, when the Earth aligns with the EPBH's equatorial plane, neutrino and antineutrino events coincide.
### Gamma Ray Experiments
Gamma rays, emitted abundantly during the final stages of the PBH lifetime, can be effectively detected by experiments specifically designed for high-energy photon searches, such as the High Altitude Water Cherenkov observatory (HAWC) [56, 57]. HAWC is a very-high-energy
Figure 6: Muon neutrino (full) and antineutrino (dot-dashed) events in IceCube (top), and the neutrino-to-antineutrino ratio (bottom) as function of the zenith angle for an EPBH at a distance of \(d_{L}=10^{-4}\) pc and a burst duration of \(\tau=100\) s for different values of the polar angle: \(\theta=15^{\circ}\) (purple), \(\theta=90^{\circ}\) (red), and \(\theta=120^{\circ}\) (green), representing the orientation of the Earth relative to the axis of rotation of the PBH.
(VHE) air shower array located on the slopes of the Sierra Negra volcano at an altitude of 4100 m above sea level. HAWC's main array comprises 300 cylindrical water tanks, each equipped with photomultiplier tubes that detect the Cherenkov light produced by secondary particles resulting from the interaction of VHE gamma rays. With a wide field of view of approximately 2 sr, HAWC is capable of detecting photons with energies ranging from \(10^{2}\) to \(10^{5}\) GeV [57]. Additionally, HAWC boasts excellent angular resolution, ranging from approximately \(0.2^{\circ}\) to \(1^{\circ}\), rendering it well-suited for capturing transient events such as EPBH bursts [57].
To estimate the number of gamma-ray events from the burst of a EPBH, we follow a similar approach as with neutrinos,
\[N_{\gamma}(\theta)=\int_{\omega_{\rm min}}^{\omega_{\rm max}}F_{\gamma}( \theta)\mathscr{A}_{\rm eff}^{\rm HAWC}(\omega,\zeta)\,d\omega,\]
where \(F_{\gamma}(\theta)\) encompasses both primary and secondary photon emissions, and \(\mathscr{A}_{\rm eff}^{\rm HAWC}(\omega,\zeta)\) represents the effective area of HAWC [56]. Given the larger cross-section of interaction for photons, we anticipate that the number of photon events will surpass that of neutrinos. However, as previously mentioned, the symmetric emission of photons from both PBH hemispheres introduces an ambiguity in determining the orientation of the axis of rotation relative to Earth. Consequently, a simultaneous measurement of both neutrinos and photons would provide valuable insights, not only regarding the initial angular momentum of the EPBH at the onset of the burst but also enabling determination of the axis of rotation's orientation relative to Earth.
### Sensitivity
To demonstrate the feasibility of a multimessenger approach, we conduct a sensitivity analysis of combined neutrino/antineutrino and photon measurements. This analysis aims to understand the potential of such an approach in determining the initial parameters of the EPBH at the beginning of the burst. We employ a rate analysis utilizing the following test statistic
\[\chi^{2}=\frac{(N_{\nu_{\mu}}-N_{\nu_{\mu}}^{\rm b})^{2}}{N_{\nu_{ \mu}}^{\rm b}}+\frac{(N_{\overline{\nu}_{\mu}}-N_{\overline{\nu}_{\mu}}^{\rm b })^{2}}{N_{\overline{\nu}_{\mu}}^{\rm b}}+\frac{(N_{\gamma}-N_{\gamma}^{\rm b} )^{2}}{N_{\gamma}^{\rm b}}, \tag{23}\]
where \(N_{\nu_{\mu}}\), \(N_{\overline{\nu}_{\mu}}\), and \(N_{\gamma}\) represent the predicted events of muon neutrinos, muon antineutrinos, and photons, respectively, for a given set of initial EPBH parameters. \(N_{\nu_{\mu}}^{\rm b}\), \(N_{\overline{\nu}_{\mu}}^{\rm b}\), and \(N_{\gamma}^{\rm b}\) are the corresponding benchmark event values. For our analysis, we assume benchmark EPBH parameter values of \(a_{\star}^{\rm in}=0.5\), \(\theta=45^{\circ}\), and a distance of \(d_{L}=1.5\times 10^{-4}\) pc, yielding values of \(N_{\nu_{\mu}}^{\rm b}=800.65\), \(N_{\overline{\nu}_{\mu}}^{\rm b}=873.09\), and \(N_{\gamma}^{\rm b}=2.81\times 10^{6}\) for \(\tau=100\) s. Additionally, we assume an ideal scenario with no backgrounds and perfect discrimination between neutrinos and antineutrinos. Moreover, we consider that the PBH-Earth distance is measured via some independent parallax technique. While these assumptions are quite optimistic, they provide a best-case scenario for assessing the feasibility of determining the properties of an EPBH using neutrinos.
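The sensitivity regions of Fig. 7 follow from scanning this test statistic over the initial parameters. A sketch, where `events` is assumed to wrap the full pipeline (time evolution, fluences, and detector response) described above:

```python
import numpy as np

def chi2(pred, bench):
    """Rate-only test statistic of Eq. (23)."""
    return sum((n - nb) ** 2 / nb for n, nb in zip(pred, bench))

def scan(events, a_grid, mu_grid, bench):
    """chi^2 over the (a_*^in, cos(theta)) plane; the 95% CL region for
    two parameters is chi2_map - chi2_map.min() < 5.99."""
    return np.array([[chi2(events(a, mu), bench) for mu in mu_grid]
                     for a in a_grid])
```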
Figure 7 depicts the sensitivity to the initial characteristics of the EPBH at the onset of the burst for three different durations, \(\tau=1\) s (left), \(\tau=10\) s (center), and \(\tau=100\) s (right). The contributions from individual measurements of photons (yellow) and neutrinos (green), as well as their combination (purple), are presented, with all regions corresponding to a 95% CL. The input benchmark value is denoted by the cyan star. Analyzing the photon measurement alone, we observe that the large number of photon events facilitates a precise determination of the initial EPBH spin at the onset of the observed burst. However, the allowed regions exhibit a degeneracy between the hemispheres, resulting in a region symmetric under the transformation \(\cos\theta\rightarrow-\cos\theta\), and exhibit a convex-like structure for the larger burst durations, \(\tau=10\) s, \(100\) s. This structure arises due to the enhanced emission of photons at the EPBH poles, resulting in similar gamma-ray event rates for EPBHs with lower spin parameters in the polar region as compared to higher-spin BHs at different polar angles pointing towards Earth. For the smaller burst time of \(\tau=1\) s, such a structure is absent since the number of events at the poles differs from that at the equator by less than \(\sim 1\%\), due to the short duration of the burst.
In contrast, the neutrino measurement yields a broader region for all burst durations, primarily due to the lower number of neutrino events, approximately three orders of magnitude smaller than that of photons. However, we observe that for the shortest burst duration (\(\tau=1\) s), neutrinos alone can exclude an initially close-to-maximally rotating EPBH (\(a_{\star}^{\rm in}=0.999\)) for values of \(\cos\theta\lesssim 1\) at more than 95% CL. For the same burst duration, neutrinos could exclude \(a_{\star}^{\rm in}\gtrsim 0.75\) for an EPBH with its northern pole pointing towards Earth. This is because, in the considered benchmark scenario, there are more antineutrino events than neutrino events. Conversely, for \(\cos\theta=-1\), the behavior is the opposite, with neutrino events being approximately 14% larger than those for the benchmark, while antineutrinos are reduced by about 5%.
Longer observation times significantly contribute to a better determination of the initial EPBH parameters, as the numbers of observed neutrinos and photons both increase with the burst duration. For a burst duration of \(\tau=10\) s, it becomes apparent that the asymmetry in neutrino-antineutrino emission begins to impact the combined sensitivity, reducing the degeneracy associated with determining the EPBH hemisphere oriented towards Earth. Notably, the previously degenerate solution at \(\theta=135^{\circ}\) is now disfavored with \(\Delta\chi^{2}=5.78\). Moreover, the neutrino measurement alone excludes values of \(a_{\star}^{\rm in}\gtrsim 0.6\) for \(\cos\theta\lesssim-0.5\) at a
95% confidence level, while for \(\cos\theta\gtrsim 0.5\), the exclusion occurs exclusively for highly rotating EPBHs, with \(a_{\star}^{\rm in}\gtrsim 0.8\) at the same confidence level. Such a sensitivity is attained through the enhanced emission of fermions from more rapidly rotating BHs, which leads to an increased net particle flux.
The exclusion of the solution in the second quadrant (\(\theta=135^{\circ}\)) at a level exceeding 95% is ultimately achieved for the longer burst duration of \(\tau=100\) s. Furthermore, the neutrino measurement independently excludes a significant portion of the parameter space, particularly for values \(\cos\theta\lesssim-0.1\) and \(a_{\star}^{\rm in}\gtrsim 0.1\). For \(a_{\star}^{\rm in}\lesssim 0.1\), we observe that the sensitivity becomes nearly independent of \(\cos\theta\). This is because these EPBHs start to resemble a Schwarzschild BH, which is spherically symmetric and does not exhibit neutrino emission asymmetry. In this case, the sensitivity primarily arises from the difference in events due to the enhanced emission for Kerr BHs. A similar behavior occurs for \(a_{\star}^{\rm in}\gtrsim 0.8\), where the increased emission of neutrinos and antineutrinos results in a significantly larger number of events than expected for the chosen benchmark. For instance, at \(a_{\star}^{\rm in}=0.9\) and \(\theta=45^{\circ}\), we find \(N_{\nu_{\mu}}=840.97\) and \(N_{\overline{\nu}_{\mu}}=975.45\), making this point disfavored at approximately the \(4\sigma\) level. Thus, by combining photon and neutrino measurements, the degeneracy present in the photon measurement is effectively resolved, leading to a more precise determination of the EPBH's initial characteristics.
Hence, it is natural to question how close the EPBH must be for neutrinos to effectively break the degeneracy in determining the angle \(\theta\). To gain insight into this matter, we present in Fig. 8 the \(\Delta\chi^{2}\) between the two degenerate solutions, \(\theta\) and \(\pi-\theta\), as a function of the EPBH-Earth distance for fixed values of the initial spin parameter and a burst duration of \(\tau=100\) s, assuming \(\theta=45^{\circ}\). We observe that, for an initially close-to-maximally rotating black hole, the neutrino emission asymmetry is capable of breaking the degeneracy at a \(3\sigma\) confidence level when the EPBH-Earth distance is approximately \(2\times 10^{-4}\) pc. As expected, for more slowly
Figure 8: Chi-squared difference between the two degenerate solutions, \(\theta\) and \(\pi-\theta\), in the photon measurement, after combination with the neutrino measurement, as a function of the distance to the EPBH for various initial spin parameter values, \(a_{\star}^{\rm in}=0.999\) (blue), \(a_{\star}^{\rm in}=0.5\) (orange), and \(a_{\star}^{\rm in}=0.1\) (green). The dashed vertical lines indicate the distances of the outer planets to the Sun and a light-day distance for reference.
Figure 7: Sensitivity to the EPBH’s initial parameters at the onset of a burst having three durations of \(\tau=1\) s (left), \(\tau=10\) s (center), and \(\tau=100\) s (right). The allowed regions are at the 95% CL from photons (yellow), neutrinos (green), and their combination (purple). We assumed a distance of \(d_{L}=10^{-4}\) pc, an initial \(a_{\star}^{\rm in}=0.5\) and the Earth placed at a polar angle of \(\theta=45^{\circ}\) with respect to the EPBH axis of rotation.
spinning EPBHs, the required distances are smaller due to the decreased asymmetry in emission. For \(a_{\star}^{\rm in}=0.5\), the necessary distance is roughly \(1.17\times 10^{-4}\) pc, while for \(a_{\star}^{\rm in}=0.1\), the distance is approximately \(2.3\times 10^{-5}\) pc. To provide a sense of scale, we have indicated in Fig. 8 the distances to the Sun of the outer planets as vertical dashed lines. From this, we observe that to determine the orientation of the EPBH with respect to Earth, a close-to-maximally rotating EPBH would need to be as close as Pluto's aphelion. In contrast, an EPBH with an initial \(a_{\star}^{\rm in}=0.1\) would need to be much closer, approximately as close as Jupiter is to the Sun, in order to measure the neutrino-antineutrino emission asymmetry and enable the determination of the EPBH hemisphere facing Earth.
_Future neutrino sensitivity. --_ Our previous estimates for determining the EPBH's initial characteristics using neutrino asymmetric emission were based on IceCube's current capabilities, particularly its effective area. However, it is important to note that IceCube is set to undergo an upgrade, which is anticipated to increase its effective area by approximately five times, leading to a substantial enhancement in the detection of high-energy neutrinos [115]. Furthermore, the development of new techniques, such as deep learning, holds promise for refining the measurement of cascades. This advancement has already demonstrated improved angular resolution for neutrino observations, as evidenced by recent findings from the Galactic Center [116]. As a result, we can reasonably expect that these innovations will significantly bolster the statistics for nearby EPBH events, especially if they aid in measuring additional neutrino flavors.
In addition to IceCube's upgrade, a plethora of new experiments, including KM3NeT [117], P-ONE [118], Trident [119], and Baikal-GVD [120], are either currently being built or in the planning stages. These experiments are also expected to measure neutrinos at the TeV scale, and in the event of an EPBH burst occurring near Earth, they will undoubtedly contribute to a substantial increase in the data. Furthermore, the distribution of these new facilities across different locations on Earth introduces the possibility of an independent neutrino measurement of the Earth-EPBH distance if a nearby event is observed. Although we do not quantitatively assess the extent to which all these neutrino telescopes would improve the measurement, we can anticipate a substantial improvement in the determination of the initial characteristics of an EPBH, should one be observed in the vicinity of Earth.
## V Conclusions
The detection of an evaporating black hole near Earth would represent an extraordinary triumph for theoretical physics, validating our understanding of quantum fields in curved spacetimes and providing insights into the existing dofs in nature. Moreover, the detection of an evaporating black hole with an initial non-zero spin would hint at the existence of physics beyond the Standard Model.
Neutrino emission from Kerr black holes exhibits unique behavior, primarily due to the fact that only left-handed neutrinos and right-handed antineutrinos interact weakly. Since particles with positive helicity are preferentially emitted along the rotation axis of a Kerr black hole, while those with negative helicity are predominantly emitted in the opposite direction, neutrinos present an asymmetric emission: neutrinos are preferentially emitted in the southern hemisphere, whereas antineutrinos are mostly emitted in the northern hemisphere. In this work, we have proposed to exploit this neutrino emission asymmetry as a powerful tool to determine the initial properties of an EPBH. Beyond the angular distributions of primary neutrinos and photons, we have also derived, for the first time, the angular distribution of secondary neutrinos and photons. We have analysed the time evolution of EPBHs and calculated the full net neutrino flux integrated over the observed burst duration. Our analysis assumed that the EPBH mass and spin follow the standard time evolution, considering the dofs in the SM. An interesting aspect to consider is that this time evolution may be influenced by additional BSM dofs, particularly if they manifest as scalars. This could potentially result in a distinct integrated neutrino-antineutrino emission asymmetry. Consequently, we might expect to observe specific correlations between the detected neutrino and antineutrino events, contingent on the specific BSM scenario at play. This will be investigated in detail in future work.
We have computed the expected events in the IceCube observatory, finding that, for an initially close-to-maximally rotating EPBH at a distance of \(10^{-4}\) pc, the ratio of antineutrino to neutrino events would be \(\sim 1.2\) (\(\sim 0.9\)) if Earth is located at an angle of \(\theta=15^{\circ}\) (\(120^{\circ}\)) with respect to the rotation axis. Furthermore, through simultaneous measurements of photons and neutrinos emitted from an EPBH, we have explored the possibility of determining not only the initial black hole angular momentum but also the orientation of its axis of rotation with respect to Earth. While gamma-ray events are expected to significantly outnumber neutrino events, the symmetry of the angular eigenfunctions of photon emission under the transformation \(\theta\rightarrow\pi-\theta\) makes it challenging to determine the black hole hemisphere facing Earth. However, the net neutrino-antineutrino flux displays a definite dependence on the polar angle, aiding in breaking the degeneracy present in the photon measurement.
Under optimistic assumptions regarding backgrounds and neutrino-antineutrino discrimination in IceCube, we found that, depending on the burst duration, a neutrino measurement could lift the degeneracy in determining the polar angle quadrant. To exclude one of the degenerate angle solutions in the photon
measurement at more than 95% CL, we deduced that the EPBH should be within a distance of approximately \(1.78\times 10^{-4}\) pc for an initial spin parameter of \(a_{\star}^{\text{in}}=0.5\), or \(3.6\times 10^{-5}\) pc for \(a_{\star}^{\text{in}}=0.1\). These distances are smaller than the Neptune-Sun or Saturn-Sun distances, respectively. Consequently, an EPBH approaching the final stages of its life could exist within our solar system, presenting us with a unique opportunity to directly observe Hawking radiation through a multimessenger approach and investigate its properties before the final burst.
###### Acknowledgements.
The author is immensely thankful to Jessica Turner for her encouragement to pursue this work and for engaging in several insightful discussions on various topics related to this work. Moreover, the author would like to thank Jessica Turner and Pedro Machado for their meticulous review of the manuscript and their valuable comments, which significantly improved the quality of this paper. In addition, special thanks are owed to Ivan Martinez-Soler for generously providing assistance with queries related to IceCube, and to Sam Dolan for valuable correspondence on the determination of the angular eigenvalues for fermions. Lastly, the author is deeply grateful for the warm hospitality extended by the Particle and Astroparticle Division of the Max-Planck-Institut für Kernphysik, where a portion of this research was finalized. This work has been funded by the UK Science and Technology Facilities Council (STFC) under grant ST/T001011/1. This project has received funding/support from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN. This work has made use of the Hamilton HPC Service of Durham University.
## Appendix A Fermions in a Kerr Spacetime
Let us briefly describe the separation of variables in the Dirac equation for massive fermions on a Kerr background [88; 89; 90; 91]. The Dirac equation for a fermion with mass \(\mu\) is
\[(i\gamma^{\alpha}\hat{D}_{\alpha}-\mu)\Psi=0, \tag{A1}\]
where \(\hat{D}_{\alpha}=\partial_{\alpha}-\Gamma_{\alpha}\), with \(\Gamma_{\alpha}\) the spin connection matrices, and \(\gamma^{\mu}\) are the Dirac matrices in the curved spacetime satisfying the algebra
\[\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}I_{4\times 4}, \tag{A2}\]
with \(g^{\mu\nu}\) the inverse metric tensor associated to the Kerr metric. An appropriate choice for these matrices for the Kerr spacetime in the Boyer-Lindquist coordinates is
\[\gamma^{t} =\frac{r^{2}+a^{2}}{\rho\sqrt{\Delta}}\tilde{\gamma}^{0}+\frac{a\sin\theta}{\rho}\tilde{\gamma}^{2},\ \ \ \ \ \gamma^{r}=\frac{\sqrt{\Delta}}{\rho}\tilde{\gamma}^{3}\] \[\gamma^{\phi} =\frac{a}{\rho\sqrt{\Delta}}\tilde{\gamma}^{0}+\frac{1}{\rho\sin\theta}\tilde{\gamma}^{2},\ \ \ \ \ \ \ \ \gamma^{\theta}=\frac{1}{\rho}\tilde{\gamma}^{1}, \tag{A3}\]
where \(\rho^{2}=r^{2}+a^{2}\cos\theta\), and \(\tilde{\gamma}^{a}\) are the usual Dirac matrices in a flat spacetime. We choose here the standard chiral representation for \(\tilde{\gamma}^{a}\),
\[\tilde{\gamma}^{0}=\begin{pmatrix}0&I\\ I&0\end{pmatrix},\ \ \ \tilde{\gamma}^{i}=\begin{pmatrix}0&\sigma_{i}\\ -\sigma_{i}&0\end{pmatrix}. \tag{13}\]
Here \(\sigma_{i}\) are the Pauli matrices, and \(I\) is the \(2\times 2\) identity matrix. The chirality matrix \(\gamma^{5}\) is defined in a standard manner,
\[\gamma^{5}=i\tilde{\gamma}^{0}\tilde{\gamma}^{1}\tilde{\gamma}^{2}\tilde{\gamma}^{3}=\begin{pmatrix}-I&0\\ 0&I\end{pmatrix}. \tag{14}\]
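To make the algebra concrete, the following NumPy snippet (an illustrative check, not part of the original analysis) builds the chiral-representation matrices above and verifies the flat-spacetime Clifford algebra \(\{\tilde{\gamma}^{a},\tilde{\gamma}^{b}\}=2\eta^{ab}I_{4\times 4}\) with \(\eta=\mathrm{diag}(1,-1,-1,-1)\), together with the block form of \(\gamma^{5}\).

```python
import numpy as np

I2 = np.eye(2)
zero = np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_1
         np.array([[0, -1j], [1j, 0]]),               # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]  # sigma_3

# Chiral representation: gamma^0 has off-diagonal identity blocks,
# gamma^i off-diagonal Pauli blocks.
g0 = np.block([[zero, I2], [I2, zero]])
gi = [np.block([[zero, s], [-s, zero]]) for s in sigma]
gammas = [g0] + gi

eta = np.diag([1.0, -1.0, -1.0, -1.0])
for a in range(4):
    for b in range(4):
        anti = gammas[a] @ gammas[b] + gammas[b] @ gammas[a]
        assert np.allclose(anti, 2.0 * eta[a, b] * np.eye(4))

g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]
assert np.allclose(g5, np.diag([-1, -1, 1, 1]))  # gamma^5 = diag(-I, I)
print("Clifford algebra and gamma^5 block form verified.")
```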
The spin-connection matrices are
\[\Gamma_{\alpha}=\frac{1}{4}\omega_{\alpha bc}\tilde{\gamma}^{b}\tilde{\gamma}^{c}, \tag{15}\]
where \(\omega_{\alpha bc}\) is the spin connection. Explicitly, the spin connection matrices are
\[\Gamma_{t} =\frac{GM}{2}\begin{pmatrix}\varrho^{-2}\sigma_{3}&0\\ 0&-(\varrho^{*})^{-2}\sigma_{3}\end{pmatrix}, \tag{16a}\] \[\Gamma_{r} =-\frac{1}{2}\frac{a\sin\theta}{\sqrt{\Delta}}\begin{pmatrix}\varrho^{-1}\sigma_{2}&0\\ 0&-(\varrho^{*})^{-1}\sigma_{2}\end{pmatrix},\] (16b) \[\Gamma_{\theta} =-\frac{1}{2}\sqrt{\Delta}\begin{pmatrix}(i\varrho)^{-1}\sigma_{2}&0\\ 0&-(i\varrho^{*})^{-1}\sigma_{2}\end{pmatrix},\] (16c) \[\Gamma_{\phi} =\frac{1}{2}\sqrt{\Delta}\sin\theta\begin{pmatrix}(i\varrho)^{-1}\sigma_{1}&0\\ 0&-(i\varrho^{*})^{-1}\sigma_{1}\end{pmatrix}+\frac{1}{2}\begin{pmatrix}\vartheta\sigma_{3}&0\\ 0&-\vartheta^{*}\sigma_{3}\end{pmatrix}, \tag{16d}\]
where \(\varrho=r+ia\cos\theta\), and \(\vartheta=i\cos\theta-a\varrho^{-2}(\varrho+GM)\sin^{2}\theta\). The contraction of the spin connection with the Dirac matrices, \(\gamma^{\alpha}\Gamma_{\alpha}\), which appears in the Dirac equation, is explicitly given by
\[\gamma^{\alpha}\Gamma_{\alpha}=\frac{1}{2}\begin{pmatrix}0&s_{\theta}^{*}\sigma_{1}+s_{r}^{*}\sigma_{3}\\ -s_{\theta}\sigma_{1}-s_{r}\sigma_{3}&0\end{pmatrix}, \tag{100}\]
where
\[s_{r}=\frac{\rho}{\sqrt{\Delta}}\frac{1}{\varrho\sqrt{\Delta}}\frac{\partial} {\partial r}(\varrho\sqrt{\Delta}),\quad s_{\theta}=\rho\frac{1}{\varrho \sin\theta}\frac{\partial}{\partial\theta}(\varrho\sin\theta).\]
Following Ref. [91], we propose an ansatz to separate the Dirac equation,
\[\Psi=\Delta^{-\frac{1}{4}}e^{-i\omega t+im\phi}\begin{pmatrix}\varrho^{-1/2} \eta_{-}(r,\theta)\\ (\varrho^{*})^{-1/2}\eta_{+}(r,\theta)\end{pmatrix}, \tag{101}\]
with the spinors \(\eta_{\pm}\)
\[\eta_{-}(r,\theta)=-\begin{pmatrix}R_{2}(r)\,_{-\frac{1}{2}}S_{ lm}(\theta)\\ R_{1}(r)\,_{+\frac{1}{2}}S_{lm}(\theta)\end{pmatrix}, \tag{102a}\] \[\eta_{+}(r,\theta)=\quad\begin{pmatrix}R_{1}(r)\,_{-\frac{1}{2}} S_{lm}(\theta)\\ R_{2}(r)\,_{+\frac{1}{2}}S_{lm}(\theta)\end{pmatrix}. \tag{102b}\]
Here, \(R_{1,2}(r)\) and \({}_{\pm\frac{1}{2}}S_{lm}(\theta)\) are the radial and angular functions associated with massive fermions. Substituting this ansatz into the Dirac equation, one finds Eqs. (2) and (3).
## Appendix B Chandrasekhar - Detweiler Potentials
In a series of works, Chandrasekhar and Detweiler proposed a method to transform the general Teukolsky equations into Schrödinger-like equations [102; 103; 104]. They found explicit forms for the potentials depending on the spin of the particle. For scalars and vectors, the potentials read
\[V_{0}(r) =\frac{\Delta}{\rho^{4}}\left(\lambda_{0}+\frac{\Delta+2r(r-GM)}{ \rho^{2}}-\frac{3r^{2}\Delta}{\rho^{4}}\right), \tag{103a}\] \[V_{1}(r) =\frac{\Delta}{\rho^{4}}\left[\lambda_{1}+2-\alpha^{2}\frac{ \Delta}{\rho^{4}}\pm i\alpha\rho^{2}\frac{\mathrm{d}}{\mathrm{d}r}\left(\frac {\Delta}{\rho^{4}}\right)\right], \tag{103b}\]
where \(\lambda_{s}\) are the angular eigenvalues, as defined in the text, and \(\alpha^{2}=a^{2}+am/\omega\). For vectors, there are two different potentials, depending on the sign of the last term; in our numerical code, we have chosen the minus sign.
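As a sketch of how these potentials can be evaluated in practice (not the author's numerical code), the snippet below computes the scalar and vector potentials above on a radial grid. It assumes the standard Kerr function \(\Delta=r^{2}-2GMr+a^{2}\) and takes \(\rho^{2}=r^{2}+\alpha^{2}\), following the Chandrasekhar-Detweiler convention; the radial derivative in \(V_{1}\) is taken numerically.

```python
import numpy as np

def cd_potentials(r, GM, a, omega, m, lam0, lam1, sign=-1.0):
    """Chandrasekhar-Detweiler potentials for spin 0 and spin 1 on a radial grid r."""
    Delta = r**2 - 2.0 * GM * r + a**2   # Kerr horizon function (assumed convention)
    alpha2 = a**2 + a * m / omega        # alpha^2 as defined in the text
    alpha = np.sqrt(alpha2 + 0j)         # may be imaginary if alpha2 < 0
    rho2 = r**2 + alpha2                 # assumed CD convention for rho^2
    rho4 = rho2**2

    V0 = (Delta / rho4) * (lam0 + (Delta + 2.0 * r * (r - GM)) / rho2
                           - 3.0 * r**2 * Delta / rho4)

    ddr = np.gradient(Delta / rho4, r)   # numerical d/dr (Delta / rho^4)
    V1 = (Delta / rho4) * (lam1 + 2.0 - alpha2 * Delta / rho4
                           + sign * 1j * alpha * rho2 * ddr)
    return V0, V1
```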
|
2301.11826 | Deep Clustering Survival Machines with Interpretable Expert
Distributions | Conventional survival analysis methods are typically ineffective to
characterize heterogeneity in the population while such information can be used
to assist predictive modeling. In this study, we propose a hybrid survival
analysis method, referred to as deep clustering survival machines, that
combines the discriminative and generative mechanisms. Similar to the mixture
models, we assume that the timing information of survival data is generatively
described by a mixture of certain numbers of parametric distributions, i.e.,
expert distributions. We learn weights of the expert distributions for
individual instances according to their features discriminatively such that
each instance's survival information can be characterized by a weighted
combination of the learned constant expert distributions. This method also
facilitates interpretable subgrouping/clustering of all instances according to
their associated expert distributions. Extensive experiments on both real and
synthetic datasets have demonstrated that the method is capable of obtaining
promising clustering results and competitive time-to-event predicting
performance. | Bojian Hou, Hongming Li, Zhicheng Jiao, Zhen Zhou, Hao Zheng, Yong Fan | 2023-01-27T16:27:18Z | http://arxiv.org/abs/2301.11826v4 | # Deep Clustering Survival Machines with Interpretable Expert Distributions
###### Abstract
Conventional survival analysis methods are typically ineffective to characterize heterogeneity in the population while such information can be used to assist predictive modeling. In this study, we propose a hybrid survival analysis method, referred to as deep clustering survival machines, that combines the _discriminative_ and _generative_ mechanisms. Similar to the mixture models, we assume that the timing information of survival data is _generatively_ described by a mixture of certain numbers of parametric distributions, i.e., _expert distributions_. We learn weights of the expert distributions for individual instances according to their features _discriminatively_ such that each instance's survival information can be characterized by a weighted combination of the learned constant expert distributions. This method also facilitates interpretable subgrouping/clustering of all instances according to their associated expert distributions. Extensive experiments on both real and synthetic datasets have demonstrated that the method is capable of obtaining promising clustering results and competitive time-to-event predicting performance.
Bojian Hou\({}^{\star}\), Hongming Li\({}^{\star}\), Zhicheng Jiao\({}^{\dagger}\), Zhen Zhou\({}^{\star}\), Hao Zheng\({}^{\star}\), Yong Fan\({}^{\star}\)

\({}^{\star}\) Department of Radiology, Perelman School of Medicine, University of Pennsylvania, USA

\({}^{\dagger}\) Department of Diagnostic Imaging, Warren Alpert Medical School, Brown University, USA

**Index Terms**—Survival analysis, clustering, time-to-event prediction
## 1 Introduction
In survival analysis, we wish to estimate the probability that an event of interest, such as the occurrence of a disease or even death, happens for an individual subject beyond a certain time \(t\), given the subject's data/features \(X\)[1]. This probability can be modeled as a survival function \(S(t|X)=P(T>t|X)\). This task is also referred to as _time-to-event prediction_, and one of its main challenges is _censoring_, which arises when the event outcomes of some instances are unobservable after a certain time or when some instances do not experience any event during follow-up.
Many methods have been proposed for time-to-event prediction in survival analysis. The most conventional and prevalent one is a semi-parametric method, the Cox PH model [2]. It assumes that the ratio of the hazard rates of any two instances is constant over time, known as the proportional hazards (PH) assumption. Some nonparametric methods, such as Kaplan-Meier [3], Nelson-Aalen [4], and Life-Table [5], are also widely used in survival analysis; nevertheless, they suffer from the curse of dimensionality. Survival analysis has also attracted the attention of the machine learning community, and many machine learning methods [6, 7, 8, 9, 10, 11] have been developed to reveal the relationship between the features and the survival information. In particular, a fully parametric method referred to as deep survival machines (DSM) [12] has demonstrated competitive predictive performance compared with state-of-the-art methods. Nevertheless, DSM learns different base distributions for different instances, which makes its inner mechanism hard to interpret [13, 14].
In addition to time-to-event prediction, clustering cohorts is also crucial in survival analysis. With clustering results, clinicians can provide customized treatments [15] for groups with different risks. The methods mentioned above usually cluster cohorts in a post-hoc way, i.e., they artificially stratify the cohorts according to the predicted risks. This usually leads to evenly sized groups and thus lacks interpretability. Recently, two studies have considered clustering and time-to-event prediction simultaneously, and both can cluster data in an uneven manner. In particular, survival clustering analysis (SCA) [16] assumes that the latent space is a mixture of distributions and uses the truncated Dirichlet process to automatically identify the number of clusters. However, SCA cannot control the number of clusters and thus cannot validate its advantages compared to the post-hoc methods. Variational deep survival clustering (VaDeSC) [17], as a fully generative method, uses a Gaussian mixture distribution to model the features in a latent space and uses the Weibull distribution to model the survival timing information. This work builds a good bridge between the features and the survival information by jointly optimizing both likelihoods. However, there is a trade-off between the discriminative and generative learning paradigms: a fully generative framework may not fit all kinds of data, since it is hard to make both the features and the survival information obey the assumed prior distributions at the same time.
In this study, we propose a hybrid method to leverage both the discriminative and generative strategies. Specifically, we assume that there are certain numbers of expert distributions in a latent space and each expert distribution can be modeled
by parameterized Weibull distributions in a generative way. The survival function for each instance is a weighted combination of all the expert distributions and the weight for each instance is learned by a multi-layer perceptron (MLP) directly from the features in a discriminative manner. Consequently, we can naturally cluster all the instances according to their weights allocated to different expert distributions. In summary, our contributions are threefold:
* We propose a hybrid survival analysis method that integrates the advantages of discriminative and generative ideas and can perform clustering and time-to-event prediction simultaneously.
* We conduct extensive experiments on several real-world datasets and abundant synthetic datasets, and the results demonstrate promising clustering performance as well as competitive time-to-event prediction performance.
* Our method is interpretable in that the expert distributions are constant for all the instances. Different weight shows different attention to the expert distributions and thus we can easily tell which subgroup the instance belongs to.
## 2 Methods
The data we tackle are right-censored, i.e., our data \(\mathcal{D}\) is a set of tuples \(\{\textbf{x}_{i},t_{i},\delta_{i}\}_{i=1}^{N}\), where **x\({}_{i}\)** is the feature vector associated with the \(i\)th instance, \(t_{i}\) is the last-followed time, \(\delta_{i}\) is the event indicator, and \(N\) is the number of instances. When \(\delta_{i}=1\) (the \(i\)th instance is uncensored), \(t_{i}\) is the time when the event happens, whereas when \(\delta_{i}=0\) (the \(i\)th instance is censored), \(t_{i}\) is the time when the instance quits the study or the study ends. We denote by \(\mathcal{D}_{U}\) the uncensored subset, for which the event indicator \(\delta=1\), and by \(\mathcal{D}_{C}\) the censored subset, for which \(\delta=0\).
In Part 1 of Fig. 1, the deep clustering survival machines are designed to learn a conditional distribution \(P(T|X=\textbf{x})\) by maximum likelihood estimation (MLE) of the time \(T\). Similar to the mixture model learning paradigm, the conditional distribution \(P(T|X=\textbf{x})\) is characterized by learning a mixture over \(K\) well-defined parametric distributions, referred to as _expert distributions_. To enable gradient-based optimization of the MLE objective, we choose Weibull distributions as the expert distributions; they are flexible enough to fit various distributions and have closed-form expressions for the PDF and the survival function: \(\text{PDF}(t)=\frac{\mu}{\sigma}\left(\frac{t}{\sigma}\right)^{\mu-1}e^{-\left(\frac{t}{\sigma}\right)^{\mu}}\) and \(\text{SF}(t)=P(T>t)=e^{-\left(\frac{t}{\sigma}\right)^{\mu}}\), where \(\mu\) and \(\sigma\) are the shape and scale parameters, respectively.
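As an illustration of these closed forms, here is a minimal PyTorch sketch (our own, not the authors' code) of the Weibull log-PDF and log-survival function that appear in the likelihoods below; working in log space avoids numerical underflow for small probabilities.

```python
import torch

def weibull_log_pdf(t, mu, sigma):
    # t, mu, sigma: broadcastable tensors
    # log PDF(t) = log(mu/sigma) + (mu - 1) * log(t/sigma) - (t/sigma)^mu
    z = t / sigma
    return torch.log(mu / sigma) + (mu - 1.0) * torch.log(z) - z**mu

def weibull_log_sf(t, mu, sigma):
    # log P(T > t) = -(t/sigma)^mu
    return -(t / sigma)**mu
```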
Part 1 in Fig. 1 indicates that we first need to learn an encoder for the input features **x** to obtain a compact and informative representation \(\tilde{\textbf{x}}\). Here we use a multi-layer perceptron (MLP) \(\phi_{\theta}(\cdot)\) parameterized by \(\theta\) as the backbone model. This representation is multiplied by a parameter \(w\), followed by a softmax, to obtain the mixture weight \(\alpha_{k}\) with respect to each (\(k\)th) expert distribution, which is parameterized by \(\mu_{k}\) and \(\sigma_{k}\). The final survival distribution of the time \(T\) conditioned on each instance is a weighted combination over all \(K\) constant expert distributions. Overall, we have a set of parameters \(\Theta=\{\theta,w,\{\mu_{k},\sigma_{k}\}_{k=1}^{K}\}\) to learn during training. Note that \(\mu_{k}\) and \(\sigma_{k}\) are the same for different input instances, so we can cluster each instance/subject according to its weights \(\alpha_{k}\) allocated to the expert distributions, as illustrated in Part 2 of Fig. 1. Specifically, we assign a subgroup/cluster indicator \(i\) to an instance when the instance's corresponding \(\alpha_{i}\) is the largest among all \(K\) weights.
According to the framework of MLE, our goal is to maximize the likelihood with respect to the timing information \(T\) conditioned on **x**. Given that the likelihood functions are different for uncensored and censored data, we calculate them separately. For the uncensored data, the log-likelihood of \(T\) is computed as follows where \(\alpha\) is the hidden variable and **ELBO** is the lower bound of the likelihood derived by Jensen's Inequality:
\[\ln\mathbb{P}(\mathcal{D}_{U}|\Theta) =\ln\left(\prod_{i=1}^{|\mathcal{D}_{U}|}\mathbb{P}(T=t_{i}|X=\textbf{x}_{i},\Theta)\right)\] \[=\sum_{i=1}^{|\mathcal{D}_{U}|}\ln\left(\sum_{k=1}^{K}\mathbb{P}(T=t_{i}|\alpha=k,\mu_{k},\sigma_{k})\,\mathbb{P}(\alpha=k|X=\textbf{x}_{i},w)\right)\] \[=\sum_{i=1}^{|\mathcal{D}_{U}|}\ln\left(\mathbb{E}_{\alpha\sim\mathbb{P}(\cdot|\textbf{x}_{i},w)}\left[\mathbb{P}(T=t_{i}|\alpha,\mu_{\alpha},\sigma_{\alpha})\right]\right)\] \[\geq\sum_{i=1}^{|\mathcal{D}_{U}|}\mathbb{E}_{\alpha\sim\mathbb{P}(\cdot|\textbf{x}_{i},w)}\left[\ln\mathbb{P}(T=t_{i}|\alpha,\mu_{\alpha},\sigma_{\alpha})\right]\] \[=\sum_{i=1}^{|\mathcal{D}_{U}|}\sum_{k=1}^{K}\text{softmax}_{k}(w^{\top}\tilde{\textbf{x}}_{i})\,\ln\text{PDF}(t_{i}|\mu_{k},\sigma_{k})=\textbf{ELBO}_{U}(\Theta).\]
Similarly, the log-likelihood of \(T\) for the censored data is:
\[\ln\mathbb{P}(\mathcal{D}_{C}|\Theta) =\ln\left(\prod_{i=1}^{|\mathcal{D}_{C}|}\mathbb{P}(T>t_{i}|X=\textbf{x}_{i},\Theta)\right)\] \[\geq\sum_{i=1}^{|\mathcal{D}_{C}|}\mathbb{E}_{\alpha\sim\mathbb{P}(\cdot|\textbf{x}_{i},w)}\left[\ln\mathbb{P}(T>t_{i}|\alpha,\mu_{\alpha},\sigma_{\alpha})\right]\] \[=\sum_{i=1}^{|\mathcal{D}_{C}|}\sum_{k=1}^{K}\text{softmax}_{k}(w^{\top}\tilde{\textbf{x}}_{i})\,\ln\text{SF}(t_{i}|\mu_{k},\sigma_{k})=\textbf{ELBO}_{C}(\Theta).\]
Figure 1: The model structure of the proposed DCSM. Part 1 learns each instance’s survival function by a weighted combination of the expert distributions. Part 2 clusters instances by the learned weights allocated to each expert distribution.
In addition, to stabilize training, we incorporate prior knowledge for \(\mu_{k}\) and \(\sigma_{k}\). Specifically, we minimize the prior loss \(L_{prior}\) to keep them close to the priors \(\mu\) and \(\sigma\), which are determined by the MLE result with a single distribution:
\[L_{prior}=\sum\nolimits_{k=1}^{K}\|\mu_{k}-\mu\|_{2}^{2}+\|\sigma_{k}-\sigma \|_{2}^{2}. \tag{1}\]
Our final objective \(L_{all}\) is the sum of the negative of the log-likelihood of both the uncensored and censored data as well as the prior loss where \(\lambda\) is a trade-off hyperparameter:
\[L_{all}=L_{prior}-\textbf{ELBO}_{U}(\Theta)-\lambda\cdot\textbf{ELBO}_{C}( \Theta). \tag{2}\]
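Putting the pieces together, the following sketch (assumed tensor shapes; not the released implementation) evaluates \(L_{all}\) of Eq. (2) from the mixture weights \(\alpha\), the shared expert parameters, and the single-distribution priors, using the Weibull helpers sketched earlier.

```python
import torch

def dcsm_loss(alpha, t, event, mu_k, sigma_k, mu0, sigma0, lam=1.0):
    """alpha: (N, K) softmax mixture weights; t: (N,) times; event: (N,) 1 if uncensored;
    mu_k, sigma_k: (K,) shared expert parameters; mu0, sigma0: scalar MLE priors (Eq. 1)."""
    log_pdf = weibull_log_pdf(t.unsqueeze(1), mu_k, sigma_k)  # (N, K)
    log_sf = weibull_log_sf(t.unsqueeze(1), mu_k, sigma_k)    # (N, K)
    elbo_u = (alpha * log_pdf).sum(dim=1)[event == 1].sum()   # ELBO_U over uncensored
    elbo_c = (alpha * log_sf).sum(dim=1)[event == 0].sum()    # ELBO_C over censored
    l_prior = ((mu_k - mu0)**2).sum() + ((sigma_k - sigma0)**2).sum()
    return l_prior - elbo_u - lam * elbo_c                    # Eq. (2)
```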
## 3 Experiments
We conducted extensive experiments to validate the effectiveness of the proposed method in terms of both time-to-event prediction and clustering.
### Datasets
We conducted experiments on 4 real-world datasets (as shown in Table 2) and 36 synthetic datasets with different numbers of instances, ranging over 200, 500, 1000, 3000, 5000, and 10000, and different numbers of features, ranging over 10, 20, 50, 200, 500, and 1000. For all the synthetic datasets, the percentage of censoring was set to 30%. The simulation process followed VaDeSC [17], except that we changed the distribution of the features from Gaussian to Uniform to probe the limitation of the fully generative method, which is discussed in Section 3.4.
### Baseline Methods, Metrics and Settings
We compared our method to five methods. Two of them are the state-of-the-art methods SCA [16] and VaDeSC [17] which can perform both time-to-event prediction and clustering. The other three methods are Cox PH [2], Deep Cox [7], and DSM [12], which only provide the time-to-event prediction function. We used their predicted risks to cluster data evenly.
We used two metrics to evaluate the performance of all the methods. The concordance index (C Index) was used to evaluate time-to-event prediction performance, and the LogRank test was used to evaluate clustering performance.
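For reference, below is a naive \(O(n^{2})\) sketch of Harrell's concordance index over right-censored data (an illustration; the evaluation code used in the paper may differ): among comparable pairs, it counts how often the higher predicted risk corresponds to the earlier event time.

```python
def concordance_index(time, event, risk):
    """Naive Harrell's C: among comparable pairs (anchored on an observed event),
    the fraction where the earlier event time has the higher predicted risk."""
    num, den = 0.0, 0.0
    n = len(time)
    for i in range(n):
        if event[i] != 1:
            continue  # only an observed event can anchor a comparable pair
        for j in range(n):
            if time[i] < time[j]:
                den += 1.0
                if risk[i] > risk[j]:
                    num += 1.0
                elif risk[i] == risk[j]:
                    num += 0.5  # ties in predicted risk count as half
    return num / den
```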
We conducted five-fold cross-validation to estimate the C Index and LogRank measures and report their average values along with the standard deviations. The hyperparameters were chosen by grid search: the trade-off parameter \(\lambda\) was chosen from [0.5, 0.75, 1], the learning rate from [1e-3, 1e-4], and the MLP architecture from [[50], [50, 50], [50, 50, 50]], where "50" is the number of neurons in each layer.
### Quantitative Results on Real Data
Table 1 shows the C Index values on real data, including the average results of five independent runs and their standard deviations. These results indicate that our method achieves competitive performance compared to the baselines. Although our model's performance was not the best on some datasets, the differences from the best performance were not significant at the 95% confidence level.
Table 1 also summarizes the results of the LogRank tests. The LogRank statistic evaluates how well the clustering results are
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Metric & Dataset & SUPPORT & PBC & FRAMINGHAM & FLCHAIN \\ \hline \multirow{6}{*}{C Index} & Cox PH & 0.8401\(\pm\)0.0070 & **0.8476\(\pm\)0.0126** & 0.7580\(\pm\)0.0063 & 0.7984\(\pm\)0.0046 \\ & Deep Cox & 0.8053\(\pm\)0.0058 & 0.8474\(\pm\)0.0181 & **0.7612\(\pm\)0.0057** & 0.7893\(\pm\)0.0063 \\ & DSM & 0.8300\(\pm\)0.0045 & 0.8363\(\pm\)0.0133 & 0.7593\(\pm\)0.0050 & **0.8009\(\pm\)0.0036** \\ & SCA & 0.8203\(\pm\)0.0121 & 0.8251\(\pm\)0.0258 & 0.5311\(\pm\)0.1235 & 0.7467\(\pm\)0.0091 \\ & VaDeSC & **0.8419\(\pm\)0.0041** & 0.8278\(\pm\)0.0085 & 0.5802\(\pm\)0.0406 & 0.7886\(\pm\)0.0100 \\ & DCSM (Ours) & 0.8305\(\pm\)0.0028 & 0.8359\(\pm\)0.0109 & 0.7530\(\pm\)0.0053 & 0.7916\(\pm\)0.0074 \\ \hline \multirow{6}{*}{LogRank} & Cox PH & 500.3282\(\pm\)60.4977 & 198.2686\(\pm\)17.3940 & 576.1450\(\pm\)22.9621 & 399.0243\(\pm\)25.7657 \\ & Deep Cox & 326.1931\(\pm\)54.7026 & 203.3091\(\pm\)22.8343 & 593.7317\(\pm\)14.4697 & 403.4643\(\pm\)35.8034 \\ \cline{1-1} & DSM & 563.4841\(\pm\)0.0045 & 196.0912\(\pm\)0.0133 & 587.5718\(\pm\)0.0050 & 406.4549\(\pm\)0.0036 \\ \cline{1-1} & SCA & 212.5712\(\pm\)26.2629 & 260.5682\(\pm\)67.4875 & 278.3525\(\pm\)51.1866 & 536.1056\(\pm\)109.1680 \\ \cline{1-1} & VaDeSC & 196.8495\(\pm\)19.6887 & 118.9605\(\pm\)77.4716 & 348.5500\(\pm\)697.1000 & 95.5291\(\pm\)108.9488 \\ \cline{1-1} & DCSM (Ours) & **1067.6184\(\pm\)271.6551** & **302.5395\(\pm\)30.1043** & **751.9770\(\pm\)48.9725** & **571.0441\(\pm\)99.0101** \\ \hline \end{tabular}
\end{table}
Table 1: C Index and LogRank results compared to Cox PH, Deep Cox, DSM, SCA, and VaDeSC. The best ones are bold.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline Dataset & SUPPORT & PBC & FRAM & FLCHAIN \\ \hline Events (\%) & 68.11 & 37.28 & 30.33 & 30.07 \\ \hline \(N\) & 9105 & 1945 & 11627 & 6524 \\ \hline \(d\) (categorical) & 44 (26) & 25 (17) & 18 (10) & 8 (2) \\ \hline \(t_{max}\) & 2029 & 14.31 & 8766 & 5167 \\ \hline \end{tabular}
\end{table}
Table 2: Statistics of datasets used in the experiments. The time range \(t_{max}\) in PBC is noted in years while others are noted in days. “FRAM” refers to “FRAMINGHAM”.
separated with respect to the survival information, with a larger value indicating better performance. The results demonstrate that our method outperformed all the baselines. This can be more useful than the time-to-event prediction itself, because such information can facilitate personalized treatment planning.
### Quantitative Results on Synthetic Data
Fig. 2(a) shows the comparison of C Index values on synthetic data. We use a radar plot to highlight the performance differences: the bigger the area enclosed by a curve, the better the performance. Fig. 2(a) shows that our method produces the biggest enclosed area, indicating that it outperforms all the baselines on 30 of the 36 datasets. Our method learns the survival information generatively by assuming that it follows the Weibull distribution. As the Weibull distribution is rather flexible and can approximate many distributions arising in practice, our method fits the survival information well and obtains the best performance in most cases.
VaDeSC, as a fully parametric method, also assumes that the survival information obeys the Weibull distribution, but it additionally assumes that the features follow a Gaussian distribution, whereas we generate the features from a Uniform distribution. Consequently, VaDeSC cannot model the feature distribution well and has inferior performance. Our method learns from the features in a discriminative way, so it can learn the underlying pattern regardless of the true feature distribution.
### Qualitative Results on Real Data
Kaplan-Meier (KM) curves corresponding to the clustering results of all the methods are shown in Fig. 2(b-g). Due to the page limit, we only show results on the PBC dataset. The LogRank values of one trial of these methods were 175.81 (Cox PH), 188.59 (Deep Cox), 153.85 (DSM), 162.08 (SCA), 87.62 (VaDeSC), and 357.72 (DCSM, ours). It is worth noting that SCA and VaDeSC in Fig. 2(e, f) can automatically determine the numbers of instances in different groups. VaDeSC produced more unbalanced results, which led to a low LogRank. Our method obtained the best performance. Fig. 2(h) shows that the shapes of the two expert distributions resemble the KM curves, facilitating effective data stratification.
## 4 Conclusion
We propose a deep hybrid method that integrates the discriminative and generative strategies into one framework. Assuming the survival function for each instance is a weighted combination of constant expert distributions, our method is capable of learning the weight for each expert distribution discriminatively and the distribution of the survival information generatively. Extensive experimental results along with the quantitative and qualitative analyses have demonstrated the advantages of our method. The constant expert distributions also enhance the interpretability of data stratification.
Figure 2: (a) The C Index comparison among the 36 synthetic datasets. A radar plot is used to illustrate the performance comparison. A bigger area means better performance. We fill the area of our method and we can see that on most synthetic datasets (30 among 36), the baseline methods’ curves fall inside our method. (b-g) The Kaplan-Meier plots of all the methods on data PBC. The cross mark means censoring. The learned expert distributions are shown in (h). The shape of the two expert distributions resembles our Kaplan-Meier curves, facilitating effective data stratification.
## 5 Compliance with Ethical Standards
Our method complies with ethical standards. All the datasets we studied are public benchmark datasets.
|
2305.12600 | PRODIGY: Enabling In-context Learning Over Graphs | In-context learning is the ability of a pretrained model to adapt to novel
and diverse downstream tasks by conditioning on prompt examples, without
optimizing any parameters. While large language models have demonstrated this
ability, how in-context learning could be performed over graphs is unexplored.
In this paper, we develop \textbf{Pr}etraining \textbf{O}ver \textbf{D}iverse
\textbf{I}n-Context \textbf{G}raph S\textbf{y}stems (PRODIGY), the first
pretraining framework that enables in-context learning over graphs. The key
idea of our framework is to formulate in-context learning over graphs with a
novel \emph{prompt graph} representation, which connects prompt examples and
queries. We then propose a graph neural network architecture over the prompt
graph and a corresponding family of in-context pretraining objectives. With
PRODIGY, the pretrained model can directly perform novel downstream
classification tasks on unseen graphs via in-context learning. We provide
empirical evidence of the effectiveness of our framework by showcasing its
strong in-context learning performance on tasks involving citation networks and
knowledge graphs. Our approach outperforms the in-context learning accuracy of
contrastive pretraining baselines with hard-coded adaptation by 18\% on average
across all setups. Moreover, it also outperforms standard finetuning with
limited data by 33\% on average with in-context learning. | Qian Huang, Hongyu Ren, Peng Chen, Gregor Kržmanc, Daniel Zeng, Percy Liang, Jure Leskovec | 2023-05-21T23:16:30Z | http://arxiv.org/abs/2305.12600v1 | # PRODIGY: Enabling In-context Learning Over Graphs
###### Abstract
In-context learning is the ability of a pretrained model to adapt to novel and diverse downstream tasks by conditioning on prompt examples, without optimizing any parameters. While large language models have demonstrated this ability, how in-context learning could be performed over graphs is unexplored. In this paper, we develop **P**retraining **O**ver **D**iverse **In**-Context **G**raph **Systems (PRODIGY), the first pretraining framework that enables in-context learning over graphs. The key idea of our framework is to formulate in-context learning over graphs with a novel _prompt graph_ representation, which connects prompt examples and queries. We then propose a graph neural network architecture over the prompt graph and a corresponding family of in-context pretraining objectives. With PRODIGY, the pre-trained model can directly perform novel downstream classification tasks on unseen graphs via in-context learning. We provide empirical evidence of the effectiveness of our framework by showcasing its strong in-context learning performance on tasks involving citation networks and knowledge graphs. Our approach outperforms the in-context learning accuracy of contrastive pretraining baselines with hard-coded adaptation by 18% on average across all setups. Moreover, it also outperforms standard finetuning with limited data by 33% on average with in-context learning.
## 1 Introduction
In-context learning is a novel and intriguing capability of language models [1]. It refers to the capability of a pretrained model to perform novel and diverse tasks directly at prediction time when prompted with just a few examples, without the need to update the model weights. For example, a person may describe a new task (_e.g._, question answering, machine translation, or code generation) using natural language and demonstrate it to the language model with several prompt examples. The language model then performs the task directly, without any model training or finetuning.
However, how to enable in-context learning for diverse graph machine learning tasks, such as identifying misinformation spreaders in social networks [14] and making product suggestions across online e-commerce websites [21], remains unexplored and challenging. An in-context learner for graphs should be able to solve novel tasks on novel graphs: for example, give music product recommendations on Spotify after being trained on an Amazon purchasing graph. The first challenge here is how to formulate and represent node-, edge- and graph-level tasks over graphs with a unified task representation that allows the model to solve diverse tasks without the need for retraining or parameter tuning. In other words, the key challenge is: what is the analog of natural language prompting for graph machine learning tasks? The second challenge is how to design the model architecture and pretraining objectives that enable models to achieve in-context learning capability across diverse tasks and diverse graphs in the unified task representation. Existing graph pretraining methods [7; 24; 8; 13] only aim to learn a good graph encoder and require fine-tuning to adapt to different tasks, while existing meta-learning methods over graphs [19; 9; 3; 17; 25] only aim to generalize across different tasks within the same graph. In contrast, achieving in-context learning requires tackling the more difficult setting of generalizing across graphs _and_ tasks without finetuning.
Here we present a general approach for solving these two challenges for classification tasks on graphs: (1) _prompt graph_, an in-context graph task representation, and (2) **P**retraining **O**ver **D**iverse **I**n-Context **G**raph **Systems (PRODIGY), a framework for pretraining an in-context learner over prompt graphs.
We propose the _prompt graph_ (Figure 1) to provide a unified way to represent diverse node-, edge- and graph-level machine learning tasks. A prompt graph first contextualizes the input nodes/edges on which we make predictions (including both the prompt examples and the queries), then connects them with additional label nodes, such that the prompt examples are interconnected with the queries. Such a unified representation allows us to specify diverse graph machine learning tasks to the same model regardless of graph size.
PRODIGY then designs both model architecture and pretraining objectives with the prompt graph in-context task formulation, such that the model is pretrained to solve tasks across a wide range of tasks and graphs, and can continue to do so out-of-the-box. We design a graph architecture that utilizes graph neural networks to learn node/edge representations and an attention mechanism to communicate over prompt graph. Furthermore, we propose a family of in-context pretraining objectives over prompt graph. In particular, this includes a novel self-supervised pretraining task, _neighbor matching_, where we classify which neighborhood a node or edge belongs to.
We use the PRODIGY framework to pretrain on citation networks (MAG240M [5]) and knowledge graphs (Wiki [22]). We then show that such a model (without any retraining) provides strong performance on in-context paper category classification and knowledge graph completion tasks on novel graphs it was never trained on (arXiv, ConceptNet, FB15K-237, NELL) [6; 16; 23]. Specifically, PRODIGY improves upon contrastive pretraining baselines with hard-coded adaptation for the in-context setup by 18% on average across all datasets and numbers of labels to classify among. Moreover, it also outperforms standard finetuning with limited data by 32.6% on average with in-context learning. It even outperforms state-of-the-art few-shot learning methods trained on the downstream test graph, using pure in-context learning. Finally, we further demonstrate that our method achieves increasingly higher performance with more examples in the prompt, even beyond what it was pretrained with, which shows that the model truly learns to learn from context.
Figure 1: In-context few-shot prompting over graphs with prompt graph for _edge classification_ in PRODIGY. (A) Given the source graph \(\mathcal{G}\), we provide prompt examples \(\mathcal{S}\) that consist of the input head/tail nodes and their labels, as well as the queries. (B) For each datapoint from both prompt examples and the queries, we first construct its data graph \(\mathcal{G}^{0}\) by retrieving context from the source graph \(\mathcal{G}\). (C) Then we create a task graph to capture the connection between each datapoint and each label, which includes a data node \(v_{x}\) for each datapoint and a label node \(v_{y}\) for each label in \(\mathcal{Y}\). Each pair of data and label nodes are connected with edge attributes corresponding to their binary labels.
## 2 In-context Learning over Graphs
In this work, we specifically focus on in-context learning for node and edge classification tasks on graphs with few-shot prompting, which are among the most standard and important graph machine learning tasks. In this section, we introduce the concrete classification tasks over graphs and few-shot prompting over them with our in-context task representation, the prompt graph.
### Classification Tasks over Graphs
We define a graph as \(\mathcal{G}\!=\!(\mathcal{V},\mathcal{E},\mathcal{R})\), where \(\mathcal{V},\mathcal{E},\mathcal{R}\) represent the set of nodes, edges and relations. An edge \(e\!=\!(u,\!r,\!v)\!\in\!\mathcal{E}\) consists of a subject \(u\!\in\!\mathcal{V}\), a relation \(r\!\in\!\mathcal{R}\) and an object \(v\!\in\!\mathcal{V}\).
Given a set of classes \(\mathcal{Y}\), a standard classification task is predicting the labeling \(y\!\in\!\mathcal{Y}\) of each input \(x\!\in\!\mathcal{X}\). A node-level classification task is similar but each input is a single node in \(\mathcal{G}\), _i.e._, \(\mathcal{X}\!=\!\mathcal{V}\), with the additional auxiliary information of the entire graph \(\mathcal{G}\). For example, over a citation network consisting of authors and papers, a node-level classification task could be predicting the primary institution of each author. Similarly, an edge-level classification task is predicting the best labeling of potential edges formed by any pair of nodes, _i.e._, \(\mathcal{X}\!=\!\mathcal{V}\!\times\!\mathcal{V}\). A common special case is that the classes are the same as the relations \(\mathcal{Y}\!=\!\mathcal{R}\), such as predicting the relation between entities over knowledge graphs. More generally, the same definitions can be extended to subgraph and graph-level classification tasks, where the input data \(x\) may consist of more nodes and edges, and essentially represents a subgraph of \(\mathcal{G}\).
Since we are interested in tasks of different types/levels, we design a unified formulation in which the input space \(\mathcal{X}\) consists of graphs, _i.e._, \(x_{i}\in\mathcal{X},\,x_{i}=(\mathcal{V}_{i},\mathcal{E}_{i},\mathcal{R}_{i})\). For a node classification task, \(x_{i}\) consists only of the input node on which we make a prediction, _i.e._, \(|\mathcal{V}_{i}|=1\) and \(|\mathcal{E}_{i}|=0\); for an edge classification task, it consists of the (subject, object) pair, _i.e._, \(|\mathcal{V}_{i}|=2\) and \(|\mathcal{E}_{i}|=0\).
### Few-shot Prompting
Here we define the in-context learning setup for classification tasks over graphs with few-shot prompting. For a \(k\)-shot prompt for a downstream \(m\)-way classification task with \(|\mathcal{Y}|=m\) classes, we use a small set of input-label pairs \(\mathcal{S}=\{(x_{i},y_{i})\}_{i=1}^{m\cdot k}\) as prompt examples, with \(k\) examples per class, together with a set of queries \(\mathcal{Q}=\{x_{i}\}_{i=1}^{n}\) whose labels the model should predict. A few-shot prompt is then the triplet \((\mathcal{G},\mathcal{S},\mathcal{Q})\) formed by the source graph, the prompt examples, and the queries, as illustrated in Figure 1(A).

### Prompt Graph

To represent a few-shot prompt in a form that a graph model can operate on, we convert it into a _prompt graph_ consisting of data graphs and a task graph.

**Data graph.** For each datapoint \(x_{i}\) in the prompt examples and the queries, we contextualize it within the source graph by constructing a data graph \(\mathcal{G}^{\text{D}}_{i}\), the subgraph retrieved from the local neighborhood of the nodes in \(x_{i}\) in \(\mathcal{G}\) (Figure 1(B)).
**Task graph.** After contextualizing each datapoint to a data graph \(\mathcal{G}^{\text{D}}\), we then construct task graph \(\mathcal{G}^{\text{T}}\) to better capture the connection and relationship among the inputs and the labels. For each data graph \(\mathcal{G}^{\text{D}}_{i}\) from the previous stage, we have a _data node_\(v_{x_{i}}\) that represents each input; for each label, we have a _label node_\(v_{y_{i}}\). So overall, a task graph contains \(m\cdot k+n\) data nodes (\(m\cdot k\) prompt examples and \(n\) queries) and \(m\) label nodes, as shown in Figure 1.
Now we add edges between the data nodes and the label nodes: For the query set, since we do not know the labels of each graph, we add single directional edges from all label nodes to each datapoint in the query set, _i.e._, each query data node \(v_{x_{i}}\) will be connected to all the label nodes as shown by the yellow edges in Figure 1; For the prompt examples, we connect each data node to all the label nodes, where the edge with the true labels is marked as T while the others are marked as F, as shown by the green and red edges in Figure 1 respectively.
Together, the data graphs and the task graph form the prompt graph. The prompt graph effectively captures the relationship between the input data \(x_{i}\) and the label \(y_{i}\) through the context captured in the data graph \(\mathcal{G}^{\text{D}}_{i}\) and through the data node \(v_{x_{i}}\) and the label node \(v_{y_{i}}\) in the task graph \(\mathcal{G}^{\text{T}}\). It is also possible to extend the prompt graph to non-classification tasks and free-form text prompting. For example, for numerical regression (e.g., molecular energy prediction) and free-form generation tasks (e.g., text generation), one can extend our task graph to carry vector values on the edges to represent \(y_{i}\); different label nodes would then represent different prediction tasks. To support more general forms of prompting, one can include additional task information and instructions in the features of the label nodes, and an additional description paired with each datapoint in the global feature of its data graph.
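To make the wiring concrete, the sketch below (a hypothetical helper, not the released code) enumerates the task-graph edges of Figure 1(C): every data node is connected to every label node, with a T/F attribute for prompt examples and a query flag for queries.

```python
def build_task_graph_edges(num_examples, num_queries, example_labels, num_labels):
    """Return (data_node, label_node, is_query, is_true_label) tuples.
    Data nodes are indexed 0..num_examples+num_queries-1; label nodes follow."""
    edges = []
    label_offset = num_examples + num_queries
    for i in range(num_examples):                 # prompt examples: T/F edges
        for y in range(num_labels):
            edges.append((i, label_offset + y, False, example_labels[i] == y))
    for i in range(num_examples, label_offset):   # queries: unlabeled edges
        for y in range(num_labels):
            edges.append((i, label_offset + y, True, False))
    return edges
```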
## 3 Pretraining to Enable In-context Learning
So far given a few-shot prompt for a classification task over graphs, we have defined a prompt graph representation for it that captures relationships between the prompt examples, queries, and labels. Now we need to design a pretraining strategy that can pretrain a generalizable model capable of in-context learning. We assume access to a pretraining graph \(\mathcal{G}_{\text{pretrain}}\) that is independent of the source graph \(\mathcal{G}\) for the downstream task.
In this section, we introduce PRODIGY, a general pretraining framework over \(\mathcal{G}_{\text{pretrain}}\) that is designed specifically for enabling in-context learning over downstream classification tasks without any additional finetuning steps on arbitrary graphs. Our framework PRODIGY has two main components: model architecture over prompt graph and in-context pretraining objectives.
### Message Passing Architecture over prompt graph
Next we introduce our model architecture over the prompt graph consisting of two submodules:
**Data graph Message Passing.** First, we apply a message passing GNN module \(M_{\text{D}}\) that learns node representation \(E\) for nodes in each \(\mathcal{G}^{\text{D}}\).
\[E\in\mathbb{R}^{|\mathcal{V}^{\text{D}}|\times d}=M_{\text{D}}(\mathcal{G}^{\text{D}}) \tag{1}\]
where \(d\) is the embedding dimension. \(M_{\text{D}}\) can be implemented in multiple ways, for example as a Graph Convolutional Network (GCN) or a Graph Attention Network (GAT) [11; 18].
To read out a single embedding \(G_{i}\) for each data graph, we perform another aggregation step to pool node embeddings. For node classification tasks, we take the updated node representation of the single input node that we aim to predict, _i.e._:
\[G_{i}\!=\!E_{\mathcal{V}_{i}} \tag{2}\]
For link prediction tasks, we concatenate the representations of the pair of nodes between which we want to predict a link, as well as a max pooling over all node representations following [10], with an additional linear projection layer at the end to convert the embedding size back to \(d\).
\[G_{i}\!=\!W^{T}(E_{v_{1}\in\mathcal{V}_{i}}||E_{v_{2}\in\mathcal{V}_{i}}|| \mathrm{max}(E_{i}))\!+\!b, \tag{3}\]
where \(||\) represents concatenation, \(W\in\mathbb{R}^{3d\times d}\) is a learnable weight matrix, and \(b\) is a learnable bias.
**Task graph Message Passing.** Note in the previous step there is no communication between different datapoints in \(\mathcal{S}\) and \(\mathcal{Q}\). Now we would like to communicate between them via message passing over
the task graph \(\mathcal{G}^{\intercal}\). We apply another GNN \(M_{\intercal}\) on the task graph to obtain updated representation of data nodes and label nodes.
\[H=M_{\intercal}(\mathcal{G}^{\intercal}) \tag{4}\]
where \(H\) contains the obtained embedding of each node. The initial embedding of a data node \(v_{x_{i}}\) is \(G_{i}\), and the embedding of a label node \(v_{y_{i}}\) can be initialized either with random Gaussian values or with additional information available about the labels. Each edge also has two binary features \(e_{ij}\) that indicate 1) whether the edge comes from an example or a query, and 2) the edge type of T or F. For \(M_{\intercal}\), we use an attention-based GNN, where each node attends to the other nodes at each layer. See the architecture details in Appendix C.
The goal of this step is to learn a better representation of the label nodes using the support examples and to propagate label information back into the support and query graph representations, yielding more task-specific representations.
**Prediction Read Out.** Finally, we readout the classification logits \(O_{i}\) by taking cosine similarity between each pair of query graph representation and label representation, as in contrastive learning:
\[O_{i}=[\texttt{cosine\_similarity}(H_{x_{i}},H_{y}),\forall y\in\mathcal{Y}] \tag{5}\]
Note that we could alternate the two message passing steps for multiple rounds to allow more communication between the datapoints \(x_{i}\) and learn better representations. One key insight is that different in-context prompt examples share information through the label nodes, which can be seen as an information bottleneck.
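As a minimal sketch of the read-out in Eq. (5) (plain PyTorch is an assumption here, not the authors' code), the logits are cosine similarities between the task-graph embeddings of the query data nodes and of the label nodes:

```python
import torch
import torch.nn.functional as F

def readout_logits(h_query, h_label):
    # h_query: (n, d) query data-node embeddings; h_label: (m, d) label-node embeddings
    q = F.normalize(h_query, dim=-1)
    l = F.normalize(h_label, dim=-1)
    return q @ l.t()  # (n, m) cosine-similarity logits O_i
```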
### In-context Pretraining Objectives
In order to pretrain the model for solving the downstream graph tasks in-context, we propose a set of in-context pretraining objectives. The goal is to pretrain the graph model using a large pretraining graph \(\mathcal{G}_{\texttt{pretrain}}\) independent of the downstream task graph, such that the model can directly be applied on downstream tasks with in-context learning.
Our main design principle is that we formulate each pretraining objective in an in-context learning way. Most previous graph pretraining objectives only pretrain a shared graph encoder to perform various tasks with task-specific heads, so they require finetuning for another task-specific head over each downstream task. In contrast, we explicitly construct in-context pretraining tasks in prompt graph form and pretrain the model to solve diverse tasks in-context with the same set of weights, such that it can perform in-context learning directly over downstream tasks.
Below, we detail our proposed family of in-context pretraining objectives in terms of three components: 1) pretraining task generation, including few-shot prompt (_i.e._ Figure 1(A)) and corresponding labels, 2) converting generated few-shot prompt to prompt graph format (_i.e._ Figure 1(B,C)) with augmentation, and 3) pretraining loss over the generated prompt graph.
#### 3.2.1 Pretraining Task Generation
We propose two methods to generate pretraining tasks from the pretraining graph \(\mathcal{G}_{\texttt{pretrain}}\) in the form of few-shot prompts: _neighbor matching_ and _multi-task_.
**Neighbor Matching.** Given the pretraining graph, we construct self-supervised in-context pretraining tasks with the goal of classifying which local neighborhood a node belongs to, where each local neighborhood is defined by the example nodes belonging to that neighborhood. Intuitively, we sample multiple subgraphs from the pretraining graph \(\mathcal{G}_{\texttt{pretrain}}\) as the local neighborhoods, and we say a node belongs to a local neighborhood if it is in the sampled subgraph.
Formally, we denote \(\texttt{N}\!\texttt{M}_{k,m}\) as a sampler that generates \(m\)-way neighbor matching tasks, where each includes a \(k\)-shot prompt (\(\mathcal{G}_{\texttt{pretrain}}\),\(\mathcal{S}_{\texttt{NM}}\),\(\mathcal{Q}_{\texttt{NM}}\)) (see subsection 2.2 and Figure 1(A)) and the labels of the queries. For simplicity of the notation, we will include the labels in \(\mathcal{Q}_{\texttt{NM}}\) as paired with the inputs:
\[(\mathcal{G}_{\texttt{pretrain}},\mathcal{S}_{\texttt{NM}},\mathcal{Q}_{ \texttt{NM}})\sim\texttt{N}\!\texttt{M}_{k,m}(\mathcal{G}_{\texttt{pretrain}}) \tag{6}\]
To generate these, we first sample \(m\) nodes from the pretraining graph \(\mathcal{G}_{\texttt{pretrain}}\), where each of the sampled node corresponds to one class.
\[\mathcal{C}=\{c_{i}\}_{i=1}^{m}\quad c_{i}\sim\textit{Uniform}(\mathcal{V}_{ \texttt{pretrain}}) \tag{7}\]
For each sampled node/class \(c_{i}\), we sample \(k\) different nodes from its exact \(l\)-hop neighbors. These \(k\) nodes serve as examples of label \(c_{i}\). We also sample additional \(\lceil\frac{n}{m}\rceil\) nodes similarly for each label \(c_{i}\) to form the query set. Formally,
\[N_{i} =\!\texttt{Neighbor}(c_{i},\!\mathcal{G}_{\texttt{pretrain}},\!l) \tag{8}\] \[\mathcal{S}_{i} =\!\{(x_{j},\!y_{j}\!=\!c_{i})\}_{j=1}^{k}\quad x_{j}\!\sim\! \texttt{Uniform}(N_{i})\] (9) \[\mathcal{Q}_{i} =\!\{(x_{j},\!y_{j}\!=\!c_{i})\}_{j=1}^{\lceil\frac{n}{m}\rceil} \quad x_{j}\!\sim\!\texttt{Uniform}(N_{i}) \tag{10}\]
In such a way, we constructed a neighbor matching pretraining task sample in the format of a few-shot prompt (\(\mathcal{G}_{\texttt{pretrain}},\!\mathcal{S}_{\texttt{NM}}\!=\!\bigcup\! \mathcal{S}_{i},\mathcal{Q}_{\texttt{NM}}\!=\!\bigcup\!\mathcal{Q}_{i}\)).
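The sampling procedure of Eqs. (7)-(10) can be sketched as follows (a hypothetical sampler, not the released code); `neighbors(g, c, l)` is an assumed helper returning the nodes at distance exactly \(l\) from \(c\), and `g` is assumed to expose a networkx-style `nodes` view.

```python
import math
import random

def sample_neighbor_matching_task(g, m, k, n, l, neighbors):
    """Sample an m-way neighbor-matching task: m class nodes, with k support and
    ceil(n/m) query nodes drawn from each class node's exact l-hop neighborhood."""
    classes = random.sample(list(g.nodes), m)                 # Eq. (7)
    support, queries = [], []
    for c in classes:
        pool = list(neighbors(g, c, l))                       # Eq. (8)
        support += [(x, c) for x in random.sample(pool, k)]   # Eq. (9)
        queries += [(x, c) for x in random.sample(pool, math.ceil(n / m))]  # Eq. (10)
    return support, queries
```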
The neighbor matching task generation outlined above applies when the downstream task is node classification. When the downstream task is link prediction, we may correspondingly adapt the neighbor matching tasks to edges. Specifically, we expand each sampled input node \(x_{i}\) to an edge by randomly sampling an edge that contains \(x_{i}\). The neighbor matching task is then to classify to which neighborhood an edge in the query set belongs, instead of a node.
**Multi-task.** When the pretraining graphs have node or edge-level labeling \(f(x_{i})\!=\!y_{i}\!\in\!\mathcal{Y}\) for some \(x_{i}\in\mathcal{V}_{\texttt{pretrain}}\) or \(\mathcal{E}_{\texttt{pretrain}}\), we can further leverage this signal to perform supervised pretraining. Similar to neighbor matching, the key is to construct such supervised pretraining tasks in the format of few-shot prompts and corresponding labels.
\[(\mathcal{G}_{\texttt{pretrain}},\!\mathcal{S}_{\texttt{MT}},\mathcal{Q}_{ \texttt{MT}})\!\sim\!\texttt{MT}_{k,m}(\mathcal{G}_{\texttt{pretrain}},\!f) \tag{11}\]
For node classification tasks, we first sample \(m\) labels from the whole label set. Then, for each sampled label, we sample \(k\) nodes with that label as support examples and \(\lceil\frac{n}{m}\rceil\) such nodes as query examples.
\[\mathcal{C} =\{c_{i}\}_{i=1}^{m}\quad c_{i} \sim\!\texttt{Uniform}(\mathcal{Y}) \tag{12}\] \[\mathcal{S}_{i} =\!\{(x_{j},\!y_{j}\!=\!c_{i})\}_{j=1}^{k}\quad x_{j} \sim\!\texttt{Uniform}(\{x_{i}|f(x_{i})\!=\!c_{i}\})\] (13) \[\mathcal{Q}_{i} =\!\{(x_{j},\!y_{j}\!=\!c_{i})\}_{j=1}^{\lceil\frac{n}{m}\rceil} \quad x_{j} \sim\!\texttt{Uniform}(\{x_{i}|f(x_{i})\!=\!c_{i}\}) \tag{14}\]
We then construct a task with the few-shot prompt as \((\mathcal{G}_{\texttt{pretrain}},\mathcal{S}_{\texttt{MT}}=\bigcup\mathcal{S}_{i},\mathcal{Q}_{\texttt{MT}}=\bigcup\mathcal{Q}_{i})\). For link prediction, we directly use the edge type function as \(f\), _i.e._, \(f((v_{1},v_{2}))=r\iff(v_{1},r,v_{2})\in\mathcal{E}_{\texttt{pretrain}}\).

#### 3.2.2 Prompt Graph Augmentation

To further diversify the pretraining tasks, we augment the data graphs, as inspired by Contrastive Learning. The key insight is to corrupt the data graphs such that the pretrained model learns representations invariant to various corruptions.
Here we demonstrate how we apply graph augmentation techniques during the construction of a prompt graph from a few-shot prompt generated from \(\mathcal{G}_{\texttt{pretrain}}\). We first sample the \(k\)-hop neighbor subgraph of each sample \(\mathcal{G}_{i}\) in the prompt examples and queries: \(\mathcal{G}_{i}^{\texttt{D}}\sim\bigoplus_{j=1}^{k}\texttt{Neighbor}(\mathcal{G}_{i},\mathcal{G}_{\texttt{pretrain}},j)\). Then we adopt two augmentation techniques to create an augmented data graph \(\mathcal{G}_{i}^{aug}\): (1) node dropping and (2) node feature masking [24]. For node dropping, we randomly drop nodes from the \(k\)-hop neighbor subgraph and take the remaining graph as \(\mathcal{G}_{i}^{aug}=\texttt{DropNode}(\mathcal{G}_{i}^{\texttt{D}})\). For node feature masking, we randomly mask the features of a subset of nodes with the value zero to create \(\mathcal{G}_{i}^{aug}=\texttt{MaskNode}(\mathcal{G}_{i}^{\texttt{D}})\). With the augmented data graphs for each datapoint in the prompt examples and the queries, we accordingly construct the task graph \(\mathcal{G}^{\texttt{T}}\) by creating a data node \(v_{x_{i}}\) for each augmented data graph and the label nodes \(v_{y_{i}}\) as introduced in subsection 2.3. Combining the data graphs with the task graph, we obtain the prompt graph formulation with augmentation for the few-shot prompt.
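A minimal sketch of the two corruptions (assumed tensor layout with a node-feature matrix and an edge-index tensor; not the released code):

```python
import torch

def mask_node(x, mask_ratio=0.1):
    # x: (num_nodes, feat_dim) node features; zero out a random subset of rows
    mask = torch.rand(x.size(0)) < mask_ratio
    x = x.clone()
    x[mask] = 0.0
    return x

def drop_node(edge_index, num_nodes, drop_ratio=0.1):
    # edge_index: (2, num_edges); keep only edges whose endpoints both survive
    keep = torch.rand(num_nodes) >= drop_ratio
    kept = keep[edge_index[0]] & keep[edge_index[1]]
    return edge_index[:, kept]
```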
#### 3.2.3 Pretraining Loss
Finally, we pretrain the model with the cross-entropy objectives over generated prompt graphs:
\[(\mathcal{G}_{\texttt{pretrain}},\mathcal{S}_{\texttt{NM}},\mathcal{Q}_{\texttt{NM}})\sim\texttt{NM}_{k,m}(\mathcal{G}_{\texttt{pretrain}}) \tag{15}\]
\[(\mathcal{G}_{\texttt{pretrain}},\mathcal{S}_{\texttt{MT}},\mathcal{Q}_{\texttt{MT}})\sim\texttt{MT}_{k,m}(\mathcal{G}_{\texttt{pretrain}},f) \tag{16}\]
\[\mathcal{L}=\mathbb{E}\left[\sum_{(x_{i},y_{i})\in\mathcal{Q}_{\texttt{NM}}}\texttt{CrossEntropy}(O_{i},y_{i})+\sum_{(x_{i},y_{i})\in\mathcal{Q}_{\texttt{MT}}}\texttt{CrossEntropy}(O_{i},y_{i})\right] \tag{17}\]

where the expectation is taken over the tasks sampled as in (15) and (16), and \(O_{i}\) denotes the prediction logits of Eq. (5) for query \(x_{i}\).

## 4 Experiments

We pretrain with PRODIGY on MAG240M and Wiki, and evaluate in-context learning on paper category classification over arXiv and on knowledge graph completion over ConceptNet, FB15K-237 and NELL.

**Baselines.** We compare against: 1) NoPretrain, where the model weights are randomly initialized without any pretraining; 2) Contrastive, which pretrains the graph encoder with standard contrastive learning and performs hard-coded in-context classification by
comparing its pretrained embedding against the average embedding of the example inputs of each class. 3) Finetune [7], which trains an additional linear classification head on top of the graph encoder pretrained with contrastive learning, following the standard practice.
### In-Context Learning Results
We first evaluate the in-context learning capability for node classification and link prediction with various numbers of ways (i.e. number of classes to classify among).
**Strong in-context learning performance.** The results demonstrate that PRODIGY consistently outperforms all the baselines in this setting. It achieves the highest average accuracy across all ways on arXiv, with an average improvement of 28.6% and up to 48% over the best baseline, Contrastive. Over KGs, PRODIGY also outperforms contrastive learning by 12.2% on average. PRODIGY likewise demonstrates similar or better performance compared to Finetune, which requires additional training on downstream tasks; on arXiv, we see an average improvement of 77.7% over all ways. This can be attributed to the diverse set of pretraining tasks incorporated in PRODIGY, which allows the model to avoid overfitting to specific tasks and to learn in-context.
**Self-supervised pretraining with PG-NM bridges different tasks.** In particular, we highlight that the purely self-supervised PG-NM yields significantly higher in-context learning performance on arXiv than the baselines, even though the model is pretrained on tasks different from the downstream task. This advantage can be further leveraged by pretraining on even larger-scale unlabeled datasets. On the other hand, PG-MT follows a supervised pretraining objective that directly resembles the format of the downstream tasks. On KGs, this allows PG-MT to adapt better to the downstream task, sometimes even compared to the full PRODIGY (marked by underlines), while PG-NM may have overfitted to the incorrect strategy of only identifying co-occurring nodes. Yet, PG-MT performs worse on arXiv, potentially due to less diversity. The full PRODIGY, which ensembles the two objectives, achieves more diversity than either single task and therefore obtains the best of both worlds.
**Outperforming a meta-learning method trained on the test graph.** Finally, we compare the in-context learning performance of PG-NM against the state-of-the-art meta-learning method TENT [20] on the downstream test graph arXiv. We evaluate the average performance on 3-way classification tasks over test labels only, since TENT trains on the train labels from arXiv. PG-NM achieves \(69.07\%\) versus \(65.13\%\) for TENT, even though PG-NM has never been trained on any paper category classification
\begin{table}
\begin{tabular}{l|c c c c|c||c} \hline \hline Classes & NoPretrain & Contrastive & PG-NM & PG-MT & PRODIGY & Finetune \\ \hline
3 & 33.16 \(\pm\) 0.30 & 65.08 \(\pm\) 0.34 & 72.50 \(\pm\) 0.35 & 65.64 \(\pm\) 0.33 & **73.09 \(\pm\) 0.36** & 65.42 \(\pm\) 5.53 \\
5 & 18.33 \(\pm\) 0.21 & 51.63 \(\pm\) 0.29 & 61.21 \(\pm\) 0.28 & 51.97 \(\pm\) 0.27 & **61.52 \(\pm\) 0.28** & 53.49 \(\pm\) 4.61 \\
10 & 9.19 \(\pm\) 0.11 & 36.78 \(\pm\) 0.19 & 46.12 \(\pm\) 0.19 & 37.23 \(\pm\) 0.20 & **46.74 \(\pm\) 0.20** & 30.22 \(\pm\) 3.77 \\
20 & 4.72 \(\pm\) 0.06 & 25.18 \(\pm\) 0.11 & 33.71 \(\pm\) 0.12 & 25.91 \(\pm\) 0.12 & **34.41 \(\pm\) 0.12** & 17.68 \(\pm\) 1.15 \\
40 & 2.62 \(\pm\) 0.02 & 17.02 \(\pm\) 0.07 & 23.69 \(\pm\) 0.06 & 17.19 \(\pm\) 0.08 & **25.13 \(\pm\) 0.07** & 8.04 \(\pm\) 3.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: In-context learning accuracy (%) on arXiv paper category classification on 500 sampled test tasks with 3-shot prompts. PRODIGY was pretrained on MAG240M and is then applied in-context to arXiv, which has completely different structure and a different set of paper categories. PG-NM and PG-MT are ablations of PRODIGY.
\begin{table}
\begin{tabular}{l|c c c c c||c} \hline \hline Classes & NoPretrain & Contrastive & PG-NM & PG-MT & PRODIGY & Finetune \\ \hline
4 & 30.4 \(\pm\) 0.63 & 44.01 \(\pm\) 0.61 & 46.94 \(\pm\) 0.61 & 51.78 \(\pm\) 0.63 & **53.97 \(\pm\) 0.63** & 53.85 \(\pm\) 9.29 \\ \hline
5 & 33.54 \(\pm\) 0.61 & 81.35 \(\pm\) 0.58 & 80.35 \(\pm\) 0.57 & 89.15 \(\pm\) 0.46 & **88.02 \(\pm\) 0.48** & 82.01 \(\pm\) 12.83 \\
10 & 20.0 \(\pm\) 0.35 & 70.88 \(\pm\) 0.48 & 71.68 \(\pm\) 0.45 & 82.26 \(\pm\) 0.40 & **81.1 \(\pm\) 0.39** & 71.97 \(\pm\) 6.16 \\
20 & 9.2 \(\pm\) 0.18 & 59.8 \(\pm\) 0.35 & 59.9 \(\pm\) 0.35 & 73.47 \(\pm\) 0.32 & **72.04 \(\pm\) 0.33** & 64.01 \(\pm\) 4.66 \\
40 & 2.5 \(\pm\) 0.08 & 49.39 \(\pm\) 0.23 & 46.82 \(\pm\) 0.21 & 58.34 \(\pm\) 0.22 & **59.58 \(\pm\) 0.22** & 57.27 \(\pm\) 3.33 \\ \hline
5 & 33.44 \(\pm\) 0.57 & 84.08 \(\pm\) 0.54 & 80.53 \(\pm\) 0.58 & 84.79 \(\pm\) 0.51 & 87.02 \(\pm\) 0.44 & **87.22 \(\pm\) 12.75** \\
10 & 18.82 \(\pm\) 0.31 & 76.54 \(\pm\) 0.45 & 72.77 \(\pm\) 0.48 & 78.5 \(\pm\) 0.44 & **81.06 \(\pm\) 0.41** & 71.90 \(\pm\) 5.90 \\
20 & 7.42 \(\pm\) 0.16 & 66.56 \(\pm\) 0.35 & 62.82 \(\pm\) 0.36 & 69.82 \(\pm\) 0.34 & **72.66 \(\pm\) 0.32** & 66.19 \(\pm\) 8.46 \\
40 & 3.04 \(\pm\) 0.07 & 57.44 \(\pm\) 0.24 & 49.59 \(\pm\) 0.22 & 53.55 \(\pm\) 0.23 & **60.02 \(\pm\) 0.22** & 55.06 \(\pm\) 4.19 \\ \hline \hline \end{tabular}
\end{table}
Table 2: In-context learning accuracy (%) on ConceptNet, FB15K-237 and NELL (from top to bottom) on 500 sampled test tasks with 3-shot prompts. PRODIGY was pretrained on Wiki, which has completely different node and relation types from graphs it is then applied on in-context.
task during pretraining. This demonstrates the power of self-supervised pretraining over large amounts of data compared to supervised meta-learning over limited labeled data (the train labels in arXiv).
### Ablations
Aside from PG-NM and PG-MT, we also conduct ablation studies on various configurations of the self-supervised objective PG-NM, as described in Section 3.2. See the full results in Appendix E and Table 4. Overall, the ablation results reveal that using all of the elements together yields the highest performance. Specifically, attribute prediction (see Appendix A) has the greatest impact on PG-NM's performance, as its removal results in an average 7% drop across all ways, shown in the 'No-Attr' column.
### Evaluation using different numbers of in-context examples
We investigate our method's ability to learn from the context by analyzing its performance as the number of prompt examples changes. Figure 3 shows the result on ConceptNet. See full results on other datasets in Appendix F. As the number of prompt examples increases, the margin of our proposed PG models over the baseline increases. This supports the hypothesis that the PRODIGY models can more effectively learn the unknown task by reasoning about the common characteristics of prompt examples.
### Scaling with Data Size
Finally, we explore how the model scales with more pretraining data. The result on arXiv in a 5-ways setting is illustrated in Figure 4. It shows that the Contrastive baseline saturates quickly and its performance fluctuates as it is trained on more pretraining data. In contrast, PRODIGY consistently improves as more data is pretrained on, since its pretraining tasks are harder and more diverse.
## 5 Related Work
### In-context Learning of Large Language Models
Pretrained large language models can make predictions for diverse downstream tasks directly when prompted with a few examples of the task or, more generally, any textual instructions. This ability is called in-context learning. Compared to previous language encoder models like BERT [4], it drastically reduces the adaptation effort relative to fine-tuning, and it has demonstrated strong performance across a broad range of models and tasks. Our work extends this success to graph data: unlike current pretrained graph encoders, a single pretrained model can be adapted to different classification tasks over different graphs with only few-shot prompting and no additional fine-tuning.
### Pretraining on Graphs
There are many existing works on pretraining over graphs [7; 24; 8; 13]. However, they all follow the general paradigm of learning a good graph encoder that can perform certain pretraining tasks, such as masked feature prediction [7] and paired graph classification [24]. Adapting to any downstream task then requires finetuning a classification head on top of the encoder with a large amount of task-specific data. In contrast, we explore pretraining methods for inducing a general in-context learning ability, such that the pretrained model can be directly used for various downstream tasks with no gradient updates.
### Meta Learning on Graphs
Another closely related line of work is meta-learning methods over graphs that aim to address standard few-shot learning problems over graphs [19, 9, 3, 17, 25]. However, existing meta-learning methods are only designed and tested for generalizing across different tasks on the same graph: the methods are trained on a set of training tasks on a graph, then tested on a disjoint but similar set of test tasks over the same graph. They are shown to exhibit optimal performance only when trained on similar curated tasks [10]. Different from this, our work explicitly focuses on in-context learning performance, i.e. model performance on graphs and tasks completely different from those seen during pretraining, without additional fine-tuning.
## 6 Conclusion
We introduce PRODIGY, the first framework that enables in-context learning on graphs. A model pretrained using PRODIGY can seamlessly execute a new classification task over new graphs represented by a prompt graph. With in-context learning alone, it markedly surpasses the performance of other baseline models, even those that employ finetuning, on both node and edge classification tasks.
|
2304.12529 | Improved Trust in Human-Robot Collaboration with ChatGPT | Human robot collaboration is becoming increasingly important as robots become
more involved in various aspects of human life in the era of Artificial
Intelligence. However, the issue of human operators' trust in robots remains a
significant concern, primarily due to the lack of adequate semantic
understanding and communication between humans and robots. The emergence of
Large Language Models (LLMs), such as ChatGPT, provides an opportunity to
develop an interactive, communicative, and robust human-robot collaboration
approach. This paper explores the impact of ChatGPT on trust in a human-robot
collaboration assembly task. This study designs a robot control system called
RoboGPT using ChatGPT to control a 7-degree-of-freedom robot arm to help human
operators fetch, and place tools, while human operators can communicate with
and control the robot arm using natural language. A human-subject experiment
showed that incorporating ChatGPT in robots significantly increased trust in
human-robot collaboration, which can be attributed to the robot's ability to
communicate more effectively with humans. Furthermore, ChatGPT's ability to
understand the nuances of human language and respond appropriately helps to
build a more natural and intuitive human-robot interaction. The findings of
this study have significant implications for the development of human-robot
collaboration systems. | Yang Ye, Hengxu You, Jing Du | 2023-04-25T02:48:35Z | http://arxiv.org/abs/2304.12529v1 | # Improved Trust in Human-Robot Collaboration
###### Abstract
Human-robot collaboration is becoming increasingly important as robots become more involved in various aspects of human life in the era of Artificial Intelligence. However, the issue of human operators' trust in robots remains a significant concern, primarily due to the lack of adequate semantic understanding and communication between humans and robots. The emergence of Large Language Models (LLMs), such as ChatGPT, provides an opportunity to develop an interactive, communicative, and robust human-robot collaboration approach. This paper explores the impact of ChatGPT on trust in a human-robot collaboration assembly task. This study designs a robot control system called RoboGPT using ChatGPT to control a 7-degree-of-freedom robot arm to help human operators fetch, and place tools, while human operators can communicate with and control the robot arm using natural language. A human-subject experiment showed that incorporating ChatGPT in robots significantly increased trust in human-robot collaboration, which can be attributed to the robot's ability to communicate more effectively with humans. Furthermore, ChatGPT's ability to understand the nuances of human language and respond appropriately helps to build a more natural and intuitive human-robot interaction. The findings of this study have significant implications for the development of human-robot collaboration systems.
Large Language Model, ChatGPT, Human-Robot Interaction, Human Factors, Trust
## I Introduction
The use of robot platforms has become ubiquitous across a broad range of industries, from manufacturing and construction [1] to healthcare and education [2]. Although automation is one of the most essential advantages of robotic systems, a collaborative environment in which human operators and robotic systems work side by side remains necessary because of task complexity, dexterity requirements, and the need for skillful operators with specialized tools or skills. Such a Human-Robot Collaboration (HRC) work mode can lead to higher efficiency, greater accuracy, and increased productivity [3].
However, one of the most significant challenges in the HRC area is the lack of human operators' trust in robotic collaborators [4]. Trust is believed to be one of the most fundamental elements of a collaborative relationship, and it impacts the willingness to cooperate and productivity [5]. Studies suggest that the lack of trust in HRC stems from limited reliability, predictability, and transparency [6, 7], which in turn trace back to insufficient communication and mutual understanding between humans and robots. Although previous studies have explored using natural language for robotic control [8], existing robotic control models still face challenges in task-context understanding, logical and procedural reasoning, and interactivity.
The recent advancement in large language models (LLMs), such as ChatGPT [9], provides a transformative opportunity for HRC. Trained at massive scale [9], LLMs possess a close-to-human capability to understand context and interact with human operators naturally. However, emerging questions associated with LLMs need to be answered. On the one hand, LLMs show great potential in improving the natural interaction between humans and robots, which can be an essential milestone for future HRC; yet the method of such integration has not been thoroughly researched. On the other hand, the trend of integrating LLMs into HRC creates an unprecedented need to understand their underlying impact on human operators, especially on trust levels, which deserves investigation.
This study aims to explore the potential effect on trust of using an intelligent LLM for robotic control in an HRC assembly task. We propose RoboGPT, which integrates ChatGPT, a popular LLM, with robot control capabilities. RoboGPT adapts ChatGPT into an AI robotic control assistant by providing the task context and sample prompts. The AI assistant receives human operators' verbal commands and controls the robot arm accordingly. Such an assistant can understand the task context, procedure, and verbal commands, and can interact with human operators to clarify commands or request more contextual information. We assume that the ChatGPT-enabled AI robot control assistant increases human operators' trust toward the robotic system owing to its natural bilateral communication and context understanding capabilities.
This study demonstrates and validates the HRC design using a human-subject experiment. A total of 15 participants were recruited to work with a collaborative robot via conversations in an assembly task. The collaborative robot helped human operators fetch and place tools according to
the operator's spoken language. The rest of the paper elaborates on the point of departure, the technique to build the LLM-enabled human-robot collaboration, the human-subject experiment, and the results.
## II Related Work
### _Trust in Human-robot Collaboration_
Trust is crucial in HRC, as it influences the degree to which humans are willing to work with and rely on robotic systems [10]. Trust levels can vary depending on the method of interaction in HRC, such as touchscreen control, joystick control, and verbal control [11]. To better understand this relationship, it is essential to delve deeper into the theories of trust in HRC. Several theories have been proposed to explain the differing levels of trust in human-robot collaboration. One possible reason is the lack of transparency in the robot's decision-making process. Transparency refers to the extent to which a robot's actions, intentions, and decision-making processes are clear and understandable to the human operator [12]. When humans cannot understand how a robot arrives at a specific decision or action, they may hesitate to trust it. The lack of transparency may lead to reduced trust in the robot and, consequently, a lower willingness to collaborate effectively [12].
Another potential factor contributing to trust variations in human-robot collaboration is the robotic system's ability to process and respond to ambiguous input [13]. Ambiguity of human command input can arise from various sources, such as unclear commands, conflicting information, or environmental uncertainty. Research indicates that robots capable of effectively handling ambiguous input are more likely to be trusted by human operators [14]. This is because when robots demonstrate adaptability and competence in dealing with uncertain situations, human operators gradually gain confidence in the robot's capabilities and are more willing to trust them [15]. As such, it is worth exploring whether the trust in HRC can be improved by an AI robot control assistant who can communicate with the human operator naturally and is robust to different speech styles.
### _State-of-the-art in Human-Robot Collaboration Interface_
Over the past few years, human-robot collaboration has evolved significantly, focusing on developing robots that can seamlessly interact and cooperate with humans. A notable example of HRC is using Baxter, a collaborative robot, to perform tasks such as grabbing and manipulating objects [16]. Another example is the utilization of quadruped and wheeled robots for transporting objects within warehouses and hospitals and exploring a site [17, 18]. These robots can navigate autonomously and adapt to dynamic environments, reducing the workload for human workers and increasing efficiency.
Various control methods have been employed to enhance and facilitate the interaction between humans and robots. Many of these methods involve using artificial intelligence (AI) to enable robots to understand and interpret human actions and intentions [19]. For instance, researchers have used deep learning and reinforcement learning techniques to teach robots to recognize human gestures [20, 21] and adapt to human behaviors.
The recent advancements in LLMs, such as GPTs [9], have shown the potential to improve human-robot collaboration by enabling more natural and effective communication. These models can generate human-like responses to natural language input, allowing robots to engage in more intuitive and context-aware conversations with humans [22]. However, integrating LLMs into HRC interfaces is still an emerging area of research and requires further exploration. Research on the effectiveness of these models in fostering mutual understanding and trust between humans and robots, as well as their impact on the efficiency and quality of collaboration, is essential for advancing HRC technologies.
## III ChatGPT-enabled Human-Robot collaboration
### _System Design_
We establish a workflow that integrates LLMs with robotic control modules to build an intelligent AI robot control assistant, called RoboGPT. As shown in Fig. 1, the RoboGPT workflow first transforms human operators' spoken language into textual input for the AI assistant to process. The decision-making core of the AI assistant, GPT3.5 [9] in this study, understands the information and responds. By considering the contextual information and evaluating its ambiguity, GPT3.5 generates natural responses to either further clarify the information with the human operators via conversations or control the robot. When communicating with human operators, the RoboGPT AI assistant generates prompts, presents them to the human operators, and waits for further instructions. Such bidirectional communication clarifies the intentions of both the human operators and the robotic system, which increases transparency and reduces ambiguity. If the RoboGPT AI assistant considers the information adequate for decision making, the responses are sent to a decoder, which processes the commands into Robot Operating System (ROS) topics and triggers the corresponding robotic control functions.
Fig. 1: System workflow of RoboGPT.
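For concreteness, the loop below sketches this workflow in Python. Here `transcribe`, `query_gpt`, and `speak` are hypothetical stand-ins for the speech-to-text service, the fine-tuned GPT3.5 assistant, and a text-to-speech output, and the JSON command schema (`FETCH`/`PLACE`/`ASK`) is illustrative rather than the exact format used in the study; only the `rospy` calls are standard ROS API.

```python
import json
import rospy
from std_msgs.msg import String

rospy.init_node('robogpt_decoder')
pub = rospy.Publisher('/robot_commands', String, queue_size=1)

def control_loop(transcribe, query_gpt, speak):
    """Speech -> text -> GPT decision -> clarification or ROS command."""
    history = []  # running conversation so the assistant keeps task context
    while not rospy.is_shutdown():
        text = transcribe()  # operator's spoken command as text
        history.append({"role": "user", "content": text})
        reply = query_gpt(history)  # assistant's decision as a JSON string
        history.append({"role": "assistant", "content": reply})
        decision = json.loads(reply)
        if decision["type"] == "ASK":
            speak(decision["question"])  # ambiguous input: clarify first
        else:
            # information adequate: decode into a ROS topic message
            pub.publish(String(json.dumps(decision)))
```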
### _ChatGPT Assistant_
It is widely acknowledged that training a ChatGPT-scale AI assistant from scratch is not feasible for most researchers or organizations, primarily due to the prohibitively high costs associated with data and infrastructure [23]. Given these challenges, an alternative approach that has gained considerable traction is using informative prompts to reshape and fine-tune pre-trained LLMs [24]. This method retains the capabilities embedded within existing models, allowing users to adapt the models to meet specific requirements.
In this study, we fine-tuned GPT3.5 into the RoboGPT AI robot control assistant with carefully designed prompts, as shown in Fig. 2, via the OpenAI API [25]. These prompts defined the system context and showed sample conversations demonstrating the desired formats. They were carefully crafted to encompass various scenarios and tasks, ensuring that the fine-tuned ChatGPT could establish effective communication between human operators and robotic systems. Meanwhile, this fine-tuning process also regulated the output of the AI assistant such that the output commands could be accurately mapped to the robotic control modules, which we discuss later. After fine-tuning with these messages, the GPT3.5 model could understand and process complex instructions while seamlessly integrating with various robotic platforms.
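A minimal sketch of this prompt-based conditioning is shown below, using the pre-v1 OpenAI Python SDK (`openai.ChatCompletion.create`); the system prompt, the demonstration exchange, and the JSON output schema are illustrative assumptions, not the exact prompts of Fig. 2.

```python
import openai  # assumes openai.api_key has been set

# System context plus demonstration exchanges reshape the pretrained model
# into a robot-control assistant with a machine-parsable output format.
MESSAGES = [
    {"role": "system", "content":
        "You control a 7-DOF robot arm that fetches and places tools on an "
        "assembly workbench. Reply ONLY with JSON of the form "
        '{"type": "FETCH|PLACE|ASK", "object": "...", "question": "..."}. '
        "If a command is ambiguous, reply with type ASK and a question."},
    # One few-shot demonstration of the desired behavior.
    {"role": "user", "content": "Bring me the driller."},
    {"role": "assistant", "content": '{"type": "FETCH", "object": "driller"}'},
]

def query_gpt(history):
    """Prepend the fixed prompt context to the running conversation."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=MESSAGES + history,
        temperature=0,  # deterministic, parsable replies
    )
    return response.choices[0].message.content
```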
### _Robot Assembly Assistant_
The robot system tested in this study was a Franka Emika Panda robot arm [26], a lightweight seven-degree-of-freedom robot arm designed for HRC. Because the ChatGPT controller generates commands at discrete times, the target robot posture and status change discontinuously. It is therefore necessary to map the discrete target status to a continuous trajectory for smooth and safe robot operation. In this study, we implemented a dual-stage impedance control algorithm to resolve this issue, as shown in Figure 3. After receiving a target posture (\(\widetilde{\mathbf{x}}\)), the primary impedance controller generates a continuous time-series target position (\(\widetilde{\mathbf{x_{t}}}\)) to smooth the transition. The continuous target position is then fed into the secondary impedance controller to generate smooth robot trajectories (\(\mathbf{x_{t}}\)). The overall control law is described in Equation 1.
\[\mathbf{\tau_{e}}=\mathbf{f}(\mathbf{g}(\widetilde{\mathbf{x}}))=\mathbf{M}\ddot{\widetilde{\mathbf{x}}}_{t}+\mathbf{D}\dot{\widetilde{\mathbf{x}}}_{t}+\mathbf{K}(\mathbf{x_{t}}-\widetilde{\mathbf{x_{t}}}), \tag{1}\]

where \(\mathbf{M}\), \(\mathbf{D}\), and \(\mathbf{K}\) are the inertia, damping, and stiffness matrices of the impedance model, and \(\mathbf{\tau_{e}}\) is the commanded torque.
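A minimal numerical sketch of this dual-stage idea is given below, with assumed scalar gains (in practice \(\mathbf{M}\), \(\mathbf{D}\), and \(\mathbf{K}\) are matrices tuned per axis, and sign conventions vary between implementations):

```python
import numpy as np

M, D, K = 1.0, 40.0, 400.0  # illustrative scalar gains (critically damped)
dt = 0.001                  # 1 kHz control loop

def primary_stage(x_s, v_s, x_target):
    """Stage 1: filter the discrete target posture x_target into a
    continuous, smooth target trajectory (x_s, v_s, a_s)."""
    a_s = (K * (x_target - x_s) - D * v_s) / M
    v_s = v_s + a_s * dt
    x_s = x_s + v_s * dt
    return x_s, v_s, a_s

def secondary_stage(x, v, x_s, v_s, a_s):
    """Stage 2: spring-damper (impedance) force pulling the end-effector
    state (x, v) toward the smoothed target trajectory."""
    return M * a_s + D * (v_s - v) + K * (x_s - x)
```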
## IV Test Case and Human Subject Experiment
### _Experiment Design_
To assess the impacts of ChatGPT on human operators, a human-subject experiment was conducted with 15 participants. We implemented the RoboGPT AI robot assistant to help a human operator assemble a workpiece. We selected a simple assembly scenario for this experiment to reduce the impact of the different assembly domain knowledge levels among participants. The assembly task in this experiment aims to assemble a plate onto a workpiece and fasten it with four screws using a driller. The process can be classified into three steps: assemble a plate, place screws, and drill in screws. Participants have the autonomy to decide the operation sequence, such as when to ask the robot arm to deliver tools and when to drill in screws. Human operators conduct the assembly process while the AI assistant controls the robot arm to deliver tools. The experiment has been approved by the university Institutional Review Board (IRB202300667).
This experiment was conducted in Virtual Reality (VR) to record participants' behavior and cognitive data as well as ensure participants' safety. Meanwhile, a real 7-degree-of-freedom robot arm, Franka Emika [26], was controlled by the AI assistant. In addition, a digital twin of the real robot arm was constructed in the VR environment using our previously developed methods [1, 29]. This robot arm digital twin synchronized the robot status in VR and provided realistic robotic behavior to boost participants' sense of immersion.
## V Results
### _Interactions_
Figure 5 shows an example of a conversation with the robot during this experiment. Although the speech-to-text module mistakenly converted the speech in lines 3 and 15 (misinterpreting "_screw_" and "_driller_" as "_school_" and "_jeweler_"), RoboGPT's AI assistant could tolerate such errors and make correct decisions. Meanwhile,
Fig. 4: Experiment scenario and task
the vague expression in Line 17 was clarified before executing robot commands.
### _Performance_
We measured the assembly task performance using two metrics: a performance score based on completion time, and self-evaluated performance as indicated by the questionnaire. The performance score is calculated using the equation below:
\[\mathbf{S}_{i}=\frac{\mathbf{t}_{max}-\mathbf{t}_{i}}{\mathbf{t}_{max}-\mathbf{t}_{min}} \tag{2}\]
, where \(\mathbf{t}_{max}\) and \(\mathbf{t}_{min}\) refer to the maximum and minimum time participants used to finish the task, and \(\mathbf{S}_{i}\) and \(\mathbf{t}_{i}\) refer to the performance score and completion time of participant \(i\). The higher the calculated performance score, the quicker the participant completed the task and the better the performance.
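As a short worked example of Equation 2 with made-up completion times:

```python
import numpy as np

# Hypothetical completion times (seconds) for five participants.
t = np.array([312.0, 285.5, 401.2, 350.0, 298.7])

# Eq. (2): min-max normalization, so the fastest participant scores 1.0
# and the slowest scores 0.0.
scores = (t.max() - t) / (t.max() - t.min())
print(scores.round(3))  # [0.771 1.    0.    0.443 0.886]
```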
We also extracted participants' performance from their self-evaluation in the NASA TLX questionnaire. Both the objective performance score and the subjective self-evaluation results are shown in Figure 6.
Both the performance score and the self-evaluated performance results passed the Anderson-Darling [32] normality test. A t-test showed significant differences between the fixed-command condition and the RoboGPT AI assistant condition, with p \(<\) 0.001 for both performance metrics. These results indicate that using the LLM-enabled robot assistant effectively increased HRC task performance compared with fixed commands. Participants in the post-experiment interview suggested that a potential reason is that the ChatGPT-enabled robot assistant could memorize previous patterns and thus accelerate task progress.
### _Trust and Cognitive Load_
Trust and cognitive load were measured using the questionnaire responses. Figure 7 visualizes the results. Following the same statistical analysis pipeline as above, these metrics showed significant differences between the two conditions, with p \(<\) 0.001 for both trust and cognitive load. These results indicate that participants perceived less mental load while working with the ChatGPT-enabled robot assistant. Meanwhile, they generally trusted the robot more when it could react to natural communication.
## VI Discussion And Limitations
This study discusses and demonstrates the feasibility of humans using the emerging ChatGPT to collaborate with a robot arm intuitively and effectively. ChatGPT can be fine-tuned into an intelligent robot assistant that understands the task context and generates ROS messages to control a robot arm.
The experiment in this study shows that the ChatGPT-enabled robot assistant improves task performance and is believed to be more trustworthy compared to a fixed or pre-defined control method. According to the post-experiment interview, the enhanced trust can be attributed to the superior competence in completing the HRC assembly task. Furthermore, the ChatGPT-enabled robot assistant communicates with the human operator and retains memory from previous decisions, such as the location to deliver the tools. These natural ways of interaction foster better partner relationships and improve trust in this HRC task.
Furthermore, participants generally believed that working with a ChatGPT-enabled robot assistant was less mentally demanding, potentially due to its autonomy and
Fig. 5: Interaction prompt examples
Fig. 6: Task performance result
Fig. 7: Cognitive load and trust scale results
intelligence, which reduced the attention needed to control the robot.
However, we found that the ChatGPT-enabled robot assistant was negatively evaluated in several aspects. For instance, the ChatGPT-enabled assistant could sometimes be self-assertive. The intelligence and scenario-understanding capabilities of ChatGPT-enabled assistants are a double-edged sword. On the one hand, these capabilities boosted trust and reduced mental effort during the HRC. On the other hand, the ChatGPT-enabled assistant made decisions based on its own understanding, which could sometimes be problematic, especially in cases of miscommunication or inaccurate communication. Furthermore, the I/O was in text format, which imposed a burden on effective communication. In real-world operations, text is typically not adequate to describe the working scenario, such as the location of objects, the status of the robot arm, or events that occur (e.g., collision, operation failure). Therefore, the robot assistant should ideally be capable of understanding different modalities, such as imagery inputs, to achieve better HRC results.
## VII Conclusions
We have proposed a novel design approach for HRC using a ChatGPT assistant, fine-tuning ChatGPT to understand the task context and directly control ROS topics corresponding to the human operator's verbal commands. The design was demonstrated with an HRC assembly task. We also performed a human-subject experiment in which participants reported higher trust and lower cognitive load compared with using fixed control commands. The experiment results imply a significant impact of utilizing emerging LLMs, such as ChatGPT, to work and collaborate with robots.
In future research, it is recommended to investigate an effective HRC workflow using LLMs, such as an endorsement mechanism, before sending commands to ROS. A proper HRC workflow could maximize the benefits of LLMs as well as create comfortable and intuitive working environments for human operators.
## VIII Acknowledgments
The authors would like to thank all the experiment participants for joining this study and contributing their thoughts. This material is supported by the National Science Foundation (NSF) Grant 2024784.
|
2307.13367 | The minimum measurable eccentricity from gravitational waves of LISA
massive black hole binaries | We explore the eccentricity measurement threshold of LISA for gravitational
waves radiated by massive black hole binaries (MBHBs) with redshifted BH masses
$M_z$ in the range $10^{4.5}$-$10^{7.5}~{\rm M}_\odot$ at redshift $z=1$. The
eccentricity can be an important tracer of the environment where MBHBs evolve
to reach the merger phase. To consider LISA's motion and apply the time delay
interferometry, we employ the lisabeta software and produce year-long eccentric
waveforms using the inspiral-only post-Newtonian model TaylorF2Ecc. We study
the minimum measurable eccentricity ($e_{\rm min}$, defined one year before the
merger) analytically by computing matches and Fisher matrices, and numerically
via Bayesian inference by varying both intrinsic and extrinsic parameters. We
find that $e_{\rm min}$ strongly depends on $M_z$ and weakly on mass ratio and
extrinsic parameters. Match-based signal-to-noise ratio criterion suggest that
LISA will be able to detect $e_{\rm min}\sim10^{-2.5}$ for lighter systems
($M_z\lesssim10^{5.5}~{\rm M}_\odot$) and $\sim10^{-1.5}$ for heavier MBHBs
with a $90$ per cent confidence. Bayesian inference with Fisher initialization
and a zero noise realization pushes this limit to $e_{\rm min}\sim10^{-2.75}$
for lower-mass binaries, assuming a $<50$ per cent relative error. Bayesian
inference can recover injected eccentricities of $0.1$ and $10^{-2.75}$ for a
$10^5~{\rm M}_\odot$ system with a $\sim10^{-2}$ per cent and a $\sim10$ per
cent relative errors, respectively. Stringent Bayesian odds criterion
($\ln{B}>8$) provides nearly the same inference. Both analytical and numerical
methodologies provide almost consistent results for our systems of interest.
LISA will launch in a decade, making this study valuable and timely for
unlocking the mysteries of the MBHB evolution. | Mudit Garg, Shubhanshu Tiwari, Andrea Derdzinski, John G. Baker, Sylvain Marsat, Lucio Mayer | 2023-07-25T09:39:41Z | http://arxiv.org/abs/2307.13367v3 | # The minimum measurable eccentricity from gravitational waves of LISA massive black hole binaries
###### Abstract
We explore the eccentricity measurement threshold of LISA for gravitational waves radiated by massive black hole binaries (MBHBs) with redshifted BH masses \(M_{z}\) in the range \(10^{4.5}\)-\(10^{7.5}\) M\({}_{\sun}\) at redshift \(z=1\). The eccentricity can be an important tracer of the environment where MBHBs evolve to reach the merger phase. To consider LISA's motion and apply the time delay interferometry, we employ the lisabeta software and produce year-long eccentric waveforms using the inspiral-only post-Newtonian model TaylorF2Ecc. We study the minimum measurable eccentricity (\(e_{\rm min}\), defined at one year before the merger) analytically by computing matches and Fisher matrices, and numerically via Bayesian inference by varying both intrinsic and extrinsic parameters. We find that \(e_{\rm min}\) has a strong dependence on \(M_{z}\) and a weak dependence on mass ratio and extrinsic parameters. Match-based signal-to-noise ratio criterion suggest that LISA will be able to detect \(e_{\rm min}\sim 10^{-2.5}\) for lighter systems (\(M_{z}\lesssim 10^{5.5}\) M\({}_{\sun}\)) and \(\sim 10^{-1.5}\) for heavier MBHBs with a 90 per cent confidence. Bayesian inference with Fisher initialization and a zero noise realization pushes this limit to \(e_{\rm min}\sim 10^{-2.75}\) for lower-mass binaries assuming a \(<50\) per cent relative error. Bayesian inference can recover injected eccentricities of 0.1 and \(10^{-2.75}\) for a \(10^{5}\) M\({}_{\sun}\) system with a \(\sim 10^{-2}\) per cent and a \(\sim 10\) per cent relative errors, respectively. Both analytical and numerical methodologies provide almost consistent results for our systems of interest. LISA will launch in a decade, making this study valuable and timely to prepare for unlocking the mysteries of the MBHB evolution.
keywords: methods: data analysis - methods: statistical - black hole physics - gravitational waves.
## 1 Introduction
The Laser Interferometer Space Antenna (LISA; Amaro-Seoane et al., 2017; Barack et al., 2019) will be one of the first space-based gravitational wave (GW) observatories that will launch in the 2030s, along with TianQin (Wang et al., 2019) and Taiji (Gong et al., 2021). It will be sensitive to observed frequencies of GWs in the range of \(\sim\)\(10^{-4}\)-\(10^{-1}\) Hz. The primary extragalactic sources for LISA are mergers of massive black hole binaries (MBHBs) of \(10^{4.5}\)-\(10^{8}\) M\({}_{\sun}\) and intermediate/extreme mass ratio inspirals (I/EMRIs; Babak et al., 2017; Amaro-Seoane, 2018) with primary-to-secondary BH mass ratio \(q\) greater than \(10^{3}\). LISA will be sensitive enough to detect GWs from coalescing MBHBs with \(q\lesssim 10.0\) up to redshift \(z\sim 20\) (Amaro-Seoane et al., 2017). Most MBHBs will have high signal-to-noise ratios (SNRs; Amaro-Seoane et al., 2017) in the LISA band, which will help to constrain their parameters with high accuracy.
MBHBs mainly form as by-products of galaxy mergers (Begelman et al., 1980). The process involved in shrinking the separation between MBHs from galactic scales to form a binary in the post-merger nucleus takes millions to billions of years, depending on the internal structure of the host galaxies and the relative dominance of various astrophysical processes (see, e.g. Amaro-Seoane et al., 2023). At sub-pc scales, the interaction of the binary with gas and stars in its
environment can drive the binary to the coalescence phase in the LISA band within a Hubble time (Haiman et al., 2009; Milosavljevic and Merritt, 2003). By the time a tight binary is formed, information on its dynamical history, which reflects the nature of the properties of the host galactic nucleus, is mostly lost. However, GW waveforms from these tight systems can carry signatures of the source environment, either in the form of modifications of the vacuum waveform, from phase shifting (Barausse et al., 2014; Derdzinski et al., 2019, 2021; Garg et al., 2022; Cardoso et al., 2022) to the injection of additional harmonics at higher frequency (Zwick et al., 2022), or via a direct relation with the binary parameters that can be extracted from the analysis of the vacuum waveform. In the latter case, the precise astrophysical environment an MBHB evolves within from pc-scales to the near-merger stage may lead to different system variables at the LISA entry for the same starting binary.
One of the most sensitive binary parameters to the surrounding environment is the orbital eccentricity. While most studies in the literature assume that MBHBs will circularize by the time they enter the LISA band (with entry eccentricity \(e_{\rm LISA}\lesssim 10^{-4}\)) due to emission of GWs (Peters and Mathews, 1963; Peters, 1964), some may retain non-negligible eccentricity due to evolving in a suitable dynamical environment, e.g. if MBHBs are embedded in gas (Armitage and Natarajan, 2005; Sesana et al., 2005; MacFadyen and Milosavljevic, 2008; Cuadra et al., 2009; Zrake et al., 2021; D'Orazio and Duffell, 2021; Siwek et al., 2023; Tiede and D'Orazio, 2023), in a star cluster (Matsubayashi et al., 2007; Lockmann and Baumgardt, 2008; Preto et al., 2009; Bonetti et al., 2020; Gualandris et al., 2022), in a tri-axial potential (Merritt and Vassiliev, 2011; Khan et al., 2013), or if they interact with a third BH (Bonetti et al., 2016, 2018, 2018, 2019). Hence, eccentricity can be an important tracer to probe these effects.
The eccentricity is a unique intrinsic binary parameter because it decreases rapidly as the system approaches the merger. As a result, in order to infer it from a waveform, we need to detect the GW signal many cycles before the merger. Therefore, for now, the ground-based LIGO-Virgo-KAGRA (LVK) collaboration does not include eccentricity in their analysis of the stellar-mass (\(\lesssim 100\) M\({}_{\sun}\)) BH binaries (SmBHBs) due to the challenges in modelling late-inspiral-merger with the presence of eccentricity and spins (see, e.g. Ramos-Buades et al., 2022). However, LVK indeed does searches for eccentric SmBHBs using un-modelled methods (Abbott et al., 2019; Ramos-Buades et al., 2020). Given that we will observe GWs in the early inspiral phase in the LISA band for most MBHBs, ignoring eccentricity could lead to mismodelling of the GW waveform. Most of the focus on eccentricity detection in the LISA frequency band has been in the context of multi-band SmBHBs sources (Nishizawa et al., 2016, 2017; Klein et al., 2022), with some attention on EMRIs. Multi-band sources are seen in the LISA band a few years before they merge in the LVK frequency band of \(\sim 10\)-\(10^{4}\) Hz (Sesana, 2016; Vitale, 2016). The detection of eccentricity is proposed as a way to distinguish whether SmBHBs are formed in the field or via dynamical interaction such as in globular clusters, nuclear clusters, or galactic nuclei (Nishizawa et al., 2016; Breivik et al., 2016; Lower et al., 2018; Gondan et al., 2018; Romero-Shaw et al., 2019, 2020; Zevin et al., 2021; Romero-Shaw et al., 2022). Also, eccentricity can help in breaking parameter degeneracies by inducing higher harmonics (Mikoczi et al., 2012; Yang et al., 2022; Xuan et al., 2023) and it can improve parameter estimation accuracy (Sun et al., 2015; Vitale, 2016; Gondan et al., 2018; Gondan and Kocsis, 2019; Gupta et al., 2020). EMRIs are mostly expected to have a significant entry eccentricity in the LISA band, ranging from \(e_{\rm LISA}\gtrsim 0.1\)-\(0.8\)(Hopman and Alexander, 2005; Amaro-Seoane, 2018), which can be measured to high accuracy, barring data analysis challenges (Babak et al., 2017; Berry et al., 2019; Chua and Cutler, 2022).
This work considers eccentric binaries in vacuum of two near-coalescence non-spinning MBHs. We are interested in robustly estimating the lowest eccentricity LISA can measure for the given MBHB source at \(z=1\), one year before the merger. Our analysis attempts to be as realistic as possible in the data analysis which will be employed for LISA once the mission is operational, i.e. we take into account the full LISA motion in its orbit around the Sun, generate high-order post-Newtonian (PN) waveforms, employ the time delay interferometry (TDI) technique to cancel the detector's laser noise, and finally perform Bayesian inference to recover injected parameters.
The measurability of eccentricity in the MBHB's GW waveform is a novel investigation. It is an important study because, similar to multi-band sources, residual eccentricities can be a signature of the environment in which MBHBs have evolved. For instance, recent high-resolution hydrodynamical simulations by Zrake et al. (2021) show that for equal-mass binaries hardening in prograde circumbinary gas discs, we expect an eccentricity of \(\sim 10^{-3}\) one year before coalescence. The eccentricity evolution in the late stages of hardening by a prograde accretion disc is further supported by D'Orazio and Duffell (2021) and Siwek et al. (2023). Moreover, Tiede and D'Orazio (2023) show that we should expect even higher eccentricity in the LISA band if the circumbinary disc is retrograde instead of prograde. Therefore, eccentricity detection by LISA could be a tracer of gas interaction. Simulations of MBH binary evolution starting from realistic galaxy mergers (Capelo et al., 2015), in which three-body encounters with stars dominate the orbital decay at sub-pc separations, show that the eccentricity always increases above the value that it has when the hardening phase begins, reaching values as large as \(0.9\)(Khan et al., 2018). The residual value of eccentricity around 50-100 Schwarzschild radii (about one year before merger), when circularization via GW emission has already started to act, is yet to be determined. However, recently Gualandris et al. (2022) studied the evolution of eccentricity through the stellar hardening phase and into the GW radiation regime, finding that the residual value of the eccentricity at about 50 Schwarzschild radii for a \(4\times 10^{6}\) M\({}_{\sun}\) MBHB ranges from below \(10^{-4}\) to nearly \(10^{-3}\) (as suggested by Elisa Bertolas in further communication). Interestingly, the specific eccentricity here mainly depends upon the parameters at large scale and positively correlates with the initial eccentricity of the merging galaxies. Also, the lowest possible eccentricity detectable by LISA for a given MBHB will tell us whether its neglect during parameter estimation will lead to biases and degeneracies. The consensus for entry eccentricity in the LISA band for MBHBs is \(\lesssim 10^{-4}\), which justifies the circular assumption, but for \(e_{\rm LISA}>10^{-4}\), it would be crucial to consider eccentricity during match filtering and when constraining binary variables (Porter and Sesana, 2010).
The paper is structured as follows. In Section 2, we describe our waveform model and systems of interest. Section 3 studies analytical constraints on eccentricity measurement using
matches and Fisher formalism. In Section 4, we detail our Bayesian setup to find the minimum measurable eccentricity. We discuss the findings in Section 5 and summarize the key takeaways of this work in Section 6.
## 2 Waveform generation, system parameters, LISA response, and time delay interferometry
MBHBs are one of the most promising sources for LISA as they are expected to be the loudest events and will spend a significant amount of time (up to a few years) in LISA's frequency band before merging. Most of the time MBHBs spend in the LISA band is in the long-inspiral phase where eccentricity (e) can still be non-negligible. The inspiral part of the GW waveforms from eccentric BHB mergers has been developed within the PN formalism both in time and frequency domains (Damour et al., 2004; Mishra et al., 2016). The time-domain PN waveforms have the advantage of having a larger region of validity in eccentricity, and they can probe eccentricities up to \(\approx 0.8\), but they are slow to generate (Tanay et al., 2016, 2019). The frequency-domain waveforms are much faster to compute but are limited to the low-eccentricity approximation. For LISA data analysis, it is imperative to have fast waveform computation as the evolution of the BHB occurs over a large time-frequency volume. There exist a wide range of frequency-domain eccentric BHB waveforms, namely TaylorF2Ecc(Moore et al., 2016), EccentricFD(Huerta et al., 2014), and EFPE(Klein, 2021), among others. For this study, we have employed the TaylorF2Ecc inspiral-only waveform model with circular phasing accurate up to 3.5PN order taken from another inspiral-only model TaylorF2(Buonanno et al., 2009) and eccentricity corrections to phasing up to 3PN and \(\mathcal{O}(e^{2})\), making it valid for \(e\lesssim 0.1\). However, this model does not give a prescription for spinning BHs. We choose TaylorF2Ecc as our fiducial model as astrophysically we mostly do not expect higher eccentricities, as mentioned in the introduction.
The parameter space we consider for MBHBs spans the range of total redshifted masses \(M_{z}\) between \(10^{4.5}\)-\(10^{7.5}\) M\({}_{\sun}\), mass ratios1 \(q\in[1.2,8]\), and the initial eccentricity one year before the merger (\(e_{1yr}\)) between \(10^{-3.5}\)-\(0.1\). We have not considered the individual spins of the component BHs for this work. Unless otherwise stated, we always quote the values in the detector frame (L-frame). This leaves us with three intrinsic parameters2 (first three rows of Table 1) and six extrinsic parameters (last six rows of Table 1). We employ the cosmological parameters from the Planck Collaboration et al. (2020) survey to compute the luminosity distance from redshift: Hubble constant \(H_{0}=67.77\) km s\({}^{-1}\) Mpc\({}^{-1}\), matter density parameter \(\Omega_{\rm m}=0.30966\), and dark-energy density parameter \(\Omega_{\Lambda}=0.69034\).
Footnote 1: A q=1 system leads to Fisher initialization problems in Bayesian inference, hence we choose \(q=1.2\) as a representative value.
Footnote 2: While there are other eccentricity-related binary parameters, we only focus on the magnitude of eccentricity.
We generate eccentric waveforms \(\tilde{h}_{\rm ecc}(f)\) for our systems of interest using the TaylorF2Ecc template over these parameter grids to optimally cover the intrinsic parameter space:
\[M_{z}\in \{ 10^{4.5},10^{5},10^{5.5},10^{6},10^{6.5},10^{7},10^{7.5}\}\ {\rm M}_{\sun}, \tag{1}\] \[q\in \{ 1.2,2.0,4.0,8.0\},\] \[e_{1yr}\in \{ 10^{-3.5},10^{-3.25},10^{-3},10^{-2.75},10^{-2.5},10^{-2},\] \[10^{-1.5},0.1\}.\]
Additionally, we also generate quasicircular (\(e_{1yr}=0\)) waveforms (\(\tilde{h}_{\rm cir}\)). For extrinsic parameters, our fiducial values are \(z=1\), which corresponds to \(D_{\rm L}=6791.3\) Mpc, and the angles are all set to 0.5 radians. We choose these parameters such that MBHBs spend at least one year in the LISA band before coalescing.
In this work, we only work with Newtonian amplitudes and study binaries until they reach their innermost stable circular orbit (ISCO), i.e. at binary separation \(r_{\rm ISCO}\equiv 3r_{\rm s}\), where \(r_{\rm s}\equiv 2GM_{z}/c^{2}\) is the Schwarzschild radius of the total mass BH,3 at which point binaries are expected to circularize.4 We find the starting frequency (\(f_{1yr}\)) such that the system reaches the ISCO at frequency \(f_{\rm ISCO}\) in exactly one year by using the Peters' time-scale (Peters, 1964).
Footnote 3: \(G\) is the gravitational constant and \(c\) is the speed of light in vacuum.
Footnote 4: We perform circularization test of TaylorF2Ecc in Appendix A.
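For a circular binary, \(f_{\rm 1yr}\) can be estimated in closed form from the leading-order (Newtonian) time-to-coalescence relation, as in the sketch below; the actual computation uses the full Peters' time-scale, so this only serves as an order-of-magnitude check.

```python
import numpy as np

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30  # SI units
YEAR = 3.154e7  # seconds

def f_isco(m_total):
    """GW frequency (twice the orbital frequency) at r_ISCO = 6GM/c^2."""
    return c**3 / (6**1.5 * np.pi * G * m_total)

def f_start(m1, m2, tau=YEAR):
    """Frequency of a circular binary tau seconds before merger, from the
    leading-order relation tau = (5/256)(pi f)^(-8/3) (G Mc/c^3)^(-5/3)."""
    m_chirp = (m1 * m2)**0.6 / (m1 + m2)**0.2
    return (5.0 / (256.0 * tau))**0.375 * (G * m_chirp / c**3)**(-0.625) / np.pi

# e.g. an M_z = 1e5 M_sun, q = 8 binary (detector-frame masses)
m1, m2 = 8.0 / 9.0 * 1e5 * M_SUN, 1.0 / 9.0 * 1e5 * M_SUN
print(f_start(m1, m2), f_isco(m1 + m2))  # roughly 4e-4 Hz and 4e-2 Hz
```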
In Fig. 1, we show the characteristic strain \(h_{\rm c}(f)\equiv 2f\tilde{h}(f)\), a visual aid to represent how signal adds up in the detector, the LISA noise curve \(S_{n}(f)\) including a confusion noise due to galactic binaries taken from Marsat et al. (2021), and the accumulated phase for an \(M_{z}=10^{5}\) M\({}_{\sun}\), \(q=8.0\), and \(e_{1yr}=0.1\) system at \(z=1\) for TaylorF2Ecc and the quasicircular inspiral-merger-ringdown waveform model IMRPhenomD(Husa et al., 2016; Khan et al., 2016). Since the inspiral part of the IMRPhenomD comes from the TaylorF2 template, phasings are almost identical until the system is close to the ISCO. The initial phase difference is due to non-zero eccentricity in TaylorF2Ecc, and later deviations come from the merger part of the IMRPhenomD, which are beyond the scope of the inspiral-only model TaylorF2Ecc.
To account for LISA motion and to project the waveform into TDI channels, namely A, E, and T, we modify the lisabeta software (see Marsat et al., 2021 for details and subsequent notations) by including support for TaylorF2Ecc.
\begin{table}
\begin{tabular}{|l|c|} \hline
**Parameter** & **Units** \\ \hline \hline Total redshifted mass \(M_{z}\) & M\({}_{\sun}\) \\ \hline Mass ratio \(q\) & Dimensionless \\ \hline Eccentricity one year before coalescence \(e_{1yr}\) & Dimensionless \\ \hline Luminosity distance \(D_{\rm L}\) & Mpc \\ \hline Phase at coalescence \(\phi\) & Radian \\ \hline Inclination \(\imath\) & Radian \\ \hline Ecliptic latitude \(\lambda\) & Radian \\ \hline Ecliptic longitude \(\beta\) & Radian \\ \hline Initial polarization angle \(\psi\) & Radian \\ \hline \end{tabular}
\end{table}
Table 1: Source parameters in the L-frame.
Therefore, a waveform \(\tilde{h}(f)\) will have strain projections \(\tilde{h}_{\rm A,E,T}(f)\) and noise power spectral densities \(S_{n}^{\rm A,E,T}\) corresponding to the A, E, and T channels, respectively.
Now, we can write down the standard inner-product between two waveforms as
\[(a|b)=4\sum_{\rm A,E,T}{\rm Re}\int_{f_{1\rm yr}}^{f_{\rm ISCO}}{\rm d}f\frac{ \tilde{a}(f)\tilde{b}^{*}(f)}{S_{n}(f)}, \tag{2}\]
where the pre-factor 4 comes from the one-sided spectral noise density normalization. Hence, the SNR of the signal is \(\sqrt{(h|h)}\). In Fig. 2, we show the dependence of the SNR as a function of total mass, mass ratio, and redshift for our parameter space in Eq. (1). As expected, the SNR is higher for near-equal mass ratios than for unequal ones and decreases as the redshift increases. Furthermore, the SNR peaks at middle-range masses of \(\sim 10^{6}\) M\({}_{\sun}\), known as golden LISA sources.
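For reference, a minimal numpy sketch of Eq. (2) and the resulting SNR is given below, assuming the frequency-domain strains and the one-sided PSDs are already available on a common frequency grid:

```python
import numpy as np

def inner_product(a_f, b_f, psd, freqs):
    """Noise-weighted inner product (a|b) of Eq. (2) for one TDI channel;
    a_f, b_f are complex frequency-domain strains sampled on freqs, and
    psd is the one-sided noise power spectral density on the same grid."""
    integrand = np.real(a_f * np.conj(b_f)) / psd
    return 4.0 * np.trapz(integrand, freqs)

def snr(h_channels, psd_channels, freqs):
    """SNR = sqrt of the sum over the A, E, and T channels of (h|h)."""
    return np.sqrt(sum(inner_product(h, h, s, freqs)
                       for h, s in zip(h_channels, psd_channels)))
```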
In the following two sections, we find the minimum eccentricity and errors on its recovery in the LISA data stream for a given source. First, we approach this task analytically by using a match between waveforms and computing Fisher information matrices. We then perform Bayesian inference to numerically determine the posteriors.
## 3 Analytical Measurability of eccentricity
We first present a simple and commonly used estimate for the distinguishability of eccentric from quasicircular binaries in LISA using a match-based SNR criterion defined in Eq. (5). Furthermore, we employ the Fisher formalism to estimate how well-constrained eccentricity will be for these sources. These computations provide a theoretical benchmark that can be compared with Bayesian inference presented later.
### Optimal match
We compute matches between \(h_{\rm ecc}\) and \(h_{\rm cir}\) waveforms with the same \(M_{z}\) and \(q\), and find the minimum SNR (\(\rm SNR_{\rm min}\)) for which LISA can distinguish between these waveforms with more than 90 per cent confidence. To compute \(\rm SNR_{\rm min}\), we use the criterion from Baird et al. (2013):
\[{\rm SNR}_{\rm min}^{2}=\frac{1}{2}\frac{\chi_{k}^{2}(1-p)}{(1-{\cal M}(h_{\rm ecc},h_{\rm cir}))}, \tag{3}\]
where \(\chi_{k}^{2}(1-p)\) is the \(\chi^{2}\) probability distribution function, \(1-p\) is the significance level, \(k\) is the number of free binary parameters, and \({\cal M}(h_{\rm ecc},h_{\rm cir})\) is the match between \(h_{\rm cir}\) and \(h_{\rm ecc}\):
\[{\cal M}(h_{\rm cir},h_{\rm ecc})=\max_{\Delta\phi}\frac{(h_{\rm cir}|h_{\rm ecc})}{\sqrt{(h_{\rm cir}|h_{\rm cir})(h_{\rm ecc}|h_{\rm ecc})}}, \tag{4}\]
which is maximized over phase shifts \(\Delta\phi\). In our case, we have \(p=0.9\) and \(k=3\) as we vary only three binary parameters: \(M_{z}\), \(q\), and \(e_{\rm 1yr}\). This transforms Eq. (3) into
\[{\rm SNR}_{\rm min}^{2}=\frac{3.12}{(1-{\cal M}(h_{\rm cir},h_{\rm ecc}))}. \tag{5}\]
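A sketch of Eqs. (4)-(5) for a single channel is given below; maximizing over a constant phase shift \(\Delta\phi\) is equivalent to taking the modulus of the complex overlap (for multiple channels, the phase maximization should act on the summed complex overlaps):

```python
import numpy as np

def complex_overlap(a_f, b_f, psd, freqs):
    return 4.0 * np.trapz(a_f * np.conj(b_f) / psd, freqs)

def match(h_cir, h_ecc, psd, freqs):
    """Eq. (4): overlap maximized over a constant phase shift."""
    num = np.abs(complex_overlap(h_cir, h_ecc, psd, freqs))
    den = np.sqrt(np.real(complex_overlap(h_cir, h_cir, psd, freqs)) *
                  np.real(complex_overlap(h_ecc, h_ecc, psd, freqs)))
    return num / den

def snr_min(m):
    """Eq. (5): minimum SNR to distinguish eccentric from quasicircular
    waveforms at 90 per cent confidence with k = 3 free parameters."""
    return np.sqrt(3.12 / (1.0 - m))
```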
Figure 1: Characteristic strain \(h_{c}\) compared to the LISA noise (solid gray; in the top panel) and accumulated phase (in the bottom panel) for an \(M_{z}=10^{5}\) M\({}_{\sun}\), \(q=8.0\), and \(e_{\rm 1yr}=0.1\) binary at \(z=1\) for waveform templates \(\rm IMRPhenomD\) (dashed red) and \(\rm TaylorF2Ecc\) (dot-dashed blue) between \(f_{\rm 1yr}\) and \(f_{\rm ISCO}\). In the bottom panel we also enlarge the initial phase to show the difference between quasicircular and eccentric phasings.
Figure 2: SNR for our systems of interest for two limiting mass ratios of \(q=1.2\) (top panel) and \(q=8.0\) (bottom panel). In both panels, we vary \(M_{z}\) from \(10^{4.5}\) to \(10^{7.5}\) M\({}_{\sun}\) and \(z\) from 1 to 5, and set rest for the parameters to our fiducial values. These SNRs take into account LISA motion and are calculated by summing over three TDI channels A, E, and T. The low eccentricities we consider here do not affect the SNR significantly.
If the event's SNR is less than \(\mathrm{SNR_{min}}\) then one cannot differentiate between quasicircular and eccentric binaries, which in turn provides a constraint on the minimum detectable eccentricity (\(e_{\mathrm{min}}\)) assuming the rest of the binary parameters are known. In Fig. 3, we show \(\mathrm{SNR_{min}}\) for our systems of interest and compare it with the event's SNR at our fiducial \(z=1\) (\(\mathrm{SNR_{z=1}}\)). It illustrates that \(e_{\mathrm{min}}\sim 10^{-2.5}\) for lower-mass MBHBs (\(M_{z}\lesssim 10^{5.5}\) M\({}_{\odot}\)) and \(\sim 10^{-1.5}\) for higher-mass systems. The strong dependence on the total mass can be attributed to the fact that even though our considered binaries spend one year in the LISA band, \(f_{\mathrm{1yr}}\) for heavier systems is much lower than for the lighter binaries. This implies that the inspiral part of the signal, where eccentricity is dominant, will fall within the low sensitivity region of LISA, leading to systematically worse constraints for heavier systems. Moreover, the weak dependence on the mass ratio can be explained from our definition of \(e_{\mathrm{1yr}}\). Unsurprisingly, higher eccentricities are easily distinguishable from lower ones.
One can use the SNR estimates in Fig. 2 for any MBHBs at higher redshift in the LISA band and use Fig. 3 to assess whether eccentricity will be detectable for the given system since \(\mathrm{SNR_{min}}\) is computed in the L-frame. In the next section, we find the expected error bars on the recovery of injected eccentricity using the Fisher formalism.
### Fisher matrix
A standard parameter estimation technique in the LISA community is to compute a Fisher matrix (Vallisneri, 2008), which tells us how well we can constrain a certain parameter assuming a Gaussian noise and high SNR. We can define the Fisher matrix as
\[\Gamma_{\mathrm{ab}}=\left(\partial_{\mathrm{a}}h|\partial_{\mathrm{b}}h \right), \tag{6}\]
where \(\partial_{\mathrm{a}}h\equiv\partial h/\partial\theta_{\mathrm{a}}\) is the partial derivative of a waveform \(h\) with respect to a parameter \(\theta_{\mathrm{a}}\).
The inverse of the Fisher matrix is the variance-covariance matrix, whose diagonal elements are variances (\(\sigma^{2}\)) for each of the injected parameters. The square root of a variance provides the standard deviation (\(\sigma\)), which tells us the error estimate on a given parameter.
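In practice, Eq. (6) can be evaluated with central finite differences, e.g. as in the sketch below (reusing `inner_product` from the earlier sketch; the step sizes are illustrative and require convergence checks):

```python
import numpy as np

def fisher_matrix(waveform, theta0, steps, psd, freqs):
    """Gamma_ab = (d_a h | d_b h) via central finite differences, where
    waveform(theta) returns the frequency-domain strain on freqs."""
    n = len(theta0)
    derivs = []
    for a in range(n):
        up, dn = np.array(theta0, float), np.array(theta0, float)
        up[a] += steps[a]
        dn[a] -= steps[a]
        derivs.append((waveform(up) - waveform(dn)) / (2.0 * steps[a]))
    return np.array([[inner_product(derivs[a], derivs[b], psd, freqs)
                      for b in range(n)] for a in range(n)])

# 1-sigma errors are the square roots of the diagonal of the inverse:
# sigma = np.sqrt(np.diag(np.linalg.inv(gamma)))
```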
We again only vary intrinsic parameters: \(M_{z}\), \(q\), and \(e_{\mathrm{1yr}}\), and show in Fig. 4 the Fisher-based error estimate on eccentricity (\(\sigma_{e}^{\mathrm{Fisher}}\)) for our parameters of interest in Eq. (1). Errors mainly vary with total mass and less significantly with mass ratio, for the same reasons as explained for the match results in Section 3.1. Fig. 4 suggests that for lighter systems, higher eccentricities are constrained to errors (relative error \(\equiv 100\times\sigma_{e}^{\mathrm{Fisher}}/e\)) of \(\sim 10^{-4}\) (\(\sim 0.1\) per cent), whereas for lower \(e_{\mathrm{1yr}}\) we find \(\sigma_{e}^{\mathrm{Fisher}}\sim 10^{-3}\) (\(\sim 1000\) per cent). For heavier binaries, errors are \(\sim 10^{-3}\) (\(\sim 1\) per cent) for higher eccentricities and \(\sim 10^{-1}\) (\(\sim 10^{5}\) per cent) for lower \(e_{\mathrm{1yr}}\). This suggests that lower eccentricities are completely unconstrained.
One can always scale these errors (\(\sim 1/\mathrm{SNR}\)) to higher redshift by using the SNR values in Fig. 2. The error estimates of the parameters from the Fisher matrix procedure are simplistic, but quite useful for understanding the best achievable precision (the Cramer-Rao bound). In the next section, we perform Bayesian inference to find error estimates on eccentricity recovery and the minimum measurable eccentricity.
## 4 Measurability of eccentricity using Bayesian inference
The main goal of Bayesian inference is to construct posterior distributions \(p(\theta|d)\) for the parameter space \(\theta\) to fit the observed data \(d\) (see, e.g. Thrane and Talbot 2019). \(p(\theta|d)\) represents the probability distribution function of \(\theta\) given the data \(d\) and it is normalized such that \(\int\mathrm{d}\theta\ p(\theta|d)=1\).
Figure 4: For the same binary parameters as in Fig. 3, error estimates by Fisher formalism (\(\sigma_{e}^{\mathrm{Fisher}}\)) on eccentricities of our considered binaries.
Figure 3: \(\mathrm{SNR_{min}}\) required to distinguish between quasicircular and eccentric waveforms for our parameter space. In the top panel, we fix \(q=1.2\), and vary \(M_{z}\) from \(10^{4.5}\) to \(10^{7.5}\) M\({}_{\odot}\) and \(e_{\mathrm{1yr}}\) from \(10^{-3.5}\) to \(10^{-1}\). In the bottom panel, we keep the mass ratio \(q=8.0\) constant, and vary \(M_{z}\) and \(e_{\mathrm{1yr}}\) as in the top panel. Both panels have a blue line showing the boundary of the LISA non-detectability region at \(z=1\) (\(\mathrm{SNR_{z=1}}<\mathrm{SNR_{min}}\)).
compute the posterior, we use Bayes theorem,
\[p(\theta|d)=\frac{\mathcal{L}(d|\theta)\pi(\theta)}{Z}, \tag{7}\]
where \(\mathcal{L}(d|\theta)\) is the likelihood function of the data \(d\) given the parameters \(\theta\), \(\pi(\theta)\) is the prior on \(\theta\), and \(Z\equiv\int\mathrm{d}\theta\mathcal{L}(d|\theta)\pi(\theta)\) is the evidence. Since we are not selecting between different models, we can treat \(Z\) as a normalization constant. Also, we only consider uniform (flat) priors for all parameters.
For our stationary Gaussian noise \(S_{n}^{A,E,T}\), we can write down the log-likelihood with a zero-noise realization summed over A, E, and T channels as (Marsat et al., 2021):
\[\ln\mathcal{L}\propto-\sum_{A,E,T}\left(h-h_{\mathrm{inj}}|h-h_{\mathrm{inj}}\right), \tag{8}\]
where \(h\) is the template signal, and \(h_{\mathrm{inj}}\) is the simulated injected signal. The zero-noise realization accelerates the likelihood computation, improves upon the Fisher results by providing the shape of the posteriors, and helps us understand parameter degeneracies and the detectability of certain effects (here eccentricity).
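Schematically, and reusing the noise-weighted inner product from the Fisher sketch above, a zero-noise log-likelihood of this form could be coded as follows; the per-channel PSD callables are assumed inputs rather than quantities defined in this paper.

```python
def log_likelihood(h_templates, h_injected, f, psds):
    # ln L up to an additive constant, summed over the A, E, T channels.
    # h_templates / h_injected: dicts of channel name -> complex strain array;
    # psds: dict of channel name -> callable one-sided PSD.
    lnL = 0.0
    for ch in ("A", "E", "T"):
        r = h_templates[ch] - h_injected[ch]
        lnL += -0.5 * inner(r, r, f, psds[ch])
    return lnL
```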
For sampling, we use the parallel tempering Markov-chain Monte Carlo (MCMC) code ptmcmc.5 To further speed up the sampling, we draw the initial samples from a multivariate Gaussian with the mean given by the injected parameters and standard deviations provided by the Fisher formalism in Section 3.2.
Footnote 5: [https://github.com/JohnGBaker/ptmcmc](https://github.com/JohnGBaker/ptmcmc)
We primarily sample only the intrinsic parameters and set a high-frequency cutoff for the data at \(f_{\mathrm{ISCO}}\) of the injected binary.6 We show the posteriors for \(M_{z}\), \(q\), and \(e_{\mathrm{1yr}}\) in Fig. 5 for injected binary parameters \(10^{5}\) M\({}_{\sun}\), \(8.0\), and \(0.01\). All parameters are well recovered, with the injected values being extremely close to the median of their respective posterior. Moreover, Bayesian errors are similar to the errors provided by the Fisher formalism, as expected due to the high SNR and a zero-noise realization.
Footnote 6: Using an earlier cutoff than the ISCO does not significantly affect the posteriors, as shown in Appendix B.
We also study the effect of including extrinsic parameters7 (also given in Table 1) on the measurability of the eccentricity in Fig. 6. Here, we compare the posteriors of \(e_{\mathrm{1yr}}\) in Eq. (1) for fixed \(M_{z}=10^{5}\) M\({}_{\sun}\) and \(q=8.0\) between the case of sampling only the intrinsic parameters and the case of sampling over all parameters in Table 1. Adding extrinsic parameters results in a slight broadening of the eccentricity posteriors and a small shift of the peak. This is anticipated due to the increase in the number of degrees of freedom, which do not contribute to the measurement of eccentricity.8 Unsurprisingly, the higher the eccentricity, the better the recovery of the injected value, i.e. the injected value is extremely close to the peak of the posterior.
Footnote 7: We show the full posteriors in Fig. 12.
Footnote 8: Eccentricity is not expected to be correlated with the extrinsic parameters.
To measure how well injected eccentricities are recovered in our Bayesian inference, we introduce a Bayesian relative error metric in terms of the injected eccentricity \(e_{\mathrm{inj}}\) and the standard deviation of the corresponding eccentricity posterior
Figure 5: Posterior distributions (solid black) for an injected binary with \(M_{z}=10^{5}\) M\({}_{\sun}\), \(q=8.0\), and \(e_{\mathrm{1yr}}=0.01\). The extrinsic parameters are fixed to our fiducial values. The two extreme vertical dashed lines bound the 90 per cent credible interval, whereas the middle dashed line represents the median of the distribution. The blue lines mark the injected values, whereas the contours in two-dimensional posteriors indicate 68, 95, and 99 per cent credible intervals. We also indicate the Fisher results (dashed red) for comparison.
Figure 6: Posterior distributions for the eccentricity corresponding to each injected \(e_{\mathrm{1yr}}\) for binaries with fixed \(M_{z}=10^{5}\) M\({}_{\sun}\) and \(q=8.0\). The posteriors are constrained to the 90 per cent credible interval and are shown in blue (left) if we only sample the intrinsic parameters and in orange (right) if we vary all parameters. The injected values are marked with a red cross.
\(\sigma_{e}^{\rm MCMC}\):
\[\sigma_{e,\rm rel}^{\rm MCMC}\left[\%\right]=100\times\frac{\sigma_{e}^{\rm MCMC }}{e_{\rm inj}}. \tag{9}\]
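Given an array of posterior draws for the eccentricity, Eq. (9) reduces to a one-line computation; in this trivial sketch, `e_samples` is an assumed array of MCMC samples.

```python
import numpy as np

def relative_error_percent(e_samples, e_inj):
    # Bayesian relative error metric of Eq. (9), in per cent.
    return 100.0 * np.std(e_samples) / e_inj
```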
To survey the parameter space widely with Bayesian inference, we have conducted a total of \(7\times 4\times 8\) runs by sampling over only intrinsic parameters for seven values of the total mass, four values of the mass ratio, and eight values of the eccentricity given in Eq. (1). We present only the runs for the intrinsic parameters here, as we have shown that including extrinsic parameters does not affect the results significantly.
We present the findings of our Bayesian inference in terms of \(\sigma_{e,\rm rel}^{\rm MCMC}[\%]\) in Fig. 7. Systems with \(e_{\rm 1yr}\gtrsim 10^{-1.5}\) will mostly lead to the measurement of eccentricity to a relative error of less than 1 per cent for lower-mass MBHBs and \(\lesssim 10\) per cent for higher-mass binaries, independent of \(q\). The lowest value of eccentricity (\(e_{\rm min}\)) that LISA can measure with a less than 50 per cent relative error is \(10^{-2.75}\) for \(M_{z}=10^{4.5}\) M\({}_{\sun}\).
We set a 50 per cent Bayesian relative error as a fiducial threshold for the measurement of eccentricity. We summarize the results of all our MCMC runs in terms of the minimum measurable eccentricity (\(e_{\rm min}\)) by LISA as a function of total mass and mass ratio in Fig. 8. The results are mostly independent of mass ratio, although we witness some slight change for higher mass ratios (\(q=8\)). \(e_{\rm min}\) for heavier systems is around \(10^{-1.5}\), whereas for lighter MBHBs the eccentricity can be measured down to \(\sim 10^{-2.75}\). The measurement of eccentricity in this regime can have far-reaching astrophysical consequences, which we present in the discussion.
## 5 Discussion
The current detectability analysis of GWs from MBHBs mostly assumes negligible eccentricity (\(\lesssim 10^{-4}\)) once the binaries enter the LISA frequency band. However, we know that environmental interaction is necessary for binaries to reach the near-coalescence phase within a Hubble time. Therefore, it is important to consider if even residual eccentricities are measurable, which can be a tracer of the binary's environment. In this work, we remain agnostic about the driver of the binary's eccentricity. Instead, we have determined the minimum measurable eccentricity for a range of binary parameters. These limits can be compared with theoretical models of binary evolution in order to determine which binary formation scenarios lead to measurable eccentric signatures in the GW waveform. For example, we can compare our results with eccentricities predicted by binary evolution in circumbinary discs (Zrake et al., 2021; D'Orazio and Duffell, 2021; Siwek et al., 2023), which predict \(e_{1yr}\sim 10^{-3}\) for \(\sim 10^{3}\)-\(10^{5}\) M\({}_{\sun}\) systems at \(z=1\). Based on our results in Fig. 8, \(e\sim 10^{-3}\) will indeed be detectable9 for binaries within the mass range \(\sim 10^{4.5}\)-\(10^{5.5}\) M\({}_{\sun}\) at \(z=1\). Considering that the eccentricity evolution will depend on the accretion disc properties (D'Orazio and Duffell, 2021), precise detection of eccentricity in GWs can help constrain the source's environmental properties. The interaction with stars can also excite non-negligible eccentricities in the LISA band. Gualandris et al. (2022) suggest \(e_{1yr}\sim 10^{-4}\)-\(10^{-3}\) for a \(4\times 10^{6}\) M\({}_{\sun}\) MBHB, a range of eccentricities not detectable for such a massive system as per Fig. 8. However, lower-mass binaries have not yet been explored in these models. It is possible that a better waveform model which includes more physics concerning eccentricity, such as the advance of periastron (Tiwari et al., 2019), could improve eccentricity measurements, but we leave this to future work. Overall, measuring specific eccentricities predicted by various environments may help to distinguish between them.
Footnote 9: See Fig. C1 for \(e_{\rm 1yr}=10^{-2.75}\approx 2\times 10^{-3}\) posteriors.
In addition to measuring orbital properties of binaries in GWs, informative measurements of environmental deviations in GW waveforms are also possible for certain systems. Suppose the influence of scattered stars, surrounding gas, or a nearby third body causes alterations in the orbital evolution (compared to the same binary in vacuum). In that case, this interaction leads to a dephasing of the detected GW signal (e.g. Garg et al., 2022; Zwick et al., 2023) and can excite
Figure 8: Minimum measurable eccentricities as a function of binary mass and mass ratio based on whether \(\sigma_{e,\rm rel}^{\rm MCMC}[\%]<50\) in Eq. (9).
Figure 7: For the same binary parameters as in Fig. 3, relative error percentage (\(\sigma_{e,\rm rel}^{\rm MCMC}[\%]\)) on the recovery of eccentricity in our Bayesian inference. A dashed red line separates the region with relative error larger than 10 per cent, and a solid blue line separates the region with \(\sigma_{e,\rm rel}^{\rm MCMC}[\%]>50\). Relative errors above 100 per cent are saturated to keep the figure informative.
harmonics at higher frequencies (Zwick et al., 2022). For a complete characterisation of the binary properties in astrophysical environments, it will be necessary to consider how these deviations correlate with binary parameters. Assuming one has a robust knowledge of the range of predicted residual eccentricities in different scenarios for the background (e.g. a gaseous environment versus stellar encounters) and, simultaneously, of the expected waveform modulation due to various interactions, it becomes possible to cross-correlate these parameters to enhance the determination of environmental effects. We plan to quantify the feasibility of these measurements in future work.
LISA and other space-based mHz GW detectors will be able to observe the coalescence of MBHBs in the mass range \(10^{4}\)-\(10^{8}\) M\({}_{\sun}\) across the whole sky. We expect to detect at least a few events per year, with the event rate dominated by lower-mass MBH mergers at \(z\lesssim 2\) (Amaro-Seoane et al., 2023). However, current predictions by both post-processing of cosmological simulations and semi-analytical models vary by orders of magnitude, as they depend on intricate details of MBH seeding mechanisms and evolution in their host galaxies (e.g. Tremmel et al., 2018; Ricarte and Natarajan, 2018; Volonteri et al., 2020; Valiante et al., 2021; Barausse et al., 2020). While the literature is still evolving on the expected residual eccentricity at LISA entry from different environments, being able to measure the eccentricity might add important information to place further constraints on astrophysical scenarios for binary evolution. Furthermore, irrespective of that, we need to be able to extract all the potential information from the waveform if we are going to use these signals for fundamental physics tests, such as excluding alternatives to general relativity (there can be various hidden degeneracies we do not know of at the moment).
The work presented here is not devoid of certain systematics that are present in the GW waveform model that is employed. As mentioned in Section 2, the GW model TaylorF2Ecc we use only provides eccentric phase corrections up to 3PN and at \(\mathcal{O}(e^{2})\), which makes it reasonable to use in the low-eccentricity regime but can still induce some inaccuracies. The higher-order eccentric corrections - up to \(\mathcal{O}(e^{6})\) - are known (Tiwari et al., 2019) but are cumbersome to implement within the full Bayesian inference infrastructure, and a comparison of results at leading order in eccentricity with the higher-order eccentric corrections is left for future work. Additionally, TaylorF2Ecc does not include the component spin effects, which can have positive and negative consequences for the measurability of eccentricity. However, we would like to point out that LISA will measure spin effects very well near the late inspiral-merger phase of the MBH binary's evolution, where the system will be quasicircular for the eccentricities considered here, so any possible degeneracies between spins and eccentricity will be broken. To summarize, for low values of eccentricities, one can ignore the above-mentioned GW modelling issues without drastically changing the final results.
In this work, we only consider eccentricity corrections to the phase and not to the amplitude. The eccentricity enters at \(\mathcal{O}(e^{2})\) in the phase without an \(\mathcal{O}(e)\) term, which could be more important at low eccentricities. Amplitude corrections from higher harmonics induced by eccentricity can include \(\mathcal{O}(e)\) terms. Therefore, it needs to be explored how much the inclusion of amplitude corrections due to eccentricity would improve the eccentricity measurement. Lower-mass MBHBs have a large number of GW cycles in the LISA band, which magnifies the \(\mathcal{O}(e^{2})\) terms in the cumulative phase, thereby leading to possibly better measurement of eccentricity from the phase than from the amplitude for lighter binaries. Furthermore, Moore et al. (2016) state that for the small eccentricities we consider here, eccentricity corrections to the phase are more important than those to the amplitude.
## 6 Conclusion
In this work, we study LISA-detectable GWs from eccentric MBHBs in vacuum to find the minimum measurable eccentricity (\(e_{\rm min}\)) that can be inferred from the GW waveform. We consider systems that spend at least a year before merging in the LISA frequency band at \(z=1\) with total redshifted mass \(M_{z}\) in the range \(10^{4.5}\)-\(10^{7.5}\) M\({}_{\sun}\), primary-to-secondary mass ratio \(q\) between 1.2 and 8, and initial eccentricity \(e_{\rm 1yr}\) from \(10^{-3.5}\) to \(10^{-1}\). These MBHBs have SNR \(\sim 100\)-\(2500\) (see Fig. 2), allowing us to infer their binary parameters with high accuracy. To robustly estimate \(e_{\rm min}\), we use the inspiral-only post-Newtonian eccentric waveform template TaylorF2Ecc, and consider LISA's motion in its orbit around the Sun as well as time delay interferometry to suppress the laser noise by employing the lisabeta software. We approach this analytically via computing matches and Fisher matrices, and numerically via Bayesian inference to find \(e_{\rm min}\) for optimally chosen parameter grids in Eq. (1) to cover our systems of interest. We itemize our findings below.
* Varying \(M_{z}\), \(q\), and \(e_{\rm 1yr}\), we find that all approaches suggest that \(e_{\rm min}\) mainly depends upon \(M_{z}\) and weakly upon \(q\) (see Figs 3, 4, and 7).
* The optimal match-based SNR criterion, which distinguishes eccentric and quasicircular waveforms with more than 90 per cent confidence, suggests that \(e_{\rm min}\) is \(\sim 10^{-2.5}\) for lower-mass MBHBs (\(M_{z}\lesssim 10^{5.5}\) M\({}_{\sun}\)) and \(\sim 10^{-1.5}\) for higher-mass systems (see Section 3.1 and Fig. 3).
* Relative errors on the recovery of eccentricity provided by the Fisher formalism for lighter systems are \(\sim 0.1\) per cent for high eccentricities and \(\sim 1000\) per cent for low \(e_{\rm 1yr}\). For heavier MBHBs, relative errors are \(\sim 1\) per cent for higher eccentricities and \(\sim 10^{5}\) per cent for lower \(e_{\rm 1yr}\) (see Section 3.2 and Fig. 4).
* Bayesian inference can constrain \(e_{\rm 1yr}\sim 10^{-1.5}\) to less than 10 per cent relative error for most MBHBs.
* Sampling the extrinsic parameters in Table 1 as well does not affect the eccentricity posterior significantly (see Figs 6 and 12).
* Assuming a Bayesian relative error of less than 50 per cent as a threshold for \(e_{\rm min}\), we find that the minimum measurable eccentricity is \(e_{\rm min}=10^{-2.75}\) for \(10^{4.5}\) M\({}_{\sun}\) MBHBs, independent of the mass ratio (Fig. 8).
## Data availability statement
The data underlying this article will be shared on reasonable request to the authors.
## Acknowledgements
AD, MG, and LM acknowledge support from the Swiss National Science Foundation (SNSF) under the grant 200020_192092. ST is supported by the SNSF Ambizione Grant Number: PZ00P2-202204. We acknowledge Stanislav Babak, Pedro R. Capelo, and Jonathan Gair for insightful discussions. The authors also acknowledge use of the Mathematica software (Wolfram Research Inc., 2021), NumPy (Harris et al., 2020), and inspiration drawn from the GWFAST package (Iacovelli et al., 2022) regarding the Python implementation of TaylorF2Ecc.
|
2301.05853 | Quantum diamond microscopy with optimized magnetic field sensitivity and
sub-ms temporal resolution | Quantum diamond magnetometers using lock-in detection have successfully
detected weak bio-magnetic fields from neurons, a live mammalian muscle, and a
live mouse heart. This opens up the possibility of quantum diamond
magnetometers visualizing microscopic distributions of the bio-magnetic fields.
Here, we demonstrate a lock-in-based wide-field quantum diamond microscopy,
achieving a mean volume-normalized per pixel sensitivity of 43.9 $\mathrm{nT\mu
m^{1.5}/Hz^{0.5}}$. We optimize the sensitivity by implementing a double
resonance with hyperfine driving and magnetic field alignment along the
$<$001$>$ orientation of the diamond. Additionally, we show that sub-ms
temporal resolution ($\sim$ 0.4 ms) can be achieved while keeping the per-pixel
sensitivity at a few tens of nanotesla per second using quantum diamond
microscopy. This lock-in-based diamond quantum microscopy could be a step
forward in mapping functional activity in neuronal networks in micrometer
spatial resolution. | Sangwon Oh, Seong-Joo Lee, Jeong Hyun Shim, Nam Woong Song, Truong Thi Hien | 2023-01-14T08:14:38Z | http://arxiv.org/abs/2301.05853v3 | # Quantum diamond microscopy with sub-ms temporal resolution
###### Abstract
Quantum diamond magnetometers using lock-in detection have successfully detected weak bio-magnetic fields from neurons, a live mammalian muscle, and a live mouse heart. This opens up the possibility of quantum diamond magnetometers visualizing microscopic distributions of the bio-magnetic fields. Here, we demonstrate a lock-in-based wide-field quantum diamond microscopy, achieving a mean volume-normalized per pixel sensitivity of 43.9 nT \(\cdot\mu\)m\({}^{1.5}\)/Hz\({}^{0.5}\). We obtain the sensitivity by implementing a double resonance with hyperfine driving and magnetic field alignment along the \(<\)001\(>\) orientation of the diamond. Additionally, we have demonstrated that sub-ms temporal resolution (\(\sim\) 0.4 ms) can be achieved at a micrometer scale with tens of nanotesla per-pixel sensitivity using quantum diamond microscopy. This lock-in-based diamond quantum microscopy could be a step forward in mapping functional activity in neuronal networks in micrometer spatial resolution.
## I Introduction
A nitrogen-vacancy (NV) center in diamond is an atomic defect whose magnetic field sensitivity is in the range of several microtesla at nanometer spatial resolution at room temperature [1, 2]. The magnetic field sensitivity even reaches below 1 pT/Hz\({}^{0.5}\) by adopting NV ensembles at the cost of spatial resolution [3, 4, 5, 6]. Various techniques such as high-density NV centers [3], improved readout fidelity [4, 7, 8], and flux concentrators [5, 6] have been implemented to improve sensitivity. These techniques have contributed to the detection of bio-magnetic fields from neurons, mammalian muscles, and a heart using NV ensembles [3, 9, 10]. However, these measurements exhibit millimeter-scale spatial resolutions, which may limit the visualization of functional activity in neural networks [11, 12, 13]. A widefield NV microscope based on the frequency shift of optically detected magnetic resonance (ODMR) can be an alternative method for detecting magnetic field distributions at a micrometer-scale spatial resolution [14, 15, 16, 17]. To date, the current density distribution in graphene, and the magnetic fields arising from an integrated circuit, a 2-D magnet, geological samples, and living cells, have been imaged at sub- or several-micrometer spatial resolutions [18, 19, 20, 21, 22]. However, widefield NV microscopes based on the ODMR frequency shift may have difficulties in detecting several tens of nanotesla, which corresponds to several hundreds of Hz in the frequency shift. This can be addressed using lock-in-based NV magnetometry [23].
In this study, we adopt a lock-in camera to improve the magnetic field sensitivity in a wide-field NV microscope [24, 25, 26, 27]. For a simple yet reliable operation, we continuously excite the NV diamond using a 532 nm laser and a microwave. Multiple hyperfine transitions are simultaneously excited for a higher ODMR contrast, and double resonance is implemented to suppress the influences of temperature drift and strain [3, 5, 28, 29, 30, 25]. Additionally, all four NV axes are exploited by aligning an external magnetic field along the \(<\)001\(>\) direction of the NV diamond. Sub-millisecond temporal resolution, 0.4 ms, is demonstrated by detecting a magnetic field from a short-pulsed current at tens of nanotesla per-pixel sensitivity.
Figure 1: ODMR spectrum and decoherence times of the over-grown diamond layer. The ODMR spectrum (a) and the decoherence times T\({}_{2}^{*}\) (b) and T\({}_{2}\) (c) support the high quality of the crystal.
## II Experimental
A thin nitrogen-doped ([\({}^{14}\)N] \(\sim\) 10 ppm) diamond layer (\({}^{12}\)C \(>\) 99.99%, 40 \(\mu\)m thick) is grown by chemical vapor deposition (CVD) on top of an electronic grade diamond plate by Applied Diamond Inc.. The dimensions of the diamond plate are approximately \(2\times 2\times\) 0.54 mm\({}^{3}\). The diamond is electron irradiated (1 MeV, 5 \(\times\) 10\({}^{8}\)/cm\({}^{2}\)), and annealed in a vacuum at 800\({}^{\circ}\)C for 4 hours and 1000 \({}^{\circ}\)C for 2 hours, sequentially. An ODMR spectrum of the crystal is shown in Fig. 1(a). The NV decoherence times, T\({}_{2}^{*}\) and T\({}_{2}\), are found to be 1.6 \(\mu\)s and 19.3 \(\mu\)s at B \(\approx\) 3 mT, respectively. T\({}_{2}^{*}\) is found by fitting the data to \(C_{0}\exp[-(\tau/T_{2}^{*})]\sum_{i}\cos[\omega_{i}\tau]\), where \(C_{0}\) is the maximal ODMR contrast and the \(\omega_{i}\) are the frequencies due to the NV hyperfine splittings, Fig. 1(b). Similarly, T\({}_{2}\) is obtained by fitting the data to \(C_{0}\exp[-(2\tau/T_{2})^{p}]\), where \(p\) is the stretched-exponential parameter, Fig. 1(c) [31; 32].
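These two fits are standard least-squares problems; a minimal sketch is given below, where the data arrays and initial guesses are placeholders rather than our measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

HF = 2.0 * np.pi * 2.16  # 14N hyperfine splitting in rad/us (2.16 MHz)

def ramsey(tau, C0, T2s, delta):
    # C0 exp[-(tau/T2*)] sum_i cos(omega_i tau), with the three hyperfine
    # frequencies omega_i = delta + i*HF for i = -1, 0, +1 (tau in us).
    osc = sum(np.cos((delta + i * HF) * tau) for i in (-1, 0, 1))
    return C0 * np.exp(-tau / T2s) * osc

def echo(ttot, C0, T2, p):
    # Stretched exponential C0 exp[-(2 tau / T2)^p], with ttot = 2*tau.
    return C0 * np.exp(-((ttot / T2) ** p))

# Assuming measured arrays (tau_us, ramsey_sig) and (ttot_us, echo_sig):
# C0, T2star, delta = curve_fit(ramsey, tau_us, ramsey_sig,
#                               p0=(0.02, 1.6, 2 * np.pi * 3.0))[0]
# C0, T2, p = curve_fit(echo, ttot_us, echo_sig, p0=(0.02, 19.0, 1.0))[0]
```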
A schematic of the experimental setup is shown in Fig. 2. An omega-shaped coil is placed on the overgrown diamond layer and a 532 nm laser (Opus 3W, LaserQuantum) illuminates the diamond. The incident light power on the diamond was approximately 200 mW. An objective lens (MPLFN50x, Olympus) is used to collect the red fluorescence from the diamond. Long pass (BLP01-633R, Semrock) and dichroic (LM01-552, Semrock) filters are placed before the lock-in camera (heliCam C3, Heliotis) to separate the 532-nm pump laser. We use two microwave generators (SG394, SRS) and a single RF source (2.16 MHz, WX1282C, Tabor Elec.) for the double resonance with hyperfine driving. Each SG394 output is mixed (ZX05-43MH+, mini-circuits) with the WX1282C, and the outputs are then combined (ZX10-2-42S+, mini-circuits) for the double resonance with hyperfine driving. The mixed and combined signals are sent to an amplifier (ZHL-16W43-S+, mini-circuits). A switch (ZASWA-2-50DRA+, mini-circuits) is placed before the amplifier to control the delivery of the microwave to the omega-shaped coil. A TTL pulse generator (PB24-100-4k-PCI, SpinCore) is used to control the switch. A frequency modulation (square wave) is selected for the lock-in detection. The modulation depth is 300 kHz and the modulation frequencies (f\({}_{mod}\)) are 2.5 or 10 kHz, depending on the temporal resolution [33]. The phases (\(\phi_{1},\phi_{2}\)) of the frequency modulation are controlled using an arbitrary waveform generator (AWG, 33522B, Keysight). For magnetic field and temperature detections, their phase differences are maintained at 180\({}^{\circ}\) and 0\({}^{\circ}\), respectively. The phases used for imaging magnetic fields are shown in Fig. 2. An external trigger for the camera (Cam\({}_{trig}\)) comes from the TTL pulse generator and is synchronized to the phases. The trigger internally initiates an integration of the fluorescence signals during four periods, (I\({}^{+}\), Q\({}^{+}\), I\({}^{-}\), and Q\({}^{-}\)). The in-phase (I = I\({}^{+}\) - I\({}^{-}\)) and quadrature (Q = Q\({}^{+}\) - Q\({}^{-}\)) images are automatically calculated by the camera. A single cycle is composed of four periods, and a single frame is a repetition of a single cycle N\({}_{cyc}\) times.
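The per-pixel arithmetic performed on-camera can be summarized by the following sketch; the array layout is our own illustrative convention, not the camera's API.

```python
import numpy as np

def demodulate_frame(bins):
    # bins: array of shape (N_cyc, 4, H, W) holding the four integration
    # periods (I+, Q+, I-, Q-) for each cycle and each pixel.
    Ip, Qp, Im, Qm = bins[:, 0], bins[:, 1], bins[:, 2], bins[:, 3]
    I = (Ip - Im).sum(axis=0)  # in-phase frame, accumulated over N_cyc cycles
    Q = (Qp - Qm).sum(axis=0)  # quadrature frame
    return I, Q
```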
Figure 2: (a) Schematic of the widefield microscope and protocol for lock-in camera detection. Frequency-modulated (square wave) microwave (MW) is delivered into an omega-shaped coil for lock-in detection. Two MW sources (MW1 and MW2) and one RF (2.16 MHz) generator are mixed and combined for double resonance and hyperfine driving. A continuous-wave (CW) double resonance lock-in protocol is described in the inset. Phases (\(\Phi_{1}\) and \(\Phi_{2}\)) of the frequency modulation are synchronized to a camera trigger for signal integration (I\({}^{+}\), Q\({}^{+}\), I\({}^{-}\), and Q\({}^{-}\)). For magnetic field/temperature detections, \(|\Phi_{1}-\Phi_{2}|\) is kept at \(\pi/0\), respectively. The trigger, Cam\({}_{\rm trig}\), starts the acquisition of the fluorescence, and a single frame of in-phase (I = I\({}^{+}\) - I\({}^{-}\)) and quadrature (Q = Q\({}^{+}\) - Q\({}^{-}\)) images is obtained after repeating \(N_{cyc}\) times.
Figure 3: ODMR spectra for the two alignment cases. (a) ODMR spectra with the bias field aligned along the \(<\)111\(>\) and \(<\)001\(>\) crystal axes; (b) ODMR of the \(<\)001\(>\)-aligned case (SR) compared to the case of hyperfine driving (SR + HF). The contrast of the SR + HF case is enhanced 2.4 times by the HF driving. The contrast is further increased by adopting double resonance (DR + HF) with the same phase.
The hyperfine (HF) interaction between the NV electron spin and the \({}^{14}\)N nuclear spin results in a reduced ODMR contrast, which is detrimental to the magnetic field sensitivity. A single frequency-sweeping ODMR spectrum with HF interaction can be expressed as \(1-\sum_{p=-1}^{1}\frac{C\,\delta\nu^{2}}{\delta\nu^{2}+4(\omega-(\omega_{0}+p\,HF))^{2}}\), where \(C,\delta\nu,\omega,\omega_{0}\), HF represent the contrast, ODMR linewidth, applied MW frequency, resonant MW frequency, and hyperfine splitting (2.16 MHz in our case), respectively. The central contrast can be enhanced by up to three times if three equally spaced MW frequencies are swept simultaneously, \(1-\sum_{p,q=-1}^{1}\frac{C\,\delta\nu^{2}}{\delta\nu^{2}+4((\omega+q\,HF)-(\omega_{0}+p\,HF))^{2}}\) [3]. In practice, the enhancement is approximately two because of power-broadened hyperfine features.
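The expected enhancement follows from evaluating the two lineshapes on resonance. The short sketch below does this with an illustrative linewidth and contrast (assumed values, not fitted numbers).

```python
import numpy as np

C, dnu, HF = 0.01, 0.4, 2.16  # contrast, linewidth (MHz), hyperfine splitting (MHz)

def single_drive(omega, omega0=0.0):
    return 1 - sum(C * dnu**2 / (dnu**2 + 4 * (omega - (omega0 + p * HF))**2)
                   for p in (-1, 0, 1))

def triple_drive(omega, omega0=0.0):
    return 1 - sum(C * dnu**2 / (dnu**2 + 4 * ((omega + q * HF) - (omega0 + p * HF))**2)
                   for p in (-1, 0, 1) for q in (-1, 0, 1))

dip1 = 1 - single_drive(0.0)  # on-resonance dip, single-tone sweep
dip3 = 1 - triple_drive(0.0)  # on-resonance dip, three-tone sweep
print(dip3 / dip1)            # ~3 for dnu << HF; power broadening lowers this to ~2
```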
The negatively charged NV\({}^{-}\) has an electronic spin triplet (S = 1) state with a temperature-dependent zero-field splitting, D \(\sim\) 2.87 GHz at room temperature, between the \(|m_{s}=0\rangle\) and degenerate \(|m_{s}=\pm 1\rangle\) states [34]. The degenerate states are split by the Zeeman effect when an external magnetic field is applied. The Hamiltonian for the NV\({}^{-}\) in a magnetic field (\(|B|>1\) mT) can be approximated as [25; 35]
\[\frac{H}{h}=D(T)S_{z}^{2}+\frac{\gamma}{2\pi}B_{NV}S_{z}, \tag{1}\]
where \(z\), \(S_{z}\), and \(B_{NV}\) denote the NV symmetry axis, the dimensionless spin-1 operator, and the external magnetic field projected along the NV symmetry axis, respectively, and \(\gamma/2\pi\) is the gyromagnetic ratio (28 GHz/T). The hyperfine interaction and spin-stress coupling terms are ignored. The resonance frequencies can be expressed as \(f_{1}=D(t)-\gamma B_{NV}(t)\) for \(|m_{s}=0\rangle{\leftrightarrow}|m_{s}=-1\rangle\) and \(f_{2}=D(t)+\gamma B_{NV}(t)\) for \(|m_{s}=0\rangle{\leftrightarrow}|m_{s}=+1\rangle\), where \(D(t)=D_{0}+\Delta D(t)\) and \(B_{NV}(t)=B_{NV0}+\Delta B_{NV}(t)\).
A double resonance (DR) simultaneously drives the resonance frequencies (\(f_{1}\), \(f_{2}\)) using two microwave generators (MW1 and MW2) with the modulation frequency (\(f_{mod}\)) and phases (\(\phi_{1}\), \(\phi_{2}\)). The output signals of the lock-in amplifier at \(f_{1}\) and \(f_{2}\) are \(S_{1}(t)=\alpha[\Delta D(t)-\gamma\Delta B_{NV}(t)]\) and \(S_{2}(t)=\alpha[\Delta D(t)+\gamma\Delta B_{NV}(t)]\), respectively, where \(\alpha\) is the slope of the lock-in amplifier. If we apply DR with the same phase, i.e. \(\phi_{1}=\phi_{2}\), then the lock-in signal (\(S_{LIA}\)) is only sensitive to the temperature, \(S_{LIA}=2\alpha\Delta D(t)\). Alternatively, it depends only on the magnetic field (\(\Delta B_{NV}\)) if \(|\phi_{1}-\phi_{2}|=\pi\), \(S_{LIA}=2\alpha\gamma\Delta B_{NV}(t)\) [5; 30; 36; 37]. Additionally, the sensitivity of the DR method is expected to be enhanced by \(\sim\)4/3 times compared to the sensitivity of the single-resonance method [5].
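Equivalently, if the two single-resonance lock-in signals were recorded separately, temperature and field would be recovered by the linear inversion sketched below; in the experiment the sum or difference is instead formed directly by the choice of modulation phases.

```python
def separate_field_and_temperature(S1, S2, alpha, gamma):
    # Invert S1 = alpha*(dD - gamma*dB) and S2 = alpha*(dD + gamma*dB).
    dB = (S2 - S1) / (2.0 * alpha * gamma)  # magnetic field change
    dD = (S2 + S1) / (2.0 * alpha)          # zero-field-splitting (temperature) change
    return dB, dD
```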
## III Results
The shot-noise-limited continuous wave (CW) magnetic field sensitivity, \(\eta_{CW}\), is given by [38; 3]
Figure 4: Volume-normalized magnetic field sensitivity map. (a) Two-dimensional map of the volume-normalized field sensitivity \(\eta_{V}\); the scale bar represents 10 \(\mu\)m. (b) Histogram of the sensitivity map within the red circled area. The mean volume-normalized sensitivity is 43.9 nT \(\cdot\)\(\mu\)m\({}^{1.5}\)/Hz\({}^{0.5}\).
Figure 5: Sub-ms temporal resolution. (a) A pulsed (triangular shape) voltage (dashed blue line) is applied to a coil whose inductance and resistance are 1.8 mH and 2 \(\Omega\), respectively. The expected current (dashed black line) in the coil is simulated with LTspice. The measured magnetic field (solid red line), parallel to the z-axis of the crystal, is taken from a single pixel in the middle of the sensitivity map in Fig. 4 (a) and shown in the upper panel. A detailed view in the time domain, (b), shows that the inductive delay in the current, \(\sim\) 0.9 ms, is qualitatively captured by the magnetic field measurement.
\[\eta_{CW}=\frac{4}{3\sqrt{3}}\frac{h}{g_{e}\mu_{B}}\frac{\Delta\nu}{C\sqrt{R}}, \tag{2}\]
where R is the photon-detection rate, \(\Delta\nu\) is the linewidth, and \(C\) is the ODMR contrast. To minimize the \(\eta_{CW}\), we adopt several methods to obtain a higher ODMR contrast in Eq. (2). The first method is projecting a magnetic field equally along all the NV axes. We compare the ODMR spectra where the external magnetic fields are aligned along the \(<\)111\(>\) and \(<\)001\(>\) directions of the crystal, as shown in Fig. 3(a). When the magnetic field is along the \(<\)001\(>\) direction of the crystal, the ODMR contrast can be maximized [3; 30]. Hereafter, we fix the external field along the \(<\)001\(>\) direction. The second method is to simultaneously excite the three HF features (SR + HF) instead of exciting a single frequency (SR). The ODMR contrast is improved by 2.4 times, compared to that in SR, as shown in Fig. 3(b). The third method involves applying DR along with HF driving (DR + HF). This further enhances the contrast compared to that in SR + HF, as shown in Fig. 3 (b). DR is essential for minimizing errors in the magnetic field due to temperature drift in the system [28; 29; 30].
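For orientation, Eq. (2) can be evaluated directly; the linewidth, contrast, and photon rate used below are illustrative assumptions only.

```python
import numpy as np

h = 6.626e-34    # Planck constant (J s)
ge = 2.003       # NV electron g-factor
muB = 9.274e-24  # Bohr magneton (J/T)

def eta_cw(linewidth_hz, contrast, photon_rate):
    # Shot-noise-limited CW sensitivity of Eq. (2), in T/Hz^0.5.
    return (4.0 / (3.0 * np.sqrt(3.0))) * (h / (ge * muB)) \
        * linewidth_hz / (contrast * np.sqrt(photon_rate))

# e.g. 1 MHz linewidth, 2% contrast, 1e12 detected photons/s:
print(eta_cw(1e6, 0.02, 1e12))  # ~1.4e-9 T/Hz^0.5, i.e. a few nT/Hz^0.5
```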
The magnetic field sensitivity can be expressed as \(\eta=\delta B\sqrt{T}\), where \(\delta B\) is the minimum detectable magnetic field, and T is the measurement duration. The minimum magnetic field is given by the standard deviation of a series of measurements [25; 27]. To determine the minimum magnetic field, a test field is applied along the z-axis of the crystal. The test field is found to be 6.8 \(\mu\)T, and the projected magnetic field along the NV axes is 4 \(\mu\)T (a projection factor of \(1/\sqrt{3}\) for \(<\)001\(>\) alignment). The frame rate of the camera is 114 Hz (8.8 ms, \(f_{mod}\) = 2.5 kHz, 22 cycles), and 110 frames are collected for the estimation.
A two-dimensional map of the volume-normalized magnetic field sensitivity, \(\eta_{V}=\eta\sqrt{V}\), is shown in Fig. 4(a), where the field of view is approximately 46 \(\times\) 46 \(\mu m^{2}\), the pixel size is 0.54 \(\times\) 0.54 \(\mu m^{2}\), and the sensor volume, V, is 11.7 (0.54 \(\times\) 0.54 \(\times\) 40) \(\mu m^{3}\). The mean \(\eta_{V}\) within the red circled area is 43.9 nT\(\cdot\)\(\mu\)m\({}^{1.5}\)/Hz\({}^{0.5}\) and the mean per pixel \(\eta\) is 12.8 nT/Hz\({}^{0.5}\). The histogram of \(\eta_{V}\) within the area shows a Gaussian-like distribution due to the beam shape, as shown in Fig. 4 (b). Illuminating the NV diamond via total internal reflection is expected to result in a larger field of view and a more uniform sensitivity distribution than the current scheme [7; 25].
To demonstrate a sub-millisecond temporal resolution, the frame rate is increased to 2500 Hz, and 200 frames are acquired, where the modulation frequency is set to 10 kHz. The increased modulation frequency decreases the signal-to-noise ratio due to the wider bandwidth. A series of pulsed voltages are applied to a coil with a diameter of 10 cm, an inductance of 1.8 mH, and a resistance of 2 \(\Omega\). The voltage pulse has a triangular shape, and its polarity is changed within 2 ms and repeats every 10 ms, as shown in Fig. 5 (b). Because the dimensions of the coil are significantly larger than the field of view (Fig. 4 (a)), the magnetic field produced by the pulsed voltage is uniform.
A single acquisition of the magnetic field from the voltage pulse at a single central pixel is shown in Fig. 5(a). The dashed blue line represents the applied voltage (scaled), and the dashed black line represents the expected current in the coil. The solid red line represents the magnetic field along the z-axis of the crystal. Even in a single measurement, we can distinguish \(\pm\) 4 \(\mu\)T magnetic pulse trains from the noise. The expected delay between the current (magnetic field) and the voltage is approximately \(L/R\) = 1.8 mH/2 \(\Omega\) = 0.9 ms. A close-up in the time domain, Fig. 5(b), indicates that our system can capture the transient behavior with sub-millisecond temporal resolution. The standard deviation of the noise level (\(\approx\) 1 \(\mu\)T) during the acquisition duration (0.4 ms) leads to a per-pixel sensitivity of 20 nT/Hz\({}^{0.5}\). These observations support the nanotesla sensitivity with sub-millisecond temporal resolution.
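The inductive delay can also be reproduced with a simple forward-Euler integration of \(L\,dI/dt+RI=V(t)\), as sketched below; the pulse shape and amplitude are assumptions, with only \(L\) and \(R\) taken from the text.

```python
import numpy as np

L, R = 1.8e-3, 2.0             # coil inductance (H) and resistance (Ohm)
dt = 1e-6                      # integration time step (s)
t = np.arange(0.0, 20e-3, dt)  # two 10 ms pulse periods

def v_pulse(ti, period=10e-3, width=2e-3, V0=1.0):
    # Bipolar triangular pulse: 0 -> +V0 -> -V0 -> 0 within `width`,
    # repeating every `period` (shape and amplitude are illustrative).
    tau = ti % period
    if tau >= width:
        return 0.0
    return V0 * np.interp(tau, [0.0, width / 4, 3 * width / 4, width],
                          [0.0, 1.0, -1.0, 0.0])

I = np.zeros_like(t)
for k in range(1, len(t)):
    # L dI/dt + R I = V  =>  I_k = I_{k-1} + dt * (V - R * I_{k-1}) / L
    I[k] = I[k - 1] + dt * (v_pulse(t[k - 1]) - R * I[k - 1]) / L

# The current (and hence the measured field) lags the voltage by roughly
# L/R = 0.9 ms, consistent with the delay seen in Fig. 5(b).
```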
## IV Discussion
In this study, we optimize the volume-normalized magnetic field sensitivity of NV center ensembles using a lock-in camera. The mean per pixel volume-normalized magnetic field sensitivity of 43.9 nT\(\cdot\)\(\mu\)m\({}^{1.5}\)/Hz\({}^{0.5}\) and the sub-ms temporal resolution are obtained at a relatively low optical power density of 0.12 mW/\(\mu m^{2}\). However, we still need to improve the sensitivity to less than 1 nT\(\cdot\)\(\mu\)m\({}^{1.5}\)/Hz\({}^{0.5}\) to visualize neuronal networks [12; 13]. In this section, we will discuss how we can further improve the magnetic field sensitivity.
Photonic structures such as diamond nano-pillars will enhance the volume-normalized sensitivity by improving readout fidelity [39; 40]. It has been reported that the sensitivity can be improved by more than four times owing to the increased photoluminescence and spin coherence time provided by nano-pillars [40]. An additional antireflective coating at 600 - 800 nm on the diamond increases the photoluminescence further [41].
The inhomogeneous spin dephasing time, \(T_{2}^{*}\), can be extended by applying decoupling sequences. The dipolar coupling between NV centers and substitutional nitrogen (P1) can be suppressed by driving the P1 spins [32; 42]. Bauch et al. and Balasubramanian et al. reported that \(T_{2}^{*}\) in samples with high P1 density increased by more than four times using spin-bath driving [32; 42]. Moreover, Balasubramanian et al. decoupled the NV-NV interaction by adopting the WAHUHA sequence and extended \(T_{2}^{*}\) by an additional factor of ten [42]. These methods, combined with our technique, could reduce the volume-normalized sensitivity to less than 1 nT\(\cdot\)\(\mu\)m\({}^{1.5}\)/Hz\({}^{0.5}\), an essential requirement for understanding neuronal connectivity [11; 12; 13].
Illuminating the NV layer uniformly using total internal reflection geometry improves the magnetic sensitivity distribution and increases the field of view up to the millimeter scale [7; 19]. This can be utilized to detect magnetic fields from an integrated circuit and 3-D current distributions in a multi-layer printed circuit board [19; 43]. Combined with the sub-millisecond temporal resolution, the wide field of view could contribute to imaging transient events, which could be missed by scanning-based systems such as giant-magnetoresistive (GMR) or superconducting quantum interference device (SQUID)-based current mapping equipment [19; 44].
## V Conclusion
In conclusion, we have obtained a mean per pixel volume-normalized magnetic sensitivity of 43.9 nT \(\cdot\)\(\mu\)m\({}^{1.5}\)/Hz\({}^{0.5}\) and a sub-ms temporal resolution using NV center ensembles and a lock-in camera. The HF driving, DR, and exploitation of the four NV axes are adopted with CW lock-in detection to reach the sensitivity. These methods could be a step forward for visualizing microscopic distributions of sub-nanotesla changes due to neuronal currents in real-time, as well as defects in a packaged battery [45].
## Declarations
### Acknowledgments
The authors thank Kiwoong Kim for valuable discussions and Heliotis AG for experimental assistance in implementing the camera.
### Funding
This research was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (MSIT) (No.2019-000296, No.2021-0-00076) and a grant (GP2021-0010) from Korea Research Institute of Standards and Science.
### Availability of data and materials
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2305.06675 | Rokhlin Dimension: Permanence Properties and Ideal Separation | We study the Rokhlin dimension for actions of residually finite groups on
C*-algebras. We give a definition equivalent to the original one due to Szabo,
Wu and Zacharias. We then prove a number of permanence properties and discuss
actions on C_0(X)-algebras and commutative C*- algebras. Finally, we use a
theorem of Sierakowski to show that, for an action with finite Rokhlin
dimension, every ideal in the associated reduced crossed product C*-algebra
arises from an invariant ideal of the underlying algebra. | Sureshkumar M, Prahlad Vaidyanathan | 2023-05-11T09:16:37Z | http://arxiv.org/abs/2305.06675v2 | # Rokhlin dimension: permanence properties and ideal separation
###### Abstract.
We study the Rokhlin dimension for actions of residually finite groups on C*-algebras. We give a definition equivalent to the original one due to Szabo, Wu and Zacharias. We then prove a number of permanence properties and discuss actions on \(C_{0}(X)\)-algebras and commutative C*-algebras. Finally, we use a theorem of Sierakowski to show that, for an action with finite Rokhlin dimension, every ideal in the associated reduced crossed product C*-algebra arises from an invariant ideal of the underlying algebra.
The study of group actions on C*-algebras is a deep and integral part of the general theory of operator algebras. It allows us to understand the underlying algebra by studying its symmetries, and also provides us with a new C*-algebra, the crossed product. This crossed product is a fascinating object that (broadly speaking) contains the original algebra and the group, and implements the group action via conjugation by unitaries. The difficulty then is to understand the structure of this crossed product C*-algebra, and a key question in this context is to determine when certain 'regularity' properties pass from the underlying algebra to the crossed product. These regularity properties include many properties that are useful from the point of view of the classification programme such as finiteness of nuclear dimension, simplicity, \(\mathcal{Z}\)-stability, etc.
The Rokhlin property (studied by Kishimoto [16], Izumi [13] and others) was a first attempt in this direction, but it has the crucial problem in that it requires the underlying algebra to have sufficiently many projections. The notion of Rokhlin dimension, introduced by Hirshberg, Winter and Zacharias [12], seeks to avoid this problem by replacing projections by positive contractions. In the past decade or so, this idea has proved to be very fruitful. In [12], the authors studied such actions of finite groups and of a single automorphism. This was generalized to actions of compact groups by Gardella [6] and to actions of residually finite groups by Szabo, Wu and Zacharias [30]. In each case, it was proved that actions with finite Rokhlin dimension allow us to deduce a number of regularity properties of the crossed product C*-algebra from those of the underlying algebra (particularly in the compact case, and under the 'commuting towers' condition [7]).
The goal of this paper is to study actions of countable, discrete, residually finite groups with finite Rokhlin dimension, building on the work done in [30] and [6]. We begin in Section 1, where we discuss the profinite completion of a residually finite group. This construction allows us to prove results for residually finite groups along the same lines as those for compact groups. We also describe the topological join of two compact spaces, a notion that is used in the discussion of Rokhlin dimension with commuting towers. In Section 2, we recall the definition of Rokhlin dimension from [30], and give a number of equivalent formulations. In Section 3, we show that finiteness of Rokhlin dimension is preserved under many standard constructions such as passage to hereditary subalgebras, quotients, inductive limits and extensions. Moreover, we show that finiteness of Rokhlin dimension passes to the action of a subgroup provided the subgroup is a virtual retract.
In Section 4, we consider actions on commutative C*-algebras. After proving an estimate for the Rokhlin dimension of an action on a \(C_{0}(X)\)-algebra, we show that an action on a commutative
C*-algebra has finite Rokhlin dimension if the induced action on its maximal ideal space is both free and proper. In Section 5, we show that actions with finite Rokhlin dimension are outer, and are properly outer if the group is amenable and has the (VRC) property (see Definition 3.8). Along the way, we use a theorem of Sierakowski [29] to show that if an action is exact and has finite Rokhlin dimension, then the ideals in the associated crossed product C*-algebra must arise from invariant ideals of the underlying C*-algebra.
## 1. Preliminaries
### The Profinite Completion
Recall that a discrete group \(G\) is said to be residually finite if for each non-identity element \(g\in G\), there is a subgroup \(H\) of \(G\) of finite index such that \(g\notin H\). Given such a group \(G\), let \(\mathcal{I}_{G}\) denote the set of all normal subgroups of \(G\) of finite index, partially ordered by reverse inclusion. In other words, \(H\leq K\) if and only if \(K\subset H\). Whenever \(H\leq K\), there is a homomorphism \(\varphi_{K,H}:G/K\to G/H\) given by \(gK\mapsto gH\), and this makes the collection \(\{G/H,\varphi_{K,H}\}_{\mathcal{I}_{G}}\) an inverse system of groups. The inverse limit of this system is called the profinite completion of \(G\) and is denoted by \(\overline{G}\). By definition,
\[\overline{G}=\{(g_{H}H)_{H\in\mathcal{I}_{G}}:g_{K}H=g_{H}H\text{ for all }H,K\in\mathcal{I}_{G}\text{ with }K\subset H\}\]
Note that \(\overline{G}\) is a group under componentwise multiplication and is a topological space as a subspace of \(\prod_{H\in\mathcal{I}_{G}}G/H\). A profinite group is, by definition, the inverse limit of a surjective inverse system of finite groups, and \(\overline{G}\) is therefore a profinite group.
For each \(H\in\mathcal{I}_{G}\), there is a natural action \(\beta^{H}:G\curvearrowright G/H\) given by \(\beta^{H}_{t}(gH):=tgH\). The maps \(\varphi_{K,H}\) respect these actions, so we have an induced left-translation action \(\beta:G\curvearrowright\overline{G}\) given by
\[\beta_{t}((g_{H}H)_{H\in\mathcal{I}_{G}}):=(tg_{H}H)_{H\in\mathcal{I}_{G}}.\]
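For intuition, the compatibility condition defining \(\overline{G}\) can be checked by hand in a finite truncation. The sketch below does this for \(G=\mathbb{Z}\) (written additively), where the finite-index subgroups are \(n\mathbb{Z}\) and the connecting maps are reduction modulo divisors; it is purely illustrative.

```python
levels = [2, 3, 4, 6, 12]  # a divisor-closed family of quotients Z/n

def iota(g):
    # Image of the integer g: the compatible tuple (g mod n)_n.
    return {n: g % n for n in levels}

def is_compatible(x):
    # Inverse-limit condition: x_m mod d == x_d whenever d divides m.
    return all(x[m] % d == x[d] for m in levels for d in levels if m % d == 0)

def op(x, y):
    # Componentwise group operation (addition, since Z is written additively).
    return {n: (x[n] + y[n]) % n for n in levels}

assert is_compatible(iota(7))
assert op(iota(7), iota(5)) == iota(12)  # iota is a homomorphism
```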
The most important properties of \(\overline{G}\) (from our perspective) are listed below, and the interested reader will find proofs of all these facts in [28].
**Proposition 1.1**.: _Let \(G\) be a discrete, residually finite group and let \(\overline{G}\) denote its profinite completion._
1. _For each_ \(K\in\mathcal{I}_{G}\)_, there is a surjective group homomorphism_ \(\pi_{K}:\overline{G}\to G/K\) _given by_ \((g_{H}H)_{H\in\mathcal{I}_{G}}\mapsto g_{K}K\)_._
2. _If_ \(\{H_{1},H_{2},\ldots,H_{k}\}\) _is a finite collection of subgroups in_ \(\mathcal{I}_{G}\) _and_ \(\{x_{1},x_{2},\ldots,x_{k}\}\subset G\)_, then the set_ \[\bigcap_{i=1}^{k}\pi_{H_{i}}^{-1}(\{x_{i}H_{i}\})\] _is an open set in_ \(\overline{G}\)_, and sets of this type form a basis for the topology on_ \(\overline{G}\)_._
3. \(\overline{G}\) _is compact, Hausdorff and totally disconnected._
4. _The map_ \(\iota:G\to\overline{G}\) _given by_ \(g\mapsto(gH)_{H\in\mathcal{I}_{G}}\) _is an injective group homomorphism and_ \(\iota(G)\) _is dense in_ \(\overline{G}\)_._
5. _If_ \(H\) _is a profinite group and_ \(\varphi:G\to H\) _is a group homomorphism, then there is a unique continuous group homomorphism_ \(\overline{\varphi}:\overline{G}\to H\) _such that_ \(\overline{\varphi}\circ\iota=\varphi\)_._
6. _The action_ \(\beta:G\curvearrowright\overline{G}\) _defined above is both free and minimal (in the sense that_ \(\overline{G}\) _has no non-trivial closed_ \(\beta\)_-invariant subsets)._
Now consider the commutative C*-algebra \(C(\overline{G})\). Since \(\overline{G}=\varprojlim(G/H,\varphi_{K,H})\), it follows that \(C(\overline{G})\) is the inductive limit of the system \(\{C(G/H),\varphi_{K,H}^{*}\}\). For each \(H\in\mathcal{I}_{G}\), let \(\sigma^{H}:G\to\operatorname{Aut}(C(G/H))\) be the action induced by \(\beta^{H}\) (in other words, \(\sigma^{H}_{t}(f)(gH):=f(t^{-1}gH)\)). Similarly,
let \(\sigma:G\to\operatorname{Aut}(C(\overline{G}))\) be the action induced by \(\beta\). Then the maps \(\pi_{K}^{*}:C(G/K)\to C(\overline{G})\) and \(\varphi_{K,H}^{*}:C(G/H)\to C(G/K)\) are all \(G\)-equivariant. Therefore, we obtain the following fact.
**Lemma 1.2**.: _If \(G\) is a discrete, residually finite group, then_
\[(C(\overline{G}),\sigma)\cong\varinjlim_{\mathcal{I}_{G}}(C(G/H),\sigma^{H}).\]
Now if \(G\) is a finitely generated group and \(n\in\mathbb{N}\) is fixed, then by a theorem of Hall [9], \(G\) has finitely many subgroups of index \(n\). In other words, \(\mathcal{I}_{G}\) is countable. Since \(\overline{G}\) is a subspace of \(\prod_{H\in\mathcal{I}_{G}}G/H\), it is metrizable. We record this fact for later use.
**Lemma 1.3**.: _If \(G\) is a discrete, finitely generated, residually finite group, then \(\overline{G}\) is metrizable, and therefore \(C(\overline{G})\) is separable._
### The Topological Join
Given two compact Hausdorff spaces \(X\) and \(Y\), the topological join of \(X\) and \(Y\) is defined as
\[X*Y:=([0,1]\times X\times Y)/\sim\]
where \(\sim\) is the equivalence relation defined by \((0,x,y)\sim(0,x^{\prime},y)\) and \((1,x,y)\sim(1,x,y^{\prime})\) for all \(x,x^{\prime}\in X\) and \(y,y^{\prime}\in Y\). Elements of \(X*Y\) are denoted by the symbol \([t,x,y]\) for the equivalence class containing \((t,x,y)\).
Given three compact Hausdorff spaces \(X,Y\) and \(Z\), we may also define \((X*Y)*Z\) and \(X*(Y*Z)\) as above. Since all spaces are compact and Hausdorff, these two spaces are naturally homeomorphic, so the join operation is associative. Thus if \(X_{1},X_{2},\ldots,X_{n}\) are compact Hausdorff spaces, then \(X_{1}*X_{2}*\ldots*X_{n}\) may be defined unambiguously. If \(X_{i}=X\) for all \(1\leq i\leq n\), we denote the space \(X_{1}*X_{2}*\ldots*X_{n}\) by \(X^{*(n)}\).
Now suppose \(G\) is a discrete group that acts on both spaces \(X\) and \(Y\), then there is a natural diagonal action of \(G\) on \(X*Y\) given by \(g\cdot[t,x,y]:=[t,gx,gy]\). Moreover, this action is free if each individual action is free. In particular, if \(G\) acts freely on a compact Hausdorff space \(X\) by an action \(\gamma:G\curvearrowright X\), then there is an induced action \(\gamma^{(n)}:G\curvearrowright X^{*(n)}\) that is also free.
Our immediate goal is to give an estimate for the covering dimension of the join of two spaces. Henceforth, we write \(\dim(Z)\) to denote the covering dimension of a compact Hausdorff space \(Z\). Observe that if \(X\) is a compact Hausdorff space, then \(\mathcal{C}X:=X*\{p\}\) is the cone over \(X\). We denote the points in \(\mathcal{C}X\) by \([t,x]\) (by suppressing the \(p\)).
**Proposition 1.4**.: _For any two compact Hausdorff spaces \(X\) and \(Y\),_
\[\dim(X*Y)\leq 1+\dim(X)+\dim(Y).\]
_In particular, if \(\dim(X)=0\), then \(\dim(X^{*(n)})\leq n-1\)._
Proof.: With a slight abuse of notation, we write \(\mathcal{C}Y:=([0,1]\times Y)/\sim\) where \(\sim\) is the equivalence relation \((1,y)\sim(1,y^{\prime})\) for all \(y,y^{\prime}\in Y\). In other words, \(\mathcal{C}X=X*\{p\}\) while \(\mathcal{C}Y:=\{p\}*Y\). Let \(Z:=\mathcal{C}X\times Y\cup X\times\mathcal{C}Y\subset\mathcal{C}X\times \mathcal{C}Y\) and define \(g:[0,1]\times X\times Y\to Z\) by
\[g(t,x,y):=\begin{cases}([2t,x],y)&:\text{ if }0\leq t\leq\frac{1}{2}\\ (x,[2t-1,y])&:\text{ if }\frac{1}{2}\leq t\leq 1.\end{cases}\]
Note that \(g\) is well-defined and continuous, and descends to a continuous map \(f:X*Y\to Z\). That \(f\) is bijective is easy to check. Since \(X*Y\) is compact and \(Z\) is Hausdorff, \(f\) is a homeomorphism. Hence, it suffices to estimate \(\dim(Z)\).
For each \(n\in\mathbb{N}\), let \(A_{n}=\{[t,x]\in\mathcal{C}X:1/n\leq t\leq 1,x\in X\}\). Then \(A_{n}\) is homeomorphic to \([1/n,1]\times X\) so \(\dim(A_{n})\leq\dim(X)+1\) by the product theorem [22, Proposition III.2.6]. Since
\[\mathcal{C}X=\{p\}\sqcup\left(\bigcup_{n=1}^{\infty}A_{n}\right),\]
it follows that \(\dim(\mathcal{C}X)\leq\dim(X)+1\) by [22, Proposition III.5.3]. By the product theorem, \(\dim(\mathcal{C}X\times Y)\leq 1+\dim(X)+\dim(Y)\). By symmetry, \(\dim(X\times\mathcal{C}Y)\leq 1+\dim(X)+\dim(Y)\). Moreover, both \(A=\mathcal{C}X\times Y\) and \(B=X\times\mathcal{C}Y\) are \(F_{\sigma}\)-sets in \(\mathcal{C}X\times\mathcal{C}Y\). By [22, Proposition III.5.3], \(\dim(Z)\leq\dim(X)+\dim(Y)+1\).
### Notational Conventions
For convenience, we now fix some notation that we will use frequently through the rest of the paper. Henceforth, all groups (denoted \(G,H,K\), etc.) will be countable and discrete, and we will write \(e\) for the identity of the group. We write \(H\leq_{fin}G\) if \(H\) is a subgroup of \(G\) of finite index, we write \(H\lhd G\) if \(H\) is a normal subgroup of \(G\), and we write \(H\lhd_{fin}G\) if \(H\) is a normal subgroup of finite index.
If \(Z\) is a topological space, we write \(\dim(Z)\) for its Lebesgue covering dimension. If a group \(G\) acts on \(Z\) by homeomorphisms, we denote this by \(G\curvearrowright Z\) or \(\beta:G\curvearrowright Z\) if \(\beta:G\to\operatorname{Homeo}(Z)\) denotes the corresponding homomorphism. If \(K\) is a set, then we write \(F\subset\!\!\subset K\) if \(F\) is a finite subset of \(K\). If \(A\) is a C*-algebra and \(a,b\in A\), we write \([a,b]:=ab-ba\), and we write \(a\approx_{\epsilon}b\) if \(\|a-b\|<\epsilon\). If \(A\) is unital, we write \(1_{A}\) for the unit of \(A\).
## 2. Rokhlin Dimension for actions of Residually Finite groups
In this section, we recall from [30] the definition of Rokhlin dimension for actions of residually finite groups. We then give an alternate definition using the profinite completion of the group, and we discuss the notion of Rokhlin dimension with commuting towers. Given a natural number \(d\in\mathbb{N}\), we describe explicitly a universal space for actions with Rokhlin dimension \(d\) with commuting towers. This last result is a natural analogue of the corresponding result for compact groups due to Gardella [6, Lemma 4.3]. We begin by describing the central sequence algebra relative to a C*-subalgebra, a notion due to Kirchberg [14].
**Definition 2.1**.: Given a C*-algebra \(A\), let \(\ell^{\infty}(\mathbb{N},A)\) be the space of all bounded sequences in \(A\) and \(c_{0}(\mathbb{N},A)\) be the subspace of sequences that vanish at infinity. If \(A_{\infty}:=\ell^{\infty}(\mathbb{N},A)/c_{0}(\mathbb{N},A)\), then \(A\) embeds in \(A_{\infty}\) as the set of all constant sequences so we identify \(A\) with its image in \(A_{\infty}\). For a C*-subalgebra \(D\subset A\), we define
\[A_{\infty}\cap D^{\prime} =\{x\in A_{\infty}:xd=dx\text{ for all }d\in D\}\text{ and}\] \[\operatorname{Ann}(D,A_{\infty}) =\{x\in A_{\infty}:xd=dx=0\text{ for all }d\in D\}.\]
\(\operatorname{Ann}(D,A_{\infty})\) is an ideal in \(A_{\infty}\cap D^{\prime}\), so we write
\[F(D,A):=(A_{\infty}\cap D^{\prime})/\operatorname{Ann}(D,A_{\infty})\]
and \(\kappa_{D,A}:A_{\infty}\cap D^{\prime}\to F(D,A)\) for the corresponding quotient map. When \(D=A\), we write \(F(A)\) for \(F(A,A)\) and \(\kappa_{A}\) for \(\kappa_{A,A}\). Note that \(F(D,A)\) is unital if \(D\) is \(\sigma\)-unital.
Let \(G\) be a discrete group and \(\alpha:G\to\operatorname{Aut}(A)\) be an action of \(G\) on a C*-algebra \(A\). If \(D\) is an \(\alpha\)-invariant subalgebra of \(A\), there is a natural induced action of \(G\) on \(A_{\infty}\) and on \(F(D,A)\) which we denote by \(\alpha_{\infty}\) and \(\widetilde{\alpha}_{\infty}\) respectively. Moreover, there is a \(G\)-equivariant \(*\)-homomorphism \((F(D,A)\otimes_{\max}D,\widetilde{\alpha}_{\infty}\otimes\alpha|_{D})\to(A_{ \infty},\alpha_{\infty})\) given on elementary tensors by \(\kappa_{D,A}(x)\otimes a\to x\cdot a\). Under this \(*\)-homomorphism \(1_{F(D,A)}\otimes a\) is mapped to \(a\) for all \(a\in D\), so we think of it as a way to multiply elements of \(F(D,A)\) with elements of \(D\) to obtain elements of \(A_{\infty}\) (in a way that respects
the action of \(G\)).
We need one last piece of terminology. Two elements \(a\) and \(b\) in a C*-algebra are said to be orthogonal (in symbols, we write \(a\perp b\)) if \(ab=ba=a^{*}b=b^{*}a=0\). A contractive, completely positive map \(\varphi:A\to B\) between two C*-algebras is said to have order zero if \(\varphi(a)\perp\varphi(b)\) whenever \(a\perp b\). As is customary, we will use the abbreviation 'c.c.p' for 'contractive and completely positive'.
With all this in place, we are now in a position to define Rokhlin dimension for actions of residually finite groups. Following [30], we first define Rokhlin dimension relative to a subgroup of finite index before defining the full Rokhlin dimension for an action.
**Definition 2.2**.: [30, Definition 5.4] Let \(A\) be a C*-algebra, \(G\) be a discrete, countable group, \(H\) be a subgroup of \(G\) of finite index, and \(\alpha:G\to\operatorname{Aut}(A)\) be an action of \(G\) on \(A\). We say that \(\alpha\) has Rokhlin dimension \(d\) relative to \(H\) if \(d\) is the least integer such that for any separable, \(\alpha\)-invariant C*-subalgebra \(D\subset A\), there exist \((d+1)\) equivariant c.c.p. order zero maps
\[\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:(C(G/H),\sigma^{H})\to(F(D,A), \widetilde{\alpha}_{\infty})\]
such that \(\sum_{\ell=0}^{d}\varphi_{\ell}(1_{C(G/H)})=1_{F(D,A)}\). We denote the Rokhlin dimension of \(\alpha\) relative to \(H\) by \(\dim_{\operatorname{Rok}}(\alpha,H)\). If no such integer \(d\) exists, then we write \(\dim_{\operatorname{Rok}}(\alpha,H)=+\infty\).
The following lemma is contained in [30, Proposition 5.5].
**Lemma 2.3**.: _Let \(A\) be a C*-algebra, \(G\) be a discrete, countable group, \(H\) be a subgroup of \(G\) of finite index, and \(\alpha:G\to\operatorname{Aut}(A)\) be an action of \(G\) on \(A\). Then \(\dim_{\operatorname{Rok}}(\alpha,H)\leq d\) if and only if for each \(F\subset\!\!\subset A\), \(M\subset\!\!\subset G\), \(S\subset\!\!\subset C(G/H)\) and \(\epsilon>0\), there exist \((d+1)\) c.c.p. maps_
\[\psi_{0},\psi_{1},\ldots,\psi_{d}:C(G/H)\to A\]
_satisfying the following properties:_
1. \([\psi_{\ell}(f),a]\approx_{\epsilon}0\) _for all_ \(a\in F,f\in S\) _and_ \(0\leq\ell\leq d\)_._
2. \(\psi_{\ell}(\sigma_{g}^{H}(f))a\approx_{\epsilon}\alpha_{g}(\psi_{\ell}(f))a\) _for all_ \(a\in F,f\in S,g\in M\) _and_ \(0\leq\ell\leq d\)_._
3. \(\psi_{\ell}(f_{1})\psi_{\ell}(f_{2})a\approx_{\epsilon}0\) _for all_ \(a\in F\) _and_ \(f_{1},f_{2}\in S\) _such that_ \(f_{1}\perp f_{2}\) _and for all_ \(0\leq\ell\leq d\)_._
4. \(\sum_{\ell=0}^{d}\psi_{\ell}(1_{C(G/H)})a\approx_{\epsilon}a\) _for all_ \(a\in F\)_._
The set of maps \(\{\psi_{0},\psi_{1},\ldots,\psi_{d}\}\) satisfying the conditions of this lemma is called an \((H,d,F,M,S,\epsilon)\)-Rokhlin system.
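As a standard illustration, take \(G=\mathbb{Z}\), \(H=n\mathbb{Z}\) and \(d=0\). A single c.c.p. map \(\psi_{0}:C(\mathbb{Z}/n\mathbb{Z})\to A\) is determined by the positive contractions \(f_{j}:=\psi_{0}(\delta_{j})\) for \(0\leq j\leq n-1\), where \(\delta_{j}\in C(\mathbb{Z}/n\mathbb{Z})\) is the indicator function of \(j+n\mathbb{Z}\). The conditions of Lemma 2.3 then ask for elements \(f_{0},\ldots,f_{n-1}\) that approximately commute with \(F\), are approximately pairwise orthogonal, satisfy \(\sum_{j}f_{j}\approx_{\epsilon}1\) and the equivariance condition \(\alpha_{1}(f_{j})\approx_{\epsilon}f_{j+1\bmod n}\) (up to the orientation convention for the translation action), all after multiplying by elements of \(F\); these are the classical Rokhlin towers for a single automorphism.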
**Definition 2.4**.: [30, Definition 5.8] Let \(A\) be a C*-algebra, \(G\) be a residually finite group, and \(\alpha:G\to\operatorname{Aut}(A)\) be an action of \(G\) on \(A\). We define the Rokhlin dimension of \(\alpha\) as
\[\dim_{\operatorname{Rok}}(\alpha)=\sup\{\dim_{\operatorname{Rok}}(\alpha,H):H \leq_{fin}G\}.\]
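Before proceeding, we record a standard example to keep in mind: for \(G=\mathbb{Z}\), the profinite completion is \(\overline{\mathbb{Z}}\cong\varprojlim_{n}\mathbb{Z}/n\mathbb{Z}\cong\prod_{p\text{ prime}}\mathbb{Z}_{p}\), and the left-translation action of \(\mathbb{Z}\) on \(\overline{\mathbb{Z}}\) is the universal odometer. Finite Rokhlin dimension for a \(\mathbb{Z}\)-action thus amounts, roughly, to finding \(d+1\) systems of approximately central, approximately equivariant odometer-like towers inside \(A\).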
The next simple proposition will be used repeatedly throughout the rest of the paper. Moreover, it introduces the profinite completion of \(G\) into the picture, thereby allowing us to treat discrete groups and compact groups on an (almost) equal footing.
**Proposition 2.5**.: _Let \(A\) be a C*-algebra, \(G\) be a residually finite group and \(\alpha:G\to\operatorname{Aut}(A)\) be an action of \(G\) on \(A\). Then \(\dim_{\operatorname{Rok}}(\alpha)\leq d\) if and only if for any \(F\subset\!\!\subset A,M\subset\!\!\subset G,S\subset\!\!\subset C(\overline{G})\) and any \(\epsilon>0\) there exist \((d+1)\) c.c.p. maps_
\[\psi_{0},\psi_{1},\ldots,\psi_{d}:C(\overline{G})\to A\]
_satisfying the following properties:_
1. \([\psi_{\ell}(f),a]\approx_{\epsilon}0\) _for all_ \(a\in F,f\in S\) _and_ \(0\leq\ell\leq d\)_._
2. \(\psi_{\ell}(\sigma_{g}(f))a\approx_{\epsilon}\alpha_{g}(\psi_{\ell}(f))a\) _for all_ \(a\in F,f\in S,g\in M\) _and_ \(0\leq\ell\leq d\)_._
3. \(\psi_{\ell}(f_{1})\psi_{\ell}(f_{2})a\approx_{\epsilon}0\) _for all_ \(a\in F\) _and_ \(f_{1},f_{2}\in S\) _such that_ \(f_{1}\perp f_{2}\) _and all_ \(0\leq\ell\leq d\)_._
4. \(\sum_{\ell=0}^{d}\psi_{\ell}(1_{C(\overline{G})})a\approx_{\epsilon}a\) _for all_ \(a\in F\)_._
The set of maps \(\{\psi_{0},\psi_{1},\ldots,\psi_{d}\}\) satisfying the conditions of this proposition is called a \((d,F,M,S,\epsilon)\)-Rokhlin system.
Proof.: Assume that \(\dim_{\text{\rm{Rok}}}(\alpha)\leq d\), and let \(F\subset\!\!\subset A,M\subset\!\!\subset G,S\subset\!\!\subset C(\overline{G})\) and \(\epsilon>0\) be given. By Lemma 1.2, \(C(\overline{G})\cong\lim_{\mathcal{I}_{G}}C(G/H)\), so by approximating, we may assume that \(S\subset\pi_{H}^{*}(C(G/H))\) for some \(H\in\mathcal{I}_{G}\). Write \(S=\pi_{H}^{*}(S^{\prime})\) for some finite set \(S^{\prime}\subset C(G/H)\), and we may further assume that \(1_{C(G/H)}\in S^{\prime}\), that \(e\in M\) and that \(\|a\|\leq 1\) for all \(a\in F\). By Lemma 2.3, we obtain \((d+1)\) c.c.p. maps
\[\widetilde{\psi}_{0},\widetilde{\psi}_{1},\ldots,\widetilde{\psi}_{d}:C(G/H)\to A\]
which form an \((H,d,F,M,S^{\prime},\epsilon/3)\)-Rokhlin system. For each \(0\leq\ell\leq d\), since \(C(G/H)\) is nuclear, there is a sequence of c.c.p. maps \(\rho_{\ell}^{i}:C(G/H)\to M_{k(i)}(\mathbb{C})\) and \(\theta_{\ell}^{i}:M_{k(i)}(\mathbb{C})\to A\) such that
\[\lim_{i\to\infty}\theta_{\ell}^{i}\circ\rho_{\ell}^{i}(f)=\widetilde{\psi}_{ \ell}(f)\]
for all \(f\in C(G/H)\). Since \(\pi_{H}^{*}\) is injective, we may use Arveson's extension theorem to obtain c.c.p. maps \(\overline{\rho}_{\ell}^{i}:C(\overline{G})\to M_{k(i)}(\mathbb{C})\) such that \(\overline{\rho}_{\ell}^{i}\circ\pi_{H}^{*}=\rho_{\ell}^{i}\). Then there exists \(i_{0}\in\mathbb{N}\) such that
\[\left\|\theta_{\ell}^{i_{0}}(\overline{\rho}_{\ell}^{i_{0}}(\pi_{H}^{*}(\sigma_{g}^{H}(f))))-\widetilde{\psi}_{\ell}(\sigma_{g}^{H}(f))\right\|<\frac{\epsilon}{3(d+1)}\]
for all \(f\in S^{\prime}\) and \(g\in M\). If \(\psi_{\ell}:C(\overline{G})\to A\) is given by \(\psi_{\ell}:=\theta_{\ell}^{i_{0}}\circ\overline{\rho}_{\ell}^{i_{0}}\), then we claim that the maps \(\{\psi_{0},\psi_{1},\ldots,\psi_{d}\}\) satisfy the required conditions. For the sake of brevity, we verify only condition (4) as the others are similar: If \(a\in F\), then
\[\sum_{\ell=0}^{d}\psi_{\ell}(1_{C(\overline{G})})a=\sum_{\ell=0}^{d}\theta_{ \ell}^{i_{0}}\circ\overline{\rho}_{\ell}^{i_{0}}\circ\pi_{H}^{*}(1_{C(G/H)})a \approx_{\frac{\epsilon}{3}}\sum_{\ell=0}^{d}\widetilde{\psi}_{\ell}(1_{C(G/H )})a\approx_{\frac{\epsilon}{3}}a.\]
With the other conditions verified, we conclude that \(\{\psi_{0},\psi_{1},\ldots,\psi_{d}\}\) form a \((d,F,M,S,\epsilon)\)-Rokhlin system.
For the converse, choose a subgroup \(H\leq_{fin}G\) and let \(\pi_{H}:\overline{G}\to G/H\) be the natural map from Proposition 1.1. Then \(\pi_{H}^{*}:C(G/H)\to C(\overline{G})\) is a unital \(G\)-equivariant \(*\)-homomorphism. If \(F\subset\!\!\subset A,M\subset\!\!\subset G,S\subset\!\!\subset C(G/H)\) and \(\epsilon>0\) are given, then by hypothesis, there exist \((d+1)\) c.c.p. maps
\[\psi_{0},\psi_{1},\ldots,\psi_{d}:C(\overline{G})\to A\]
satisfying the four conditions listed above for the tuple \((d,F,M,\pi_{H}^{*}(S),\epsilon)\). Then, \(\{\psi_{\ell}\circ\pi_{H}^{*}:0\leq\ell\leq d\}\) clearly forms an \((H,d,F,M,S,\epsilon)\)-Rokhlin system as in Lemma 2.3, proving that \(\dim_{\text{\rm{Rok}}}(\alpha,H)\leq d\). This is true for each \(H\leq_{fin}G\), so \(\dim_{\text{\rm{Rok}}}(\alpha)\leq d\).
In order to turn the previous proposition into a global statement (without reference to finite sets and \(\epsilon\)'s), we need to assume that \(G\) is finitely generated to ensure that \(C(\overline{G})\) is separable.
**Corollary 2.6**.: _Let \(A\) be a C*-algebra, \(G\) be a finitely generated, residually finite group and \(\alpha:G\to\text{Aut}(A)\) be an action of \(G\) on \(A\). Then \(\dim_{\text{\rm{Rok}}}(\alpha)\leq d\) if and only if, for each separable, \(\alpha\)-invariant C*-subalgebra \(D\subset A\), there exist \((d+1)\) equivariant, order zero, c.c.p. maps_
\[\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:(C(\overline{G}),\sigma)\to(F(D,A), \widetilde{\alpha}_{\infty})\]
_such that \(\sum_{\ell=0}^{d}\varphi_{\ell}(1_{C(\overline{G})})=1_{F(D,A)}\)._
Proof.: Suppose that \(\dim_{\text{\rm{Rok}}}(\alpha)\leq d\). Choose a separable, \(\alpha\)-invariant C*-subalgebra \(D\subset A\) and choose an increasing sequence of finite sets \((F_{n})\) such that \(T:=\bigcup_{n=1}^{\infty}F_{n}\) is dense in \(D\). Since \(\overline{G}\) is compact and metrizable, there is an increasing sequence \((S_{n})\) of finite subsets of \(C(\overline{G})\) such that \(\mathcal{A}:=\bigcup_{n=1}^{\infty}S_{n}\) is dense in \(C(\overline{G})\). Furthermore, we may arrange it so that for each positive function \(f\in C(\overline{G})\) and each \(\delta>0\), there exists \(f_{0}\in\mathcal{A}\) such that \(\|f_{0}-f\|<\delta\) and \(\operatorname{supp}(f_{0})\subset\operatorname{supp}(f)\) ([31, Lemma 1.4]). Finally, since \(G\) is countable, there is an increasing sequence \((M_{n})\) of finite subsets of \(G\) such that \(G=\bigcup_{n=1}^{\infty}M_{n}\). For each \(n\in\mathbb{N}\), let
\[\psi_{0}^{(n)},\psi_{1}^{(n)},\ldots,\psi_{d}^{(n)}:C(\overline{G})\to A\]
be a \((d,F_{n},M_{n},S_{n},1/n)\)-Rokhlin system. For \(0\leq\ell\leq d\), define \(\widetilde{\varphi}_{\ell}:C(\overline{G})\to A_{\infty}\) by
\[\widetilde{\varphi}_{\ell}(f):=\eta_{A}((\psi_{\ell}^{(n)}(f)))\]
where \(\eta_{A}:\ell^{\infty}(\mathbb{N},A)\to A_{\infty}\) is the natural quotient map. If \(f\in\mathcal{A}\) and \(a\in T\), then there exists \(N\in\mathbb{N}\) such that
\[\|[\psi_{\ell}^{(n)}(f),a]\|<\frac{1}{n}\]
for all \(n\geq N\). Therefore, \(\widetilde{\varphi}_{\ell}(f)\) commutes with \(a\) in \(A_{\infty}\). Since \(\mathcal{A}\) is dense in \(C(\overline{G})\) and \(T\) is dense in \(D\), it follows that \(\widetilde{\varphi}_{\ell}(f)\in D^{\prime}\) for all \(f\in C(\overline{G})\). We thus get well-defined maps
\[\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:C(\overline{G})\to F(D,A)\]
given by \(\varphi_{\ell}:=\kappa_{D,A}\circ\widetilde{\varphi}_{\ell}\). Following the argument of [31, Lemma 1.5], one can show that these maps satisfy the required conditions.
For the converse, the proof of [5, Lemma 3.7] applies verbatim, except that \(\overline{G}\) plays the role of \(G\) here.
### Rokhlin Dimension with Commuting Towers
The notion of Rokhlin dimension with commuting towers has proved to be very useful, particularly in the context of compact groups (see, for instance, [7]). For the purpose of this paper, we include it merely for completeness. Almost all results we prove for Rokhlin dimension with commuting towers have analogous statements for Rokhlin dimension without commuting towers.
**Definition 2.7**.: [30, Definition 5.4] Let \(A\) be a C*-algebra, \(G\) be a discrete, countable group, \(H\) be a subgroup of \(G\) of finite index, and \(\alpha:G\to\operatorname{Aut}(A)\) be an action of \(G\) on \(A\). We say that \(\alpha\) has Rokhlin dimension \(d\) with commuting towers relative to \(H\) if \(d\) is the least integer such that for any separable, \(\alpha\)-invariant C*-subalgebra \(D\subset A\), there exist \((d+1)\) equivariant c.c.p. order zero maps
\[\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:(C(G/H),\sigma^{H})\to(F(D,A), \widetilde{\alpha}_{\infty})\]
with pairwise commuting ranges such that \(\sum_{\ell=0}^{d}\varphi_{\ell}(1_{C(G/H)})=1_{F(D,A)}\). We denote the Rokhlin dimension of \(\alpha\) with commuting towers relative to \(H\) by \(\dim_{\text{\rm{Rok}}}^{c}(\alpha,H)\). If no such integer \(d\) exists, then we write \(\dim_{\text{\rm{Rok}}}^{c}(\alpha,H)=+\infty\). We define the Rokhlin dimension of \(\alpha\) with commuting towers as
\[\dim_{\text{\rm{Rok}}}^{c}(\alpha)=\sup\{\dim_{\text{\rm{Rok}}}^{c}(\alpha,H) :H\leq_{fin}G\}.\]
Each result from the previous section has an analogous version for Rokhlin dimension with commuting towers. For brevity, we state (without proof) the two that we will need in the future.
**Proposition 2.8**.: _Let \(A\) be a C*-algebra, \(G\) be a residually finite group and \(\alpha:G\to\text{Aut}(A)\) be an action of \(G\) on \(A\). Then \(\dim_{\text{\rm{Rok}}}^{c}(\alpha)\leq d\) if and only if for any \(F\subset\!\!\subset A,M\subset\!\!\subset G,S\subset\!\!\subset C(\overline{G})\) and any \(\epsilon>0\) there exist \((d+1)\) c.c.p. maps_
\[\psi_{0},\psi_{1},\ldots,\psi_{d}:C(\overline{G})\to A\]
_satisfying the following properties:_
1. \([\psi_{\ell}(f),a]\approx_{\epsilon}0\) _for all_ \(a\in F,f\in S\) _and_ \(0\leq\ell\leq d\)_._
2. \(\psi_{\ell}(\sigma_{g}(f))a\approx_{\epsilon}\alpha_{g}(\psi_{\ell}(f))a\) _for all_ \(a\in F,f\in S,g\in M\) _and_ \(0\leq\ell\leq d\)_._
3. \(\psi_{\ell}(f_{1})\psi_{\ell}(f_{2})a\approx_{\epsilon}0\) _for all_ \(a\in F\) _and_ \(f_{1},f_{2}\in S\) _such that_ \(f_{1}\perp f_{2}\) _and all_ \(0\leq\ell\leq d\)_._
4. \(\sum_{\ell=0}^{d}\psi_{\ell}(1_{C(\overline{G})})a\approx_{\epsilon}a\) _for all_ \(a\in F\)_._
5. \([\psi_{k}(f_{1}),\psi_{\ell}(f_{2})]a\approx_{\epsilon}0\) _for all_ \(f_{1},f_{2}\in S,0\leq k,\ell\leq d\) _and all_ \(a\in F\)_._
The set of maps \(\{\psi_{0},\psi_{1},\ldots,\psi_{d}\}\) satisfying the conditions of this proposition is called a \((d,F,M,S,\epsilon)\)-commuting Rokhlin system.
**Proposition 2.9**.: _Let \(A\) be a C*-algebra, \(G\) be a finitely generated, residually finite group and \(\alpha:G\to\text{Aut}(A)\) be an action of \(G\) on \(A\). Then \(\dim_{\text{Rok}}^{c}(\alpha)\leq d\) if and only if, for each separable, \(\alpha\)-invariant C*-subalgebra \(D\subset A\), there exist \((d+1)\) equivariant, order zero, c.c.p. maps_
\[\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:(C(\overline{G}),\sigma)\to(F(D,A),\widetilde{\alpha}_{\infty})\]
_with pairwise commuting ranges such that \(\sum_{\ell=0}^{d}\varphi_{\ell}(1_{C(\overline{G})})=1_{F(D,A)}\)._
The reason we have isolated this result is that it leads directly to the following theorem. This was proved in the context of compact groups by Gardella [5, Lemma 4.3]. Since we wish to identify the universal space explicitly (and also estimate its covering dimension), we repeat the proof below with appropriate modifications. Recall that if \(G\) is a finitely generated, residually finite group, then \(\overline{G}\) is a compact metric space. For \(n\in\mathbb{N}\), we write \(\overline{G}^{*(n)}\) for the \(n\)-fold join of \(\overline{G}\) with itself. Moreover, if \(\sigma:G\to\text{Aut}(C(\overline{G}))\) is the natural action described earlier, we write \(\sigma^{(n)}:G\to\text{Aut}(C(\overline{G}^{*(n)}))\) for the induced action of \(G\) on \(C(\overline{G}^{*(n)})\).
**Theorem 2.10**.: _Let \(A\) be a C*-algebra, \(G\) be a finitely generated, residually finite group and \(\alpha:G\to\text{Aut}(A)\) be an action of \(G\) on \(A\). Then, \(\dim_{\text{Rok}}^{c}(\alpha)\leq d\) if and only if, for each separable, \(\alpha\)-invariant C*-subalgebra \(D\subset A\), there is a unital, \(G\)-equivariant \(*\)-homomorphism_
\[\varphi:(C(\overline{G}^{*(d+1)}),\sigma^{(d+1)})\to(F(D,A),\widetilde{\alpha }_{\infty}).\]
_Moreover, \(\dim(\overline{G}^{*(d+1)})\leq d\)._
Proof.: Suppose that \(\dim_{\text{Rok}}^{c}(\alpha)\leq d\) and let \(D\subset A\) be a separable, \(\alpha\)-invariant C*-subalgebra of \(A\). Let \(\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:(C(\overline{G}),\sigma)\to(F(D,A),\widetilde{\alpha}_{\infty})\) be \(G\)-equivariant, order zero c.c.p. maps with pairwise commuting ranges such that \(\sum_{\ell=0}^{d}\varphi_{\ell}(1_{C(\overline{G})})=1_{F(D,A)}\). We then follow the line of reasoning from [6, Lemma 4.3]. Let \(t\in C_{0}(0,1]\) denote the identity function. Then for each \(0\leq\ell\leq d\), there exists a \(*\)-homomorphism \(\rho_{\ell}:C_{0}(0,1]\otimes C(\overline{G})\to F(D,A)\) such that
\[\rho_{\ell}(t\otimes f)=\varphi_{\ell}(f)\]
for all \(f\in C(\overline{G})\). Let \(\mathcal{C}\overline{G}\) denote the cone on \(\overline{G}\), whose elements we denote by \([t,x]\) for \(t\in[0,1]\) and \(x\in\overline{G}\). Note that \(C(\mathcal{C}\overline{G})\) is the minimal unitization of \(C_{0}(0,1]\otimes C(\overline{G})\). In other words, \(C(\mathcal{C}\overline{G})=\{f:[0,1]\to C(\overline{G})\text{ continuous}:f(0)\in\mathbb{C}1_{C(\overline{G})}\}\). Then \(G\) acts on \(C(\mathcal{C}\overline{G})\) by \(\gamma:G\to\text{Aut}(C(\mathcal{C}\overline{G}))\) given by \(\gamma_{g}(f)([t,x]):=f([t,\beta_{g^{-1}}(x)])\). Each \(\rho_{\ell}\) is \(G\)-equivariant as well. Let
\[E:=\bigotimes_{\ell=0}^{d}C(\mathcal{C}\overline{G})\]
and let \(\delta:G\to\text{Aut}(E)\) be the action induced by \(\gamma\). Let \(\omega:E\to\mathbb{C}\) denote the \(*\)-homomorphism given on simple tensors by \(\omega(f_{0}\otimes f_{1}\otimes\ldots\otimes f_{d}):=\prod_{\ell=0}^{d}f_{\ell}(0)\), and let \(J:=\ker(\omega)\). Observe that
\[J=\{f:(\mathcal{C}\overline{G})^{d+1}\to\mathbb{C}\text{ continuous}:f([0,x_{0}],[0,x_{1}],\ldots,[0,x_{d}])=0\text{ for all }x_{i}\in\overline{G}\}\]
By [12, Lemma 5.2], there is a unique \(*\)-homomorphism \(\rho:J\to F(D,A)\) induced by the tuple \((\rho_{0},\ldots,\rho_{d})\). Also note that \(\delta_{g}(J)\subset J\) for each \(g\in G\), so \(\delta\) restricts to an action \(\delta:G\to\operatorname{Aut}(J)\) which is realized as
\[\delta_{g}(f)([s_{0},x_{0}],[s_{1},x_{1}],\ldots,[s_{d},x_{d}]):=f([s_{0},\beta _{g^{-1}}(x_{0})],\ldots,[s_{d},\beta_{g^{-1}}(x_{d})]).\]
Let \(e\in J\) denote the function \(e([s_{0},x_{0}],\ldots,[s_{d},x_{d}]):=\sum_{\ell=0}^{d}s_{\ell}\). Then \(\delta_{g}(e)=e\) for all \(g\in G\), and \(\rho(e)=\sum_{\ell=0}^{d}\varphi_{\ell}(1_{C(\overline{G})})=1_{F(D,A)}\). If \(I\) denotes the \(\delta\)-invariant ideal in \(J\) generated by the set \(\{ef-f:f\in J\}\) and \(C:=J/I\), then \(\rho\) induces a unital, \(G\)-equivariant \(*\)-homomorphism
\[\overline{\rho}:C\to(F(D,A),\widetilde{\alpha}_{\infty}).\]
Now observe that \(C=C(Y)\) where
\[Y=\left\{([s_{0},x_{0}],[s_{1},x_{1}],\ldots,[s_{d},x_{d}])\in(\mathcal{C} \overline{G})^{d+1}:\sum_{\ell=0}^{d}s_{\ell}=1\right\}\]
with the action of \(G\) on \(Y\) given by \(g\cdot([s_{0},x_{0}],\ldots,[s_{d},x_{d}]):=([s_{0},\beta_{g}(x_{0})],\ldots,[s _{d},\beta_{g}(x_{d})])\). Hence there is a \(G\)-equivariant homeomorphism \(Y\cong\overline{G}^{*(d+1)}\) and the result is proved.
The converse follows by simply reversing the above argument. Finally, observe that \(\dim(\overline{G})=0\) because \(\overline{G}\) is totally disconnected, so the inequality \(\dim(\overline{G}^{*(d+1)})\leq d\) follows from Proposition 1.4.
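In particular, when \(d=0\) the join \(\overline{G}^{*(1)}\) is just \(\overline{G}\), so \(\dim_{\text{Rok}}^{c}(\alpha)=0\) precisely when every separable, \(\alpha\)-invariant C*-subalgebra \(D\subset A\) admits a unital, \(G\)-equivariant \(*\)-homomorphism \((C(\overline{G}),\sigma)\to(F(D,A),\widetilde{\alpha}_{\infty})\). This is the natural analogue, for residually finite groups, of the Rokhlin property.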
## 3. Permanence Properties
We now wish to prove some permanence properties for actions with finite Rokhlin dimension along the lines of [6, Section 3] and [10]. The next lemma is stated as [10, Lemma 3.3]. We give a slightly more detailed proof here because we will seek to generalize it later in the section.
**Lemma 3.1**.: _Let \(A\) be a C*-algebra, \(G\) be an amenable group, and \(\beta:G\to\operatorname{Aut}(A)\) be an action of \(G\) on \(A\). Let \(D\subset A\) be a \(\beta\)-invariant hereditary subalgebra and let \(\alpha:G\to\operatorname{Aut}(D)\) denote the restriction of \(\beta\) to \(D\). Then for any \(M\subset\!\!\subset G,F\subset\!\!\subset D\) and any \(\eta>0\), there exists a positive contraction \(q\in D\) satisfying the following conditions:_
1. \(\|\alpha_{s}(q)-q\|<\eta\) _for all_ \(s\in M\)_._
2. \(\|qa-a\|<\eta,\|aq-a\|<\eta\) _and_ \(\|qa-aq\|<\eta\) _for all_ \(a\in F\)_._
Proof.: Since the sets in question are finite, we may assume that \(\|a\|\leq 1\) for all \(a\in F\). Let \((e_{\lambda})_{\lambda\in\Lambda}\subset D\) be an approximate unit in \(D\). Let \(K\subset G\) be a finite Følner set such that \(M\subset K\) and
\[\frac{|sK\Delta K|}{|K|}<\eta\]
for all \(s\in M\). Define
\[q_{\lambda}:=\frac{1}{|K|}\sum_{t\in K}\alpha_{t}(e_{\lambda}).\]
Observe that \(\alpha_{s}(q_{\lambda})=\frac{1}{|K|}\sum_{t\in sK}\alpha_{t}(e_{\lambda})\), so that \(\alpha_{s}(q_{\lambda})-q_{\lambda}\) only involves terms indexed by \(sK\triangle K\), each of norm at most one; hence \(\|\alpha_{s}(q_{\lambda})-q_{\lambda}\|\leq|sK\triangle K|/|K|<\eta\) for all \(s\in M\). Moreover, \((q_{\lambda})_{\lambda\in\Lambda}\) is an approximate unit for \(D\). Then \(q:=q_{\lambda}\) satisfies the required conditions for \(\lambda\) large enough.
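To illustrate the choice of Følner set: for \(G=\mathbb{Z}\) and \(M=\{\pm 1\}\), one may take \(K=\{-N,\ldots,N\}\) with \(2/(2N+1)<\eta\), since then \(|(s+K)\triangle K|/|K|=2/(2N+1)<\eta\) for \(s=\pm 1\), and \(q_{\lambda}\) is simply the average \(\frac{1}{2N+1}\sum_{t=-N}^{N}\alpha_{t}(e_{\lambda})\).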
Notice that this is the first time we have needed to assume that \(G\) is an amenable group. Indeed, amenability is a crucial assumption in many of the results to follow. We should mention that amenability is implicitly an assumption in many of the results of [30] as well: There, one would like groups to have box spaces with finite asymptotic dimension for a variety of results to be useful, and this condition automatically implies amenability (see [30, Remark 3.12]). We now prove a number
of permanence properties analogous to those proved by Gardella [6, Theorem 3.8] for compact groups.
**Theorem 3.2**.: _Let \(A\) be a C*-algebra, \(G\) be an amenable, residually finite group and \(\alpha:G\to\text{Aut}(A)\) be an action of \(G\) on \(A\)._
1. _Let_ \(B\) _be a C*-algebra and_ \(\beta:G\to\text{Aut}(B)\) _be an action. Let_ \(A\otimes B\) _be any C*-algebra completion of_ \(A\odot B\) _for which the tensor product action_ \(g\mapsto\alpha_{g}\otimes\beta_{g}\) _is defined and continuous. Then_ \[\dim_{\text{Rok}}(\alpha\otimes\beta) \leq\min\{\dim_{\text{Rok}}(\alpha),\dim_{\text{Rok}}(\beta)\}\text { and }\] \[\dim_{\text{Rok}}{}^{c}(\alpha\otimes\beta) \leq\min\{\dim_{\text{Rok}}{}^{c}(\alpha),\dim_{\text{Rok}}{}^{c}( \beta)\}\]
2. _Let_ \(I\) _be an_ \(\alpha\)_-invariant ideal in_ \(A\)_, and let_ \(\overline{\alpha}\in\text{Aut}(A/I)\) _be the induced action on the quotient. Then_ \[\dim_{\text{Rok}}(\overline{\alpha}) \leq\dim_{\text{Rok}}(\alpha)\text{ and }\] \[\dim_{\text{Rok}}{}^{c}(\overline{\alpha}) \leq\dim_{\text{Rok}}{}^{c}(\alpha)\]
3. _Let_ \(D\subset A\) _be an_ \(\alpha\)_-invariant hereditary subalgebra of_ \(A\) _and let_ \(\alpha|_{D}\colon\, G\to\text{Aut}(D)\) _be the induced action on_ \(D\)_. Then,_ \[\dim_{\text{Rok}}(\alpha|_{D}) \leq\dim_{\text{Rok}}(\alpha)\text{ and }\] \[\dim_{\text{Rok}}{}^{c}(\alpha|_{D}) \leq\dim_{\text{Rok}}{}^{c}(\alpha).\]
4. _Let_ \((A_{n},\iota_{n})\) _be a direct system of C*-algebras, and let_ \(\alpha^{(n)}:G\to\text{Aut}(A_{n})\) _be actions such that_ \(\iota_{n}\circ\alpha^{(n)}_{g}=\alpha^{(n+1)}_{g}\circ\iota_{n}\) _for all_ \(n\in\mathbb{N}\) _and_ \(g\in G\)_. Let_ \(\alpha=\lim_{n\to\infty}\alpha^{(n)}\) _denote the induced action on_ \(A:=\lim_{n\to\infty}(A_{n},\iota_{n})\)_. Then,_ \[\dim_{\text{Rok}}(\alpha) \leq\liminf\dim_{\text{Rok}}(\alpha^{(n)})\text{ and }\] \[\dim_{\text{Rok}}{}^{c}(\alpha) \leq\liminf\dim_{\text{Rok}}{}^{c}(\alpha^{(n)}).\]
Proof.: In each case, the argument for \(\dim_{\text{Rok}}{}^{c}(\cdot)\) is similar to that of \(\dim_{\text{Rok}}(\cdot)\), so we omit the former.
1. This proof is, in principle, identical to that of [6, Theorem 3.8], but we no longer need to assume that \(B\) is unital. We first assume that \(d:=\dim_{\text{Rok}}(\beta)<\infty\). Let \(F\subset\!\!\subset A\otimes B,M\subset\!\!\subset G,S\subset\!\!\subset C(\overline{G})\) and let \(\epsilon>0\) be given. We fix \(\eta>0\) to be chosen later. By approximating, we may assume that \(F\subset A\odot B\). By further decomposing, we may assume that \(F\) consists of elementary tensors, say \(F=\{a_{i}\otimes b_{i}:1\leq i\leq n\}\). Let \(F_{A}:=\{a_{i}:1\leq i\leq n\}\) and \(F_{B}:=\{b_{i}:1\leq i\leq n\}\). Then there are \((d+1)\) c.c.p. maps \[\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:C(\overline{G})\to B\] which form a \((d,F_{B},M,S,\eta)\)-Rokhlin system. Choose \(q\in A\) by Lemma 3.1 corresponding to the triple \((F_{A},M,\eta)\). Then define \(\theta:B\to A\otimes B\) by \(b\mapsto q\otimes b\), and let \(\psi_{\ell}:C(\overline{G})\to A\otimes B\) be the c.c.p. map given by \(\psi_{\ell}:=\theta\circ\varphi_{\ell}\) for \(0\leq\ell\leq d\). To verify that \(\{\psi_{0},\ldots,\psi_{d}\}\) forms a \((d,F,M,S,\epsilon)\)-Rokhlin system, we must check the four conditions of Proposition 2.5. Since the arguments are similar, we only verify condition (2): If \(g\in M,f\in S,x=a_{i}\otimes b_{i}\in F\) and \(0\leq\ell\leq d\), then \[(\alpha\otimes\beta)_{g}(\psi_{\ell}(f))x =\alpha_{g}(q)a_{i}\otimes\beta_{g}(\varphi_{\ell}(f))b_{i}\] \[\approx_{\eta}qa_{i}\otimes\beta_{g}(\varphi_{\ell}(f))b_{i}\] \[\approx_{\eta}qa_{i}\otimes\varphi_{\ell}(\sigma_{g}(f))b_{i}=\psi_{\ell}(\sigma_{g}(f))x.\]
We may verify the other conditions in a similar manner. One sees that if \(\eta:=\epsilon/(d+2)\), then the maps \(\{\psi_{0},\ldots,\psi_{d}\}\) form a \((d,F,M,S,\epsilon)\)-Rokhlin system. We conclude that \(\dim_{\mathrm{Rok}}(\alpha\otimes\beta)\leq\dim_{\mathrm{Rok}}(\beta)\). The argument is symmetric, so it follows that \[\dim_{\mathrm{Rok}}(\alpha\otimes\beta)\leq\min\{\dim_{\mathrm{Rok}}(\alpha),\dim_{\mathrm{Rok}}(\beta)\}.\]
2. The argument here is identical to that of [10, Proposition 3.4]. Since it is omitted there, we give a sketch here. Assume that \(d:=\dim_{\mathrm{Rok}}(\alpha)<\infty\). Let \(F\subset\!\!\subset A/I,S\subset\!\!\subset C(\overline{G}),M\subset\!\!\subset G\) and \(\epsilon>0\) be given. Let \(\widetilde{F}\subset\!\!\subset A\) be a finite set of preimages of elements in \(F\) under the quotient map \(Q:A\to A/I\). We may arrange it so that the norms of the lifts are the same as the norms of their images in \(F\). By Proposition 2.5, we may choose c.c.p. maps \[\psi_{0},\psi_{1},\ldots,\psi_{d}:C(\overline{G})\to A\] which form a \((d,\widetilde{F},M,S,\epsilon)\)-Rokhlin system. It is then clear that if \(\varphi_{\ell}:=Q\circ\psi_{\ell}\), then \(\{\varphi_{0},\varphi_{1},\ldots,\varphi_{d}\}\) forms a \((d,F,M,S,\epsilon)\)-Rokhlin system for the action \(\overline{\alpha}\). Hence, \(\dim_{\mathrm{Rok}}(\overline{\alpha})\leq d\).
3. The argument is identical to [6, Theorem 3.8] with \(\overline{G}\) playing the role of \(G\) (one also needs to appeal to Lemma 3.1 to obtain the positive element \(x\) in that proof).
4. Again, the argument is identical.
In [10, Theorem 2.10], the authors have shown that if \(\alpha:G\to\mathrm{Aut}(A)\) is an action of a finite group \(G\) on a C*-algebra \(A\) and if \(J\) is an \(\alpha\)-invariant ideal of \(A\), then \(\dim_{\mathrm{Rok}}{}^{c}(\alpha)\leq\dim_{\mathrm{Rok}}{}^{c}(\alpha|_{J})+ \dim_{\mathrm{Rok}}{}^{c}(\overline{\alpha})+1\) where \(\alpha|_{J}\colon G\to\mathrm{Aut}(J)\) and \(\overline{\alpha}:G\to\mathrm{Aut}(A/J)\) are the natural actions induced by \(\alpha\) on \(J\) and \(A/J\) respectively. We wish to prove that the same estimates also hold for actions of amenable, residually finite groups. The next lemma isolates one part of the proof, and may be thought of along the lines of Lemma 3.1.
**Lemma 3.3**.: _Let \(G\) be an amenable group, and let_
\[0\to(J,\alpha)\xrightarrow{\iota}(A,\beta)\xrightarrow{\pi}(B,\gamma)\to 0\]
_be a short exact sequence of \(G\)-algebras. If \(d\in\mathbb{N},F\subset\!\!\subset A,F^{\prime}\subset\!\!\subset A,M\subset\!\!\subset G\) and \(\eta>0\), then there exists \(q\in J\) such that \(0\leq q\leq 1\) and if \(\delta:=\eta/(d+6)\), then the following conditions are satisfied:_
1. _If_ \(b_{1},b_{2}\in F^{\prime}\) _and_ \(a\in F\) _are such that_ \(\pi(b_{1}b_{2}a)\approx_{\delta}0\)_, then_ \[(1-q)^{1/2}b_{1}(1-q)b_{2}(1-q)^{1/2}a\approx_{\eta}0\]
2. _If_ \(b_{0},b_{1},\ldots,b_{d}\in F^{\prime}\) _and_ \(a\in F\) _are such that_ \(\sum_{j=0}^{d}\pi(b_{j}a)\approx_{\delta}\pi(a)\)_, then_ \[(1-q)^{1/2}(b_{0}+b_{1}+\ldots+b_{d})(1-q)^{1/2}a\approx_{\eta}(1-q)a\]
3. _If_ \(b\in F^{\prime}\) _and_ \(a\in F\) _are such that_ \([\pi(b),\pi(a)]\approx_{\delta}0\)_, then_ \[[(1-q)^{1/2}b(1-q)^{1/2},a]\approx_{\eta}0\]
4. _If_ \(b_{1},b_{2}\in F^{\prime}\) _and_ \(a\in F\) _are such that_ \([\pi(b_{1}),\pi(b_{2})]\pi(a)\approx_{\delta}0\)_, then_ \[[(1-q)^{1/2}b_{1}(1-q)^{1/2},(1-q)^{1/2}b_{2}(1-q)^{1/2}]a\approx_{\eta}0\]
5. _If_ \(b_{1},b_{2}\in F^{\prime}\) _are such that_ \(\pi(b_{1})\approx_{\delta}\pi(b_{2})\)_, then_ \[(1-q)^{1/2}b_{1}(1-q)^{1/2}\approx_{\eta}(1-q)^{1/2}b_{2}(1-q)^{1/2}\]
6. _For every_ \(b\in F^{\prime}\)_,_ \[[b,q]\approx_{\eta}0,[b,q^{1/2}]\approx_{\eta}0\text{ and }[b,(1-q)^{1/2}]\approx_{\eta}0.\]
7. \(\alpha_{s}(q)\approx_{\eta}q\) _whenever_ \(s\in M\)_._
Proof.: Let \((e_{\lambda})_{\lambda\in\Lambda}\) be a quasi-central approximate unit for \(J\). By the argument in Lemma 3.1, we may assume that \(\|\alpha_{s}(e_{\lambda})-e_{\lambda}\|<\eta\) for all \(\lambda\in\Lambda\) and all \(s\in M\). Let \(\delta:=\eta/(d+6)\) and assume without loss of generality that \(\|b\|\leq 1\) for all \(b\in F^{\prime}\). Replacing \(A\) by its unitalization, we may assume that \(A\) is unital. Let \(f:[0,1]\to\mathbb{R}\) and \(g:[0,1]\to\mathbb{R}\) be the functions \(f(t):=t^{1/2}\) and \(g(t):=(1-t)^{1/2}\). By [11, Exercise 3.9.6], there is \(\rho>0\) such that \(0<\rho<\delta\) and for any \(b\in F^{\prime}\) and \(0\leq x\leq 1\),
\[\|f(x)b-bf(x)\|<\delta\text{ and }\|g(x)b-bg(x)\|<\delta\]
hold whenever \(\|xb-bx\|<\rho\). Replacing \((e_{\lambda})\) by a subnet, we assume that \(\|e_{\lambda}b-be_{\lambda}\|<\rho\) for all \(b\in F^{\prime}\) and all \(\lambda\in\Lambda\). We wish to choose \(q:=e_{\lambda}\) for \(\lambda\) large enough. To do this, we make repeated use of the fact that if \(c\in A\) is such that \(\|\pi(c)\|<\delta\), then there exists \(\lambda^{\prime}\in\Lambda\) such that \(\|c(1-e_{\lambda})\|<\delta\) for all \(\lambda\geq\lambda^{\prime}\).
1. If \(b_{1},b_{2}\in F^{\prime}\) and \(a\in F\) are such that \(\pi(b_{1}b_{2}a)\approx_{\delta}0\), then by the previous remark, there exists \(\lambda_{0}\in\Lambda\) such that if \(q:=e_{\lambda}\) for any \(\lambda\geq\lambda_{0}\), then \(b_{1}(1-q)b_{2}(1-q)a\approx_{\delta}0\). In that case, \[(1-q)^{1/2}b_{1}(1-q)b_{2}(1-q)^{1/2}a\approx_{2\delta}b_{1}(1-q)b_{2}(1-q)a \approx_{\delta}0.\]
2. If \(b_{0},b_{1},\ldots,b_{d}\in F^{\prime}\) and \(a\in F\) are such that \(\sum_{j=0}^{d}\pi(b_{j}a)\approx_{\delta}\pi(a)\), then there exists \(\lambda_{1}\in\Lambda\) such that if \(q:=e_{\lambda}\) for any \(\lambda\geq\lambda_{1}\), then \[\sum_{j=0}^{d}b_{j}(1-q)a\approx_{\delta}(1-q)a.\] In that case, \[(1-q)^{1/2}\left(\sum_{j=0}^{d}b_{j}\right)(1-q)^{1/2}a\approx_{(d+1)\delta} \left(\sum_{j=0}^{d}b_{j}\right)(1-q)a\approx_{\delta}(1-q)a\]
3. If \(b\in F^{\prime}\) and \(a\in F\) are such that \([\pi(b),\pi(a)]\approx_{\delta}0\), then there exists \(\lambda_{2}\in\Lambda\) such that if \(q:=e_{\lambda}\) for any \(\lambda\geq\lambda_{2}\), then \([b(1-q),a]\approx_{\delta}0\). Hence, \[[(1-q)^{1/2}b(1-q)^{1/2},a]\approx_{2\delta}[b(1-q),a]\approx_{\delta}0.\]
4. If \(b_{1},b_{2}\in F^{\prime}\) and \(a\in F\) are such that \([\pi(b_{1}),\pi(b_{2})]\pi(a)\approx_{\delta}0\), then there exists \(\lambda_{3}\in\Lambda\) such that if \(q:=e_{\lambda}\) for any \(\lambda\geq\lambda_{3}\), then \([b_{1}(1-q),b_{2}(1-q)]a\approx_{\delta}0\). Hence, \[[(1-q)^{1/2}b_{1}(1-q)^{1/2},(1-q)^{1/2}b_{2}(1-q)^{1/2}]a\approx_{4\delta}[b_{1}(1-q),b_{2}(1-q)]a\approx_{\delta}0.\]
5. If \(b_{1},b_{2}\in F^{\prime}\) are such that \(\pi(b_{1})\approx_{\delta}\pi(b_{2})\), then there exists \(\lambda_{4}\in\Lambda\) such that if \(q:=e_{\lambda}\) for any \(\lambda\geq\lambda_{4}\), then \(b_{1}(1-q)\approx_{\delta}b_{2}(1-q)\). Thus, \[(1-q)^{1/2}b_{1}(1-q)^{1/2}\approx_{\delta}b_{1}(1-q)\approx_{\delta}b_{2}(1- q)\approx_{\delta}(1-q)^{1/2}b_{2}(1-q)^{1/2}.\]
6. If \(b\in F^{\prime}\), then if \(q:=e_{\lambda}\) for any \(\lambda\in\Lambda\), we have \([b,e_{\lambda}]\approx_{\delta}0\) by construction. Moreover, \([b,e_{\lambda}^{1/2}]\approx_{\delta}0\) and \([b,(1-e_{\lambda})^{1/2}]\approx_{\delta}0\) hold because of our choice of \(\rho\).
7. Once again, if \(q:=e_{\lambda}\) for any \(\lambda\in\Lambda\), then we have \(\alpha_{s}(q)\approx_{\eta}q\) for any \(s\in M\).
Therefore, if we choose \(\lambda\in\Lambda\) such that \(\lambda\geq\lambda_{j}\) for all \(0\leq j\leq 4\), then \(q:=e_{\lambda}\) satisfies the required conditions.
**Theorem 3.4**.: _Let \(A\) be a C*-algebra and \(G\) be an amenable, finitely generated, residually finite group. Let_
\[0\to(J,\alpha)\xrightarrow{\iota}(A,\beta)\xrightarrow{\pi}(B,\gamma)\to 0\]
_be a short exact sequence of \(G\)-algebras. Then_
\[\dim_{Rok}(\beta) \leq\dim_{Rok}(\alpha)+\dim_{Rok}(\gamma)+1,\text{ and}\] \[\dim_{Rok}^{c}(\beta) \leq\dim_{Rok}^{c}(\alpha)+\dim_{Rok}^{c}(\gamma)+1\]
Proof.: We prove the second inequality as the proof of the first inequality is similar (indeed, it is subsumed in the former). Assume that \(d_{J}:=\dim_{\operatorname{Rok}}^{c}(\alpha)<\infty\) and \(d_{B}:=\dim_{\operatorname{Rok}}^{c}(\gamma)<\infty\). Fix finite sets \(F\subset\!\!\subset A,S\subset\!\!\subset C(\overline{G}),M\subset\!\!\subset G\) and \(\epsilon>0\). Moreover, we assume without loss of generality that \(e\in M\), \(\|a\|\leq 1\) for all \(a\in F\), \(\|f\|\leq 1\) for all \(f\in S\) and that \(1_{C(\overline{G})}\in S\). Let \(\eta>0\) be a parameter to be chosen later.
By hypothesis, there exist \((d_{B}+1)\) c.c.p. maps
\[\overline{\psi}_{0},\overline{\psi}_{1},\ldots,\overline{\psi}_{d_{B}}:C( \overline{G})\to B\]
that form a \((d_{B},\pi(F),M,S,\eta/(d_{B}+6))\)-commuting Rokhlin system. Since \(G\) is finitely generated, \(C(\overline{G})\) is both separable and nuclear. By the Choi-Effros theorem, there exist c.c.p. maps \(\widetilde{\psi}_{0},\widetilde{\psi}_{1},\ldots,\widetilde{\psi}_{d_{B}}:C( \overline{G})\to A\) such that \(\pi\circ\widetilde{\psi}_{k}=\overline{\psi}_{k}\) for all \(0\leq k\leq d_{B}\). Let
\[F^{\prime}:=F\cup\{\widetilde{\psi}_{k}(\sigma_{g}(f)):f\in S,g\in M,0\leq k \leq d_{B}\}\]
and choose an element \(q\in J\) satisfying the conditions of Lemma 3.3 for the tuple \((d_{B},F,F^{\prime},M,\eta)\). Define \(\psi_{0},\psi_{1},\ldots,\psi_{d_{B}}:C(\overline{G})\to A\) by
\[\psi_{k}(f):=(1-q)^{1/2}\widetilde{\psi}_{k}(f)(1-q)^{1/2}\]
Then each \(\psi_{k}\) is a c.c.p. map. Moreover, by Lemma 3.3, we see that
1. \(\psi_{k}(f_{1})\psi_{k}(f_{2})a\approx_{\eta}0\) for all \(a\in F\) and \(f_{1},f_{2}\in S\) such that \(f_{1}\perp f_{2}\) and for all \(0\leq k\leq d_{B}\).
2. \(\sum_{k=0}^{d_{B}}\psi_{k}(1_{C(\overline{G})})a\approx_{\eta}(1-q)a\) for any \(a\in F\).
3. \([\psi_{k}(f),a]\approx_{\eta}0\) for all \(a\in F,f\in S\) and \(0\leq k\leq d_{B}\).
4. \([\psi_{j}(f_{1}),\psi_{k}(f_{2})]a\approx_{\eta}0\) for any \(f_{1},f_{2}\in S,0\leq j,k\leq d_{B}\) and any \(a\in F\).
5. If \(f\in S,a\in F\) and \(g\in M\) and \(0\leq k\leq d_{B}\), then \[\beta_{g}(\psi_{k}(f))a \approx_{\eta}\beta_{g}(\widetilde{\psi}_{k}(f)(1-q))a\] \[\approx_{\eta}\widetilde{\psi}_{k}(\sigma_{g}(f))(1-q)a\] \[\approx_{\eta}(1-q)^{1/2}\widetilde{\psi}_{k}(\sigma_{g}(f))(1-q) ^{1/2}a\] \[=\psi_{k}(\sigma_{g}(f))a\]
Now set
\[F_{J}:=\{q,q^{1/2}\}\cup\{qa:a\in F\}\cup\{\widetilde{\psi}_{k}(f)q:f\in S,0 \leq k\leq d_{B}\}\subset\!\!\subset J,\]
and \(S_{J}:=\{\sigma_{g}(f):f\in S,g\in M\}\subset\!\!\subset C(\overline{G})\). Then, there exist \((d_{J}+1)\) c.c.p. maps
\[\widetilde{\varphi}_{0},\widetilde{\varphi}_{1},\ldots,\widetilde{\varphi}_{d_ {J}}:C(\overline{G})\to J\]
which form a \((d_{J},F_{J},M,S_{J},\eta)\)-commuting Rokhlin system. Define \(\varphi_{0},\varphi_{1},\ldots,\varphi_{d_{J}}:C(\overline{G})\to A\) by
\[\varphi_{j}(f):=q^{1/2}\widetilde{\varphi}_{j}(f)q^{1/2}.\]
Then each \(\varphi_{j}\) is a c.c.p. map. Moreover, the following hold:
1. If \(f_{1},f_{2}\in S\) are such that \(f_{1}\perp f_{2}\) and \(a\in F\), then \[\varphi_{j}(f_{1})\varphi_{j}(f_{2})a\approx_{\eta}q^{3/2}\widetilde{\varphi}_{ j}(f_{1})\widetilde{\varphi}_{j}(f_{2})q^{1/2}a\approx_{\eta}0.\]
2. If \(a\in F\), then \[\sum_{j=0}^{d_{J}}\varphi_{j}(1_{C(\overline{G})})a\approx_{(d_{J}+1)\eta}\sum _{j=0}^{d_{J}}\widetilde{\varphi}_{j}(1_{C(\overline{G})})qa\approx_{\eta}qa.\]
3. If \(f\in S\) and \(a\in F\), then \[[\varphi_{j}(f),a]\approx_{2\eta}\widetilde{\varphi}_{j}(f)qa-aq\widetilde{ \varphi}_{j}(f)\approx_{\eta}[\widetilde{\varphi}_{j}(f),qa]\approx_{\eta}0.\]
4. If \(g\in M,f\in S\) and \(a\in F\), then \[\alpha_{g}(\varphi_{j}(f))a \approx_{\eta}\alpha_{g}(\widetilde{\varphi}_{j}(f)q)a\] \[\approx_{\eta}\alpha_{g}(\widetilde{\varphi}_{j}(f))qa\] \[\approx_{\eta}\widetilde{\varphi}_{j}(\sigma_{g}(f))qa\] \[\approx_{\eta}q^{1/2}\widetilde{\varphi}_{j}(\sigma_{g}(f))q^{1/ 2}a=\varphi_{j}(\sigma_{g}(f))a.\]
5. If \(f_{1},f_{2}\in S\) and \(0\leq k,j\leq d_{J}\) and \(a\in F\), then \[[\varphi_{j}(f_{1}),\varphi_{k}(f_{2})]a\approx_{6\eta}[\widetilde{\varphi}_{ j}(f_{1}),\widetilde{\varphi}_{k}(f_{2})]q^{2}a\approx_{\eta}0\]
6. Finally, if \(f_{1},f_{2}\in S\) and \(0\leq j\leq d_{J},0\leq k\leq d_{B}\) and \(a\in F\), then \[[\varphi_{j}(f_{1}),\psi_{k}(f_{2})]a \approx_{6\eta}\widetilde{\varphi}_{j}(f_{1})\widetilde{\psi}_{k} (f_{2})q(1-q)a-\widetilde{\psi}_{k}(f_{2})\widetilde{\varphi}_{j}(f_{1})q(1-q)a\] \[\approx_{\eta}\widetilde{\varphi}_{j}(f_{1})\widetilde{\psi}_{k} (f_{2})q(1-q)a-\widetilde{\psi}_{k}(f_{2})q\widetilde{\varphi}_{j}(f_{1})(1-q)a\] \[=[\widetilde{\varphi}_{j}(f_{1}),\widetilde{\psi}_{k}(f_{2})q](1 -q)a\] \[\approx_{\eta}0.\]
Thus, if \(a\in F\), then
\[\left(\sum_{k=0}^{d_{B}}\psi_{k}(1_{C(\overline{G})})+\sum_{j=0}^{d_{J}} \varphi_{j}(1_{C(\overline{G})})\right)a\approx_{(d_{J}+2)\eta}a.\]
Therefore, if \(\eta>0\) is chosen as \(\eta:=\epsilon/(d_{J}+8)\), then the system \(\{\psi_{0},\psi_{1},\ldots,\psi_{d_{B}},\varphi_{0},\varphi_{1},\ldots,\varphi_{d_{J}}\}\) forms a \((d_{B}+d_{J}+1,F,M,S,\epsilon)\)-commuting Rokhlin system for the action \(\beta\). By Proposition 2.8, we conclude that \(\dim_{\text{\rm{Rok}}}{}^{c}(\beta)\leq d_{B}+d_{J}+1\).
In the above theorem, we assumed that \(G\) was finitely generated to ensure that \(C(\overline{G})\) is separable, so that the Choi-Effros theorem may be used. However, if we use Lemma 2.3 (or its analogue for commuting towers) instead of Proposition 2.8, we can avoid this requirement. We have presented this proof here to highlight the role of \(\overline{G}\), and also because the other proof is notationally even more cumbersome. Moreover, this proof also works mutatis mutandis for second countable compact groups, thus answering a question of Gardella [6, Question 5.1].
From the arguments given in both Theorem 3.2 and Theorem 3.4, it is evident that many results that are true for actions of compact groups are also true for discrete, residually finite groups. Indeed, for a discrete group \(G\), the profinite completion \(\overline{G}\) plays the same role that the group does in the compact case, thanks to Proposition 2.5.
However, we now arrive at an apparent point of departure between the two theories. In [6, Theorem 3.9], Gardella has shown that for actions of finite dimensional, compact groups, the restriction of an action with finite Rokhlin dimension to a subgroup also has finite Rokhlin dimension. In trying to extend this result to discrete groups, we arrived at an impasse. We were unable to prove the theorem in full generality, but we were able to prove it for a large class of groups (including all finitely generated, virtually abelian groups). It is to these ideas that we now turn and we begin with a lemma that shows that subgroups of finite index play a crucial role in this context.
**Lemma 3.5**.: _Let \(G\) be a residually finite group and \(H\leq_{\text{fin}}G\). Then there is a finite set \(Y\) and an \(H\)-equivariant homeomorphism_
\[\theta:Y\times\overline{H}\to\overline{G}\]
_where \(H\) acts on \(Y\) trivially and by left-translation on both \(\overline{H}\) and \(\overline{G}\)._
Proof.: Let \(\mathcal{J}_{G}:=\{K\lhd_{fin}G:K\subset H\}\). We claim that \(\mathcal{J}_{G}\) is cofinal in \(\mathcal{I}_{G}\). To see this, choose \(L\in\mathcal{I}_{G}\). Since \(H\) has finite index in \(G\), there is a subgroup \(L^{\prime}\subset H\) such that \(L^{\prime}\lhd_{fin}G\) (one may take \(L^{\prime}\) to be the normal core of \(H\) in \(G\)). Then \(K:=L\cap L^{\prime}\in\mathcal{J}_{G}\) and \(K\subset L\). Therefore \(\mathcal{J}_{G}\) is cofinal in \(\mathcal{I}_{G}\) and we conclude from [28, Lemma 1.1.9] that
\[\overline{G}\cong\varprojlim_{\overline{\mathcal{J}_{G}}}G/K\text{ and } \overline{H}\cong\varprojlim_{\overline{\mathcal{J}_{G}}}H/K.\]
We need one more important ingredient: By a theorem of Ore [21, Theorem 4.3], there is a finite set \(Y\) of left-coset representatives of \(H\) in \(G\) such that \(Y\) is also a set of right-coset representatives. In other words,
\[G=\bigsqcup_{y\in Y}yH=\bigsqcup_{y\in Y}Hy.\]
We fix one such set \(Y\) and define \(\theta:Y\times\overline{H}\to\overline{G}\) by
\[\theta(w,(h_{K}K)_{K\in\mathcal{J}_{G}}):=(h_{K}wK)_{K\in\mathcal{J}_{G}}\]
To see that \(\theta\) is well-defined, let \((w,(h_{K}K))\in Y\times\overline{H}\) and fix \(K,L\in\mathcal{J}_{G}\) with \(K\subset L\). Then \(h_{K}wL=h_{K}Lw=h_{L}Lw=h_{L}wL\). Therefore \(\theta(w,(h_{K}K))\in\overline{G}\). Also, if \((w,(h_{K}K))=(v,(g_{K}K))\in Y\times\overline{H}\) are equal, then \(w=v\) and \(h_{K}K=g_{K}K\) for all \(K\in\mathcal{J}_{G}\). Hence,
\[h_{K}wK=h_{K}Kw=g_{K}Kw=g_{K}wK=g_{K}vK.\]
Therefore, \(\theta(w,(h_{K}K))=\theta(v,(g_{K}K))\). We claim that \(\theta\) is the homeomorphism we are looking for.
1. \(\theta\) is continuous: Let \((w,(h_{K}K))\in Y\times\overline{H}\) and let \(\overline{g}=(g_{K}K)=\theta(w,(h_{K}K))\in\overline{G}\). Let \(U\) be an open set in \(\overline{G}\) containing \(\overline{g}\). Then there is a finite set \(F\subset\mathcal{J}_{G}\) such that \[V=\bigcap_{L\in F}(\pi_{L}^{G})^{-1}(\{g_{L}L\})=\bigcap_{L\in F}(\pi_{L}^{G})^{-1}(\{h_{L}wL\})\subset U.\] (Note that we write \(\pi_{L}^{G}:\overline{G}\to G/L\) and \(\pi_{L}^{H}:\overline{H}\to H/L\) for the natural maps.) Let \((v,(t_{K}K))\in\theta^{-1}(V)\) and fix \(L\in F\). Then, \(t_{L}vL=h_{L}wL\). Since \(L\lhd G\), this implies that \(t_{L}Lv=h_{L}Lw\). Since \(L\subset H\) and \(t_{L},h_{L}\in H\), it follows that \(Hv=Hw\). Since \(Y\) is a set of right coset representatives of \(H\), we conclude that \(v=w\). Hence, \[t_{L}wL=h_{L}wL.\] Since \(L\lhd G\), \(h_{L}L=t_{L}L\). This is true for each \(L\in F\), so \((t_{K}K)\in W\) where \[W=\bigcap_{L\in F}(\pi_{L}^{H})^{-1}(\{h_{L}L\}).\] Conversely, if \((t_{K}K)\in W\), then for each \(L\in F\) we have \(t_{L}wL=t_{L}Lw=h_{L}Lw=h_{L}wL\), so \(\theta(w,(t_{K}K))\in V\). Therefore, \(\{w\}\times W\) is an open set in \(Y\times\overline{H}\) that contains \((w,(h_{K}K))\) and is contained in \(\theta^{-1}(V)\subset\theta^{-1}(U)\). This is true for any \((w,(h_{K}K))\in Y\times\overline{H}\), so \(\theta\) is continuous.
2. \(\theta\) is injective: If \(\theta(w,(h_{K}K))=\theta(v,(t_{K}K))\), then for any \(L\in\mathcal{J}_{G}\), \(h_{L}wL=t_{L}vL\). As above, this implies that \(Hw=Hv\). Since \(Y\) is a set of right coset representatives, it follows that \(w=v\). Then, \[h_{L}wL=t_{L}wL\] holds for all \(L\in\mathcal{J}_{G}\). Since each \(L\in\mathcal{J}_{G}\) is normal in \(G\), we conclude that \(h_{L}L=t_{L}L\), so that \((h_{K}K)=(t_{K}K)\).
3. \(\theta\) is surjective: If \(\overline{g}=(g_{L}L)\in\overline{G}\), then for each \(L\in\mathcal{J}_{G}\), there exists \(w_{L}\in Y\) and \(h_{L}\in H\) such that \(g_{L}=w_{L}h_{L}\). Since \(\overline{g}\in\overline{G}\), it follows that whenever \(K,L\in\mathcal{J}_{G}\) with \(K\subset L\), we have \[w_{K}h_{K}L=w_{L}h_{L}L.\]
Once again, this implies that \(w_{K}H=w_{L}H\). Since \(Y\) is a set of left coset representatives of \(H\) in \(G\), we conclude that \(w_{K}=w_{L}\) whenever \(K\subset L\). Now suppose \(L_{1},L_{2}\in\mathcal{J}_{G}\), then there exists \(K\in\mathcal{J}_{G}\) such that \(K\subset L_{1}\cap L_{2}\). Then it follows that
\[w_{L_{1}}=w_{K}=w_{L_{2}}.\]
Therefore, there is a common value \(w:=w_{L}\) for all \(L\in\mathcal{J}_{G}\). Now consider \(\overline{h}=(h_{L}L)\in\prod_{L\in\mathcal{J}_{G}}H/L\). The fact that \(\overline{g}\in\overline{G}\) implies that whenever \(K,L\in\mathcal{J}_{G}\) with \(K\subset L\), one has
\[wh_{K}L=wh_{L}L.\]
Once again, since \(L\lhd G\), this implies that \(h_{K}L=h_{L}L\). We conclude that \(\overline{h}\in\overline{H}\) and thus \(\overline{g}=\theta(w,\overline{h})\).
Since \(Y\times\overline{H}\) is compact and \(\overline{G}\) is Hausdorff, \(\theta\) is a homeomorphism. That \(\theta\) is \(H\)-equivariant is easily verified. This completes the proof.
**Definition 3.6**.: Let \(G\) be a discrete group and \(H\) be a subgroup of \(G\). We say that \(H\) is a retract of \(G\) if there is a group homomorphism \(\rho:G\to H\) which restricts to the identity map on \(H\). We say that \(H\) is a virtual retract of \(G\) (denoted by \(H\leq_{vr}G\)) if there exists \(K\leq_{fin}G\) such that \(H\) is a retract of \(K\).
**Theorem 3.7**.: _Let \(G\) be a residually finite group and \(H\) be a subgroup of \(G\). Let \(\alpha:G\to\text{Aut}(A)\) be an action of \(G\) on a C*-algebra \(A\) and \(\alpha_{H}:H\to\text{Aut}(A)\) be the restricted action of \(H\) on \(A\). If \(H\leq_{vr}G\), then \(\dim_{\text{Rok}}(\alpha_{H})\leq\dim_{\text{Rok}}(\alpha)\) and \(\dim_{\text{Rok}}^{c}(\alpha_{H})\leq\dim_{\text{Rok}}^{c}(\alpha)\)._
Proof.: We prove the first inequality since the argument for the second is similar. Assume without loss of generality that \(d:=\dim_{\text{Rok}}(\alpha)<\infty\). Choose \(K\leq_{fin}G\) containing \(H\) and a homomorphism \(\rho:K\to H\) that restricts to the identity on \(H\). By Lemma 3.5, there is a finite set \(Y\) and a \(K\)-equivariant homeomorphism \(\theta:\overline{G}\to Y\times\overline{K}\). Projecting onto the second component gives us a continuous, surjective, \(K\)-equivariant map \(\mu:\overline{G}\to\overline{K}\). Moreover, by the universal property of the profinite completion (part (5) of Proposition 1.1), there is a continuous group homomorphism \(\overline{\rho}:\overline{K}\to\overline{H}\) such that \(\overline{\rho}|_{K}=\rho\). In particular, \(\overline{\rho}\) respects the left-translation action of \(H\) on both \(\overline{K}\) and \(\overline{H}\). Therefore, \(p:=\overline{\rho}\circ\mu:\overline{G}\to\overline{H}\) is a continuous \(H\)-equivariant map, which induces a unital \(*\)-homomorphism
\[p^{*}:C(\overline{H})\to C(\overline{G})\]
that is equivariant with respect to action of \(H\) on both algebras. To prove that \(\dim_{\text{Rok}}(\alpha_{H})\leq d\), choose finite sets \(F\subset\!\!\subset A,S\subset\!\!\subset C(\overline{H}),M\subset\!\!\subset H\) and \(\epsilon>0\). By hypothesis, there exist \((d+1)\) c.c.p. maps
\[\psi_{0},\psi_{1},\ldots,\psi_{d}:C(\overline{G})\to A\]
which form a \((d,F,M,p^{*}(S),\epsilon)\)-Rokhlin system (by Proposition 2.5). Hence, the system \(\{\psi_{\ell}\circ p^{*}:0\leq\ell\leq d\}\) forms a \((d,F,M,S,\epsilon)\)-Rokhlin system for the action \(\alpha_{H}\). We conclude that \(\dim_{\text{Rok}}(\alpha_{H})\leq d\).
To understand how Theorem3.7 may be used, we describe a few notions from geometric group theory. These have been explored in [18] and the reader will find a wealth of information concerning these ideas in that article.
**Definition 3.8**.: Let \(G\) be a group. We say that \(G\) has property (VRC) (for 'virtual retractions onto cyclic subgroups') if every cyclic subgroup is a virtual retract of \(G\). We say that \(G\) has property (LR) (for 'local retractions') if every finitely generated subgroup is a virtual retract of \(G\).
**Remark 3.9**.: We list a number of facts concerning these properties. Once again, the reader is referred to [18] for proofs of all of these.
1. If \(G\) satisfies (VRC), then it is residually finite.
2. If \(G\) is a residually finite group and \(H<G\) is a finite subgroup, then \(H\leq_{vr}G\).
3. If \(K,H\) are subgroups of \(G\) such that \(K\leq_{vr}H\) and \(H\leq_{vr}G\), then \(K\leq_{vr}G\).
4. Clearly, if \(G\) satisfies (LR), then it satisfies (VRC), but the converse is false. However, if \(G\) satisfies (VRC), then every finitely generated virtually abelian subgroup is a virtual retract of \(G\).
5. If \(G\) is a finitely generated, virtually abelian group, then every subgroup \(H\) of \(G\) is a virtual retract of \(G\).
6. Free groups satisfy (LR) and virtually free groups satisfy (VRC).
7. If \(K\) is a finite index subgroup of \(G\) and \(K\) satisfies (VRC), then \(G\) satisfies (VRC).
8. The Heisenberg group \(H\) (the group of all \(3\times 3\) unitriangular matrices with integer coefficients) does not satisfy (VRC).
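To make these notions concrete: the subgroup \(\langle(1,1)\rangle\leq\mathbb{Z}^{2}\) is a retract via the homomorphism \(\rho(a,b):=(a,a)\), which restricts to the identity on \(\langle(1,1)\rangle\). On the other hand, if \(x,y\) are the standard generators of the Heisenberg group and \(z:=[x,y]\) generates its center, then \(\langle z\rangle\) is not a virtual retract: any finite index subgroup \(K\) containing \(\langle z\rangle\) contains \(x^{a}\) and \(y^{b}\) for some \(a,b\geq 1\), and a retraction \(\rho:K\to\langle z\rangle\) would force \(z^{ab}=\rho(z^{ab})=\rho([x^{a},y^{b}])=[\rho(x^{a}),\rho(y^{b})]=e\), a contradiction; this is one way to see item (8).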
The following consequence of part (5) of Remark 3.9 and Theorem 3.7 bears repeating.
**Corollary 3.10**.: _Let \(G\) be a finitely generated, virtually abelian group and \(H\) be a subgroup of \(G\). Let \(\alpha:G\to\text{Aut}(A)\) be an action of \(G\) on a C*-algebra \(A\) and \(\alpha_{H}:H\to\text{Aut}(A)\) be the restricted action of \(H\) on \(A\). Then \(\dim_{\text{Rok}}(\alpha_{H})\leq\dim_{\text{Rok}}(\alpha)\) and \(\dim_{\text{Rok}}{}^{c}(\alpha_{H})\leq\dim_{\text{Rok}}{}^{c}(\alpha)\)._
## 4. Actions on \(C_{0}(X)\)-algebras and Commutative C*-algebras
Given a group \(G\) and a locally compact metric space \(X\), every action \(\widetilde{\alpha}:G\curvearrowright X\) of \(G\) on \(X\) induces an action \(\alpha:G\to\text{Aut}(C_{0}(X))\) and vice-versa. If \(G\) is compact and finite dimensional, then \(\widetilde{\alpha}\) is free if and only if \(\alpha\) has finite Rokhlin dimension (see [6, Theorem 4.1] and [31, Lemma 4.1]). However, if \(G\) is a discrete group, then the analogous statement is not known in full generality. What we do know is the following result, due to Szabo, Wu and Zacharias. Note that the version stated here (for locally compact spaces) is more general than the one proved in [30]. However, Szabo has proved this more general statement in his PhD thesis (see [30, Remark 8.7]).
**Theorem 4.1**.: _[_30_, Corollary 8.5]_ _Let \(G\) be an infinite, finitely generated, nilpotent group and \(X\) be a locally compact metric space of finite covering dimension. If \(\widetilde{\alpha}:G\curvearrowright X\) is a free action of \(G\) on \(X\), then the induced action \(\alpha:G\to\text{Aut}(C_{0}(X))\) has finite Rokhlin dimension._
Our goal in this section is to investigate this question further, and the first observation is the following.
**Proposition 4.2**.: _Let \(X\) be a locally compact Hausdorff space and \(G\) be a residually finite group satisfying the (VRC) property. Let \(\widetilde{\alpha}:G\curvearrowright X\) be an action such that the induced action \(\alpha:G\to\text{Aut}(C_{0}(X))\) has finite Rokhlin dimension. Then \(\widetilde{\alpha}\) is free._
Proof.: With Theorem 3.7 in hand, this proof essentially reduces to that of [6, Theorem 4.1]. Let \(g\in G\) and \(x\in X\) be such that \(\widetilde{\alpha}_{g}(x)=x\). Let \(H\) be the cyclic subgroup generated by \(g\), then by Theorem 3.7, \(\alpha_{H}:H\to\text{Aut}(C_{0}(X))\) has finite Rokhlin dimension. Since \(\{x\}\) is closed and invariant under this action, it follows from Theorem 3.2 that the induced action \(\overline{\alpha_{H}}:H\to\text{Aut}(C(\{x\}))\) has finite Rokhlin dimension. This action is trivial, so it follows that \(H\) must be trivial. Hence \(g=e\), proving that \(\widetilde{\alpha}\) must be free.
To understand the extent to which the converse of Proposition 4.2 holds for arbitrary residually finite groups (as against just nilpotent ones), we first study actions on \(C_{0}(X)\)-algebras. Our goal is to prove an estimate for the Rokhlin dimension of certain actions on a \(C_{0}(X)\)-algebra analogous to that of [31, Theorem 2.3].
**Definition 4.3**.: Let \(X\) be a locally compact Hausdorff space. A C*-algebra \(A\) is said to be a \(C_{0}(X)\)-algebra if there is a non-degenerate \(*\)-homomorphism \(\Theta:C_{0}(X)\to Z(M(A))\), where \(Z(M(A))\) denotes the center of the multiplier algebra of \(A\).
For a function \(f\in C_{0}(X)\) and \(a\in A\), we will write \(fa:=\Theta(f)(a)\). If \(Y\subset X\) is a closed subspace, let \(C_{0}(X,Y)\) denote the ideal of functions that vanish on \(Y\). Then \(C_{0}(X,Y)A\) is a closed ideal in \(A\). We write \(A(Y):=A/C_{0}(X,Y)A\) for the corresponding quotient and \(\pi_{Y}:A\to A(Y)\) for the quotient map. If \(Y=\{x\}\) is a singleton set, then the algebra \(A(x):=A(\{x\})\) is called the fiber of \(A\) at \(x\), and we write \(\pi_{x}:A\to A(x)\) for the corresponding quotient map. If \(a\in A\), we simply write \(a(x):=\pi_{x}(a)\in A(x)\). For each \(a\in A\), we have a map \(\Gamma_{a}:X\to\mathbb{R}\) given by \(x\mapsto\|a(x)\|\). This map is upper semicontinuous (by [26, Proposition 1.2]), a fact we will use crucially in the next proof.
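The motivating example is \(A=C_{0}(X)\) itself, with \(\Theta\) the canonical inclusion \(C_{0}(X)\hookrightarrow C_{b}(X)\cong Z(M(A))\): in this case \(A(Y)\cong C_{0}(Y)\) for closed \(Y\subset X\), each fiber \(A(x)\cong\mathbb{C}\), the map \(a\mapsto a(x)\) is evaluation at \(x\), and \(\Gamma_{a}=|a|\) is actually continuous; upper semicontinuity is what survives for general \(C_{0}(X)\)-algebras.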
Given a \(C_{0}(X)\)-algebra \(A\), an automorphism \(\beta\in\operatorname{Aut}(A)\) is said to be \(C_{0}(X)\)-linear if \(\beta(fa)=f\beta(a)\) for all \(f\in C_{0}(X)\) and all \(a\in A\). We write \(\operatorname{Aut}_{X}(A)\) for the subgroup of \(\operatorname{Aut}(A)\) consisting of all \(C_{0}(X)\)-linear automorphisms. Given an automorphism \(\beta\in\operatorname{Aut}_{X}(A)\) and a closed subset \(Y\subset X\), there is a natural automorphism \(\beta_{Y}\in\operatorname{Aut}(A(Y))\) such that \(\beta_{Y}\circ\pi_{Y}=\pi_{Y}\circ\beta\). In particular, if \(\alpha:G\to\operatorname{Aut}_{X}(A)\) is an action of a discrete group \(G\) on \(A\) by \(C_{0}(X)\)-linear automorphisms, then for each closed subspace \(Y\subset X\), there is an action \(\alpha_{Y}:G\to\operatorname{Aut}(A(Y))\) such that the quotient map \(\pi_{Y}:A\to A(Y)\) is \(G\)-equivariant. Once again, if \(Y=\{x\}\) is a singleton set, we write \(\alpha_{x}\) for the action on \(A(x)\).
We begin with a variant of [8, Theorem 4.26] that we need below.
**Lemma 4.4**.: _Let \(A\) be a separable, nuclear C*-algebra and \(G\) be an amenable group. Let \(\alpha:G\to\operatorname{Aut}(A)\) and \(\beta:G\to\operatorname{Aut}(B)\) be two actions, and \(\pi:A\to B\) be a surjective \(G\)-equivariant \(*\)-homomorphism. Then for each \(F\subset\!\!\subset B,M\subset\!\!\subset G\) and each \(\epsilon>0\), there is a c.c.p. section \(\theta:B\to A\) such that_
\[\|\alpha_{t}(\theta(b))-\theta(\beta_{t}(b))\|<\epsilon\]
_for all \(b\in F\) and \(t\in M\)._
Proof.: That there is a c.c.p. section \(\widetilde{\theta}:B\to A\) follows from the Choi-Effros theorem. Let \((F_{n})\) be a Følner sequence in \(G\), and define
\[\theta_{n}(b):=\frac{1}{|F_{n}|}\sum_{s\in F_{n}}\alpha_{s}(\widetilde{\theta}(\beta_{s^{-1}}(b))).\]
Then each \(\theta_{n}\) is c.c.p. (because it is a convex combination of c.c.p. maps). Since \(\widetilde{\theta}\) is a section and \(\pi\circ\alpha_{s}=\beta_{s}\circ\pi\), it follows that \(\theta_{n}\) is a section. Now observe that if \(t\in G\), then
\[\alpha_{t}(\theta_{n}(b))-\theta_{n}(\beta_{t}(b)) =\frac{1}{|F_{n}|}\left(\sum_{s\in F_{n}}\alpha_{ts}(\widetilde{ \theta}(\beta_{s^{-1}}(b)))-\sum_{s\in F_{n}}\alpha_{s}(\widetilde{\theta}( \beta_{s^{-1}t}(b)))\right)\] \[=\frac{1}{|F_{n}|}\left(\sum_{w\in tF_{n}}\alpha_{w}(\widetilde{ \theta}(\beta_{w^{-1}t}(b)))-\sum_{w\in F_{n}}\alpha_{w}(\widetilde{\theta}( \beta_{w^{-1}t}(b)))\right)\]
Hence,
\[\|\alpha_{t}(\theta_{n}(b))-\theta_{n}(\beta_{t}(b))\|\leq\frac{|tF_{n}\triangle F _{n}|\|b\|}{|F_{n}|}.\]
Therefore, \(\theta=\theta_{n}\) does the job for \(n\) large enough.
**Theorem 4.5**.: _Let \(X\) be a locally compact Hausdorff space and \(A\) be a separable, nuclear \(C_{0}(X)\)-algebra. Let \(G\) be an amenable, finitely generated, residually finite group and \(\alpha:G\to Aut_{X}(A)\) be an action of \(G\) on \(A\) by \(C_{0}(X)\)-linear automorphisms. Then,_
\[\dim_{\text{Rok}}(\alpha)\leq(\dim(X)+1)(\sup_{x\in X}\dim_{\text{Rok}}( \alpha_{x})+1)-1.\]
Proof.: Assume without loss of generality that \(n:=\dim(X)<\infty\) and that \(d:=\sup_{x\in X}\dim_{\mathrm{Rok}}(\alpha_{x})<\infty\), and let \(m:=(n+1)(d+1)-1\). Fix \(F\subset\!\!\subset A,S\subset\!\!\subset C(\overline{G}),M\subset\!\!\subset G\) and \(\epsilon>0\) and assume that \(\|a\|\leq 1\) for all \(a\in F\) and that \(\|f\|\leq 1\) for all \(f\in S\). Set \(\eta:=\frac{\epsilon}{2(m+2)}\) and choose a compact set \(K\subset X\) such that
\[\sup_{x\in X\setminus K}\|a(x)\|<\eta\]
for all \(a\in F\). Now observe that \(A(K)\) is a \(C(K)\)-algebra, which carries an action \(\alpha_{K}:G\to\mathrm{Aut}_{K}(A(K))\) of \(G\) acting by \(C(K)\)-linear automorphisms. Moreover, \(\dim(K)\leq\dim(X)\) and if \(x\in K\), then \(A(K)(x)=A(x)\). Now assume for a moment that there exist \((m+1)\) c.c.p. maps
\[\psi_{0},\psi_{1},\ldots,\psi_{m}:C(\overline{G})\to A(K)\]
which form an \((m,\pi_{K}(F),M,S,\eta)\)-Rokhlin system for the action \(\alpha_{K}\). Then, we may choose any c.c.p. section \(\theta:A(K)\to A\) (which exists by the Choi-Effros theorem), and observe that the maps \(\{\theta\circ\psi_{\ell}:0\leq\ell\leq m\}\) form an \((m,F,M,S,\epsilon)\)-Rokhlin system for the action \(\alpha\). This calculation is elementary and relies on the fact that for any \(b\in A\) and \(a\in F\),
\[\|ba\|\leq\max\{\eta\|b\|,\|\pi_{K}(ba)\|\}\text{ and }\|[b,a]\|\leq\max\{2 \eta\|b\|,\|[\pi_{K}(b),\pi_{K}(a)]\|\}.\]
In turn, both of these follow from the fact that for any \(c\in A,\|c\|=\sup_{x\in X}\|c(x)\|\) by [1, Proposition 2.8]. Replacing \(X\) by \(K\), it therefore suffices to prove the theorem when \(X\) is itself compact.
We now assume \(X\) is compact. The remainder of the proof follows that of [31, Theorem 2.3]. For \(x\in X\) fixed, since \(d\geq\dim_{\mathrm{Rok}}(\alpha_{x})\), there are \((d+1)\) c.c.p. maps
\[\psi_{0},\psi_{1},\ldots,\psi_{d}:C(\overline{G})\to A(x)\]
which form a \((d,\pi_{x}(F),M,S,\eta)\)-Rokhlin system. By the Choi-Effros theorem (which is applicable since \(G\) is finitely generated), there are c.c.p. maps \(\widetilde{\psi}_{0},\widetilde{\psi}_{1},\ldots,\widetilde{\psi}_{d}:C( \overline{G})\to A\) such that \(\pi_{x}\circ\widetilde{\psi}_{j}=\psi_{j}\) for all \(0\leq j\leq d\). Since the maps \(\{\Gamma_{c}:c\in A\}\) are upper semicontinuous, for each \(x\in X\), there is an open set \(V_{x}\) containing \(x\) and \((d+1)\) c.c.p. maps \(\psi^{(0)},\psi^{(1)},\ldots,\psi^{(d)}:C(\overline{G})\to A(\overline{V_{x}})\) satisfying the following conditions:
1. \(\psi^{(k)}(f_{1})\psi^{(k)}(f_{2})\pi_{\overline{V_{x}}}(a)\approx_{\eta}0\) for all \(f_{1},f_{2}\in S\) with \(f_{1}\perp f_{2}\), all \(a\in F\) and all \(0\leq k\leq d\).
2. \(\sum_{k=0}^{d}\psi^{(k)}(1_{C(\overline{G})})\pi_{\overline{V_{x}}}(a)\approx_ {\eta}\pi_{\overline{V_{x}}}(a)\) for all \(a\in F\).
3. \([\psi^{(k)}(f),\pi_{\overline{V_{x}}}(a)]\approx_{\eta}0\) for all \(a\in F,f\in S\) and \(0\leq k\leq d\).
4. \((\alpha_{s})_{\overline{V_{x}}}(\psi^{(k)}(f))\approx_{\eta}\psi^{(k)}(\sigma _{s}(f))\) for all \(f\in S,s\in M\) and \(0\leq k\leq d\).
By [31, Lemma 2.2], we may choose a strongly \(n\)-decomposable refinement of this cover \(\mathcal{U}=\mathcal{U}_{0}\sqcup\mathcal{U}_{1}\sqcup\ldots\sqcup\mathcal{U} _{n}\). In other words, if \(\mathcal{U}_{i}=\{V_{i,1},V_{i,2},\ldots,V_{i,k_{i}}\}\), then \(\overline{V_{i,j_{1}}}\cap\overline{V_{i,j_{2}}}=\emptyset\) whenever \(j_{1}\neq j_{2}\). Now for each \(0\leq i\leq n\) and \(1\leq j\leq k_{i}\), there are \((d+1)\) c.c.p. maps
\[\psi^{(0)}_{i,j},\psi^{(1)}_{i,j},\ldots,\psi^{(d)}_{i,j}:C(\overline{G})\to A (\overline{V_{i,j}}).\]
satisfying the four conditions listed above on \(\overline{V_{i,j}}\). Write \(V_{i}=\sqcup_{j=1}^{k_{i}}V_{i,j}\) so that \(\overline{V_{i}}=\sqcup_{j=1}^{k_{i}}\overline{V_{i,j}}\). By [3, Lemma 2.4],
\[A(\overline{V_{i}})\cong\bigoplus_{j=1}^{k_{i}}A(\overline{V_{i,j}}).\]
Therefore, we get \((d+1)\) c.c.p. maps
\[\psi^{(0)}_{i},\psi^{(1)}_{i},\ldots,\psi^{(d)}_{i}:C(\overline{G})\to A( \overline{V_{i}})\]
satisfying the four conditions listed above on \(\overline{V_{i}}\). Choose a partition of unity \(\{f_{i}:0\leq i\leq n\}\) subordinate to the open cover \(\{V_{0},V_{1},\dots,V_{n}\}\) and unital, c.c.p. sections \(\theta_{i}:A(\overline{V_{i}})\to A\) using Lemma 4.4 such that
\[\alpha_{t}(\theta_{i}(\psi_{i}^{(k)}(f)))\approx_{\eta}\theta_{i}((\alpha_{t})_{\overline{V_{i}}}(\psi_{i}^{(k)}(f)))\]
for all \(f\in S\) and \(t\in M\). For \(0\leq i\leq n,0\leq k\leq d\), define \(\varphi_{i,k}:C(\overline{G})\to A\) by
\[\varphi_{i,k}(f):=f_{i}\theta_{i}(\psi_{i}^{(k)}(f)).\]
Then each \(\varphi_{i,k}\) is a c.c.p. map, and an argument entirely similar to [31, Theorem 2.3] (with \(\overline{G}\) playing the role of \(G\)) shows that the collection \(\{\varphi_{i,k}:0\leq i\leq n,0\leq k\leq d\}\) forms an \((m,F,M,S,\epsilon)\)-Rokhlin system for \(\alpha\). Thus, \(\dim_{\text{Rok}}(\alpha)\leq(n+1)(d+1)-1\).
The next result now wraps up the discussion (to an extent) of the relationship between free actions on a locally compact space \(X\) and finiteness of Rokhlin dimension of the associated action on \(C_{0}(X)\).
**Corollary 4.6**.: _Let \(G\) be an amenable, finitely generated, residually finite group and let \(X\) be a locally compact, separable metric space of finite covering dimension. Let \(\tilde{\alpha}:G\curvearrowright X\) be an action of \(G\) on \(X\) and let \(\alpha:G\to\text{Aut}(C_{0}(X))\) be the induced action on \(C_{0}(X)\). If \(\tilde{\alpha}\) is free and proper, then_
\[\dim_{\text{Rok}}(\alpha)\leq\dim(X).\]
Proof.: Let \(Y:=X/G\), which is Hausdorff and locally compact and the natural map \(\pi:X\to Y\) is a local homeomorphism. Moreover, \(X\) is metrizable and separable, so by Alexandroff's theorem [4, Theorem 1.12.8] it follows that \(\dim(Y)=\dim(X)<\infty\) (Note that small inductive dimension and Lebesgue covering dimension coincide for separable metric spaces).
If \(A:=C_{0}(X)\), the map \(\pi^{*}:C_{0}(Y)\to Z(M(A))\cong C_{b}(X)\) gives \(A\) the structure of a \(C_{0}(Y)\)-algebra. \(A\) is separable because \(X\) is second countable and \(A\) is clearly nuclear. Moreover, the action \(\alpha\) is by \(C_{0}(Y)\)-linear automorphisms. By [32, Example C.4], the fiber \(A(y)\) at a point \(y=\pi(x)\in Y\) is isomorphic to \(C_{0}(G\cdot x)\). The map \(\theta:G\to G\cdot x\) given by \(g\mapsto\alpha_{g}(x)\) induces an equivariant isomorphism
\[\theta^{*}:(C_{0}(G\cdot x),\alpha_{y})\to(C_{0}(G),\text{Lt})\]
where \(\text{Lt}:G\to\text{Aut}(C_{0}(G))\) is the natural action of \(G\) on \(C_{0}(G)\) induced by the left-translation action of \(G\) on itself. We claim that \(\dim_{\text{Rok}}(\text{Lt})=0\). To see this, fix \(H\lhd_{fin}G\). Then the quotient map \(p:G\to G/H\) induces an equivariant, unital \(*\)-homomorphism
\[p^{*}:(C(G/H),\sigma_{G})\to(C_{b}(G),\text{Lt}).\]
By Lemma 2.3, the action \(\text{Lt}:G\to\text{Aut}(C_{b}(G))\) has Rokhlin dimension zero. Since \(C_{0}(G)\) is a \(G\)-invariant hereditary subalgebra of \(C_{b}(G)\), it follows from Theorem 3.2 that \(\text{Lt}:G\to\text{Aut}(C_{0}(G))\) also has Rokhlin dimension zero. Hence \(\dim_{\text{Rok}}(\alpha_{y})=0\) for each \(y\in Y\). The result now follows from Theorem 4.5.
## 5. Ideal Separation and Outerness
In this, the final section of the paper, we discuss three related notions for an action of a residually finite group on a C*-algebra with finite Rokhlin dimension. We show that such actions are pointwise outer. Using a theorem of Sierakowski [29], we show that the ideals in the associated reduced crossed product C*-algebra must arise from invariant ideals in the underlying C*-algebra. Finally, we show how this property implies that such actions are also properly outer, provided the group satisfies the (VRC) property.
**Definition 5.1**.: Let \(A\) be a C*-algebra. An automorphism \(\alpha\in\operatorname{Aut}(A)\) is said to be inner if there is a unitary \(u\in\widetilde{A}\) such that \(\alpha(a)=uau^{*}\) for all \(a\in A\). In that case, we write \(\alpha=\operatorname{Ad}(u)\). Moreover, \(\alpha\) is said to be outer if it is not inner. If \(G\) is a locally compact group, an action \(\alpha:G\to\operatorname{Aut}(A)\) is said to be pointwise outer if \(\alpha_{g}\) is outer for each non-identity element \(g\in G\).
If \(\alpha:G\to\operatorname{Aut}(A)\) is an action of a compact group \(G\) on a C*-algebra \(A\), then finiteness of Rokhlin dimension implies that \(\alpha\) is pointwise outer ([6, Proposition 4.15]). We show that the same is true for discrete, residually finite groups as well.
**Proposition 5.2**.: _Let \(A\) be a C*-algebra and \(G\) be a residually finite group. If \(\alpha:G\to\operatorname{Aut}(A)\) is an action such that \(\dim_{\text{Rok}}(\alpha)<\infty\), then \(\alpha\) is pointwise outer._
Proof.: Let \(d:=\dim_{\text{Rok}}(\alpha)<\infty\) and suppose \(g\in G\) is a non-identity element such that \(\alpha_{g}=\operatorname{Ad}(u)\) for some unitary \(u\in\widetilde{A}\). Write \(u=v+\lambda 1_{\widetilde{A}}\) for some \(v\in A\) and \(\lambda\in\mathbb{C}\). Choose a subgroup \(H\leq_{fin}G\) such that \(g\notin H\) and let \(n:=[G:H]\). Fix a non-zero positive element \(x\in A\) with \(\|x\|\leq 1\) (positivity ensures that \(x^{1/2}\) makes sense), and set \(F:=\{x,x^{*},x^{1/2},v,v^{*}\}\subset\!\!\subset A,S:=\{\delta_{\overline{s}}:\overline{s}\in G/H\}\subset\!\!\subset C(G/H),M=\{g\}\) and choose \(\epsilon>0\). By Lemma 2.3, there exist \((d+1)\) c.c.p. maps
\[\varphi_{0},\varphi_{1},\dots,\varphi_{d}:C(G/H)\to A\]
which form an \((H,d,F,M,S,\epsilon)\)-Rokhlin system. For \(0\leq\ell\leq d\) and \(\overline{s}\in G/H\), let \(y_{\overline{s}}^{(\ell)}:=\varphi_{\ell}(\delta_{\overline{s}})\). Then,
\[x^{1/2}uy_{\overline{s}}^{(\ell)}u^{*}x^{1/2}=x^{1/2}\alpha_{g}(y_{\overline{s }}^{(\ell)})x^{1/2}\approx_{\epsilon}x^{1/2}y_{\overline{gs}}^{(\ell)}x^{1/2} \approx_{\epsilon}y_{\overline{gs}}^{(\ell)}x.\]
However,
\[x^{1/2}uy_{\overline{s}}^{(\ell)}u^{*}x^{1/2} =x^{1/2}(vy_{\overline{s}}^{(\ell)}v^{*}+\overline{\lambda}vy_{ \overline{s}}^{(\ell)}+\lambda y_{\overline{s}}^{(\ell)}v^{*}+|\lambda|^{2}y_ {\overline{s}}^{(\ell)})x^{1/2}\] \[\approx_{2\epsilon}x^{1/2}(y_{\overline{s}}^{(\ell)}vv^{*}+ \overline{\lambda}y_{\overline{s}}^{(\ell)}v+\lambda y_{\overline{s}}^{(\ell )}v^{*}+|\lambda|^{2}y_{\overline{s}}^{(\ell)})x^{1/2}\] \[=x^{1/2}y_{\overline{s}}^{(\ell)}uu^{*}x^{1/2}\] \[\approx_{\epsilon}y_{\overline{s}}^{(\ell)}x\]
Since c.c.p. maps preserve adjoints, the \(y_{\overline{s}}^{(\ell)}\) are self-adjoint, and
\[(y_{\overline{s}}^{(\ell)}x)^{*}(y_{\overline{s}}^{(\ell)}x)\approx_{5 \epsilon}(y_{\overline{gs}}^{(\ell)}x)^{*}(y_{\overline{s}}^{(\ell)}x)=x^{*}y _{\overline{gs}}^{(\ell)}y_{\overline{s}}^{(\ell)}x\approx_{\epsilon}0.\]
Hence, by the C*-identity \(\|c^{*}c\|=\|c\|^{2}\), \(y_{\overline{s}}^{(\ell)}x\approx_{\sqrt{6\epsilon}}0\) for each \(\overline{s}\in G/H\). This implies that
\[x\approx_{\epsilon}\sum_{\ell=0}^{d}\sum_{\overline{s}\in G/H}y_{\overline{s }}^{(\ell)}x\approx_{n(d+1)\sqrt{6\epsilon}}0.\]
This is true for any \(\epsilon>0\), so \(x=0\). This contradicts our assumption on \(x\), so we conclude that \(\alpha_{g}\) cannot be inner.
We now turn our attention to the ideal structure of the reduced crossed product C*-algebra. Let \(A\) be a C*-algebra and \(\alpha:G\to\operatorname{Aut}(A)\) be an action of a discrete group \(G\) on \(A\). We write \(A\rtimes_{r}G\) for the associated crossed product C*-algebra. If \(I\) is a \(G\)-invariant ideal of \(A\), then \(I\rtimes_{r}G\) forms an ideal in \(A\rtimes_{r}G\). An interesting question, therefore, is to determine conditions under which every ideal of \(A\rtimes_{r}G\) arises in this way.
**Definition 5.3**.: For an action \(\alpha:G\to\operatorname{Aut}(A)\), we say that \(A\) separates ideals in \(A\rtimes_{r}G\) if the only ideals in \(A\rtimes_{r}G\) are of the form \(I\rtimes_{r}G\) for some \(G\)-invariant ideal \(I\lhd A\).
The following result due to Sierakowski is relevant to us, as it gives an easily verifiable condition to determine if an action has the ideal separation property. Recall that if \(\alpha:G\to\operatorname{Aut}(A)\) is an action of a discrete group \(G\) on a C*-algebra \(A\), then there is a conditional expectation \(E:A\rtimes_{r}G\to A\) such that \(E(\sum_{s\in G}a_{s}\lambda_{s})=a_{e}\) on \(C_{c}(G,A)\) (see, for instance, [2, Proposition 4.1.9]). Given a C*-algebra \(B\) and an element \(x\in B\), we write \(I_{B}[x]\) for the ideal in \(B\) generated by \(x\).
**Theorem 5.4**.: _[_29_, Theorem 1.13]_ _Let \(G\) be a discrete group and \(\alpha:G\to\operatorname{\mathit{Aut}}(A)\) be an exact action of \(G\) on a C*-algebra \(A\). If \(E(x)\in I_{A\rtimes_{r}G}[x]\) for every positive element \(x\in A\rtimes_{r}G\), then \(A\) separates ideals in \(A\rtimes_{r}G\)._
In [25, Theorem 2.2], Pasnicu and Phillips have shown that if \(G=\mathbb{Z}\) and the action has the Rokhlin property, then it has the ideal separation property. Sierakowski extended this result to include finite groups in [29, Theorem 1.30] (the result is originally due to Pasnicu and Phillips [24, Corollary 2.5], although the proof does not rely on Theorem 5.4). The analogous result for actions of compact abelian groups with finite Rokhlin dimension was proved in [7, Corollary 2.17]. We now extend these results to actions of residually finite groups with finite Rokhlin dimension. The following lemma (whose proof we omit) will be useful to us.
**Lemma 5.5**.: _Let \(B\) be a C*-algebra, \(\{v_{1},v_{2},\ldots,v_{n}\}\) be orthogonal contractions and \(a,b\in B\). Then_
\[\left\|\sum_{i=1}^{n}v_{i}av_{i}-\sum_{i=1}^{n}v_{i}bv_{i}\right\|\leq\|a-b\|\]
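A minimal sketch of the omitted proof, under the reading used in the application below (the \(v_{i}\) are self-adjoint and \(v_{i}v_{j}=0\) for \(i\neq j\)): the elements \(d_{i}:=v_{i}(a-b)v_{i}\) then satisfy \(d_{i}^{*}d_{j}=0=d_{i}d_{j}^{*}\) for \(i\neq j\), and for such an orthogonal family \(\|\sum_{i}d_{i}\|=\max_{i}\|d_{i}\|\). Hence

\[\left\|\sum_{i=1}^{n}v_{i}av_{i}-\sum_{i=1}^{n}v_{i}bv_{i}\right\|=\max_{1\leq i\leq n}\|v_{i}(a-b)v_{i}\|\leq\|a-b\|.\]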
**Theorem 5.6**.: _Let \(A\) be a C*-algebra and \(\alpha:G\to\operatorname{\mathit{Aut}}(A)\) be an action of a residually finite group on \(A\). If the action is exact and if \(\dim_{\operatorname{\mathit{Rok}}}(\alpha)<\infty\), then \(A\) separates ideals in \(A\rtimes_{r}G\)._
Proof.: Let \(d:=\dim_{\operatorname{\mathrm{Rok}}}(\alpha)<\infty\) and set \(B:=A\rtimes_{r}G\). Fix \(x\in B^{+}\); we wish to prove that \(E(x)\in I_{B}[x]\). For \(\epsilon>0\) fixed, there exists \(z\in C_{c}(G,A)\) such that \(\|x-z\|<\frac{\epsilon}{3(d+1)}\). Let \(M\subset\!\!\subset G\) be a finite set such that \(z=\sum_{t\in M}a_{t}\lambda_{t}\). Assume \(e\in M,\|a_{t}\|\leq 1\) for all \(t\in M\) and choose \(H\lhd_{fin}G\) such that \(M\cap H=\{e\}\). For convenience of notation, we write \(k:=|M|\). Let \(F=\{a_{t}:t\in M\}\subset\!\!\subset A,S:=\{\delta_{\overline{s}}:\overline{s}\in G/H\}\subset\!\!\subset C(G/H)\) and fix \(\eta>0\) to be chosen later. Choose \(0<\delta<\eta\) such that if \(c,c^{\prime}\in A\) are two positive contractions, then
\[\|[c,c^{\prime}]\| <\delta\Rightarrow\|[\sqrt{c},c^{\prime}]\|<\eta\text{ and }\] \[\|c-c^{\prime}\| <\delta\Rightarrow\|\sqrt{c}-\sqrt{c^{\prime}}\|<\eta.\]
The first condition can be met by [11, Exercise 3.9.6] and the second by [27, Lemma 1.2.5] applied for \(K=[0,1]\). By Lemma 2.3, there exist \((d+1)\) c.c.p. maps
\[\varphi_{0},\varphi_{1},\ldots,\varphi_{d}:C(G/H)\to A\]
which form a \((H,d,F,M,S,\delta)\)-Rokhlin system. Moreover, since the cone over \(C(G/H)\) is projective, we may arrange it so that
\[\varphi_{i}(\delta_{\overline{s}})\varphi_{i}(\delta_{\overline{r}})=0\]
whenever \(\overline{s}\neq\overline{r}\) in \(G/H\) (see [10, Remark 1.18] and [17, Theorem 4.6]). For \(\overline{s}\in G/H\) and \(0\leq i\leq d\), define \(y_{\overline{s}}^{(i)}:=\sqrt{\varphi_{i}(\delta_{\overline{s}})}\). Then for all \(a\in F\),
1. \(\sum_{i=0}^{d}\sum_{\overline{s}\in G/H}(y_{\overline{s}}^{(i)})^{2}a\approx_{ \delta}a\).
2. \(y_{\overline{s}}^{(i)}y_{\overline{r}}^{(i)}=0\) for any \(\overline{s},\overline{r}\in G/H\) with \(\overline{s}\neq\overline{r}\). In particular, if \(t\in M\setminus\{e\}\), then \(y_{\overline{s}}^{(i)}y_{\overline{ts}}^{(i)}=0\) for all \(\overline{s}\in G/H\).
3. \([y_{\overline{s}}^{(i)},a]\approx_{\eta}0\) for all \(\overline{s}\in G/H\).
4. \(\alpha_{t}(y_{\overline{s}}^{(i)})a\approx_{\eta}y_{\overline{ts}}^{(i)}a\) and \(a\alpha_{t}(y_{\overline{s}}^{(i)})\approx_{\eta}ay_{\overline{ts}}^{(i)}\) for any \(t\in M\) and \(\overline{s}\in G/H\).
For any \(\overline{s}\in G/H\) and \(0\leq i\leq d\),
\[y_{\overline{s}}^{(i)}zy_{\overline{s}}^{(i)} =\sum_{t\in M}y_{\overline{s}}^{(i)}a_{t}\lambda_{t}y_{\overline{s}}^{(i)}\] \[=\sum_{t\in M}y_{\overline{s}}^{(i)}a_{t}\alpha_{t}(y_{\overline{s}}^{(i)})\lambda_{t}\] \[\approx_{k\eta}\sum_{t\in M}y_{\overline{s}}^{(i)}a_{t}y_{\overline{ts}}^{(i)}\lambda_{t}\] \[\approx_{k\eta}\sum_{t\in M}y_{\overline{s}}^{(i)}y_{\overline{ts}}^{(i)}a_{t}\lambda_{t}\] \[=(y_{\overline{s}}^{(i)})^{2}a_{e}.\]
Therefore,
\[E(x)\approx_{\frac{\epsilon}{3(d+1)}}E(z)\] \[=a_{e}\] \[\approx_{\eta}\sum_{i=0}^{d}\sum_{\overline{s}\in G/H}(y_{\overline{s}}^{(i)})^{2}a_{e}\] \[\approx_{(d+1)(2k)\eta}\sum_{i=0}^{d}\sum_{\overline{s}\in G/H}y_{\overline{s}}^{(i)}zy_{\overline{s}}^{(i)}\] \[\approx_{(d+1)\frac{\epsilon}{3(d+1)}}\sum_{i=0}^{d}\sum_{\overline{s}\in G/H}y_{\overline{s}}^{(i)}xy_{\overline{s}}^{(i)}\]
where the last approximation follows from Lemma 5.5. If we choose \(\eta>0\) so that
\[\eta<\frac{\epsilon}{3(1+(d+1)(2k))},\]
then,
\[E(x)\approx_{\epsilon}\sum_{i=0}^{d}\sum_{\overline{s}\in G/H}y_{\overline{s} }^{(i)}xy_{\overline{s}}^{(i)}\in I_{B}[x].\]
Hence, \(E(x)\in I_{B}[x]\) for each \(x\in B^{+}\), so by [29, Theorem 1.13], \(A\) separates ideals in \(A\rtimes_{r}G\).
We end with a short application of this result. If \(A\) is a C*-algebra and \(\alpha\in\operatorname{Aut}(A)\) is an automorphism of \(A\), we write \(\operatorname{Sp}(\alpha)\) for the Arveson spectrum of \(\alpha\) (see [23, Section 8.1]). If \(H^{\alpha}(A)\) denotes the set of all non-zero, \(\alpha\)-invariant, hereditary C*-subalgebras of \(A\), then the Connes spectrum of \(\alpha\) is defined as
\[\Gamma(\alpha)=\bigcap_{B\in H^{\alpha}(A)}\operatorname{Sp}(\alpha|_{B})\]
Write \(H_{B}^{\alpha}(A)\) for the set of all \(B\in H^{\alpha}(A)\) with the property that the closed ideal generated by \(B\) is an essential ideal of \(A\). Then the Borchers spectrum of \(\alpha\) is defined as
\[\Gamma_{B}(\alpha)=\bigcap_{B\in H_{B}^{\alpha}(A)}\operatorname{Sp}(\alpha|_{ B})\]
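One inclusion worth recording (it is used implicitly in Corollary 5.8 below): since \(H_{B}^{\alpha}(A)\subseteq H^{\alpha}(A)\), intersecting over the smaller family only enlarges the result, so

\[\Gamma(\alpha)=\bigcap_{B\in H^{\alpha}(A)}\operatorname{Sp}(\alpha|_{B})\subseteq\bigcap_{B\in H_{B}^{\alpha}(A)}\operatorname{Sp}(\alpha|_{B})=\Gamma_{B}(\alpha).\]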
For more about these objects and their relationship with the ideal structure of the reduced crossed product, the reader is referred to [23]. The notion of proper outerness (defined below) is originally
due to Kishimoto [15]. For separable C*-algebras, this definition is equivalent to a number of other conditions (see [20, Theorem 6.6]). For convenience, we adopt one such condition as our definition.
**Definition 5.7**.: Let \(A\) be a separable C*-algebra. An automorphism \(\alpha\in\operatorname{Aut}(A)\) is said to be properly outer if \(\Gamma_{B}(\alpha|_{B})\neq\{1\}\) for each \(B\in H^{\alpha}(A)\). Moreover, an action \(\alpha:G\to\operatorname{Aut}(A)\) of a discrete group on \(A\) is said to be properly outer if \(\alpha_{g}\) is properly outer for each non-identity element \(g\in G\).
**Corollary 5.8**.: _Let \(A\) be a separable C*-algebra, \(G\) be an amenable, residually finite group satisfying the (VRC) property, and let \(\alpha:G\to\operatorname{Aut}(A)\) be an action of \(G\) on \(A\). If \(\dim_{\text{Rok}}(\alpha)<\infty\), then \(\alpha\) is properly outer._
Proof.: Fix \(g\in G\setminus\{e\}\) and let \(H\) be the cyclic subgroup generated by \(g\). Since \(G\) satisfies the (VRC) property, the restricted action \(\alpha_{H}:H\to\operatorname{Aut}(A)\) also has finite Rokhlin dimension by Theorem 3.7. Replacing \(G\) by \(H\), we may assume that \(G\) is cyclic. We write \(\beta\) for the automorphism \(\alpha_{g}\) (and for the action \(\beta:G\to\operatorname{Aut}(A)\)), and write \(\widehat{G}\) for the Pontryagin dual of \(G\).
Now, if \(B\) is a non-zero, \(\beta\)-invariant, hereditary C*-subalgebra of \(A\), then \(\beta|_{B}\colon G\to\operatorname{Aut}(B)\) also has finite Rokhlin dimension by Theorem 3.2. Note that \(G\) is exact, so by Theorem 5.6, each non-zero ideal in \(B\rtimes_{r}G\) has non-zero intersection with \(B\). By [19, Theorem 2.5], \(\Gamma(\beta|_{B})=\widehat{G}\). Hence,
\[\Gamma_{B}(\beta|_{B})=\widehat{G}\]
This is true for each \(B\in H^{\alpha}(A)\), so \(\beta\) is properly outer.
**Acknowledgements:** The first named author is supported by NBHM Fellowship (Fellowship No. 0203/22(12)/2022/R&D-II/11168) and the second named author was partially supported by the SERB (Grant No. MTR/2020/000385).
|
2307.08021 | Local weighted topological pressure | In [D. Feng, W. Huang, Variational principle for weighted topological
pressure. J. Math. Pures Appl. (2016)], the authors studied weighted
topological pressure and established a variational principle for it. In this
paper, we introduce the notion of local weighted topological pressure and
generalize Feng and Huang's main results to a localized version. | Fangzhou Cai | 2023-07-16T12:24:58Z | http://arxiv.org/abs/2307.08021v1 | # Local weighted topological pressure
###### Abstract.
In [D. Feng, W. Huang, Variational principle for weighted topological pressure. J. Math. Pures Appl. (2016)], the authors studied weighted topological pressure and established a variational principle for it. In this paper, we introduce the notion of local weighted topological pressure and generalize Feng and Huang's main results to a localized version.
## 1. Introduction
### Weighted topological entropy and pressure
We say that \((X,T)\) is a topological dynamical system (TDS for short) if \(X\) is a compact metric space and \(T\) is a continuous map from \(X\) to \(X\). We define \(M(X)\) and \(M(X,T)\) as the sets of Borel probability measures and \(T\)-invariant Borel probability measures on \(X\). We first briefly review the classical theory of entropy and pressure in dynamical systems. One of the most basic theorems about entropy is the variational principle [1, 2, 3]:
\[h_{top}(X,T)=\sup_{\mu\in M(X,T)}h_{\mu}(T),\]
where \(h_{top}(X,T)\) is the topological entropy of \((X,T)\) and \(h_{\mu}(T)\) is the Kolmogorov-Sinai entropy. Motivated by statistical mechanics, Ruelle [4] and Walters [5] introduced the topological pressure \(P(T,f)\) for a real-valued continuous function \(f\) on \(X\) and proved the variational principle:
\[P(T,f)=\sup_{\mu\in M(X,T)}\big{(}h_{\mu}(T)+\int_{X}fd\mu\big{)}.\]
Motivated by fractal geometry of self-affine carpets and sponges [6, 7, 8], Feng and Huang [21] introduced weighted topological pressure for factor maps between dynamical systems, and established a variational principle for it. To be precise, let us introduce some notation first. Let \((X,T)\) and \((Y,S)\) be two TDSs. We say \((Y,S)\) is a factor of \((X,T)\) if there exists a continuous surjective map \(\pi:X\to Y\) such that \(\pi\circ T=S\circ\pi\). The map \(\pi\) is called a factor map from \(X\) to \(Y\). For \(\mu\in M(X,T)\), we denote by \(\pi\mu\in M(Y,S)\) the pushforward of \(\mu\) by \(\pi\). Denote the set of real-valued continuous functions on \(X\) by \(C(X,\mathbb{R})\). Let \(f\in C(X,\mathbb{R})\) and \(a_{1}>0,a_{2}\geq 0\). The main purpose of [21] is to consider the following question:
How can one define a meaningful term \(P^{(a_{1},a_{2})}(T,f)\) such that the following variational principle holds?
\[P^{(a_{1},a_{2})}(T,f)=\sup_{\mu\in M(X,T)}\Big{(}a_{1}h_{\mu}(T)+a_{2}h_{\pi \mu}(S)+\int_{X}fd\mu\Big{)}.\]
Feng and Huang's definition for \(P^{(a_{1},a_{2})}(T,f)\) was inspired by the dimension theory of affine invariant subsets of tori, and by the "dimension" approaches of Bowen [9] and Pesin-Pitskel [27] in defining the topological entropy and topological pressure for arbitrary subsets. We will give the detailed definition in the next section.
In recent years, many authors have focused on weighted topological and measure-theoretic entropy and pressure. In [28], Wang and Huang introduced various weighted topological entropies from different points of view and studied their relationships. The notion of measure-theoretic weighted entropy was also defined and studied in [28]. In [20], the authors studied the weighted topological entropy of the set of generic points. It was proved in [20] that the weighted topological entropy of the set of generic points of an ergodic measure \(\mu\) is equal to the weighted measure entropy of \(\mu\), which generalized a classical result of Bowen [9]. In [29], the authors studied the weighted entropy of a flow on non-compact sets. In [30], the authors studied the weighted topological entropy of random dynamical systems. Recently, Tsukamoto [31] introduced a new approach to weighted topological pressure and established a corresponding variational principle for it. The new approach is a modification of the familiar definition of topological entropy and pressure. It is very different from the original definitions in [21]. The equivalence of the two definitions is highly nontrivial. In [32], the authors generalized Tsukamoto's approach to amenable group actions.
### Local entropy and pressure
The local theory of entropy and pressure is fundamental to many areas in dynamical systems. It is related to entropy pairs, entropy sets, entropy points, entropy structure, etc. (see [10, 11, 12, 13, 14, 15, 16, 17, 22, 23, 24, 25]). In this subsection we briefly review the local theory of entropy and pressure. With the notion of entropy pairs [11, 13] in both topological and measure-theoretic situations, the study of local versions of the variational principle for entropy has attracted much attention. Blanchard, Glasner and Host [12] showed the following local variational principle: for an open cover \(\mathcal{U}\) of \(X\), there exists \(\mu\in M(X,T)\) such that
\[\inf_{\alpha\succeq\mathcal{U}}h_{\mu}(T,\alpha)\geq h_{top}(T,\mathcal{U}),\]
where the infimum is taken over all \(\alpha\in\mathcal{P}_{X}\), i.e., all finite Borel partitions of \(X\), finer than \(\mathcal{U}\). To make a general investigation on the converse of the inequality, Romagnoli [22] introduced two types of measure-theoretic entropies related to \(\mathcal{U}\):
\[h_{\mu}(T,\mathcal{U})=\lim_{n\to\infty}\frac{1}{n}\inf_{\alpha\succeq\mathcal{ U}_{0}^{n-1}}H_{\mu}(\alpha),\]
\[h_{\mu}^{+}(T,\mathcal{U})=\inf_{\alpha\succeq\mathcal{U}}h_{\mu}(T,\alpha).\]
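These two quantities are directly comparable. If \(\alpha\in\mathcal{P}_{X}\) and \(\alpha\succeq\mathcal{U}\), then \(\alpha_{0}^{n-1}\succeq\mathcal{U}_{0}^{n-1}\) for every \(n\), so

\[h_{\mu}(T,\mathcal{U})=\lim_{n\to\infty}\frac{1}{n}\inf_{\beta\succeq\mathcal{U}_{0}^{n-1}}H_{\mu}(\beta)\leq\lim_{n\to\infty}\frac{1}{n}H_{\mu}(\alpha_{0}^{n-1})=h_{\mu}(T,\alpha);\]

taking the infimum over such \(\alpha\) gives \(h_{\mu}(T,\mathcal{U})\leq h_{\mu}^{+}(T,\mathcal{U})\).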
He showed the following local variational principle:
\[h_{top}(T,\mathcal{U})=\max_{\mu\in M(X,T)}h_{\mu}(T,\mathcal{U}),\]
and the supremum can be attained by an ergodic measure. Later, Glasner and Weiss [15] proved that if \((X,T)\) is invertible then the local variational principle also holds for
\(h_{\mu}^{+}(T,\mathcal{U}):\)
\[h_{top}(T,\mathcal{U})=\max_{\mu\in M(X,T)}h_{\mu}^{+}(T,\mathcal{U}).\]
Huang and Yi [25] generalized the above local variational principles of entropy to the case of pressure: For any \(f\in C(X,\mathbb{R})\), the local pressure \(P(T,f,\mathcal{U})\) satisfies
\[P(T,f,\mathcal{U})=\max_{\mu\in M(X,T)}\Big{(}h_{\mu}(T,\mathcal{U})+\int_{X} fd\mu\Big{)},\]
and the supremum can be attained by an ergodic measure. The relative local variational principle for entropy was proved by Huang, Ye and Zhang in [24]. In [18], Wu studied various notions of local pressure of subsets and measures, which were defined via the Carathéodory-Pesin construction.
In this paper, we study the local weighted topological pressure with respect to a fixed open cover. We generalize the main results in [21] to a localized version. As a corollary, we get the variational principle for weighted topological pressure, which was first proved in [21].
## 2. Local weighted topological pressure
In this section, we introduce the notion of local weighted topological pressure and give some basic properties of it. First let us recall the definition of weighted topological pressure introduced in [21].
Let \(k\geq 1\), \(\mathbf{a}=(a_{1},\ldots,a_{k})\) with \(a_{1}>0\) and \(a_{i}\geq 0,i=2,\ldots,k\). Let \((X_{i},T_{i}),i=1,\ldots,k\) be TDSs with metric \(d_{i}\) and \((X_{i+1},T_{i+1})\) be a factor of \((X_{i},T_{i})\) for each \(i\). Let \(\tau_{i}:X_{1}\to X_{i+1}\) be the factor map and set \(\tau_{0}=\mathrm{id}_{X_{1}}\).
**Definition 2.1**.: _[_21_, \(\mathbf{a}\)-weighted Bowen ball]_ For \(x\in X_{1},n\in\mathbb{N}\) and \(\varepsilon>0,\) denote
\[B_{n}^{\mathbf{a}}(x,\varepsilon):=\{y\in X_{1}:d_{i}(T_{i}^{j}\tau_{i-1}x,T_{ i}^{j}\tau_{i-1}y)<\varepsilon,0\leq j\leq\lceil(a_{1}+\ldots+a_{i})n\rceil-1,1 \leq i\leq k\},\]
where \(\lceil u\rceil\) denotes the least integer \(\geq u\).
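For concreteness (this special case is not stated in [21], but is immediate from the definition): take \(k=2\) and \(\mathbf{a}=(1,1)\), so that \(\lceil a_{1}n\rceil=n\) and \(\lceil(a_{1}+a_{2})n\rceil=2n\). Writing \(B_{n}(x,\varepsilon)\) and \(B_{2n}(\tau_{1}x,\varepsilon)\) for the usual Bowen balls in \((X_{1},T_{1})\) and \((X_{2},T_{2})\) respectively, we get

\[B_{n}^{\mathbf{a}}(x,\varepsilon)=B_{n}(x,\varepsilon)\cap\tau_{1}^{-1}B_{2n}(\tau_{1}x,\varepsilon),\]

i.e., the weighted ball tracks the \(T_{1}\)-orbit of \(x\) for \(n\) steps and the \(T_{2}\)-orbit of \(\tau_{1}x\) for \(2n\) steps.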
For \(f\in C(X_{1},\mathbb{R})\), denote \(S_{n}f=\sum_{i=0}^{n-1}f\circ T_{1}^{i}.\) Now we give the definition of \(\mathbf{a}\)-weighted topological pressure.
**Definition 2.2**.: _[_21_, \(\mathbf{a}\)-weighted topological pressure]_ Let \(Z\subset X_{1},N\in\mathbb{N},s\geq 0\), \(\varepsilon>0\) and \(f\in C(X_{1},\mathbb{R})\). Define
\[\Lambda_{N,\varepsilon}^{\mathbf{a},s}(Z,f)=\inf\sum_{j}e^{-sn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{\lceil a_{1}n_{j}\rceil}f(x)},\]
where the infimum is taken over all countable collections \(\{(n_{j},A_{j})\}_{j}\) such that \(n_{j}\geq N\), each \(A_{j}\) is a Borel subset of \(B_{n_{j}}^{\mathbf{a}}(x_{j},\varepsilon)\) for some \(x_{j}\in X_{1}\) and \(Z\subset\bigcup_{j}A_{j}.\) Define
\[\Lambda_{\varepsilon}^{\mathbf{a},s}(Z,f)=\lim_{N\to\infty}\Lambda_{N, \varepsilon}^{\mathbf{a},s}(Z,f)\]
and
\[P^{\mathbf{a}}(T_{1},Z,\varepsilon,f) =\inf\{s:\Lambda_{\varepsilon}^{\mathbf{a},s}(Z,f)=0\}\] \[=\sup\{s:\Lambda_{\varepsilon}^{\mathbf{a},s}(Z,f)=+\infty\}.\]
The **a**-_weighted topological pressure for \(Z\) with potential \(f\)_ is defined as
\[P^{\textbf{a}}(T_{1},Z,f)=\lim_{\varepsilon\to 0}P^{\textbf{a}}(T_{1},Z, \varepsilon,f).\]
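As a consistency check (in line with the remarks in [21]): when \(k=1\) and \(\mathbf{a}=(1)\), the ball \(B_{n}^{\mathbf{a}}(x,\varepsilon)\) is the usual Bowen ball, the weight in the infimum reduces to

\[e^{-sn_{j}+\sup_{x\in A_{j}}S_{n_{j}}f(x)},\]

and \(P^{\mathbf{a}}(T_{1},Z,f)\) recovers the Pesin-Pitskel topological pressure of \(Z\) as defined in [27].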
For a function \(g:X_{1}\to[0,+\infty)\), define
\[W^{\textbf{a},s}_{N,\varepsilon}(g)=\inf\sum_{j}c_{j}e^{-sn_{j}+\frac{1}{a_{1 }}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)},\]
where the infimum is taken over all countable collections \(\{(n_{j},A_{j},c_{j})\}_{j}\) such that \(n_{j}\geq N,\)\(0<c_{j}<\infty\), each \(A_{j}\) is a Borel subset of \(B^{\textbf{a}}_{n_{j}}(x_{j},\varepsilon)\) for some \(x_{j}\in X_{1}\) and
\[\sum_{j}c_{j}\chi_{A_{j}}\geq g.\]
For \(Z\subset X_{1},\) we set \(W^{\textbf{a},s}_{N,\varepsilon}(Z,f)=W^{\textbf{a},s}_{N,\varepsilon}( \chi_{Z}).\) Define
\[W^{\textbf{a},s}_{\varepsilon}(Z,f)=\lim_{N\to\infty}W^{\textbf{a},s}_{N, \varepsilon}(Z,f)\]
and
\[P^{\textbf{a}}_{W}(T_{1},Z,\varepsilon,f) =\inf\{s:W^{\textbf{a},s}_{\varepsilon}(Z,f)=0\}\] \[=\sup\{s:W^{\textbf{a},s}_{\varepsilon}(Z,f)=+\infty\}.\]
The _average_\(\mathbf{a}\)_-weighted topological pressure for \(Z\) with potential \(f\)_ is defined as
\[P^{\textbf{a}}_{W}(T_{1},Z,f)=\lim_{\varepsilon\to 0}P^{\textbf{a}}_{W}(T_{1},Z, \varepsilon,f).\]
If there is no confusion, we omit \(T_{1},f\) and simply write \(P^{\textbf{a}}(Z),P^{\textbf{a}}_{W}(Z)\) and \(P^{\textbf{a}}(Z,\varepsilon),P^{\textbf{a}}_{W}(Z,\varepsilon)\) for short.
It was proved in [21] that weighted topological pressure and average weighted topological pressure are equal:
**Proposition 2.3**.: _[_21_, Proposition 3.5]_ _For any \(f\in C(X_{1},\mathbb{R})\) and \(\varepsilon>0\), we have_
\[P^{\textbf{a}}(T_{1},Z,6\varepsilon,f)\leq P^{\textbf{a}}_{W}(T_{1},Z, \varepsilon,f)\leq P^{\textbf{a}}(T_{1},Z,\varepsilon,f),\]
_hence \(P^{\textbf{a}}(Z,f)=P^{\textbf{a}}_{W}(Z,f).\)_
We now define local weighted topological pressure for a fixed open cover. For the entropy case, see also [20].
**Definition 2.4**.: Let \(Z\subset X_{1},N\in\mathbb{N},s\geq 0\), \(f\in C(X_{1},\mathbb{R})\) and \(\mathcal{U}_{i}\) be open covers of \(X_{i},i=1,\ldots,k\). Define
\[\Lambda^{\textbf{a},s}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(Z,f)=\inf\sum_{j}e^{- sn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)},\]
where the infimum is taken over all countable collections \(\{(n_{j},A_{j})\}_{j}\) such that \(n_{j}\geq N,\)\(Z\subset\bigcup_{j}A_{j}\) and each \(A_{j}\) is a Borel subset of some element of \(\bigvee\limits_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})n_{j}\rceil-1}.\) Since \(\Lambda^{\textbf{a},s}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(Z,f)\) does not decrease with \(N\), define
\[\Lambda^{\textbf{a},s}_{\{\mathcal{U}_{i}\}_{i=1}^{k}}(Z,f)=\lim_{N\to\infty} \Lambda^{\textbf{a},s}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(Z,f)\]
and
\[P^{\mathbf{a}}(T_{1},Z,\{\mathcal{U}_{i}\}_{i=1}^{k},f) =\inf\{s:\Lambda_{\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f )=0\}\] \[=\sup\{s:\Lambda_{\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f )=+\infty\}.\]
For a function \(g:X_{1}\rightarrow[0,+\infty)\), define
\[W_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(g)=\inf\sum\limits_{j}c_{j} e^{-sn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)},\]
where the infimum is taken over all countable collections \(\{(n_{j},A_{j},c_{j})\}_{j}\) such that \(n_{j}\geq N,\)\(0<c_{j}<\infty\), each \(A_{j}\) is a Borel subset of some element of \(\bigvee\limits_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})n_{j}\rceil-1}\) and \(\sum_{j}c_{j}\chi_{A_{j}}\geq g\). For \(Z\subset X_{1}\), we set
\[W_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f)=W_{N,\{\mathcal{U}_{i }\}_{i=1}^{k}}^{\mathbf{a},s}(\chi_{Z}).\]
Since \(W_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f)\) does not decrease with \(N\), define
\[W_{\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f)=\lim\limits_{N \rightarrow\infty}W_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f)\]
and
\[P_{W}^{\mathbf{a}}(T_{1},Z,\{\mathcal{U}_{i}\}_{i=1}^{k},f) =\inf\{s:W_{\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f)=0\}\] \[=\sup\{s:W_{\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},s}(Z,f)=+ \infty\}.\]
If there is no confusion, we omit \(T_{1},f\) and simply write \(P^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k}),P_{W}^{\mathbf{a}}(Z,\{ \mathcal{U}_{i}\}_{i=1}^{k})\) for short. If \(Z=X_{1}\), we also write \(P^{\mathbf{a}}(\{\mathcal{U}_{i}\}_{i=1}^{k})\) and \(P_{W}^{\mathbf{a}}(\{\mathcal{U}_{i}\}_{i=1}^{k})\) instead of \(P^{\mathbf{a}}(X_{1},\{\mathcal{U}_{i}\}_{i=1}^{k})\) and \(P_{W}^{\mathbf{a}}(X_{1},\{\mathcal{U}_{i}\}_{i=1}^{k})\).
For an open cover \(\mathcal{U}\), denote \(diam\mathcal{U}=\max_{U\in\mathcal{U}}diamU\). The relation between weighted topological pressure and local weighted topological pressure is the following:
**Theorem 2.5**.: _For any subset \(Z\subset X_{1}\), we have_
\[P^{\mathbf{a}}(Z) =\sup\limits_{\{\mathcal{U}_{i}\}_{i=1}^{k}}P^{\mathbf{a}}(Z,\{ \mathcal{U}_{i}\}_{i=1}^{k})=\lim\limits_{\max\limits_{1\leq i\leq k}diam \mathcal{U}_{i}\to 0}P^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k})\] \[=P_{W}^{\mathbf{a}}(Z)=\sup\limits_{\{\mathcal{U}_{i}\}_{i=1}^{k} }P_{W}^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k})=\lim\limits_{\max\limits _{1\leq i\leq k}diam\mathcal{U}_{i}\to 0}P_{W}^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k}),\]
_where the supremum are taken over all open covers \(\mathcal{U}_{i}\) of \(X_{i},i=1,\ldots,k\)._
Proof.: By Proposition 2.3 we have \(P^{\mathbf{a}}(Z)=P_{W}^{\mathbf{a}}(Z)\). We prove the equalities for \(P^{\mathbf{a}}(Z)\); those for \(P_{W}^{\mathbf{a}}(Z)\) are similar. Let \(\mathcal{U}_{i}\) be open covers of \(X_{i}\) and the Lebesgue number of \(\mathcal{U}_{i}\) be \(\varepsilon_{i},1\leq i\leq k\). Let \(\varepsilon<\min_{i}\frac{\varepsilon_{i}}{2}\). Then
\[P^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k})\leq P^{\mathbf{a}}(Z, \varepsilon).\]
Hence
\[\sup\limits_{\{\mathcal{U}_{i}\}_{i=1}^{k}}P^{\mathbf{a}}(Z,\{\mathcal{U}_{i} \}_{i=1}^{k})\leq P^{\mathbf{a}}(Z).\]
Let \(\varepsilon>0\). If \(\max\limits_{1\leq i\leq k}diam\mathcal{U}_{i}<\varepsilon\), then
\[P^{\mathbf{a}}(Z,\varepsilon)\leq P^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k} )\leq\sup\limits_{\{\mathcal{U}_{i}\}_{i=1}^{k}}P^{\mathbf{a}}(Z,\{\mathcal{U}_ {i}\}_{i=1}^{k}).\]
Hence
\[P^{\mathbf{a}}(Z)\leq\lim\limits_{\max\limits_{1\leq i\leq k}diam\mathcal{U}_{i}\to 0}P^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k})\leq\sup\limits_{\{\mathcal{U}_{i}\}_{i=1}^{k}}P^{\mathbf{a}}(Z,\{\mathcal{U}_{i}\}_{i=1}^{k}).\]

Combining this with the reverse inequality obtained above yields all the stated equalities.
For \(M\in\mathbb{N},\)\(\mathcal{U}_{i}\) be open cover of \(X_{i}\) and \(f\in C(X_{1},\mathbb{R})\), recall \(S_{M}f=\sum_{i=0}^{M-1}f\circ T_{1}^{i}\) and \((\mathcal{U}_{i})_{0}^{M-1}=\bigvee\limits_{l=0}^{M-1}T_{i}^{-l}\mathcal{U}_ {i}.\) We have the following proposition:
**Proposition 2.6**.: _Let \(M\in\mathbb{N},Z\subset X_{1},f\in C(X_{1},\mathbb{R})\) and \(\mathcal{U}_{i}\) be open covers of \(X_{i},i=1,\ldots,k.\) We have_
\[P^{\mathbf{a}}(T_{1},Z,\{\mathcal{U}_{i}\}_{i=1}^{k},f)=\frac{1}{M}P^{\mathbf{ a}}(T_{1}^{M},Z,\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k},S_{M}f),\]
\[P^{\mathbf{a}}_{W}(T_{1},Z,\{\mathcal{U}_{i}\}_{i=1}^{k},f)=\frac{1}{M}P^{ \mathbf{a}}_{W}(T_{1}^{M},Z,\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k},S_{M}f).\]
Proof.: We prove the equality for \(P^{\mathbf{a}}\); the one for \(P^{\mathbf{a}}_{W}\) is similar. Let
\[h=P^{\mathbf{a}}(T_{1},Z,\{\mathcal{U}_{i}\}_{i=1}^{k},f).\]
For any \(s<Mh,\) we have
\[\Lambda^{\mathbf{a},\frac{s}{M}}_{\{\mathcal{U}_{i}\}_{i=1}^{k}}(T_{1},Z,f)=+\infty,\]
hence there exists some \(N\) such that
\[\Lambda^{\mathbf{a},\frac{s}{M}}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(T_{1},Z,f) \geq e^{\frac{M\|f\|}{a_{1}}}.\]
Now we prove
\[\Lambda^{\mathbf{a},s}_{N,\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k}}(T_{1}^{M},Z,S_{M}f)\geq 1.\]
Consider
\[\sum_{j}e^{-sn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}\sum_{i=0}^{\lceil a_{1}n_{ j}\rceil-1}(S_{M}f)\circ T_{1}^{Mi}(x)},\]
where \(n_{j}\geq N,\)\(Z\subset\bigcup_{j}A_{j}\) and \(A_{j}\) is a Borel subset of some element of
\[\bigvee\limits_{i=1}^{k}\bigvee\limits_{l=0}^{\lceil(a_{1}+\ldots+a_{i})n_{j}\rceil-1}T_{1}^{-Ml}\tau_{i-1}^{-1}(\mathcal{U}_{i})_{0}^{M-1}=\bigvee\limits_{i=1}^{k}\tau_{i-1}^{-1}(\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})n_{j}\rceil M-1}.\]
Since
\[\bigvee\limits_{i=1}^{k}\tau_{i-1}^{-1}(\mathcal{U}_{i})_{0}^{\lceil(a_{1}+ \ldots+a_{i})n_{j}\rceil M-1}\succeq\bigvee\limits_{i=1}^{k}\tau_{i-1}^{-1}( \mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})Mn_{j}\rceil-1}\]
and \(Mn_{j}>N,\) by definition we have
\[\sum_{j}e^{-\frac{s}{M}Mn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{\lceil a_{1}Mn_{j}\rceil}f(x)}\geq e^{\frac{M\|f\|}{a_{1}}}.\]
It is easy to check (the two Birkhoff sums differ in at most \(M\) terms, each bounded by \(\|f\|\)) that
\[\sum_{j}e^{-\frac{s}{M}Mn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}Mn_{j}]}f (x)}\leq\sum_{j}e^{-\frac{s}{M}Mn_{j}+\frac{M||f||}{a_{1}}+\frac{1}{a_{1}}\sup_ {x\in A_{j}}S_{M[a_{1}n_{j}]}f(x)}.\]
It follows that
\[\Lambda^{\mathbf{a},s}_{N,\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k}}(T_{1}^{M},Z,S_{M}f)\geq 1.\]
Let \(s\to Mh\) we have
\[Mh\leq P^{\mathbf{a}}(T_{1}^{M},Z,\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k},S_ {M}f).\]
Now we prove the opposite direction. Let
\[h=P^{\mathbf{a}}(T_{1}^{M},Z,\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k},S_{M}f).\]
For any \(s<\frac{h}{M}\), we have
\[\Lambda^{\mathbf{a},Ms}_{\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k}}(T_{1}^{M},Z,S_{M}f)=+\infty.\]
There exists \(N_{0}\) such that
\[\Lambda^{\mathbf{a},Ms}_{N_{0},\{(\mathcal{U}_{i})_{0}^{M-1}\}_{i=1}^{k}}(T_{1}^{M},Z,S_{M}f)\geq e^{Ms(\frac{1}{a_{1}}+1)+\frac{\lceil a_{1}M+M\rceil\|f\|}{a_{1}}}.\]
Let \(N>M(N_{0}+1+\frac{1}{a_{1}})\). Now we prove that
\[\Lambda^{\mathbf{a},s}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(T_{1},Z,f)\geq 1.\]
Consider
\[\sum_{j}e^{-sn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)},\]
where \(n_{j}\geq N,Z\subset\bigcup_{j}A_{j}\) and \(A_{j}\) is a Borel subset of some element of \(\bigvee\limits_{i=1}^{k}\tau_{i-1}^{-1}(\mathcal{U}_{i})_{0}^{\lceil(a_{1}+ \ldots+a_{i})n_{j}\rceil-1}\). Let
\[\tilde{n}_{j}=\lceil\frac{n_{j}}{M}-\frac{1}{a_{1}}\rceil-1\geq N_{0}.\]
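(This is where the choice of \(N\) enters: since \(n_{j}\geq N>M(N_{0}+1+\frac{1}{a_{1}})\), we have

\[\tilde{n}_{j}\geq\frac{n_{j}}{M}-\frac{1}{a_{1}}-1>N_{0},\]

and \(\tilde{n}_{j}\) is an integer, so indeed \(\tilde{n}_{j}\geq N_{0}\).)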
Note that
\[\bigvee\limits_{i=1}^{k}\tau_{i-1}^{-1}(\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})n_{j}\rceil-1} \succeq\bigvee\limits_{i=1}^{k}\tau_{i-1}^{-1}(\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})\tilde{n}_{j}\rceil M-1}\] \[=\bigvee\limits_{i=1}^{k}\bigvee\limits_{l=0}^{\lceil(a_{1}+\ldots+a_{i})\tilde{n}_{j}\rceil-1}T_{1}^{-Ml}\tau_{i-1}^{-1}(\mathcal{U}_{i})_{0}^{M-1}.\]
By definition we have
\[\sum_{j}e^{-Ms\tilde{n}_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}\sum_{i=0}^{[a_{1}\tilde{n}_{j}]-1}(S_{M}f)\circ T_{1}^{Mi}(x)}\geq e^{Ms(\frac{1}{a_{1}}+1)+\frac{\lceil a_{1}M+M\rceil\|f\|}{a_{1}}}.\]
It is easy to check that
\[\sum_{j}e^{-Ms\tilde{n}_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{M[a_{1}\tilde{n}_{j}]}f(x)} \leq\sum_{j}e^{-Ms(\frac{n_{j}}{M}-\frac{1}{a_{1}}-1)+\frac{\lceil a_{1}M+M\rceil\|f\|}{a_{1}}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)}\] \[=\sum_{j}e^{-sn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)}e^{Ms(\frac{1}{a_{1}}+1)+\frac{\lceil a_{1}M+M\rceil\|f\|}{a_{1}}}.\]
Hence
\[\sum_{j}e^{-sn_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)}\geq 1.\]
It follows that
\[\Lambda^{\mathbf{a},s}_{N,\{\mathcal{U}_{t}\}_{i=1}^{k}}(T_{1},Z,f)\geq 1.\]
Let \(s\to\frac{h}{M}\), we have
\[\frac{h}{M}\leq P^{\mathbf{a}}(T_{1},Z,\{\mathcal{U}_{t}\}_{i=1}^{k},f).\]
## 3. Lower bound
In order to prove the lower bound of the variational principle for weighted topological pressure, in [21] the authors established a weighted version of the Shannon-McMillan-Breiman theorem ([21, Proposition A.2]) and a weighted version of the Brin-Katok theorem ([21, Theorem 4.1]). It was shown in [21, Proposition 4.2] that
\[P^{\mathbf{a}}(T_{1},f)\geq\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i})+\int fd\mu\]
for any \(\mu\in M(X_{1},T_{1})\) and \(f\in C(X_{1},\mathbb{R})\). In this section, we generalize the above lower bound part of the variational principle using the notion of local weighted topological pressure.
We first need some lemmas:
**Proposition 3.1**.: _[_21_, Proposition A.2]_ _Let \((X,T)\) be a TDS, \(\mathcal{B}\) be the collection of all Borel sets of \(X\), \(\mu\in M(X,T)\) and \(k\geq 1\). Let \(\alpha_{1},\ldots,\alpha_{k}\in\mathcal{P}_{X}\) with \(H_{\mu}(\alpha_{i})<\infty,1\leq i\leq k\), and \(\mathbf{a}=(a_{1},\ldots,a_{k})\in\mathbb{R}^{k}\) with \(a_{1}>0\) and \(a_{i}\geq 0,i=2,\ldots,k\). Then_
\[\lim_{N\to+\infty}\frac{1}{N}I_{\mu}\big{(}\bigvee_{i=1}^{k}(\alpha_{i})_{0}^ {[(a_{1}+\ldots+a_{i})N]-1}\big{)}(x)=\sum_{i=1}^{k}a_{i}\mathbb{E}_{\mu}(F_{i }|\mathcal{I}_{\mu})(x)\]
_almost everywhere, where_
\[F_{i}(x)=I_{\mu}\Big{(}\bigvee_{j=i}^{k}\alpha_{j}|\bigvee_{n=1}^{\infty}T^{- n}(\bigvee_{j=i}^{k}\alpha_{j})\Big{)}(x),i=1,\ldots,k\]
_and \(\mathcal{I}_{\mu}=\{B\in\mathcal{B}:\mu(B\Delta T^{-1}B)=0\}\). In particular, if \(T\) is ergodic, we have_
\[\lim_{N\to+\infty}\frac{1}{N}I_{\mu}\big{(}\bigvee_{i=1}^{k}(\alpha_{i})_{0}^{[(a_{1}+\ldots+a_{i})N]-1}\big{)}(x)=\sum_{i=1}^{k}a_{i}h_{\mu}(T,\bigvee_{j=i}^{k}\alpha_{j}).\]
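The "in particular" follows at once from the first statement: for ergodic \(\mu\), the invariant \(\sigma\)-algebra \(\mathcal{I}_{\mu}\) is trivial, so

\[\mathbb{E}_{\mu}(F_{i}|\mathcal{I}_{\mu})=\int F_{i}\,d\mu=H_{\mu}\Big{(}\bigvee_{j=i}^{k}\alpha_{j}|\bigvee_{n=1}^{\infty}T^{-n}(\bigvee_{j=i}^{k}\alpha_{j})\Big{)}=h_{\mu}(T,\bigvee_{j=i}^{k}\alpha_{j}).\]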
**Lemma 3.2**.: _Let \((X,T)\) be a TDS, \(\alpha=\{A_{1},\ldots,A_{m}\}\in\mathcal{P}_{X},\ \mu\in M(X,T)\) and \(\delta>0\). Then there exists an open cover \(\mathcal{U}=\{U_{0},U_{1},\ldots,U_{m}\}\) such that_
1. \(\mu(U_{0})<\delta\)_._
2. \(\mu(A_{i}\Delta U_{i})<\delta,1\leq i\leq m\)_._
3. \(U_{1},\ldots,U_{m}\) _are pairwise disjoint._
Proof.: We can find compact sets \(K_{i}\subset A_{i}\) with \(\mu(A_{i}-K_{i})<\frac{\delta}{m},1\leq i\leq m\). Since \(K_{1},\ldots,K_{m}\) are pairwise disjoint, we can find open sets \(U_{i}\supset K_{i}\) such that \(U_{1},\ldots,U_{m}\) are pairwise disjoint. It is easy to see that \(\mu(\bigcup_{i=1}^{m}U_{i})>1-\delta\). Hence we can find an open set \(U_{0}\) such that \((\bigcup_{i=1}^{m}U_{i})^{c}\subset U_{0}\) and \(\mu(U_{0})<\delta\). It is easy to see that \(A_{i}\Delta U_{i}\subset\bigcup_{l=1}^{m}(A_{l}-K_{l})\) and hence \(\mu(A_{i}\Delta U_{i})<\delta,1\leq i\leq m\).
For a finite set \(M\), denote by \(|M|\) the cardinality of \(M\). The following theorem is the main result of this section:
**Theorem 3.3**.: _Let \(\alpha_{i}\in\mathcal{P}_{X_{i}},1\leq i\leq k\) and \(\mu\in M(X_{1},T_{1})\). Then for any \(\varepsilon>0\), there exist open covers \(\mathcal{U}_{i}\) of \(X_{i}\) such that \(|\mathcal{U}_{i}|=|\alpha_{i}|+1,\ \tau_{i-1}\mu(\mathcal{U}_{i}\Delta\alpha_{i})<\varepsilon,1\leq i\leq k\) and_
\[P^{\mathbf{a}}(supp\mu,\{\mathcal{U}_{i}\}_{i=1}^{k},f)\geq\sum_{i=1}^{k}a_{i }h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\alpha_{j})+\int fd\mu-\varepsilon.\]
Proof.: Step 1: reduce to an ergodic measure.
Assume \(\alpha_{i}=\{A_{1}^{i},\ldots,A_{m_{i}}^{i}\},1\leq i\leq k\). Set \(M=\max_{1\leq i\leq k}m_{i}\). By [19, Lemma 4.15], there exists \(\delta_{1}\) such that if \(\alpha,\beta\in\mathcal{P}_{X_{1}},|\alpha|,|\beta|\leq(M+1)^{k}\) and \(\mu(\alpha\Delta\beta)<\delta_{1},\) then
\[|h_{\mu}(T_{1},\alpha)-h_{\mu}(T_{1},\beta)|<\varepsilon.\]
Choose \(\delta>0\) such that
\[\delta<\frac{1}{k}\min\{\frac{\varepsilon}{k\log(M+1)\sum_{i=1}^{k}a_{i}+||f|| +\varepsilon},\frac{\varepsilon}{2\log(M+1)},\frac{\delta_{1}}{M+1},(\frac{ \varepsilon}{M+1})^{\frac{1}{2}},1\}.\]
By Lemma 3.2, for each \(1\leq i\leq k\) there exists an open cover \(\mathcal{U}_{i}=\{U_{0}^{i},U_{1}^{i},\ldots,U_{m_{i}}^{i}\}\) such that
* \(\tau_{i-1}\mu(U_{0}^{i})<\delta^{2}\).
* \(\tau_{i-1}\mu(A_{1}^{i}\Delta U_{1}^{i})<\delta^{2},1\leq l\leq m_{i}\).
* \(U_{1}^{i},\ldots,U_{m_{i}}^{i}\) are pairwise disjoint.
It is easy to see that \(\tau_{i-1}\mu(\mathcal{U}_{i}\Delta\alpha_{i})<\varepsilon\). Let
\[\beta_{i}=\{(\bigcup_{l=1}^{m_{i}}U_{l}^{i})^{c},U_{1}^{i},\ldots,U_{m_{i}}^{i }\}\in\mathcal{P}_{X_{i}}.\]
Then
\[\tau_{i-1}\mu(\alpha_{i}\Delta\beta_{i})<(M+1)\delta\]
and hence
\[\mu\Big{(}(\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\alpha_{j})\Delta(\bigvee_{j=i}^{k }\tau_{j-1}^{-1}\beta_{j})\Big{)}<\delta_{1}.\]
It follows that
\[\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j})>\sum_ {i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\alpha_{j})-\sum_{i =1}^{k}a_{i}\varepsilon. \tag{1}\]
Let
\[\mu=\int_{M^{e}(X_{1},T_{1})}\theta dm(\theta)\]
be the ergodic decomposition of \(\mu\). Denote
\[I_{\mu}=\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta _{j})+\int fd\mu\]
and
\[I_{\theta}=\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{- 1}\beta_{j})+\int fd\theta\]
for \(\theta\in M^{e}(X_{1},T_{1})\). Then
\[I_{\mu}=\int_{M^{e}(X_{1},T_{1})}I_{\theta}dm(\theta)=\int_{\{\theta:I_{ \theta}>I_{\mu}-\varepsilon\}}I_{\theta}dm(\theta)+\int_{\{\theta:I_{\theta} \leq I_{\mu}-\varepsilon\}}I_{\theta}dm(\theta).\]
Since
\[|\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j}|\leq(M+1)^{k},\]
we have
\[I_{\theta}\leq k\log(M+1)\sum_{i=1}^{k}a_{i}+||f||.\]
Hence
\[\int_{\{\theta:I_{\theta}>I_{\mu}-\varepsilon\}}I_{\theta}dm(\theta)\leq\big{(} k\log(M+1)\sum_{i=1}^{k}a_{i}+||f||\big{)}m(\{\theta:I_{\theta}>I_{\mu}- \varepsilon\})\]
and
\[\int_{\{\theta:I_{\theta}\leq I_{\mu}-\varepsilon\}}I_{\theta}dm(\theta)\leq( I_{\mu}-\varepsilon)m(\{\theta:I_{\theta}\leq I_{\mu}-\varepsilon\}).\]
We get
\[m(\{\theta:I_{\theta}>I_{\mu}-\varepsilon\})\geq\frac{\varepsilon}{k\log(M+1) \sum_{i=1}^{k}a_{i}+||f||+\varepsilon}>k\delta.\]
Since for each \(1\leq i\leq k\), \(\tau_{i-1}\mu(U_{0}^{i})<\delta^{2}\), it is easy to see that
\[m(\{\theta:\theta(\tau_{i-1}^{-1}U_{0}^{i})<\delta\})>1-\delta.\]
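Indeed, this is Markov's inequality applied to \(\theta\mapsto\theta(\tau_{i-1}^{-1}U_{0}^{i})\), whose \(m\)-average is \(\mu(\tau_{i-1}^{-1}U_{0}^{i})=\tau_{i-1}\mu(U_{0}^{i})<\delta^{2}\):

\[m(\{\theta:\theta(\tau_{i-1}^{-1}U_{0}^{i})\geq\delta\})\leq\frac{1}{\delta}\int\theta(\tau_{i-1}^{-1}U_{0}^{i})\,dm(\theta)<\frac{\delta^{2}}{\delta}=\delta.\]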
Hence there exists \(\theta\in M^{e}(X_{1},T_{1})\) such that
\[\theta(\tau_{i-1}^{-1}U_{0}^{i})<\delta,\ \forall\ 1\leq i\leq k \tag{2}\]
and
\[\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j}) +\int fd\theta>\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1 }\beta_{j})+\int fd\mu-\varepsilon. \tag{3}\]
Step 2: deal with the ergodic measure \(\theta\) and finish the proof.
Here we use the method in [27] (see also [20]). For \(y\in supp\theta,1\leq i\leq k\) and \(n\in\mathbb{N},\) write \(t_{n}^{i}(y)\) for the number of integers \(0\leq j\leq\lceil(a_{1}+\ldots+a_{i})n\rceil-1\) such that
\[T_{i}^{j}\tau_{i-1}y\in U_{0}^{i}.\]
By the Birkhoff Ergodic Theorem and (2), there exist \(N_{1}\in\mathbb{N}\) and \(B_{1}\subset supp\theta\) such that \(\theta(B_{1})>\frac{5}{6}\) and for all \(y\in B_{1},n\geq N_{1}\) and \(1\leq i\leq k,\)
\[t_{n}^{i}(y)<2\delta\lceil(a_{1}+\ldots+a_{i})n\rceil.\]
By Proposition 3.1, there exist \(N_{2}\in\mathbb{N}\) and \(B_{2}\subset supp\theta\) such that \(\theta(B_{2})>\frac{5}{6}\) and for all \(y\in B_{2},n\geq N_{2},\)
\[\theta\big{(}\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\beta_{i})_{0}^{\lceil(a_{1}+ \ldots+a_{i})n\rceil-1}(y)\big{)}\leq e^{-n\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1 },\vee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j})+n\varepsilon}.\]
By the Birkhoff Ergodic Theorem there exist \(N_{3}\in\mathbb{N}\) and \(B_{3}\subset supp\theta\) such that \(\theta(B_{3})>\frac{5}{6}\) and for all \(y\in B_{3},n\geq N_{3},\)
\[\big{|}\frac{1}{a_{1}n}S_{\lceil a_{1}n\rceil}f(y)-\int fd\theta\big{|}<\varepsilon.\]
Let \(N=\max\{N_{1},N_{2},N_{3}\}\) and \(B=B_{1}\cap B_{2}\cap B_{3}.\) We have \(\theta(B)>\frac{1}{2}.\) Let
\[\lambda<\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1} \beta_{j})-\sum_{i=1}^{k}(a_{1}+\ldots+a_{i})\varepsilon-2\varepsilon+\int fd \theta.\]
We will prove \(\Lambda_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}^{\mathbf{a},\lambda}(supp\theta) \geq\frac{1}{2}e^{-k\varepsilon}.\) Consider
\[\sum_{j}e^{-\lambda n_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{\lceil a_{1}n_{j }\rceil}f(x)},\]
where \(n_{j}\geq N,\)\(A_{j}\) is a Borel subset of some element of \(\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_ {i})n_{j}\rceil-1}\) and \(supp\theta\subset\bigcup_{j}A_{j}.\) For \(l\geq N,\) let
\[\Gamma_{l}=\{(n_{j},A_{j}):n_{j}=l,A_{j}\cap B\neq\emptyset\}\]
and
\[Y_{l}=\bigcup_{(n_{j},A_{j})\in\Gamma_{l}}A_{j}.\]
Denote
\[L=\{C\in\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\beta_{i})_{0}^{\lceil(a_{1}+\ldots+ a_{i})l\rceil-1}:C\cap Y_{l}\cap B\neq\emptyset\}.\]
Note that
\[\sum_{C\in L}\theta(C)\geq\theta(Y_{l}\cap B)\]
and \(C\cap B_{2}\neq\emptyset\) for \(C\in L.\) We have
\[\big{|}L\big{|}\geq\theta(Y_{l}\cap B)e^{l\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1}, \vee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j})-l\varepsilon}.\]
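Spelling out the counting behind this bound: every \(C\in L\) meets \(B\subset B_{2}\), so \(\theta(C)\leq e^{-l\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j})+l\varepsilon}\), and hence

\[\theta(Y_{l}\cap B)\leq\sum_{C\in L}\theta(C)\leq|L|\cdot\max_{C\in L}\theta(C),\]

which rearranges to the displayed estimate.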
For \((n_{j},A_{j})\in\Gamma_{l},\) by the definition of \(B_{1}\) we have
\[|\{C\in\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\beta_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})l\rceil-1}:C\cap A_{j}\cap B\neq\emptyset\}|\] \[\leq\prod_{i=1}^{k}(M+1)^{2\delta\lceil(a_{1}+\ldots+a_{i})l\rceil}\leq e^{l\varepsilon\sum_{i=1}^{k}(a_{1}+\ldots+a_{i})+k\varepsilon},\]
the second inequality holds because \(2\delta\log(M+1)<\varepsilon\). We have
\[|\Gamma_{l}|\geq\theta(Y_{l}\cap B)e^{l\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j})-l\varepsilon-l\varepsilon\sum_{i=1}^{k}(a_{1}+\ldots+a_{i})-k\varepsilon}.\]
For \((n_{j},A_{j})\in\Gamma_{l},A_{j}\cap B_{3}\neq\emptyset\), we have
\[\sum_{j}e^{-\lambda n_{j}+\frac{1}{a_{1}}\sup_{x\in A_{j}}S_{[a_{1}n_{j}]}f(x)}\] \[\geq\sum_{l=N}^{\infty}\sum_{(n_{j},A_{j})\in\Gamma_{l}}e^{-\lambda l+l\int fd\theta-l\varepsilon}\] \[\geq\sum_{l=N}^{\infty}\theta(Y_{l}\cap B)e^{-\lambda l+l\int fd\theta+l\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j})-2l\varepsilon-l\varepsilon\sum_{i=1}^{k}(a_{1}+\ldots+a_{i})-k\varepsilon}\] \[\geq\sum_{l=N}^{\infty}\theta(Y_{l}\cap B)e^{-k\varepsilon}\geq\theta(B)e^{-k\varepsilon}\geq\frac{1}{2}e^{-k\varepsilon}.\]
Hence \(\Lambda^{\mathbf{a},\lambda}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(supp\theta)\geq\frac{1}{2}e^{-k\varepsilon}.\) It follows that
\[P^{\mathbf{a}}(supp\theta,\{\mathcal{U}_{i}\}_{i=1}^{k},f)\geq\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\beta_{j})-\sum_{i=1}^{k}(a_{1}+\ldots+a_{i})\varepsilon-2\varepsilon+\int fd\theta. \tag{4}\]
Since \(supp\theta\subset supp\mu\), combining (1), (3) and (4) we finish the proof.
_Remark 3.4_.: In [23], the authors proved the following lemma:
**Lemma 3.5**.: _[_23_, Lemma 9]_ _Let \((X,T)\) be a TDS. For every \(M\in\mathbb{N},\epsilon>0\), there exists \(\delta>0\) such that for any two \(M\)-set measurable covers \(\mathcal{U}=\{U_{1},\ldots,U_{M}\},\mathcal{V}=\{V_{1},\ldots,V_{M}\}\) of \(X\) with \(\mu(\mathcal{U}\Delta\mathcal{V})<\delta,\) one has_
\[|h_{\mu}^{+}(T,\mathcal{U})-h_{\mu}^{+}(T,\mathcal{V})|<\epsilon.\]
Hence in Theorem 3.3, we can further require the open covers \(\mathcal{U}_{i}\) to satisfy
\[|h_{\tau_{i-1}\mu}^{+}(T_{i},\mathcal{U}_{i})-h_{\tau_{i-1}\mu}(T_{i},\alpha_ {i})|<\epsilon\]
and
\[|\sum_{i=1}^{k}a_{i}h_{\mu}^{+}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1} \mathcal{U}_{j})-\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^ {-1}\alpha_{j})|<\epsilon.\]
From Theorem 3.3 we can get the lower bound part of the variational principle for weighted topological pressure:
**Corollary 3.6**.: _Let \(f\in C(X_{1},\mathbb{R})\). We have_
\[P^{\mathbf{a}}(T_{1},f)\geq\sup_{\mu\in M(X_{1},T_{1})}\big{(}\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i})+\int fd\mu\big{)}.\]
Proof.: Let \(\alpha_{i}\in\mathcal{P}_{X_{i}},1\leq i\leq k,\;\mu\in M(X_{1},T_{1})\) and \(\varepsilon>0\). By Theorem 3.3 there exist open covers \(\mathcal{U}_{i}\) of \(X_{i}\) such that
\[P^{\mathbf{a}}(supp\mu,\{\mathcal{U}_{i}\}_{i=1}^{k},f)\geq\sum_{i=1}^{k}a_{i} h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\alpha_{j})+\int fd\mu-\varepsilon.\]
By Theorem 2.5, we have
\[P^{\mathbf{a}}(T_{1},f)\geq P^{\mathbf{a}}(T_{1},\{\mathcal{U}_{i}\}_{i=1}^{k },f)\geq P^{\mathbf{a}}(supp\mu,\{\mathcal{U}_{i}\}_{i=1}^{k},f).\]
Since \(\alpha_{i}\) and \(\varepsilon\) were chosen arbitrarily, we have
\[P^{\mathbf{a}}(T_{1},f)\geq\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i})+\int fd\mu.\]
## 4. Upper bound
In order to prove the upper bound of the variational principle for weighted topological pressure, the authors in [21] applied techniques from geometric measure theory. They introduced the notion of average weighted topological pressure to prove the dynamical Frostman lemma ([21, Lemma 3.3]), which played a key role in the proof of the upper bound part of the variational principle. In this section, we generalize the result of the upper bound part in [21] to the local case.
First we need some lemmas:
**Lemma 4.1**.: _[_22_, Proposition 6]_ _Let \(\pi:(X,T)\to(Y,S)\) be a factor map between two TDSs and \(\mathcal{U}\) be an open cover of \(Y\). Then for any \(\mu\in M(X,T)\), we have_
\[h_{\mu}(T,\pi^{-1}\mathcal{U})=h_{\pi\mu}(S,\mathcal{U}).\]
**Lemma 4.2**.: _[_23, 24, 25_]_ _Let \((X,T)\) be a TDS, \(\mu\in M(X,T)\) and \(\mathcal{U}\) be an open cover of \(X\). If \((X,T)\) is invertible, then_
\[h_{\mu}^{+}(T,\mathcal{U})=h_{\mu}(T,\mathcal{U}).\]
**Lemma 4.3**.: _[_23_, Proposition 5]_ _Let \((X,T)\) be a TDS, \(\mu\in M(X,T)\) and \(\mathcal{U}\) be an open cover of \(X\). Let_
\[\mu=\int_{M^{\varepsilon}(X,T)}\theta dm(\theta)\]
_be the ergodic decomposition of \(\mu\). Then_
\[h_{\mu}(T,\mathcal{U})=\int_{M^{\varepsilon}(X,T)}h_{\theta}(T,\mathcal{U})dm (\theta).\]
**Lemma 4.4**.: _[_26_, Lemma 2.4]_ _Let \((X,T)\) be a TDS, \(\nu\in M(X)\) and \(M\in\mathbb{N}.\) Suppose \(\alpha\in\mathcal{P}_{X}\) and \(|\alpha|\leq M.\) Then for any \(n,l\in\mathbb{N}\) satisfying \(n\geq 2l,\) we have_
\[\frac{1}{n}H_{\nu}(\bigvee_{i=0}^{n-1}T^{-i}\alpha)\leq\frac{1}{l}H_{\nu_{n}}( \bigvee_{i=0}^{l-1}T^{-i}\alpha)+\frac{2l}{n}\log M,\]
_where \(\nu_{n}=\frac{1}{n}\sum_{i=0}^{n-1}T^{i}\nu.\)_
**Lemma 4.5**.: _[_21_, Lemma 5.1]_ _Let \((X,T)\) be a TDS and \(\mu\in M(X,T)\). Suppose \(\alpha\in\mathcal{P}_{X}\) and \(|\alpha|\leq M\). Write_
\[h(n) =H_{\frac{1}{n}\sum_{i=0}^{n-1}T^{i}\mu}(\alpha),\] \[h(n,m) =H_{\frac{1}{m}\sum_{i=n}^{m+n-1}T^{i}\mu}(\alpha).\]
_Then_
\[|h(n+1)-h(n)|\leq\frac{1}{n+1}\log(3M^{2}(n+1)).\] \[|h(n+m)-\frac{n}{n+m}h(n)-\frac{m}{n+m}h(n,m)|\leq\log 2.\]
**Lemma 4.6**.: _[_21_, Lemma 5.4]_ _Let \(p\in\mathbb{N}\) and \(u_{j}:\mathbb{N}\to\mathbb{R},j=1,\ldots,p\) be bounded functions with_
\[\lim_{n\to\infty}|u_{j}(n+1)-u_{j}(n)|=0.\]
_Then for any positive numbers \(c_{1},\ldots,c_{p}\) and \(r_{1},\ldots,r_{p}\), we have_
\[\varlimsup_{n\to\infty}\sum_{j=1}^{p}\big{(}u_{j}(\lceil c_{j}n\rceil)-u_{j}(\lceil r_{j}n\rceil)\big{)}\geq 0.\]
**Lemma 4.7**.: _[_23_, Lemma 2]_ _Let \((X,T)\) be a TDS, \(\mathcal{U}=\{U_{1},\ldots,U_{m}\}\) be an open cover of \(X\) and \(G:\mathcal{P}_{X}\to\mathbb{R}\) be monotone in the sense that \(G(\alpha)\geq G(\beta)\) whenever \(\alpha\succeq\beta.\) Then_
\[\inf_{\alpha\in\mathcal{P}_{X},\alpha\succeq\mathcal{U}}G(\alpha)=\inf_{ \alpha:\alpha=\{A_{1},\ldots,A_{m}\},A_{i}\subset U_{i},1\leq i\leq m}G(\alpha).\]
We now return to weighted topological pressure. Applying techniques from geometric measure theory, we obtain the following useful lemma, which can be seen as a local version of the dynamical Frostman lemma [21, Lemma 3.3].
**Lemma 4.8**.: _Let \(s\geq 0,N\in\mathbb{N}\) and \(\mathcal{U}_{i}\) be open covers of \(X_{i},i=1,\ldots,k\). Suppose that_
\[c=W^{\mathbf{a},s}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}\left(X_{1}\right)>0,\]
_then there is a Borel probability measure \(\mu\) on \(X_{1}\) such that for any \(n\geq N\) and_
\[A\in\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots +a_{i})n\rceil-1},\]
_we have_
\[\mu(A)\leq\frac{1}{c}e^{-ns+\frac{1}{a_{1}}\sup_{x\in A}S_{[a_{1}n]}f(x)}.\]
Proof.: Define a function \(p\) on \(C(X_{1})\) by
\[p(g)=\frac{1}{c}W^{\mathbf{a},s}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(g).\]
It is easy to verify that
1. \(p(g_{1}+g_{2})\leq p(g_{1})+p(g_{2})\) for any \(g_{1},g_{2}\in C(X_{1}).\)
2. \(p(tg)=tp(g)\) for any \(t\geq 0,g\in C(X_{1}).\)
3. \(p(\mathbf{1})=1,0\leq p(g)\leq||g||_{\infty}\) for any \(g\in C(X_{1}),\) and \(p(g)=0\) for \(g\in C(X_{1})\) with \(g\leq 0.\)
By the Hahn-Banach theorem, we can extend the linear functional \(t\to tp(\mathbf{1}),t\in\mathbb{R},\) from the subspace of the constant functions to a linear functional \(L:C(X_{1})\to\mathbb{R}\) satisfying \(L(\mathbf{1})=p(\mathbf{1})=1\) and \(-p(-g)\leq L(g)\leq p(g)\) for any \(g\in C(X_{1}).\) By the Riesz representation theorem we can find a Borel probability measure \(\mu\) on \(X_{1}\) such that \(L(g)=\int g\,d\mu\) for \(g\in C(X_{1}).\) By a standard argument one can show that for any \(n\geq N\) and any \(A\in\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\dots+a_{i})n\rceil-1},\) we have
\[\mu(A)\leq\frac{1}{c}e^{-ns+\frac{1}{a_{1}}\sup_{x\in A}S_{[a_{1}n]}f(x)}.\]
For open covers \(\mathcal{U}_{i}\) of \(X_{i}\) and \(f\in C(X_{1},\mathbb{R}),\) denote
\[w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})=\sup_{U\in\bigvee_{i=1 }^{k}\tau_{i-1}^{-1}\mathcal{U}_{i}}\{|f(x)-f(y)|:x,y\in U\}.\]
The following theorem is the main result of this section:
**Theorem 4.9**.: _Let \(\mathcal{U}_{i}\) be open covers of \(X_{i},i=1,\dots,k\) and \(f\in C(X_{1},\mathbb{R}).\) Then there exists an ergodic measure \(\mu\in M(X_{1},T_{1})\) such that_
\[\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_ {j})\geq P_{W}^{\mathbf{a}}(T_{1},\{\mathcal{U}_{i}\}_{i=1}^{k},f)-3w_{f}( \bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\mu.\]
Proof.: Case 1: \((X_{1},T_{1})\) is invertible and zero-dimensional. We will prove that there exists an ergodic measure \(\mu\in M(X_{1},T_{1})\) such that
\[\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_ {j})\geq P_{W}^{\mathbf{a}}(T_{1},\{\mathcal{U}_{i}\}_{i=1}^{k},f)-2w_{f}( \bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\mu.\]
Write
\[t_{0}(n)=0,\;t_{i}(n)=\lceil(a_{1}+\dots+a_{i})n\rceil.\]
For an open cover \(\mathcal{U}=\{U_{1},\dots,U_{d}\}\) of \(X_{1},\) define
\[\mathcal{U}^{*}=\{\alpha\in\mathcal{P}_{X_{1}}:\alpha=\{A_{1},\dots,A_{d}\},A_{m}\subset U_{m},A_{m}\text{ are clopen sets, }m=1,\dots,d\}.\]
Denote \(h=P_{W}^{\mathbf{a}}(\{\mathcal{U}_{i}\}_{i=1}^{k}).\) Without loss of generality, we assume \(h>0.\) Let
\[M=\max_{1\leq i\leq k}|\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j}|.\]
**Claim:** For every \(l\in\mathbb{N},\) the set
\[M(l)=\{\mu\in M(X_{1},T_{1}):\forall\ \beta_{i}\in(\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})^{*},i=1,\ldots,k,\] \[\sum_{i=1}^{k}a_{i}H_{\mu}(\bigvee_{j=0}^{l-1}T^{-j}\beta_{i})\geq l\big{(}h-2w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\mu\big{)}-1-k\sum_{i=1}^{k}a_{i}\log 2\}\]
is not empty.
Proof of claim: Fix \(l\in\mathbb{N}\). Choose some \(s\geq 0\) such that \(h-\frac{1}{l}\leq s<h.\) By definition there exists \(N\in\mathbb{N}\) such that \(W^{\mathbf{a},s}_{N,\{\mathcal{U}_{i}\}_{i=1}^{k}}(X_{1})\geq 1.\) By Lemma 4.8, there is a Borel probability measure \(\nu\) on \(X_{1}\) such that for any \(n\geq N\) and \(A\in\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})n\rceil-1}\), we have
\[\nu(A)\leq e^{-ns+\frac{1}{a_{1}}\sup_{x\in A}S_{[a_{1}n]}f(x)}.\]
Define \(\nu_{m}=\frac{1}{m}\sum_{j=0}^{m-1}T^{j}\nu\) and
\[\omega_{i,n}=\frac{\sum_{j=t_{i-1}(n)}^{t_{i}(n)-1}T^{j}\nu}{t_{i}(n)-t_{i-1}( n)}\]
for \(1\leq i\leq k\) with \(t_{i}(n)>t_{i-1}(n).\) For a Borel probability measure \(\theta\) on \(X_{1}\) and \(1\leq i\leq k,\) define
\[H_{\theta}(i)=\inf_{\beta\in(\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j}) ^{*}}H_{\theta}(\bigvee_{j=0}^{l-1}T^{-j}\beta).\]
For
\[n>\max\{N,\{\frac{2l+1}{a_{i}}:1\leq i\leq k,a_{i}\neq 0\}\}\]
and
\[\beta_{i}\in(\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})^{*},i=1,\ldots,k,\]
we have
\[\bigvee_{\begin{subarray}{c}i:1\leq i\leq k,\\ t_{i}(n)>t_{i-1}(n)\end{subarray}}(\beta_{i})_{t_{i-1}(n)}^{t_{i}(n)-1} \succeq\bigvee_{\begin{subarray}{c}i:1\leq i\leq k,\\ t_{i}(n)>t_{i-1}(n)\end{subarray}}(\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_ {j})_{t_{i-1}(n)}^{t_{i}(n)-1}\] \[=\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_ {1}+\ldots+a_{i})n\rceil-1}.\]
Hence for \(A\in\bigvee_{\begin{subarray}{c}i:1\leq i\leq k,\\ t_{i}(n)>t_{i-1}(n)\end{subarray}}(\beta_{i})_{t_{i-1}(n)}^{t_{i}(n)-1},\) we have
\[\nu(A)\leq e^{-ns+\frac{[a_{1}n]}{a_{1}}w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1 }\mathcal{U}_{i})+\frac{1}{a_{1}}\sup_{x\in A}S_{[a_{1}n]}f(x)}.\]
It follows that
\[-\nu(A)\log\nu(A)\geq\big{(}ns-\frac{\lceil a_{1}n\rceil}{a_{1}}w_{f}(\bigvee_{i= 1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\frac{1}{a_{1}}\sup_{x\in A}S_{\lceil a_{ 1}n\rceil}f(x)\big{)}\nu(A)\]
and
\[H_{\nu}\Big{(}\bigvee_{\begin{subarray}{c}i:1\leq i\leq k,\\ t_{i}(n)>t_{i-1}(n)\end{subarray}}\bigvee_{j=t_{i-1}(n)}^{t_{i}(n)-1}T^{-j}\beta_{i}\Big{)}\geq ns-\frac{2\lceil a_{1}n\rceil}{a_{1}}w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\frac{1}{a_{1}}\int S_{\lceil a_{1}n\rceil}f\,d\nu.\]
For \(i\) with \(1\leq i\leq k,a_{i}>0\), since \(n\geq\frac{2l+1}{a_{i}}\), we have \(t_{i}(n)-t_{i-1}(n)\geq 2l\), hence by Lemma 4.4, we have
\[\frac{t_{i}(n)-t_{i-1}(n)}{l}H_{\omega_{i,n}}(\bigvee_{j=0}^{l-1}T^{-j}\beta_ {i})\geq H_{\nu}\Big{(}\bigvee_{j=t_{i-1}(n)}^{t_{i}(n)-1}T^{-j}\beta_{i} \Big{)}-2l\log M.\]
It follows that
\[\sum_{i=1}^{k}\frac{t_{i}(n)-t_{i-1}(n)}{l}H_{\omega_{i,n}}(\bigvee_{j=0}^{l- 1}T^{-j}\beta_{i})\]
\[\geq ns-\frac{2\lceil a_{1}n\rceil}{a_{1}}w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{ -1}\mathcal{U}_{i})-\frac{\lceil a_{1}n\rceil}{a_{1}}\int fd\nu_{t_{1}(n)}-2 kl\log M.\]
Since
\[\nu_{t_{i}(n)}=\frac{t_{i-1}(n)}{t_{i}(n)}\nu_{t_{i-1}(n)}+\frac{t_{i}(n)-t_{i- 1}(n)}{t_{i}(n)}\omega_{i,n},\]
by Lemma 4.5, we have
\[t_{i}(n)H_{\nu_{t_{i}(n)}}(\bigvee_{j=0}^{l-1}T^{-j}\beta_{i})-t _{i-1}(n)H_{\nu_{t_{i-1}(n)}}(\bigvee_{j=0}^{l-1}T^{-j}\beta_{i})\] \[\geq\big{(}t_{i}(n)-t_{i-1}(n)\big{)}H_{\omega_{i,n}}\big{(} \bigvee_{j=0}^{l-1}T^{-j}\beta_{i}\big{)}-t_{i}(n)\log 2.\]
Hence
\[\sum_{i=1}^{k}\Big{(}t_{i}(n)H_{\nu_{t_{i}(n)}}(\bigvee_{j=0}^{l -1}T^{-j}\beta_{i})-t_{i-1}(n)H_{\nu_{t_{i-1}(n)}}(\bigvee_{j=0}^{l-1}T^{-j} \beta_{i})\Big{)}\] \[\geq nls-\frac{2l\lceil a_{1}n\rceil}{a_{1}}w_{f}(\bigvee_{i=1}^{ k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\frac{\lceil a_{1}n\rceil l}{a_{1}}\int fd \nu_{t_{1}(n)}-2kl^{2}\log M-kt_{k}(n)\log 2.\]
Since \(\beta_{i}\) are chosen arbitrarily, we have
\[\Theta_{n}:=\sum_{i=1}^{k}\Big{(}t_{i}(n)H_{\nu_{t_{i}(n)}}(i)-t_{i-1}(n)H_{\nu_{t_{i-1}(n)}}(i)\Big{)}\] \[\geq nls-\frac{2l\lceil a_{1}n\rceil}{a_{1}}w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\frac{\lceil a_{1}n\rceil l}{a_{1}}\int fd\nu_{t_{1}(n)}-2kl^{2}\log M-kt_{k}(n)\log 2.\]
Define
\[\gamma_{n}=\sum_{i=2}^{k}t_{i}(n)\big{(}H_{\nu_{t_{i}(n)}}(i)-H_{\nu_{t_{1}(n)} }(i)\big{)}-\sum_{i=2}^{k}t_{i-1}(n)\big{(}H_{\nu_{t_{i-1}(n)}}(i)-H_{\nu_{t_{1 }(n)}}(i)\big{)}.\]
We have
\[\Theta_{n}=\gamma_{n}+\sum_{i=1}^{k}\big{(}t_{i}(n)-t_{i-1}(n)\big{)}H_{\nu_{t _{1}(n)}}(i).\]
Define
\[w(n)= \sum_{i=2}^{k}(a_{1}+\ldots+a_{i-1})\big{(}H_{\nu_{t_{i-1}(n)}}(i )-H_{\nu_{t_{1}(n)}}(i)\big{)}\] \[-\sum_{i=2}^{k}(a_{1}+\ldots+a_{i})\big{(}H_{\nu_{t_{i}(n)}}(i)-H _{\nu_{t_{1}(n)}}(i)\big{)}.\]
In Lemma 4.6, we take \(p=2k-2\),
\[u_{j}(n)=(a_{1}+\ldots+a_{j})H_{\nu_{n}}(j+1),c_{j}=a_{1}+\ldots+ a_{j},1\leq j\leq k-1;\] \[u_{j}(n)=-(a_{1}+\ldots+a_{j-k+2})H_{\nu_{n}}(j-k+2),c_{j}=a_{1}+ \ldots+a_{j-k+2},k\leq j\leq 2k-2,\]
and take \(r_{j}=a_{1},1\leq j\leq 2k-2\). Since for any \(1\leq i\leq k\),
\[|\bigvee_{j=0}^{l-1}T^{-j}\beta|\leq M^{l},\forall\beta\in(\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})^{*},\]
by Lemma 4.5 we have
\[|H_{\nu_{n}}(i)-H_{\nu_{n+1}}(i)|\leq\frac{1}{n+1}\log(3M^{2l}(n+1)).\]
Hence for any \(1\leq j\leq 2k-2\),
\[\lim_{n\to\infty}|u_{j}(n+1)-u_{j}(n)|=0.\]
By Lemma 4.6 we have \(\overline{\lim_{n\to\infty}}\,w(n)\geq 0\). It follows that
\[\overline{\lim_{n\to\infty}}-\frac{\gamma_{n}}{n}=\overline{\lim_{n\to\infty}} \,w(n)\geq 0.\]
Hence
\[\overline{\lim_{n\to\infty}}\,\Big{(}\sum_{i=1}^{k}a_{i}H_{\nu_{t_{1}(n)}}(i)+l\int fd\nu_{t_{1}(n)}\Big{)}\geq l\big{(}s-2w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})\big{)}-k\sum_{i=1}^{k}a_{i}\log 2.\]
We can take a subsequence \((n_{j})\) such that
\[\lim_{j\to\infty}\Big{(}\sum_{i=1}^{k}a_{i}H_{\nu_{t_{1}(n_{j})}}(i)+l\int fd\nu_{ t_{1}(n_{j})}\Big{)}=\varlimsup_{n\to\infty}\Big{(}\sum_{i=1}^{k}a_{i}H_{\nu_{t_{1}(n)}} (i)+l\int fd\nu_{t_{1}(n)}\Big{)}\]
and \(\nu_{t_{1}(n_{j})}\) converges. Assume \(\nu_{t_{1}(n_{j})}\to\mu\). It is easy to see that \(\mu\) is \(T_{1}\)-invariant. For any \(\beta_{i}\in(\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})^{*},i=1,\ldots,k\), since \(\beta_{i}\) are clopen sets, we have
\[\sum_{i=1}^{k}a_{i}H_{\mu}(\bigvee_{h=0}^{l-1}T^{-h}\beta_{i}) \geq l\big{(}s-2w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\mu\big{)}-k\sum_{i=1}^{k}a_{i}\log 2\] \[\geq l\big{(}h-2w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\mu\big{)}-1-k\sum_{i=1}^{k}a_{i}\log 2.\]
Hence \(\mu\in M(l)\). The claim is proved.
It is easy to check that \(M(l)\) is closed and \(M(l_{1}l_{2})\subset M(l_{1})\) for \(l_{1},l_{2}\in\mathbb{N}\). Hence \(\bigcap\limits_{l\in\mathbb{N}}M(l)\neq\emptyset\). Let \(\mu\in\bigcap\limits_{l\in\mathbb{N}}M(l)\). We have
\[\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\beta_{i})\geq h-2w_{f}(\bigvee_{i=1}^{k} \tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\mu,\ \forall\beta_{i}\in(\bigvee_{j=i}^{k}\tau_{j-1}^{-1} \mathcal{U}_{j})^{*},i=1,\ldots,k.\]
Since \(X_{1}\) is zero-dimensional, there exists a fundamental base of the topology consisting of clopen sets. Hence, by Lemma 4.7, we have
\[\sum_{i=1}^{k}a_{i}h_{\mu}^{+}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1} \mathcal{U}_{j})\geq h-2w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i} )-\int fd\mu.\]
Since \((X_{1},T_{1})\) is invertible, by Lemma 4.2 we have \(h_{\mu}(T_{1},\mathcal{U})=h_{\mu}^{+}(T_{1},\mathcal{U})\); hence (this can also be proved directly using Proposition 2.6; see [25, Proposition 4.3])
\[\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_ {j})\geq h-2w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\mu.\]
Let
\[\mu=\int_{M^{e}(X_{1},T_{1})}\theta dm(\theta)\]
be the ergodic decomposition of \(\mu\). By Lemma 4.3, there exists \(\theta\in M^{e}(X_{1},T_{1})\) such that
\[\sum_{i=1}^{k}a_{i}h_{\theta}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})\geq P_{W}^{\mathbf{a}}(T_{1},\{\mathcal{U}_{i}\}_{i=1}^{k},f)-2w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\theta.\]
Case 2: the general case.
Let \((\bar{X}_{1},\sigma_{T_{1}})\) be the natural extension of \((X_{1},T_{1})\). That is:
\[\bar{X}_{1}=\{(x_{1},x_{2},\ldots)\in X_{1}^{\mathbb{N}}:T_{1}(x_{i+1})=x_{i}, i\in\mathbb{N}\},\]
\(\sigma_{T_{1}}:\bar{X}_{1}\to\bar{X}_{1}\) is defined as
\[\sigma_{T_{1}}(x_{1},x_{2},\ldots)=(T_{1}x_{1},x_{1},x_{2},\ldots).\]
Let \(\pi:(\bar{X}_{1},\sigma_{T_{1}})\to(X_{1},T_{1})\) be the factor map which projects each element of \(\bar{X}_{1}\) onto its first component. Consider the following system
\[\bar{X}_{1}\to X_{2}\to\dots\to X_{k}.\]
By definition we have
\[w_{f}(\bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})=w_{f\circ\pi}(\bigvee_{i =1}^{k}\pi^{-1}\tau_{i-1}^{-1}\mathcal{U}_{i}).\]
It is easy to check that
\[P_{W}^{\mathbf{a}}(T_{1},\{\mathcal{U}_{i}\}_{i=1}^{k},f)-w_{f}(\bigvee_{i=1}^ {k}\tau_{i-1}^{-1}\mathcal{U}_{i})\leq P_{W}^{\mathbf{a}}(\sigma_{T_{1}},\{\pi^{-1} \mathcal{U}_{1},\mathcal{U}_{2},\dots,\mathcal{U}_{k}\},f\circ\pi).\]
By Case 1, there exists \(\mu\in M^{e}(\bar{X}_{1},\sigma_{T_{1}})\) such that
\[\sum_{i=1}^{k}a_{i}h_{\mu}(\sigma_{T_{1}},\bigvee_{j=i}^{k}\pi^{ -1}\tau_{j-1}^{-1}\mathcal{U}_{j})\geq P_{W}^{\mathbf{a}}(\sigma_{T_{1}},\{\pi^{-1}\mathcal{U}_{1}, \mathcal{U}_{2},\dots,\mathcal{U}_{k}\},f\circ\pi)\] \[-2w_{f\circ\pi}(\bigvee_{i=1}^{k}\pi^{-1}\tau_{i-1}^{-1}\mathcal{U }_{i})-\int f\circ\pi d\mu\] \[\geq P_{W}^{\mathbf{a}}(T_{1},\{\mathcal{U}_{i}\}_{i=1}^{k},f)-3w_{f}( \bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int f\circ\pi d\mu.\]
By Lemma 4.1, we have
\[\sum_{i=1}^{k}a_{i}h_{\pi\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U }_{j})\geq P_{W}^{\mathbf{a}}(T_{1},\{\mathcal{U}_{i}\}_{i=1}^{k},f)-3w_{f}( \bigvee_{i=1}^{k}\tau_{i-1}^{-1}\mathcal{U}_{i})-\int fd\pi\mu.\]
By Theorem 2.5 and Theorem 4.9 we know
\[\sup_{\{\mathcal{U}_{i}\}_{i=1}^{k}}\sup_{\mu\in M(X_{1},T_{1})}\Big{(}\sum_{ i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})+ \int fd\mu\Big{)}\geq P^{\mathbf{a}}(T_{1},f).\]
We will show
\[\sup_{\{\mathcal{U}_{i}\}_{i=1}^{k}}\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{ j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})=\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i }).\]
Hence the left side of the inequality above is just
\[\sup_{\mu\in M(X_{1},T_{1})}\big{(}\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i}) +\int fd\mu\big{)}.\]
We will give a more general result. We need the following lemma:
**Lemma 4.10**.: _Let \(\pi:(X,T)\to(Y,S)\) be a factor map between TDSs. Let \(\mu\in M(X,T)\), \(\alpha=\{A_{1},A_{2},\ldots,A_{k}\}\in\mathcal{P}_{Y}\) and \(\epsilon>0\). Then there exists an open cover \(\mathcal{U}\) of \(Y\) with \(k\) elements such that for any \(j\geq 0\) and any \(\beta\in\mathcal{P}_{X}\) satisfying \(T^{-j}\pi^{-1}\mathcal{U}\preceq\beta\), we have \(H_{\mu}(T^{-j}\pi^{-1}\alpha|\beta)<\epsilon\)._
Proof.: By [19, Lemma 4.15] there exists \(\delta>0\) such that whenever \(\beta_{1},\beta_{2}\in\mathcal{P}_{X}\) with \(|\beta_{1}|=|\beta_{2}|=k\) and \(\mu(\beta_{1}\Delta\beta_{2})<\delta\), then \(H_{\mu}(\beta_{1}|\beta_{2})<\epsilon\). Take closed subsets \(B_{i}\subset A_{i}\) with
\[\pi\mu(A_{i}-B_{i})<\frac{\delta}{2k^{2}},i=1,\ldots,k.\]
Let \(B_{0}=(\bigcup_{i=1}^{k}B_{i})^{c}\) and \(U_{i}=B_{0}\cup B_{i},i=1,\ldots,k.\) Then \(\pi\mu(B_{0})<\frac{\delta}{2k}\) and \(\mathcal{U}=\{U_{1},\ldots,U_{k}\}\) is an open cover of \(Y\). For \(j\geq 0\) and \(\beta\in\mathcal{P}_{X}\) satisfying \(T^{-j}\pi^{-1}\mathcal{U}\preceq\beta,\) we can find \(\beta^{\prime}=\{C_{1},\ldots,C_{k}\}\in\mathcal{P}_{X}\) satisfying
\[C_{i}\subset T^{-j}\pi^{-1}U_{i},i=1,\ldots,k\]
and \(\beta\succeq\beta^{\prime}.\) Since
\[T^{-j}\pi^{-1}B_{i}\subset C_{i}\subset T^{-j}\pi^{-1}U_{i},\]
we have
\[\mu(C_{i}\Delta T^{-j}\pi^{-1}A_{i})\leq\mu(T^{-j}\pi^{-1}A_{i}-T^{-j}\pi^{-1 }B_{i})+\mu(T^{-j}\pi^{-1}B_{0})<\frac{\delta}{k}.\]
Hence
\[\sum_{i=1}^{k}\mu(C_{i}\Delta T^{-j}\pi^{-1}A_{i})<\delta.\]
It follows that \(H_{\mu}(T^{-j}\pi^{-1}\alpha|\beta^{\prime})<\epsilon\) and hence \(H_{\mu}(T^{-j}\pi^{-1}\alpha|\beta)<\epsilon\).
We have the following theorem:
**Theorem 4.11**.: _For any \(\mu\in M(X_{1},T_{1})\), we have_
\[\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i}) =\sup_{\{\mathcal{U}_{i}\}_{i=1}^{k}}\overline{\lim_{N\to+\infty }}\frac{1}{N}H_{\mu}(\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{ \lceil(a_{1}+\ldots+a_{i})N\rceil-1})\] \[=\sup_{\{\mathcal{U}_{i}\}_{i=1}^{k}}\lim_{N\to+\infty}\frac{1}{ N}H_{\mu}(\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+ \ldots+a_{i})N\rceil-1}).\]
_where the suprema are taken over all open covers \(\mathcal{U}_{i}\) of \(X_{i},i=1,\ldots,k.\)_
Proof.: It is easy to see that
\[\overline{\lim_{N\to+\infty}}\frac{1}{N}H_{\mu}(\bigvee_{i=1}^{k}(\tau_{i-1}^ {-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1})\leq\sum_{i=1} ^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})\leq \sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i}).\]
Now we prove the opposite direction. Let \(\epsilon>0\) and \(\alpha_{i}\in\mathcal{P}_{X_{i}},i=1,\ldots,k.\) By Lemma 4.10, we can find corresponding open covers \(\mathcal{U}_{i}\) of \(X_{i}\). For \(\beta\in\mathcal{P}_{X_{1}}\) with
\[\beta\succeq\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{ 1}+\ldots+a_{i})N\rceil-1},\]
we have
\[\beta\succeq T_{1}^{-j}\tau_{i-1}^{-1}\mathcal{U}_{i},1\leq i\leq k,0\leq j\leq \lceil(a_{1}+\ldots+a_{i})N\rceil-1.\]
By Lemma 4.10, we have
\[H_{\mu}\big{(}T_{1}^{-j}\tau_{i-1}^{-1}\alpha_{i}|\beta\big{)}\leq\varepsilon,1 \leq i\leq k,0\leq j\leq\lceil(a_{1}+\ldots+a_{i})N\rceil-1.\]
It follows that
\[H_{\mu}\big{(}\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\alpha_{i})_{0}^{ \lceil(a_{1}+\ldots+a_{i})N\rceil-1}\big{)} \leq H_{\mu}(\beta)+H_{\mu}\big{(}\bigvee_{i=1}^{k}(\tau_{i-1}^{-1} \alpha_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1}|\beta\big{)}\] \[\leq H_{\mu}(\beta)+\sum_{i=1}^{k}\sum_{j=0}^{\lceil(a_{1}+\ldots+ a_{i})N\rceil-1}H_{\mu}\big{(}T_{1}^{-j}\tau_{i-1}^{-1}\alpha_{i}|\beta\big{)}\] \[\leq H_{\mu}(\beta)+\sum_{i=1}^{k}\lceil(a_{1}+\ldots+a_{i})N \rceil\varepsilon.\]
Since \(\beta\) is arbitrary, we have
\[H_{\mu}\big{(}\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\alpha_{i})_{0}^ {\lceil(a_{1}+\ldots+a_{i})N\rceil-1}\big{)}\] \[\leq H_{\mu}\big{(}\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i })_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1}\big{)}+\sum_{i=1}^{k}\lceil(a_{1} +\ldots+a_{i})N\rceil\varepsilon.\]
Hence
\[\lim_{N\to+\infty}\frac{1}{N}H_{\mu}\big{(}\bigvee_{i=1}^{k}( \tau_{i-1}^{-1}\alpha_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1}\big{)}\] \[\leq\lim_{N\to+\infty}\frac{1}{N}H_{\mu}\big{(}\bigvee_{i=1}^{k} (\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1} \big{)}+\sum_{i=1}^{k}(a_{1}+\ldots+a_{i})\varepsilon.\]
Since \(I_{\mu}\geq 0\), by Fatou's Lemma and Proposition 3.1 we have
\[\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j=i}^{k}\tau_{j-1}^{-1 }\alpha_{j}) =\int\sum_{i=1}^{k}a_{i}\mathbb{E}_{\mu}(F_{i}|\mathcal{I}_{\mu})(x )d\mu(x)\] \[=\int\lim_{N\to+\infty}\frac{1}{N}I_{\mu}\big{(}\bigvee_{i=1}^{k} (\tau_{i-1}^{-1}\alpha_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1}\big{)}( x)d\mu(x)\] \[\leq\lim_{N\to+\infty}\int\frac{1}{N}I_{\mu}\big{(}\bigvee_{i=1} ^{k}(\tau_{i-1}^{-1}\alpha_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1} \big{)}(x)d\mu(x)\] \[=\lim_{N\to+\infty}\frac{1}{N}H_{\mu}\big{(}\bigvee_{i=1}^{k}( \tau_{i-1}^{-1}\alpha_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1}\big{)}\]
Hence
\[\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_{i})\leq\sup_{\{\mathcal{U}_{i}\}_{i=1}^{k}}\lim_{N\rightarrow+\infty}\frac{1}{N}H_{\mu}\big{(}\bigvee_{i=1}^{k}(\tau_{i-1}^{-1}\mathcal{U}_{i})_{0}^{\lceil(a_{1}+\ldots+a_{i})N\rceil-1}\big{)}.\]
**Corollary 4.12**.: _For any \(\mu\in M(X_{1},T_{1}),\) we have_
\[\sup_{\{\mathcal{U}_{i}\}_{i=1}^{k}}\sum_{i=1}^{k}a_{i}h_{\mu}(T_{1},\bigvee_{j =i}^{k}\tau_{j-1}^{-1}\mathcal{U}_{j})=\sum_{i=1}^{k}a_{i}h_{\tau_{i-1}\mu}(T_ {i}).\]
Combining Theorem 3.3 and Theorem 4.9, we have
**Corollary 4.13**.: _([21, Theorem 1.4]; variational principle for weighted topological pressure) For \(f\in C(X_{1},\mathbb{R}),\) we have_
\[P^{\mathbf{a}}(T_{1},f)=\sup_{\mu\in M(X_{1},T_{1})}\Big{(}\sum_{i=1}^{k}a_{i }h_{\tau_{i-1}\mu}(T_{i})+\int fd\mu\Big{)}.\]
|
2305.11309 | Revealing mass distributions of dwarf spheroidal galaxies in the
Subaru-PFS era | The Galactic dwarf spheroidal galaxies (dSphs) provide valuable insight into
dark matter (DM) properties and its role in galaxy formation. Their close
proximity enables the measurement of line-of-sight velocities for resolved
stars, which allows us to study DM halo structure. However, uncertainties in DM
mass profile determination persist due to the degeneracy between DM mass
density and velocity dispersion tensor anisotropy. Overcoming this requires
large kinematic samples and identification of foreground contamination. With
1.25 deg$^2$ and 2394 fibers, PFS plus pre-imaging with Hyper Suprime Cam will
make significant progress in this undertaking. | Kohei Hayashi, Laszlo Dobos, Carrie Filion, Evan Kirby, Masashi Chiba, Rosemary F. G. Wyse, PFS Galactic Archaeology Science Working Group | 2023-05-18T21:17:51Z | http://arxiv.org/abs/2305.11309v1 | # Revealing mass distributions of dwarf spheroidal galaxies in the Subaru-PFS era
###### Abstract
The Galactic dwarf spheroidal galaxies (dSphs) provide valuable insight into dark matter (DM) properties and its role in galaxy formation. Their close proximity enables the measurement of line-of-sight velocities for resolved stars, which allows us to study DM halo structure. However, uncertainties in DM mass profile determination persist due to the degeneracy between DM mass density and velocity dispersion tensor anisotropy. Overcoming this requires large kinematic samples and identification of foreground contamination. With 1.25 deg\({}^{2}\) and 2394 fibers, PFS plus pre-imaging with Hyper Suprime Cam will make significant progress in this undertaking.
Keywords: Dark matter - Dwarf spheroidal galaxies - Galaxy dynamics - Local Group

IAU Symposium 379: Hayashi K.\({}^{1,2,5}\), Dobos L.\({}^{3}\), Filion C.\({}^{3}\), Kirby E.\({}^{4}\), Chiba M.\({}^{5}\), Wyse R.\({}^{3}\), & PFS Galactic Archaeology Science Working Group
## 1 Introduction
The Galactic dwarf spheroidal (dSph) galaxies are ideal sites for studying the basic properties of dark matter and its role in galaxy formation. This is because these galaxies have high dynamical mass-to-light ratios \((M/L\sim 10-1000)\), which means that they are dark-matter-rich systems. Owing to their proximity to the Sun, the dSphs have the advantage that individual member stars can be resolved. It is therefore possible to measure accurate line-of-sight velocities for their member stars, allowing us to set constraints on the internal structure of their dark matter halos using high-quality data (Battaglia & Nipoti, 2022; Battaglia et al., 2013, for reviews).
It is well documented that the \(\Lambda\) cold dark matter (\(\Lambda\)CDM) theory reproduces cosmological and astrophysical observations on large spatial scales, such as the cosmic microwave background and the large-scale structure of galaxies. At galactic and sub-galactic scales (\(\lesssim 1\) Mpc), however, this concordance theory suffers several outstanding discrepancies between the predictions of pure dark matter simulations based on \(\Lambda\)CDM models and observations (Bullock & Boylan-Kolchin, 2017). The oldest controversial issue in \(\Lambda\)CDM models is the so-called "core-cusp" problem: dark matter halos predicted by \(\Lambda\)CDM simulations have strongly cusped central density profiles, whereas the dark matter halos of the observed less massive galaxies (dSphs and low surface brightness galaxies) are suggested to have cored density profiles. Recently, dynamical studies of the luminous, so-called classical, dSphs have suggested that, although large uncertainties remain, these galaxies show a diversity of inner dark matter densities (e.g., Read et al., 2019; Hayashi et al., 2020). Various mechanisms have been proposed to interpret this diversity, such as baryonic feedback and star formation bursts (Lazar et al., 2020) and alternative dark matter models (e.g., self-interacting dark matter: Nishikawa et al., 2020).
On the other hand, current dynamical studies of dSphs still face challenges in accurately measuring their central dark matter density profiles, owing to the well-known degeneracy between the dark matter mass density and the anisotropy of the stellar velocity dispersion tensor, as well as to the limited availability of data. Motivated by this problem, we, the Prime Focus Spectrograph (PFS) Galactic Archaeology (PFS-GA) science working group, have been developing dynamical mass models that take into account non-trivial effects such as Milky Way contaminant stars, binary stars, and non-equilibrium systems. This is because future PFS spectroscopic surveys will greatly improve the quantity and quality of kinematic data. Such data from PFS will open the door to placing more stringent constraints on the inner dark matter density profiles of the dSphs.
## 2 Subaru Prime Focus Spectrograph
The Subaru Prime Focus Spectrograph, attached to the Subaru 8-m class telescope, is a massively multiplexed fiber-fed optical and near-infrared three-armed spectrograph (Tamura et al., 2016). The wavelength coverage extends from the optical to the near infrared (\(380-1260\) nm), so PFS can observe the blue and infrared ends simultaneously. PFS is designed to allow simultaneous low- and intermediate-resolution spectroscopy with a blue arm (\(380-650\) nm, R \(\sim 2,300\)), a medium-resolution red arm (\(710-885\) nm, R \(\sim 5,000\)), and an infrared arm (\(940-1260\) nm, R \(\sim 4,300\)). Thanks to the Subaru 8-m class telescope and its prime focus, PFS has a large field of view (1.25 degrees in diameter in a hexagonal field) with 2400 optical fibers arranged in the hexagon. This new spectroscopic capability will allow us to obtain statistically significant samples of stellar and galactic spectra over wide areas. Figure 1 shows a
Figure 1: Schematic comparison of the field of view of Subaru PFS with that of current spectroscopy instruments such as VLT/FLAMES, Gemini/GMOS, and Keck/DEIMOS.
schematic comparison of the FoV of Subaru PFS with those of VLT/FLAMES, Gemini/GMOS, and Keck/DEIMOS. It is clear from this figure that PFS has a much larger FoV than the other current spectroscopy instruments, and thus PFS will offer unique opportunities for large astronomical surveys.
Taking advantage of Subaru PFS in synergy with Subaru Hyper Suprime-Cam (HSC) pre-imaging, the PFS-GA science working group is planning a large observation program for the GA science cases in the context of a Subaru Strategic Program (Takada et al., 2014), which will start in 2024. The main targets of our provisional plans are the Galactic dwarf galaxies, the M31 disk and halo regions as well as the M33 companion, and the Milky Way outer disk/halo region including stellar streams. Here, we provide an update on the progress of the PFS-GA science project, with a particular focus on the survey of Galactic dSphs.
## 3 Motivation for PFS-dSph survey
As stated in the introduction, there are significant uncertainties in estimating the dark matter density profiles of dSphs due to the limited availability of kinematic data. The small field of view of current spectroscopy instruments has resulted in the outer regions of most dSphs being unobserved, which is one of the main reasons for this challenge. Before presenting the current progress of the PFS-dSph survey, we demonstrate why kinematic data in the outer regions of dSphs are crucial for estimating their dark matter distributions.
Recently, new member stars of Ursa Minor (UMi), which is classified as a classical dSph, were discovered by Sestito et al. (2023). Using Gaia proper motions and GRACES stellar spectra, they identified five bright member stars. Surprisingly, these stars lie far from the center of Ursa Minor. In particular, one of them is located at 11 times the half-light radius of this galaxy, corresponding to a physical scale of 4 kpc. Adding these stars to the currently available kinematic data (\(N_{\rm star}=313\)), we re-performed the axisymmetric Jeans analysis constructed by Hayashi et al. (2020).
Figure 2: The estimated dark matter density profile (left) and mass profile (right) of Ursa Minor dwarf spheroidal galaxy. The gray solid and dashed lines are median and 68 per cent confidence level from the kinematic sample _without_ five new member stars (i.e, \(N_{\rm star}=313\)), while the green solid line and shaded region are the same as gray ones but calculated from the kinematic sample _with_ five new member stars (\(N_{\rm star}=313+5\)).
Here, we briefly describe our mass models as follows. For the surface stellar density distribution of a dSph, we assume an oblate Plummer profile generalized to an axisymmetric shape. For the dark matter density profile, we assume a generalized Hernquist profile:
\[\rho_{\rm DM}(R,z)=\rho_{0}\Big{(}\frac{r}{b_{\rm halo}}\Big{)}^{- \gamma}\Big{[}1+\Big{(}\frac{r}{b_{\rm halo}}\Big{)}^{\alpha}\Big{]}^{-\frac{ \beta-\gamma}{\alpha}}, \tag{1}\] \[r^{2}=R^{2}+z^{2}/Q^{2}, \tag{2}\]
where \(\rho_{0}\) and \(b_{\rm halo}\) are the scale density and scale radius, respectively; \(\alpha\) is the sharpness parameter of the transition from the inner slope \(\gamma\) to the outer slope \(\beta\); and \(Q\) is the constant axial ratio of the dark matter halo. These \((Q,\rho_{0},b_{\rm halo},\alpha,\beta,\gamma)\) are free parameters in our models. Utilizing these density profiles, we numerically solve the axisymmetric Jeans equations (Binney & Tremaine, 2008) and calculate the line-of-sight velocity dispersions from them, taking into account a constant stellar velocity anisotropy \(\beta_{z}=1-\overline{v_{z}^{2}}/\overline{v_{R}^{2}}\). We employ the axisymmetric mass models and Markov chain Monte Carlo techniques based on Bayesian statistics to analyze the line-of-sight velocity data of the UMi dSph and obtain limits on its dark matter halo parameters.
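As a minimal numerical sketch of Equation (1) (our own illustration; the parameter values in the example call are arbitrary placeholders, not fitted results):

```python
import numpy as np

def rho_dm(R, z, rho0, b_halo, alpha, beta, gamma, Q):
    """Generalized Hernquist dark matter density profile, Eqs. (1)-(2)."""
    r = np.sqrt(R**2 + (z / Q)**2)   # elliptical radius, Eq. (2)
    x = r / b_halo
    return rho0 * x**(-gamma) * (1.0 + x**alpha)**(-(beta - gamma) / alpha)

# Example: an NFW-like halo, (alpha, beta, gamma) = (1, 3, 1), flattened with Q = 0.8.
print(rho_dm(R=0.5, z=0.2, rho0=1.0e8, b_halo=1.0,
             alpha=1.0, beta=3.0, gamma=1.0, Q=0.8))
```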
Figure 2 shows the estimated dark matter density profile (left) and mass profile (right) of the Ursa Minor dwarf spheroidal galaxy. The gray lines are calculated from the sample without the five new member stars (i.e., these results are identical to those of Hayashi et al. (2020)), while the green ones are estimated from the sample including the new member stars. This result suggests that the addition of the outermost stars can alter the inferred dark matter density profile, although we should consider the potential influence of tidal forces on these stars. Therefore, to place robust constraints on the dark matter density profiles, it is crucial to collect a significant number of kinematic samples from stars across wide areas of the Galactic dSphs.
## 4 PFS forecast of dark matter density profile in a mock dSph
To take advantage of PFS's exceptional field of view, depth, and spectroscopic multiplexing capabilities, we are performing mock observations and dynamical analyses for the Galactic dSphs. Here we focus on the Draco dSph as an example. Mock data for dSph member stars are generated with the public AGAMA code (Vasiliev, 2019), which can treat non-spherical stellar and dark matter components. Using the mock data, we conducted the PFS survey simulation developed by the PFS science collaboration. Thanks to the wide and deep PFS survey, we obtained approximately 5,000 stellar spectra with only four PFS pointings (see the inset in Figure 3). In comparison to the currently available kinematic data for the Draco dSph, which consist of 500 stars, PFS will significantly improve the statistical quality of the kinematic sample.
Applying our axisymmetric mass models to the mock PFS data for Draco, we demonstrate a significant improvement in the estimated dark matter density profile compared to the currently available kinematic data. As Figure 3 shows, the increased number of stars observed by PFS allows us to probe the kinematics of the outermost regions of Draco and thus better constrain the dark matter distribution. This will provide important insights into the nature of dark matter and the formation history of the Milky Way's satellite galaxies.
## 5 Future plans
It is important to keep in mind that the current mock data shown in Figure 3 do not take into account non-negligible effects such as contaminant stars, binary member stars, and dynamical non-equilibrium. These effects may affect the accuracy of the mass models and the resulting dark matter density profiles. Therefore, it is important to generate more realistic mock data and
to develop more sophisticated models that can take these effects into account when analyzing the actual PFS data.
Furthermore, in order to estimate the dark matter density profiles of the Galactic dwarf satellites more precisely and accurately, it is essential to use all of the information available in the full line-of-sight velocity distribution (LOSVD). This is because the LOSVD, characterized by higher-order velocity moments, is sensitive to the shape of the velocity ellipsoid, and should thus be a powerful tool for mitigating the degeneracy between dark matter density and velocity anisotropy. Considering the shape of the LOSVD can, therefore, place further constraints on the inner density slope of a dark matter halo. Thus, for the PFS survey operation, we are developing dynamical mass models that include the LOSVD information, based on Jeans analysis as well as Schwarzschild modeling, and are treating the non-negligible effects using statistical approaches such as Bayesian statistics.
## Acknowledgements
This work was supported in part by the MEXT Grant-in-Aid for Scientific Research (No. 20H01895, 21K13909, 21H05447 and 23H04009 for K.H.)
|
2307.07312 | Using Large Language Models for Zero-Shot Natural Language Generation
from Knowledge Graphs | In any system that uses structured knowledge graph (KG) data as its
underlying knowledge representation, KG-to-text generation is a useful tool for
turning parts of the graph data into text that can be understood by humans.
Recent work has shown that models that make use of pretraining on large amounts
of text data can perform well on the KG-to-text task even with relatively small
sets of training data on the specific graph-to-text task. In this paper, we
build on this concept by using large language models to perform zero-shot
generation based on nothing but the model's understanding of the triple
structure from what it can read. We show that ChatGPT achieves near
state-of-the-art performance on some measures of the WebNLG 2020 challenge, but
falls behind on others. Additionally, we compare factual, counter-factual and
fictional statements, and show that there is a significant connection between
what the LLM already knows about the data it is parsing and the quality of the
output text. | Agnes Axelsson, Gabriel Skantze | 2023-07-14T12:45:03Z | http://arxiv.org/abs/2307.07312v2 | # Using Large Language Models for Zero-Shot
###### Abstract
In any system that uses structured knowledge graph (KG) data as its underlying knowledge representation, KG-to-text generation is a useful tool for turning parts of the graph data into text that can be understood by humans. Recent work has shown that models that make use of pretraining on large amounts of text data can perform well on the KG-to-text task, even with relatively little training data on the specific graph-to-text task. In this paper, we build on this concept by using large language models to perform zero-shot generation based on nothing but the model's understanding of the triple structure from what it can read. We show that ChatGPT achieves near state-of-the-art performance on some measures of the WebNLG 2020 challenge, but falls behind on others. Additionally, we compare factual, counter-factual and fictional statements, and show that there is a significant connection between what the LLM already knows about the data it is parsing and the quality of the output text.
## 1 Introduction
For any system that presents verbal information to users, whether that information is in the form of text or audio, it can be useful to generate the system's speech or text from a consistent underlying knowledge representation based on structured data. A commonly used data representation is **knowledge graphs** (KGs), where information is stored as _properties_ or _relations_ tying _entities_ together (Hogan et al., 2021). The combination of a property, its source and its target is referred to as a **triple**(Lassila and Swick, 1999; Hogan et al., 2021). KGs as an underlying structured data representation have been used to allow systems to tell narrative information (Colas et al., 2022), to retrieve information in chatbots (Zhou et al., 2020) or recommender systems (Shao et al., 2021), and to reason about the grounding status of the information in terms of what the user currently knows (Axelsson and Skantze, 2020).
Traditionally, template-based approaches for generating text from knowledge graphs have been sufficient for confined dialogue domains (Konstas and Lapata, 2013; Duma and Klein, 2013; Perera and Nand, 2015). An alternative is to train a data-driven end-to-end generation model, but a limiting factor is the relative lack of human-labelled data for the task. The WebNLG datasets produced for the challenges in 2017 (Gardent et al., 2017) and 2020 (Castro Ferreira et al., 2020) are relatively small, and although recent progress has been made on producing much larger datasets (Colas et al., 2022), methods for natural language generation from knowledge graphs have generally had to work around the absence of large datasets.
Recently, approaches that use pretraining on large amounts of non-KG-related text data, which is then finetuned on the KG-to-text task, have shown promising results (Kale and Rastogi, 2021; Colas et al., 2022; Li et al., 2021; Yang et al., 2020; Ke et al., 2021). Such models can learn and extrapolate from patterns in the text data to the KG data that the model has never seen before. The logical endpoint of such an approach is to simply rely on the pretraining, and not use any finetuning at all. In this paper, we perform a partial evaluation of this approach by using large language models (LLMs) to generate text from knowledge graph data, zero-shot.
A known problem with natural language generation through language models is that the output - the text generated by the method - is not guaranteed to match the input - the specification for what should be generated (Ji et al., 2023). When such models over-generate, it is often referred to as **hallucinations**(Alkaissi and McFarlane, 2023; Ji et al., 2023). Both under-generation and hallucinatory over-generation can result in systems producing unwanted content, potentially disastrously so.
Since LLMs rely on pretraining, their language generation competence will to some extent stem from the facts expressed in the pretraining data. Thus, the expression of the facts in the KG triples could be helped by this inherent knowledge. While this could be advantageous when generating text from factual triples, a potential side-effect could be increased hallucinations, or that it could be harder for the LLM to generate from triples that express counter-factual or fictional knowledge. Thus, it is important to gain an understanding of the ability of LLMs to perform the KG-to-text task in general, and not only evaluate their performance on factual triples.
In this paper, we present an evaluation of zero-shot KG-to-text natural language generation using LLMs. We address the following questions:
1. How do LLMs perform on KG-to-text tasks such as the WebNLG 2020 challenge Castro Ferreira et al. (2020)?
2. How does the factualness of the KG triples (being factual, counter-factual, or fictional) affect the capability of the LLM to express arbitrary knowledge graph information, in terms of:
   1. Grammar and coherence?
   2. Coverage of the triples?
   3. Hallucinations (overgeneration)?
In this paper, we will be using OpenAI's ChatGPT LLM (gpt-3.5-turbo). It should be noted that since the data used for training ChatGPT has not been disclosed, we cannot guarantee that it has not seen the WebNLG data used in Section 3. At the time when this study was conducted, we were not aware of any open-source LLMs with comparable performance to closed-source LLMs on these types of NLG tasks. However, our follow-up analysis in Section 4 is based on newly collected data for which KG-to-text references should not have existed in any LLM's training set.
## 2 Background
### Knowledge graphs
The concept of representing human knowledge as a graph in a computer dates at least as far back as work by Schneider (1973), but knowledge graphs did not grow into their modern relevance until work by Google and competitors in the early 2010s (Hogan et al., 2021). Ehrlinger and Wöß (2016) consider the difference between older and more recent use of KGs to lie primarily in how the data is collected - at a large scale, using automated tools, rather than handcrafted, as in earlier work. WikiData (Vrandečić and Krötzsch, 2014), a knowledge graph run by the WikiMedia foundation and editable by the public, is what Hogan et al. (2021) call a _property graph_, where each entity and edge can be annotated with an arbitrary set of key-value pairs.
Common to all types of knowledge graphs described by Hogan et al. (2021) is that entities, the nodes of the graph, are connected by relations. In Wikidata, the relations are called **properties** (Vrandečić and Krötzsch, 2014) (unrelated to _property graphs_ as defined by Hogan et al. (2021)), terminology which we will use throughout this paper. A property, its source and its target combined are a **triple** (Lassila and Swick, 1999; Ding et al., 2005).
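As a minimal illustration of this data structure (our own sketch; the entity and property strings are invented examples, not WikiData records):

```python
from typing import NamedTuple

class Triple(NamedTuple):
    """A knowledge graph edge: a property tying a source entity to a target."""
    source: str
    property: str
    target: str

# A small set of triples forms a (sub)graph.
graph = [
    Triple("Alan Bean", "occupation", "astronaut"),
    Triple("Alan Bean", "birth place", "Wheeler"),
]
```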
### KG-to-text synthesis
The process of converting data represented as knowledge graphs into text is sometimes referred to as **graph to text** (Schmitt et al., 2020; Song et al., 2021), or **KG to text** (Schmitt et al., 2021; Wang et al., 2019). The term **Data-to-Text** typically refers to a more general group of tasks of which KG-to-text is part (Nan et al., 2021; Yin and Wan, 2022; Ji et al., 2023). Competitions like the WebNLG 2020 challenge contained tracks for both KG-to-text and text-to-KG (Castro Ferreira et al., 2020), but this paper only considers the KG-to-text task.
On small, restricted domains with structured data, examples of data-to-text generation with the use of prewritten templates can be found from the 1980s in the work by Kukich (1983) (stock market reports), and in the work of Goldberg et al. (1994) (weather forecasts). An early modern example of database-to-text synthesis is the work by Konstas and Lapata (2013), who used statistical rules to turn database entries into text through the use of rhetorical structure theory trees, which the system could generate from data, effectively generating its own templates. Patterns for converting individual knowledge graph triples into verb phrase templates are sometimes referred to as **lexicalisation** (Perera and Nand, 2015; Gardent et al., 2017).
A problem with template-based approaches like the ones by Kukich (1983); Goldberg et al. (1994); Konstas and Lapata (2013) is that the templates may not be applicable outside of a specific domain of synthesised text. Duma and Klein (2013) and
Perera and Nand (2015) developed systems for generating lexicalisation templates from text data, but Duma and Klein (2013) found that their approach performed significantly worse than human-written reference text, and Perera and Nand (2015) found that their approach had varying performance depending on the domain of the text.
KG-to-text includes microplanning (Levelt, 1989), and surface realisation, referring expression generation and content selection must all be done simultaneously to produce a legible result (Gardent et al., 2017). The 2017 WebNLG challenge was set up to evaluate KG-to-text models on English triple-to-text data. A novel dataset was collected specifically for the challenge. The top performing models were a bidirectional LSTM-based template extraction model and a rule-based transducer model (Gardent et al., 2017).
The WebNLG 2020 challenge added Russian data to the KG-to-text task, and additionally had more triple data as a whole. On the English KG-to-text task (Castro Ferreira et al., 2020), the top-performing competitor was Amazon AI Shanghai's \(\mathcal{P}^{2}\) model, which used an intermediate representation to first break down the requested triples into a plan, which a second model based on the T5 pretrained language model by Raffel et al. (2020) then turned into more coherent text (Guo et al., 2020). In second place, Ohio State University's OSU Lab presented a model that was pretrained on the T5 transformer model for English (Kale and Rastogi, 2021) to compensate for the relatively small amount of triple data to train on in the WebNLG dataset (Li et al., 2020).
Schmitt et al. (2020) proposed a parallel graph-to-text and text-to-graph model. While no gold standard human-written data existed as a baseline with which to compare the output of the graph-to-text model, the authors found that unsupervised training performed "nearly on par" with supervised training on BLEU, METEOR and CHRF++ measures compared to the text output of a baseline. The authors also found the text output by their own model was more readable than the baseline (Schmitt et al., 2020), but no structured evaluation was done on this factor.
As a follow-up, Schmitt et al. (2021) proposed a Transformer-based architecture for graph-to-text generation where each relation in the graph is encoded in context with the other relations and entities in the graph. The resulting model performed favourably on BLEU, METEOR and CHRF++ measures compared to previous work on the WebNLG dataset (Schmitt et al., 2021; Gardent et al., 2017; Castro Ferreira et al., 2020).
Koncel-Kedziorski et al. (2022) applied Transformer models to the abstracts of scientific articles. Four models that got differing amounts of information about the contents of the abstract of the article were prompted to write an abstract. The GraphWriter model that was fed with both the title of the article and a knowledge graph-based representation of the contents of the abstract was rated more highly by human annotators than other models which got more limited information, although the gold standard human-written abstract was considered the best in \(64\%\) of the cases (Koncel-Kedziorski et al., 2022).
Recently, Colas et al. (2022) presented the EventNarrative dataset, which contains excerpts from EventKG matched with text from Wikipedia describing the event narrated by the knowledge graph data. While the authors could not manually validate their full dataset, which contains over 650 000 knowledge graph triples, a smaller annotation of 500 randomly sampled sets of triples with their corresponding Wikipedia text suggested that around 96% of both entities and relations are present in the text, with errors mostly appearing where Wikipedia and the underlying knowledge graph disagree about the nature of an event (Colas et al., 2022). The authors also provided benchmarks for models trained on the dataset, with the pretrained BART (Lewis et al., 2020) model performing the best on BLEU, ChrF++ and BERT measures, while GraphWriter (Koncel-Kedziorski et al., 2022) performed the best on CIDEr, METEOR and ROUGE.
In Axelsson and Skantze (2023), we proposed an approach where knowledge graph triples in a text form were fed to GPT-3, synthesised into sentences one-by-one using a few-shot approach, and then merged into one or more sentences of fluent text with a secondary prompt that summarises the sentences generated by the first step. While the approach worked for the presenting robot we used in that project, we did not evaluate specifically the graph-to-text synthesis, and this paper follows up on that aspect.
Recent work by Yuan and Färber (2023) applied ChatGPT to the WebNLG 2017 challenge, similar to our approach in Section 3. The authors additionally use the AGENDA dataset from Koncel-Kedziorski et al. (2019). Yuan and Färber use a linearisation approach proposed by Ribeiro et al. (2019) to create a consistent representation of the graph data in text form to pass to the LLM as a prompt. The results are compared to the best-performing models from WebNLG 2017 (Gardent et al., 2017), but both ChatGPT and GPT-3 perform relatively poorly on most measures. For ChatGPT, this appears to partially be because the LLM generates large amounts of hallucinated text beyond what it is prompted to synthesise.
### NLG evaluation metrics
Numerous metrics have been proposed for evaluating KG-to-text output. _Bleu_, short for _bilingual evaluation understudy_, is a text similarity measure that combines n-gram precision score and a penalty for overly short candidates (Papineni et al., 2002). In WebNLG 2020, _Bleu NLTK_ refers to Bleu extended with a specific smoothing algorithm referred to by the authors as _smoothing 3_(Chen and Cherry, 2014; Castro Ferreira et al., 2020). METEOR is a harmonic mean of precision and recall on stemmed words between the candidate and reference, with slight priority on recall, which also rewards candidates for matching large spans of the candidates word-by-word (Banerjee and Lavie, 2005). TER, short for _Translation Edit Rate_, counts the number of word shifts, insertions, substitutions and removals that must be performed to transform the candidate into a reference (Olive, 2005; Snover et al., 2006). CHRF is a weighted average of character 6-gram precision and recall between a reference and a candidate (Popovic, 2015); CHRF++ is an extension to that measure which also considers word unigrams and bigrams (Popovic, 2017).
BERTScore, henceforth simply BERT, uses contextual word embeddings to calculate the similarity between the meaning expressed by a candidate sentence and a reference, not necessarily requiring them to use the same words (Zhang et al., 2020). BLEURT is a BERT-based similarity metric that attempts to predict how a human annotator would rate the candidate compared to the reference, using the vector-based meaning encoding returned by BERT (Devlin et al., 2019; Sellam et al., 2020).
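To make the surface-overlap metrics concrete, the sketch below scores a candidate against a reference with the sacrebleu library; this is our own illustration with toy strings, not the official WebNLG evaluation scripts used later in this paper.

```python
# pip install sacrebleu
import sacrebleu

hypotheses = ["Alan Bean was born in Wheeler, Texas."]
references = [["Alan Bean's birthplace is Wheeler, Texas."]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references, word_order=2)  # word_order=2 gives ChrF++
ter = sacrebleu.corpus_ter(hypotheses, references)
print(f"BLEU={bleu.score:.1f}  ChrF++={chrf.score:.1f}  TER={ter.score:.1f}")
```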
## 3 Applying LLMs to the WebNLG 2020 challenge
In the WebNLG 2020 Challenge, participants trained models to learn the transformation of sets of KG triples to natural language text. Graph-to-text data was provided for English and Russian (Castro Ferreira et al., 2020). The English-ALL test set contains 1779 sets of between 1 and 7 triples, each paired with up to five examples of human-written reference text that expresses those triples. The training set is not relevant to this paper, as zero-shot KG-to-text by definition does not perform training on the specific task. Models trained on the training set and evaluated on the test set (for which only the triples and no reference text were public until the end of the challenge) were ranked by METEOR score, with Bleu, Bleu-NLTK, TER, ChrF++, BERT and BLEURT scores also available for reference. The challenge organisers provided official evaluation scripts for evaluating hypotheses on the test set1. While recalculating the numbers for the other participants in Table 1, the BLEURT numbers we got were notably lower than those seen in Castro Ferreira et al. (2020) for all systems, but the internal order was the same.
Footnote 1: [https://github.com/faceface](https://github.com/faceface)
### Results on the WebNLG 2020 dataset
The 1779 English and 1102 Russian prompts of the WebNLG 2020 test set, as described in Section 3, were expressed with _gpt-3.5-turbo_ using the process described in Section 3.1. We present the results for English in Table 1, with every listed participant from Castro Ferreira et al. (2020) shown alongside ChatGPT. The table is ordered by METEOR as in the original challenge.
Beyond METEOR, ChatGPT performs less well on other measures, ranking slightly above the FORGE2020 baseline for BLEU and BLEU-NLTK, and below it for TER (note that higher TER values are worse). The relatively low BLEU and BLEU-NLTK scores and the high TER (measures that reward exact word matches), combined with competitive METEOR and BLEURT scores, imply that ChatGPT consistently produces text that expresses roughly the same semantic content as the reference translations, using roughly the same stemmed words, but in orders and in forms (tenses, inflections) that differ from the references.
ChatGPT's results for Russian were significantly worse than for English, obtaining a METEOR score of 0.403. This is below the FORGE2020 baseline which obtained a METEOR score of 0.467. The full results table for Russian is included in Appendix A.
Finally, it should be noted that we do not have access to the training data for ChatGPT, and we therefore cannot know whether the results of other models in the WebNLG 2020 challenge were part of the training. Thus, the results seen in Table 1 may be artificially inflated.
## 4 Evaluating the effects of KG factualness
As stated in the introduction, the LLM's pretraining on (mostly) factual data might influence its ability to generate text from the KG triples, if these are not also factual. To evaluate this effect, we chose to synthesise our own data through WikiData. This allowed us to retain metadata about the classes and types of entities in the graphs, limit and specify the types of properties that would be included in our triple set, and additionally guarantee that the LLM would not have seen the data during its training (which, as was noted above, cannot be guaranteed for the WebNLG 2020 test set).
We sampled the WikiData API for random small subgraphs of knowledge graph triples centered around an entity that represents a human. To further make sure that our generated text represented knowledge that was reasonably interesting to humans and representative of information that could appear in information text or a presentation, we manually created a list of 184 property identifiers that occurred often in connection with humans2. Our prompts represented connected graphs; there was
\begin{table}
\begin{tabular}{l c c c c c c c c c}
**Team** & **BLEU** & **BLEU NLTK** & **METEOR** & **CHRF++** & **TER** & **BERT P** & **BERT R** & **BERT F1** & **BLEURT** \\
\hline
Amazon AI (Shanghai) & **0.539** & **0.535** & **0.417** & **0.690** & **0.406** & **0.960** & **0.957** & **0.958** & **0.47** \\
OSU Neural NLG & 0.535 & 0.532 & 0.414 & 0.688 & 0.416 & 0.958 & 0.955 & 0.956 & 0.45 \\
FBConvAI & 0.526 & 0.523 & 0.413 & 0.686 & 0.423 & 0.957 & 0.955 & 0.956 & 0.46 \\
bt5 & 0.517 & 0.517 & 0.411 & 0.679 & 0.435 & 0.955 & 0.954 & 0.954 & 0.43 \\
**ChatGPT** & 0.424 & 0.417 & 0.409 & 0.671 & 0.533 & 0.948 & 0.955 & 0.951 & 0.42 \\
NUIG-DSI & 0.517 & 0.514 & 0.403 & 0.669 & 0.417 & 0.959 & 0.954 & 0.956 & 0.45 \\
cuni-ufal & 0.503 & 0.500 & 0.398 & 0.666 & 0.435 & 0.954 & 0.950 & 0.951 & 0.39 \\
DANGNT-SGU & 0.407 & 0.405 & 0.393 & 0.646 & 0.511 & 0.940 & 0.946 & 0.943 & 0.27 \\
CycleGT & 0.445 & 0.432 & 0.387 & 0.637 & 0.479 & 0.949 & 0.949 & 0.948 & 0.40 \\
RALI - Université de Montréal & 0.402 & 0.393 & 0.386 & 0.634 & 0.504 & 0.944 & 0.944 & 0.944 & 0.28 \\
TGen & 0.509 & 0.482 & 0.384 & 0.636 & 0.454 & 0.952 & 0.947 & 0.949 & 0.36 \\
Baseline-FORGE2020 & 0.405 & 0.396 & 0.373 & 0.621 & 0.517 & 0.946 & 0.941 & 0.943 & 0.26 \\
Huawei Noah's Ark Lab & 0.395 & 0.387 & 0.372 & 0.613 & 0.536 & 0.935 & 0.937 & 0.935 & 0.10 \\
Baseline-FORGE2017 & 0.378 & 0.371 & 0.364 & 0.606 & 0.553 & 0.933 & 0.928 & 0.930 & 0.20 \\
NILC & 0.319 & 0.313 & 0.350 & 0.545 & 0.629 & 0.920 & 0.922 & 0.920 & 0.12 \\
UPC-POE & 0.391 & 0.379 & 0.337 & 0.579 & 0.564 & 0.933 & 0.927 & 0.929 & 0.08 \\
ORANGE-NLG & 0.382 & 0.376 & 0.335 & 0.571 & 0.577 & 0.920 & 0.920 & 0.920 & -0.09 \\
\end{tabular}
\end{table}
Table 1: Results when applying ChatGPT to the WebNLG 2020 English-ALL task. Note that ORANGE-NLG garnered slightly worse numbers in all metrics compared to Castro Ferreira et al. (2020) when we re-ran the evaluation scripts.
always a path from any entity in our prompts to all other entities in the same prompt. We will call data sampled in this way **factual**, although it is possible that some triples are incorrect, either through vandalism or mistakes by the authors of the data.
### Fictionalisation and counterfactualisation
For each _factual_ graph sampled according to the method described in Section 4, we applied substitutions to the names of the entities in the graph to produce two new graphs with identical structure but different entities. By retaining the graph structure but changing the entities contained in the graph, we could create prompts that expressed knowledge that would contradict the information stored in the LLM's parameters, or create prompts that we could guarantee would not match factual information stored in the LLM's parameters.
To produce what we call a **fictional** graph, we separately asked GPT-3 to generate fictional examples of the WikiData types present in the graph3. To produce **counterfactual** graphs, entities were randomly replaced with a different example of the same WikiData class sampled from WikiData. To reduce the number of cases where humans were stated to have died before they were born, we also sorted dates so that the earliest date in the original graphs always corresponded to the earliest date in the substituted graphs.
Footnote 3: See Appendix D for this prompt.
A small example graph with all three sets of labels seen at the same time can be seen in Figure 1. The **factual** data is marked in bold on top in each entity, with **fictional** in the middle, marked in italics, and **counterfactual** on the bottom. Note that our date sorting approach did not affect events and entities _named_ after a date, which allows the counterfactual graph in Figure 1 to state that someone who was born in 1975 also participated in a sporting event in 1970.
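A minimal sketch of the counterfactual substitution with date sorting is given below; the graph representation and helper names are our own assumptions for illustration, not the actual pipeline code.

```python
import random
from datetime import date

def counterfactualise(triples, candidates_by_class, entity_class, date_pool):
    """Replace entities with random entities of the same WikiData class,
    mapping the sorted original dates onto sorted substitute dates."""
    entities = {e for (s, p, o) in triples for e in (s, o)}
    dates = sorted(e for e in entities if isinstance(e, date))
    substitution = {e: random.choice(candidates_by_class[entity_class[e]])
                    for e in entities if not isinstance(e, date)}
    # The earliest original date maps to the earliest substitute date, which
    # reduces cases such as a death date preceding a birth date.
    substitution.update(zip(dates, sorted(random.sample(date_pool, len(dates)))))
    return [(substitution[s], p, substitution[o]) for (s, p, o) in triples]
```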
### WikiData LLM prompt
To express our WikiData dataset, we used a two-step prompt structure rather than the one-step method shown in Section 3.1. This prompt was originally set up to be able to control the theme-rheme structure of the generated text - note that this is not relevant to the analysis presented here, and that we also do not evaluate potential performance differences between the two types of prompts.
For expressing knowledge graph data sampled from WikiData, we converted each edge in the graph into a string _Source / Property / Target_. _Source_ and _Target_ represented the WikiData labels of the entities or constants at both ends of the property. _Property_ was the WikiData label of the property connecting the two entities. If the property was _Godparent_, _Mother_, _Father_ or _Child_, we changed the label into _Has godparent_, _Has mother_, _Has father_, or _Has child_, respectively, as pilot testing found that both crowdworkers and ChatGPT often confused the intended direction of those properties.
Once all properties in the graph had been turned into a string according to the above process, we then passed the first triple to ChatGPT via a prompt that asked it to convert that triple into exactly one sentence; the remaining triples were then passed to the LLM in a second prompt using the context of the previous prompt and the model's previous response to ask it to insert the remaining triples into the text. The returned text from this second step was used as the KG-to-text output. An example prompt instantiated with graph data from Figure 1 is included in Appendix C.1.
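A minimal sketch of this two-step structure follows; the exact instruction wording is given in Appendix C.1, so the prompt strings here, and the legacy openai client usage, are illustrative assumptions only.

```python
import openai  # assumes OPENAI_API_KEY is set; legacy (pre-1.0) client interface

def linearise(triple):
    source, prop, target = triple
    return f"{source} / {prop} / {target}"

def kg_to_text(triples, model="gpt-3.5-turbo"):
    # Step 1: express the first triple as exactly one sentence.
    messages = [{"role": "user",
                 "content": "Express this fact as exactly one sentence:\n"
                            + linearise(triples[0])}]
    reply = openai.ChatCompletion.create(model=model, messages=messages)
    messages.append({"role": "assistant",
                     "content": reply.choices[0].message.content})
    # Step 2: insert the remaining triples into the text, reusing the context.
    messages.append({"role": "user",
                     "content": "Rewrite the text so that it also states these facts:\n"
                                + "\n".join(linearise(t) for t in triples[1:])})
    reply = openai.ChatCompletion.create(model=model, messages=messages)
    return reply.choices[0].message.content
```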
### Evaluation on sampled WikiData KG triples
We generated a total of 70 sets of prompts containing 7 triples, each representing a connected graph with seven edges (properties). The choice of seven triples matches the largest graphs in the WebNLG dataset. The three conditions (factual, fictional or counterfactual as described in Section 4) gave us a total of 210 graphs. Text was then generated for these graphs according to the process described in Section 4.2.
#### 4.3.1 Results for grammar and coherence
Through Amazon's Mechanical Turk, we asked three annotators for each of our 210 graphs to evaluate the generated text for grammar and coherence (similarly to Li et al. (2021)), using two sliders. The Grammar slider had one end stating "_The grammar in the text is extremely poor. It is not written in well-formed English._" and the other stating "_The grammar in the text is perfect. It is written in well-formed English._". The Coherence slider stated "_It is incoherent. The different parts of the text do not lead into each other._" on one end and "_It is highly coherent. The different parts of the text flow well into each other._" on the other. To submit their responses, participants had to indicate whether they
understood that the task was not about the factual accuracy of the text but rather how well it matches the prompt; three annotators indicated they did not understand this and their evaluations were thus discarded. The average ratings by condition are listed in Table 2. The crowdworkers were paid \(\$0.2\) per prompt they rated. 34 unique crowdworkers participated, ranking an average of 18.5 prompts (SD = \(20.0\), min = 1, max = 83).
To evaluate the given ratings of grammaticality and coherence, we set up two Cumulative Link Mixed Models (CLMMs) to treat the ratings as an ordinal measure (Agresti, 2012; Christensen, 2019). A recent study of how linear mixed models can be applied to scales of this type can be found in Howcroft and Rieser (2021). The factualness was treated as a fixed factor, with the identity of the annotator treated as a random factor. For grammaticality, the null model was not significantly different (\(p=.0969\)) from the model considering condition as a fixed factor, and as such we could not reject the null hypothesis that grammaticality was the same across all three conditions.
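The model comparison reported above amounts to a likelihood-ratio test between the nested null and full models (the study presumably fit the CLMMs with R's `ordinal` package, per the Christensen (2019) citation). A hedged Python sketch of that final test step, using placeholder log-likelihoods rather than the study's values:

```python
from scipy.stats import chi2

def lr_test_pvalue(loglik_null: float, loglik_full: float,
                   df_diff: int) -> float:
    """p-value of a likelihood-ratio test between two nested models."""
    statistic = 2.0 * (loglik_full - loglik_null)
    return chi2.sf(statistic, df=df_diff)

# Condition has three levels, so the full model adds two parameters.
# The log-likelihoods below are placeholders, not the study's numbers.
print(lr_test_pvalue(loglik_null=-812.4, loglik_full=-809.1, df_diff=2))
```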
For coherence, the data type was a significant factor (\(p=.0363\)), leading us to reject the null hypothesis that the three conditions were equally coherent. A post-hoc estimated marginal means analysis confirmed that **counterfactual** graphs were rated as less coherent than **factual** graphs when treating the identity of the annotator as a random factor (\(p<.0001\)), but the comparisons between counterfactual and fictional (\(p=.394\)) and fictional and factual (\(p=.197\)) were not significant.
#### 4.3.2 Results for triple coverage
For each of the seven triples in the prompt, annotators also had to check one of three exclusive options; _the text states this fact_ (henceforth _present_), _the text does not say anything about this_ (_absent_) or _the text states something else that actively goes against this fact_ (_hallucinated_). _Absent_ corresponds to _omission_ in Yin and Wan (2022), with _hallucinated_ corresponding to _inaccuracy intrinsic_, _inaccuracy extrinsic_ and _positive-negative aspect_ (Yin and Wan, 2022).
While the grammaticality and coherence evaluations are subjective and their ratings were used as given in Section 4.3.1, annotators showed poor agreement on the triple coverage task, achieving a Fleiss' Kappa (Fleiss and Cohen, 1973) of only \(\kappa\approx 0.016\), at the low end of _slight agreement_ on the scale by Landis and Koch (1977). To address this, and since we believe that the judgement is objective for most cases, we manually annotated each triple as being present, absent or hallucinated, and discarded the crowdworkers' evaluations for the triple coverage task. The resulting classifications are listed in Table 3.
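For reference, the agreement statistic can be computed as sketched below, assuming ratings are coded 0 = present, 1 = absent, 2 = hallucinated, with one row per triple and one column per annotator (the toy array is ours, not the study's data):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([[0, 0, 1],   # toy data: rows = triples,
                    [0, 2, 1],   # columns = the three annotators
                    [0, 0, 0],
                    [2, 1, 0]])

table, _ = aggregate_raters(ratings)   # per-triple category counts
print(fleiss_kappa(table, method="fleiss"))
```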
A \(\chi^{2}\) test confirmed that the distribution of present, absent and hallucinated triples by each condition seen in Table 3 was significantly different from the expected distribution if the condition had had no effect (\(\chi^{2}(4,N=1470)=10.5,p=.0328\)), leading us to reject that null hypothesis. To analyse the results, we performed repeated Bonferroni-corrected \(\chi^{2}\) tests on each pair of conditions and triple label, applying Yates's correction if any class had an expected occurrence of less than five.
\begin{table}
\begin{tabular}{l r r}
**Condition** & **Avg. coherence** & **Avg. gramm.\({}^{\star}\)** \\ \hline
Factual & \(72.0\%\) & \(71.6\%\) \\
Fictional & \(67.6\%\) & \(68.8\%\) \\
Counterf.\({}^{\dagger}\) & \(69.7\%\) & \(71.1\%\) \\
\end{tabular}
\end{table}
Table 2: Average ratings of coherence and grammaticality by condition. Unlike our CLMM analysis presented in Section 4.3.1, this table does not take the random factor of annotator identity into account. \(\star\): _Grammaticality_. \(\dagger\): _Counterfactual_.
Figure 1: An example graph with three edges representing **factual** claims about its root entity. Fictional and counterfactual substitutions are listed as the second (in italics) and third (plain) row, respectively, of each entity (box).
\begin{table}
\begin{tabular}{l r r r}
**Condition** & **Present** & **Absent** & **Hallucinated** \\ \hline
Factual & 471 & 2 & 17 \\
Counterf.\({}^{\dagger}\) & 470 & 8 & 12 \\
Fictional & 468 & 13 & 9 \\
\end{tabular}
\end{table}
Table 3: The number of triples annotated as present, absent and hallucinated for the 490 (7 * 70) triples in each condition. \(\dagger\): _Counterfactual_.
Two post-hoc comparisons were statistically significant (\(\alpha=0.05/9\approx 0.0056\)); the comparison between **present and absent** triples between the **factual and fictional** condition (\(\chi^{2}(1,N=954)=8.01,p=.00465\)) as well as the comparison between **absent and hallucinated** triples, also between the factual and fictional condition (\(\chi^{2}(1,N=41)=10.4,p=.00128\)). Residual analysis showed that factual graphs had more present but fewer absent triples than fictional graphs, and that factual graphs had more hallucinated but fewer absent triples than fictional graphs.
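These pairwise tests can be reproduced from Table 3 with `scipy`, as sketched below; note that the paper applies Yates's correction only when an expected count falls below five, whereas `correction=True` in scipy would apply it to every 2x2 table, so it is left off here (both reported comparisons have all expected counts above five):

```python
from scipy.stats import chi2_contingency

counts = {  # Table 3
    "factual":        {"present": 471, "absent": 2,  "hallucinated": 17},
    "counterfactual": {"present": 470, "absent": 8,  "hallucinated": 12},
    "fictional":      {"present": 468, "absent": 13, "hallucinated": 9},
}

def pairwise_p(cond_a: str, cond_b: str, lab_a: str, lab_b: str) -> float:
    table = [[counts[cond_a][lab_a], counts[cond_a][lab_b]],
             [counts[cond_b][lab_a], counts[cond_b][lab_b]]]
    stat, p, dof, _ = chi2_contingency(table, correction=False)
    return p

alpha = 0.05 / 9  # Bonferroni correction over the nine comparisons
# Reproduces chi2(1, N=954) = 8.01, p = .00465:
print(pairwise_p("factual", "fictional", "present", "absent") < alpha)
```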
#### 4.3.3 Results for hallucinated inserted information
Each of the 210 expressed sets of 7 triples was also annotated for whether it contained any additional information beyond what was stated in the triples, corresponding to what Yin and Wan (2022) call _addition_ or what Ji et al. (2023) call _extrinsic hallucinations_. While we had originally set out to also do this via Mechanical Turk, we chose to perform the annotation ourselves after seeing low agreement on pilot tests. We chose not to annotate cases where the LLM picked an unexpected tense (present tense for something that happened in the past, or vice versa), as we had not specified in the prompt what today's date was. Additionally, cases where the LLM picked a specific gendered pronoun for a fictional character with an ambiguous name were not annotated as hallucinations.
Out of the 70 graphs for each condition, 12 were found to have hallucinated extra information for the **factual** condition, 10 for the **counterfactual** condition and 9 for the **fictional** condition. A \(\chi^{2}\) goodness-of-fit test did not allow us to reject the null hypothesis that the rate of _inserted_ hallucinations across all three conditions was the same (\(\chi^{2}(2,N=31)=0.452,p=.798\)). A list of every hallucination of this type is attached in Appendix E. Recurring issues for all three conditions are unfounded statements that the subject was "survived by" a spouse, child or parent.
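The goodness-of-fit test above can be reproduced directly from the reported counts of 12, 10 and 9 hallucinating graphs; `scipy.stats.chisquare` tests against a uniform distribution by default:

```python
from scipy.stats import chisquare

result = chisquare([12, 10, 9])  # inserted-hallucination counts
print(result.statistic, result.pvalue)  # ~0.452 and ~0.798
```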
## 5 Discussion
Although LLMs appear to be able to do a relatively good job at generating text expressing arbitrary KG data, the relatively high rate of inserted hallucinated information (around 10-15% in Section 4.3.3) means system designers must be careful before deploying any system that uses an LLM KG-to-text component as a synthesis engine or microplanner. The rate of _addition_ previously seen in Yin and Wan (2022) has one outlier with an approximate rate of 13%, but most of the high-performance models also seen in the WebNLG 2020 challenge (Castro Ferreira et al., 2020) have a much lower rate of inserting information (Yin and Wan, 2022). This suggests ChatGPT is unusually likely to make these types of mistakes.
Factualness did have an effect on how many triples were present, absent or hallucinated. When expressing factual data, the most common error category was _hallucinated_; expressing triples in a way that was incompatible with the source prompt. When generating text for fictional data, the most common error was instead information missing from the generated text. Yin and Wan (2022) showed that the large pretrained T5 and BART models performed practically no addition or duplication errors on any of the KG datasets they were evaluated on, but that the rates of hallucinations (intrinsic and extrinsic inaccuracy) rose with the amount of pretraining - our results on ChatGPT do not follow this trend, as we see both types of errors on our factual dataset.
We found in Section 3.2 that ChatGPT performed significantly worse on Russian data than on English data. While it is possible that a two-tiered approach that first attempts to use an LLM to translate the triples into the target language, and then generate text for the translated triples, would perform better, we considered such prompt engineering to be outside the scope of this paper. Recent work by Lai et al. (2023) showed that low-resource languages made ChatGPT perform worse on a zero-shot summary task (with the prompt written in the target language), and while Russian is not necessarily a low-resource language, Lai et al. found that Russian ranked relatively low among high-resource languages.
The main difference between our approach and that of Yuan and Farber (2023) is how we create our prompts. Although the approach by Yuan and Farber is more logically consistent and arguably more minimal, basing their triple representation on previous work by Ribeiro et al. (2019), the authors run into issues with preventing the LLMs from synthesising text beyond what they asked for. It is otherwise difficult to compare the results obtained by Yuan and Farber to ours, as their datasets are different from the WebNLG 2020 dataset we utilised in Section 3.2. We do not see the type of hallucinatory text continuation that Yuan and Farber see in their dataset in ours, perhaps because we explicitly tell the LLM to only state what we ask it to state in our prompts (for which see Appendix B).
## 6 Conclusion
In this paper, we have shown that LLMs can perform the task of arbitrary-domain zero-shot knowledge graph-to-text. The model's knowledge of the information for which it is generating text affects how likely it is to misstate or leave out information. This, in combination with the high likelihood that the expressed text contains some information that was not part of the triples that the model was asked to express, calls for caution when deploying LLM-powered systems for in-the-wild KG-to-text synthesis of unseen knowledge graph data on arbitrary domains. For closed domains, on seen knowledge graph data, where the consequences of accidentally omitting or misstating a fact are smaller, the LLM approach may be easier to implement than the models from Castro Ferreira et al. (2020), especially if the topic of the generated text is outside of the scope of typical KG-to-text datasets.
As LLMs with higher numbers of parameters are trained in the future, some of the issues mentioned in this paper - especially the low performance on Russian data - may be addressed, but the ability of the model to draw parallels between information it has encoded in its parameters and the information it has been asked to express means that issues of triple coverage and hallucination may not cleanly go away in the same fashion. For this reason, we believe that pretrained models that specialise in the KG-to-text task will retain their value.
## 7 Limitations
The low agreement of our Mechanical Turk annotators on both the coverage of triples and annotating extra hallucinated information in the generated text limited the scale of our evaluation, as we had to manually annotate the data ourselves. With more data, it is possible that more interesting patterns would appear regarding what type of information is dropped and what extra information is hallucinated when generating text for knowledge graphs with LLMs. Additionally, when attempting to express much larger graphs than the size of 7 we used in Section 4.3.2, it became clear that the ability of crowdworkers to annotate large amounts of data as present, absent or hallucinated deteriorated further as the number of triples grew beyond 5-10; this can be addressed by employing professional annotators.
This paper is not intended to be read as a direct review of the performance of ChatGPT or other OpenAI models on the KG-to-text task, but as a generalised analysis using ChatGPT to stand in for LLMs in general. Although some of the deficiencies of ChatGPT on both the WebNLG 2020 task and our WikiData expression task could be addressed by fine-tuning the prompt or using more advanced LLMs such as GPT-4, we believe that the issues of differing performance depending on factualness extend beyond the capacity of the model to understand the data it is reading, and are not necessarily something that improves as the model is able to relate the prompts it is reading to a larger understanding of the context through an increased number of parameters.
The prompts we present in Appendix B may not be the optimal prompts for making ChatGPT express knowledge graph data, and it is possible that different prompt design could significantly affect the ability of an LLM to perform the WebNLG task (Tables 1, 4) or the triple coverage task we presented in Section 4.3.2. We are not aware of a consistent approach to finding an optimal prompt for any task with LLMs.
A large number of recent papers in both the field of evaluating LLMs and the field of KG-to-text are only available as non-peer-reviewed preprints. This can make it difficult to know the true scale of the field and to know which papers are the most representative of their area - we have nonetheless made an attempt to select representative work in this paper.
## 8 Ethics Statement
Using public LLMs for KG-to-text poses a challenge in extracting explanations for the choices made by the system. Even if LLMs at some point in the near future outperform task-specific models on any NLG task, it may be worth using smaller models specifically to retain control over the model or to achieve explainability.
The use of crowdworkers for the types of annotation and evaluation we presented in Section 4.3.1 did not require ethics approval at our institution.
We made an attempt to filter our WikiData dataset such that it would not contain offensive
statements. The WikiData data synthesis process described in Section 4.2 was rerun when an earlier version of our dataset was found to contain statements about individuals connected to historical events - specifically the Holocaust - that could be interpreted as Holocaust denial. It is nonetheless possible that counterfactual or factual statements in our current dataset, or LLM hallucinations relating to them, could have been perceived as offensive by our Mechanical Turk annotators, on account of the random nature of the process.
## 9 Data Availability Statement
Data files containing model output as well as annotator judgements have been made available on GitHub at https://github.com/Agnesion/zero-shot-NLG-from-KGs-data.
## 10 Acknowledgements
The authors would like to thank the anonymous INLG reviewers for their insightful comments. This work was supported by the project _Social robots accelerating the transition to sustainable transport_ (50276-1), financed by Furhat Robotics & Swedish Energy Agency.
|
2305.09546 | An Exploratory Study on the Evidence of Hackathons' Role in Solving OSS
Newcomers' Challenges | Background: OSS projects face various challenges. One major challenge is to
onboard and integrate newcomers to the project. Aim: We aim to understand and
discuss the challenges newcomers face when joining an OSS project and present
evidence on how hackathons can mitigate those challenges. Method: We conducted
two searches on digital libraries to (1) explore challenges faced by newcomers
to join OSS projects, and (2) collect evidence on how hackathons were used to
address them. We defined four evidence categories (positive evidence in OSS,
positive evidence in a different context, inconclusive evidence, and no
evidence) to classify the evidence on how hackathons address challenges. In
addition, we investigated whether a hackathon event was related to an OSS
project or not. Result: We identified a range of newcomer challenges that were
successfully addressed using hackathons. However, not all of the solutions we
identified were applied in the context of OSS. Conclusion: There seems to be
potential in using hackathons to overcome newcomers' challenges in OSS projects
and allow them to integrate faster into the project. | Ahmed Samir Imam Mahmoud, Alexander Nolte, Dietmar Pfahl | 2023-05-16T15:40:19Z | http://arxiv.org/abs/2305.09546v1 | # An Exploratory Study on the Evidence of Hackathons' Role in Solving OSS Newcomers' Challenges
###### Abstract
Background: OSS projects face various challenges. One major challenge is to onboard and integrate newcomers to the project. Aim: We aim to understand and discuss the challenges newcomers face when joining an OSS project and present evidence on how hackathons can mitigate those challenges. Method: We conducted two searches on digital libraries to (1) explore challenges faced by newcomers joining OSS projects, and (2) collect evidence on how hackathons were used to address them. We defined four evidence categories (positive evidence in OSS, positive evidence in a different context, inconclusive evidence, and no evidence) to classify the evidence on how hackathons address challenges. In addition, we investigated whether a hackathon event was related to an OSS project or not. Result: We identified a range of newcomer challenges that were successfully addressed using hackathons. However, not all of the solutions we identified were applied in the context of OSS. Conclusion: There seems to be potential in using hackathons to overcome newcomers' challenges in OSS projects and allow them to integrate faster into the project.
OSS, Open-Source, Hackathon, Newcomers, Challenges, Barriers, Evidence
## 1 Introduction
Open source software (OSS) projects have proven their success through projects like Debian, Linux kernel, and many more. Researchers have studied their success factors (Mateos-Garcia and Steinmueller, 2008; Sadowski et al., 2008; Norris, 2004) and presented lessons learned (Muller, 2018) when creating and maintaining successful OSS projects.
The management of OSS projects faces challenges. Maintaining engagement, attracting new developers (Sen et al., 2012), and integrating them such that they become contributors to a project are examples of such challenges. The range of challenges that exist in OSS projects has been discussed in the literature (Lee et al., 2017; Hannebauer et al., 2014; Balali et al., 2018; Hannebauer and Gruhn, 2017). Researchers have proposed and evaluated various approaches (Steinmacher et al., 2016) that aim at helping the OSS communities to onboard newcomers and reduce the time it takes until they become productive.
Hackathons have previously been used to overcome challenges related to networking, sharing ideas, learning, and creating prototypes (Nolte et al., 2020). Hackathons might thus have the potential to address problems related to onboarding newcomers to OSS projects and fostering their contribution. Hackathons are time-bounded events where individuals form ad-hoc teams and engage in intensive collaboration on a project idea of their interest (Falk et al., 2022). These events have been organized and studied in various contexts, including corporations (Pe-Than et al., 2019; Nolte et al., 2018; Komssi et al., 2015), entrepreneurship (Cobham et al., 2017; Nolte, 2019), and education (Porras et al., 2019; Gama et al., 2018; Kienzler and Fontanesi, 2017). Despite their widespread use, research focusing on hackathons in the context of OSS projects in general and on how they can aid the onboarding of newcomers, in particular, is scarce.
To address this gap, we conducted a review of existing literature on challenges that affect OSS newcomers. We contrasted our findings with prior work on how hackathons were used to tackle such challenges. In this position paper, we report on findings from this initial analysis. These findings will subsequently serve as a basis for an empirical study on how hackathons can support newcomers to join and become productive members of OSS projects.
We provide the following contribution: We present an overview of challenges affecting newcomers to start contributing to OSS projects reported in the literature. We also collect evidence about how hackathons were used to tackle these challenges.
## 2 Background
### Challenges for OSS Projects
OSS projects are an important piece of the larger software ecosystem, contributing software, libraries, and packages. Typical challenges of OSS projects that need continuous attention and monitoring include the risk of underproduction (Champion and Hill, 2021), the difficulty of attracting and retaining developers (Sen et al., 2012), knowledge management (KM) (Dai et al., 2020), and the handling of community dynamics in agile OSS projects (Muller, 2018).
In this paper, we focus on challenges related to onboarding newcomers to OSS projects and how hackathons were used to overcome them.
### Developers Joining OSS Projects
Prior research has been conducted to understand the onboarding of new developers joining OSS projects. This work includes understanding their motivation to join OSS projects (Ye and Kishida, 2003). Based on this understanding, researchers have proposed scripts for new developers to start contributing (von Krogh et al., 2003), proposed a joining model (Steinmacher et al., 2014), and proposed an approach to identify and recommend mentors in OSS projects (Canfora et al., 2012). In addition, there is also work that aims to understand the impact of globalization and offshoring on OSS projects and provide tools to facilitate newcomers' learning (Zhou and Mockus, 2010).
### Hackathons and OSS
It is a common practice for OSS projects to arrange in-person events like conferences or hackathons. Research on such events has found that they can aid the development of trust and build relationships (Geiger et al., 2021). There are empirical studies on the Google Summer of Code (GSoC) - a community code engagement event similar to hackathons. Findings indicate that such events can be used to attract new developers (Silva et al., 2017). Similarly, there are studies on how project characteristics can affect the onboarding of developers, based on data from a kick-start hackathon at Facebook. Findings indicate positive effects of mentoring on the onboarding process (Fagerholm et al., 2014).
## 3 Methodology
Our work is divided into two main steps. First, we conducted a literature review on the challenges and barriers affecting newcomers in OSS projects and categorized them into different groups (section 3.1). Second, we extracted evidence from existing work about how hackathons were used to overcome these challenges and barriers (section 3.2).
### Identifying Challenges That Affect Newcomers in OSS Projects
We first searched for secondary studies on newcomer barriers in OSS projects, i.e., Systematic Literature Reviews (SLRs), in the ACM digital library, Google Scholar, and IEEE Xplore. As keywords we used _OSS_, _newcomers_, _barriers_, _challenges_, _SLR_, and _systematic literature review_. One of the identified papers (Steinmacher et al., 2015) specifically discussed barriers faced by newcomers in OSS projects and grouped them into different categories and subcategories.
Since SLRs do not cover the most recent studies, we expanded our search on the same digital libraries to the time period after the publication of the aforementioned paper (Steinmacher et al., 2015), excluding the keywords _SLR_ and _systematic literature review_. The goal was to identify additional challenges and barriers that were discussed after the SLR was published. This new search yielded 7 additional papers discussing newcomers' challenges and barriers in OSS projects. We only included papers that discussed developers joining OSS projects and excluded papers discussing newcomers in other settings.
From the final list of papers, we extracted all newcomers' challenges and barriers in OSS projects and classified them based on the categories proposed in the aforementioned SLR (Steinmacher et al., 2015). Because we found new types of barriers and challenges, we had to extend the original set of categories.
### Finding Evidence on Hackathons Addressing Newcomer Challenges
Before searching for prior work on how hackathons were used to overcome newcomer challenges, we defined categories that describe the state of evidence based on two criteria. The first criterion covers evidence that hackathons helped address newcomer challenges. The second criterion covers the context of the hackathon events, i.e., whether they were related to OSS or not. Based on these criteria, we defined four categories. _Positive evidence in open source_ indicates that there is positive evidence in literature that hackathons have helped to overcome a specific OSS challenge. _Positive evidence in different context_ indicates there is positive evidence in literature that hackathons helped overcome a specific challenge outside of OSS. _Inconclusive evidence_ indicates that there is contradicting evidence in literature, i.e., that there is evidence both in favor of and against hackathons helping to overcome a specific challenge. _No evidence_ indicates a lack of evidence that hackathons help overcome a specific challenge. Table 1 provides an overview.
After defining these categories, we conducted a search in digital libraries, including the ACM digital library, Google scholar, and IEEE Xplore, for papers that discuss newcomers and hackathon outcomes in general. There are no secondary studies on the topic so we used the following keywords in our search: _hackathon_, _newcomers_, _outcome_, _challenges_, _engagement_, and _collaboration_. In addition, we included newcomer challenges collected in the first step as keywords in our search.
Based on our search, we identified 8 papers. We analyzed these papers to find evidence that hackathons overcome or solve any of the previously identified challenges and barriers, in the context of OSS and beyond. We focused on the method and approach described in each paper before extracting how newcomer challenges were addressed.
## 4 Results
Our main goal was to find evidence that hackathons helped overcome newcomer challenges in OSS projects. To achieve this goal, we collected and analyzed existing literature as discussed in Section 3. In this section, we present our findings regarding newcomer challenges (section 4.1) and regarding evidence about how hackathons helped overcome them (section 4.2).
### Challenges Affecting Newcomers in OSS Projects
Several publications report on barriers and challenges of newcomers joining an OSS project that can be categorized into five main groups (Steinmacher et al., 2015): _Finding a way to start_, _Technical hurdles_, _Poorly documented code_, _Newcomers' previous knowledge_, and _Social interaction_. We extended these categories by including more recent work, which added a sixth group, _Individuals problems_. Table 2 shows the extended set of categories of challenges faced by newcomers in OSS projects. It also includes the results of the evidence where hackathons have been used to overcome newcomer challenges in OSS and beyond.
#### 4.1.1 Finding a Way to Start
This category relates to newcomers' challenges in finding the first steps to become engaged in a project and in finding their way to interact with the existing team of contributors (Steinmacher et al., 2015). One problem, reported by Steinmacher et al. (2015) and Balali et al. (2018), is to _"find an appropriate task to start with"_. Steinmacher et al. (2015a, 2015b) also report challenges to _"find a mentor"_ who guides a newcomer through onboarding and integrates them into the team. _"Difficulties locating bugs they choose to fix"_ is a related challenge reported by Lee et al. (2017) and Hannebauer and Gruhn (2017). Also related to bugs are challenges concerning _"Bug reproduction"_, reported by Hannebauer and Gruhn (2017): newcomers struggle to reproduce bugs and to test whether their fix resolved them. One last challenge, reported by Balali et al. (2018), is that newcomers sometimes _"start with a too complex task"_ for their skill set. They might struggle to understand the task and may not be able to accomplish what they desire with their current experience in the OSS project.
#### 4.1.2 Technical Hurdles
This category includes technical challenges related to handling code. One of the main issues for newcomers, reported by Steinmacher et al. (2015), is _"setting up the local workspace"_. This is always a challenge for new developers joining a project: they often face problems building the project (Hannebauer and Gruhn, 2017), setting up the development environment (Hannebauer and Gruhn, 2017), and using different devices (Balali et al., 2018). Other barriers reported by Steinmacher et al. (2015b) are _"Code complexity"_ and _"Software architecture complexity"_, which require time and effort from newcomers to get a grasp of the project and its specifics.
\begin{table}
\begin{tabular}{l l} \hline
Evidence category & Abbreviation \\ \hline
Positive Evidence in OSS Projects & P-E-OSS \\ \hline
Positive Evidence in Different Context & P-E-OTH \\ \hline
Inconclusive Evidence & INC-E \\ \hline
No Evidence & NO-E \\ \hline
\end{tabular}
\end{table}
Table 1: Categories of evidence (including abbreviations)
Table 2: Newcomer challenges in OSS projects by category and subcategory, with the evidence classification for each (Category / Sub Category / Classification; the table body is not recoverable from the source).
One main issue reported by Lee et al. (2017) is that the _"Submission process is too long and too complex"_. This relates to issue tracker complexity (Hannebauer and Gruhn, 2017), bureaucracy (Hannebauer and Gruhn, 2017), and submission techniques (Hannebauer and Gruhn, 2017) (e.g., patch-based or pull requests), which can differ for each project.
#### 4.1.3 Poorly Documented Code
This category is related to documentation, an important aspect of any software development project. Steinmacher et al. (2015b) categorized barriers related to documentation into _"Too much documentation"_, _"Outdated documentation"_, _"Unclear code comments"_, and _"Lack of documentation"_. Lack of documentation was also reported by Hannebauer and Gruhn (2017) referring to newcomers having to use scripts that are not fully documented and must be learned by observation or trial-and-error.
#### 4.1.4 Newcomers' Previous Knowledge
This category is related to inadequate previous knowledge that could become a challenge for newcomers when joining an OSS project. _"Lack of technical experience"_, _"Lack of domain experience"_, and _"Lack of knowledge of project practices"_ are reported by Steinmacher et al. (2015b). Other newcomers' previous knowledge challenges are _"Unfamiliar project management schemes"_(Lee et al., 2017) and _"Lack of knowledge of the project's programming language"_(Hannebauer et al., 2014).
#### 4.1.5 Social Interaction
This category refers to one of the most common groups of challenges faced by newcomers in OSS projects. Subcategories include _"Not receiving a timely answer"_ (Steinmacher et al., 2015b, a), e.g., in the form of delayed answers, and _"Receiving an improper answer"_ (Steinmacher et al., 2015b), e.g., in the form of impolite answers or answers with advanced or complex content. A subcategory that we derived covers _Communication barriers_. It includes barriers due to insufficient command of the English language (Steinmacher et al., 2015a; Balali et al., 2018), newcomers not sending meaningful messages (Steinmacher et al., 2015a), making useless comments in mailing lists (Steinmacher et al., 2015a), issues related to timezone and geographical location (Balali et al., 2018), lack of interpersonal skills of mentors (Balali et al., 2018), cultural differences (Balali et al., 2018), and the need of newcomers to contact a person face-to-face (Steinmacher et al., 2015a).
#### 4.1.6 Individuals Problems
This category was derived by us, and it consists of various problems of individuals in OSS projects, arising either from newcomers themselves or from mentors, and affecting the newcomers' contributions.
The first subcategory contains _Newcomers' personal issues_ that could affect their integration in an OSS project, including lack of clear professional goals (Balali et al., 2018), fear of judgement (Balali et al., 2018), low self-efficacy (Balali et al., 2018), performance anxiety (Balali et al., 2018), newcomer's personality conflicts with the role (Balali et al., 2018), newcomer's inability to improve upon criticism (Balali et al., 2018), difficulty in time management (Balali et al., 2018), difficulty in managing different accounts (Balali et al., 2018), and shyness (Steinmacher et al., 2015a).
The second subcategory contains _Mentor issues_ that could become barriers for newcomers in OSS projects. Mentors are often needed by newcomers to integrate into the team quickly. Mentor issues relate to difficulties with time-management (Balali et al., 2018), handling a large number of mentees (Balali et al., 2018), switching context (Balali et al., 2018), not having a formal procedure for introducing the community (Balali et al., 2018), and managing different accounts (Balali et al., 2018).
### Evidence on How Hackathons Address Newcomer Challenges in OSS Projects
In this subsection, we present evidence on how hackathons have addressed newcomers' challenges in OSS projects and beyond (see the right-most column of Table 2).
#### 4.2.1 Finding a Way to Start
This category is related to newcomer challenges while becoming engaged with the project and finding their way to integrate with the existing team. The challenges in this category are directly related to learning and coaching when starting participation in a
project. This matches well with hackathons' set-up as time-bounded events focusing on participants as potential project newcomers, with mentors assigned to each team. Nolte et al. (2020) report on supporting newcomers in scientific hackathons and share recommendations that mentors should focus on mentoring the team and its learning rather than on project completion. They also found that teams that took ownership of their projects, received proper support from mentors, and received learning-oriented support reported positive outcomes. Hogan (2022) also found positive outcomes for learning through hackathons, using mentors to support and guide teams and carefully structuring the hackathon events in order to improve authentic learning.
#### 4.2.2 Technical Hurdles
This category refers to technical challenges related to the handling of program code by newcomers. Similar to the first category, newcomers' challenges related to building the project, setting up the development environment, code complexity, submission techniques, and software architecture complexity can also be addressed through learning (Hogan, 2022) and coaching by mentors. Steglich et al. (2020) used hackathons to engage students in learning and adopting software engineering practices, highlighting positive outcomes when hackathon participants learn from their peers, share knowledge, and simply have an experience different from that provided in a classroom setting. On the other hand, we could not find studies that specifically mention challenges like issue tracker complexity, bureaucracy, long project processes, and differences in the devices that mentors and mentees use.
#### 4.2.3 Poorly Documented Code
This category relates to documentation, which is an important aspect of any software development project. We could not find any evidence that hackathons were used to overcome any of the subcategories related to poorly documented code, including _"too much documentation"_, _"outdated documentation"_, _"unclear code comments"_, and _"lack of documentation"_. One existing SLR (Medina Angarita and Nolte, 2020) found that documentation could be an outcome of a hackathon, implying that hackathons may be used to improve or generate documentation and that OSS projects might use hackathons for that goal. However, this finding is not related to our focus on helping newcomers integrate faster into an OSS project.
#### 4.2.4 Newcomers' Previous Knowledge
The challenges in this category relate to a lack of knowledge about the project as a whole or about aspects such as project management, the domain, or the programming language. The use of hackathons to improve the learning curve is discussed by Steglich et al. (2020), who found positive evidence when participants were allowed to interact and talk with mentors and stakeholders. Noguera Salinas et al. (2019) report on improving learning skills through datathon events. Nolte et al. (2020) also provide several recommendations for choosing experienced mentors with previous knowledge about the domain, community, and project practices, which can help hackathon participants achieve their goals and improve their engagement.
#### 4.2.5 Social Interaction
Hackathons are mainly social events where participants interact and work together to achieve a goal set during the event, so they can be used to overcome most of the challenges in this category. Challenges related to _"Not receiving a (timely) answer"_ and _"Receiving an improper answer"_ are mainly discussed by Nolte et al. (2020), who suggest that assigning a mentor with previous knowledge about the community may help guide the team towards the idea and the solution and eliminate miscommunication between newcomers and mentors. Lyonnet (2022) found positive evidence for hackathons improving social interactions and communication, which also covers the challenges faced by newcomers related to _Communication barriers_ - except for the English-language barrier, for which we were not able to find any evidence of hackathons being used as a solution.
#### 4.2.6 Individuals Problems
This category is connected to the individual problems in the project that could arise from the newcomers or from the mentors. A few of the challenges in this category related to _Mentor issues_ are mentioned in the literature; however, no conclusive evidence was reported. Silva et al. (2020) recommend assigning two mentors (one experienced and one new), which can help balance the load on the assigned mentor and can thus help to overcome mentor challenges like difficulty in time management, handling a large number of mentees, and difficulty in switching contexts. No evidence was found of the usage of hackathons to overcome the other challenges in this category related to newcomers or mentors.
## 5 Limitations and Threats to Validity
Our findings are based on a literature search utilizing specific search terms. This approach does not guarantee that we found all relevant papers. Moreover, our selection of digital libraries might not have been comprehensive.
Since we only used refereed research publications and did not include gray literature, we might have missed challenges and barriers and evidence - both positive and negative - of using hackathons to address them. Moreover, the data extraction from the relevant literature as well as the analysis of evidence is subject to interpretation bias.
## 6 Conclusion and Future Work
Based on our findings, several of the newcomers' challenges and barriers might be addressed by conducting hackathons; there thus seems to be potential in using hackathons to overcome newcomers' challenges in OSS projects, allowing them to integrate faster into the project.
The work presented in this paper is the first step in a larger research undertaking. It should thus be perceived as a position paper paving the ground for future empirical work on how hackathons can support newcomers to join and become productive members of OSS projects.
## Acknowledgements
Part of this work was funded by grant PRG1226 of the Estonian Research Council.
|
2306.10547 | INDCOR white paper 3: Interactive Digital Narratives and Interaction | The nature of interaction within Interactive Digital Narrative (IDN) is
inherently complex. This is due, in part, to the wide range of potential
interaction modes through which IDNs can be conceptualised, produced and
deployed and the complex dynamics this might entail. The purpose of this
whitepaper is to provide IDN practitioners with the essential knowledge on the
nature of interaction in IDNs and allow them to make informed design decisions
that lead to the incorporation of complexity thinking throughout the design
pipeline, the implementation of the work, and the ways its audience perceives
it. This white paper is concerned with the complexities of authoring,
delivering and processing dynamic interactive contents from the perspectives of
both creators and audiences. This white paper is part of a series of
publications by the INDCOR COST Action 18230 (Interactive Narrative Design for
Complexity Representations), which all clarify how IDNs representing complexity
can be understood and applied (INDCOR WP 0 - 5, 2023). | Frank Nack, Sandy Louchart, Kris Lund, Mattia Bellini, Iva Georgieva, Pratama W. Atmaja, Peter Makai | 2023-06-18T12:51:01Z | http://arxiv.org/abs/2306.10547v3 | # INDCOR white paper 3: Interactive Digital Narratives and Interaction
### Executive Overview
The nature of interaction within Interactive Digital Narrative (IDN) is inherently complex. This is due, in part, to the wide range of potential interaction modes through which IDNs can be conceptualised, produced and deployed and the complex dynamics this might entail. The purpose of this whitepaper is to provide IDN practitioners with the essential knowledge on the nature of interaction in IDNs and allow them to make informed design decisions that lead to the incorporation of complexity thinking throughout the design pipeline, the implementation of the work, and the ways its audience perceives it. This white paper is concerned with the complexities of authoring, delivering and processing dynamic interactive contents from the perspectives of both creators and audiences.
This white paper is part of a series of publications by the INDCOR COST Action 18230 (Interactive Narrative Design for Complexity Representations), which all clarify how IDNs representing complexity can be understood and applied (INDCOR WP 0 - 5, 2023).
## Introduction
The aim of this whitepaper on Interactive Digital Narrative (IDN) and interaction is to provide readers with a set of foundational knowledge, definitions and concepts that allow them to develop a deeper understanding of the factors enabling meaningful interactions within a dynamic narrative framework. This whitepaper is focused on providing pointers, information and resources towards relevant avenues of research and practice for anyone with an interest in IDNs or interaction. For a detailed discussion of the advantages and challenges of IDNs in the context of representing complex issues, the reader is referred to the INDCOR White Paper 0 (INDCOR WP 0, 2023).
In essence, an IDN is a form of interactive media, leaning towards the idea of a narrative environment in the form of a cybernetic system (Koenitz, 2023), where the observed outcomes of actions are taken as inputs for further action. Interactions drive the design of IDN technologies, tools and production methodologies, as the specificities of user engagement have to be integrated at the core of IDN concepts and development. The audience's understanding of an IDN is a direct product of user interaction with the interactive narrative artefact.
The whitepaper comprises, therefore, three distinct but related sections. Part I focuses on the definitions and concepts of interaction that contribute to the conceptualization of IDN. Part II considers the main processes of IDN authoring (addressing the point of view of the creator), meaning making (the point of view of the audience), and impact (the point of view of the discourse). Part III provides a critical reflection on the introduced concepts in the context of perceptions, communications and the still existing challenges and overall impact of IDNs.
## Part I - Interaction (Definitions, concepts)
Precise definitions of interaction depend on the vantage point of the person doing the looking (Longino, 2013). More generally, this means that each discipline's or community's way of thinking about concepts, methods, and theories is influenced by constraints on how to view the world, on what research questions people find interesting, and on how to go about answering them (Lund et al, 2020).
There are many disciplines and communities that have worked on the notion of interaction, which are relevant for the EU COST Action on "Interactive Narrative Design for Complexity Representations" (INDCOR). The scope of this whitepaper does not permit us to include all such definitions, but we have chosen a small subset, given our own expertise. We examine chosen definitions of interaction from the viewpoint of Interactional Linguistics, Embodied Interaction, Cognitive Science, Human Computer Interaction (HCI), and Interactive Digital Narrative (IDN). The consequences of how defining interaction in a particular way influences the decisions regarding IDN design, implementation, use and further exploration made by stakeholders (researchers, journalists, game designers, etc.) will be discussed in Parts II and III of this paper.
### Interactional linguistics
Interactional linguistics focuses on the use of language within social interaction and although researchers represent diverse traditions, their "... unifying perspective is to describe linguistic structures and meanings as they serve social goals in naturally occurring spoken, in a broad sense, conversational language, viz. 'talk-in-interaction'" (Lindstrom, 2009, p. 96). Language forms and practices are constantly adjusting to their context and thus contribute to the emergence of aspects that are relevant to the context (Lund, 2019). More recently interactional linguists approach concepts on a level of micro-analytical specifics of how humans co-construct their embodied interactions from the point of view of contextualised language practices (Morek, 2015). For example, Lund et al (2022) argue that language is a complex adaptive system, examining its place in relation to interactive, pragmatic, multimodal discourse processes, but also in relation to cognition, argumentation and meaning-making, and to social structures and education. Considering an IDN as a communication system that contextualises relations between the self and other, private and public, inner thought and outer world, the outlined epistemological assumptions
by interactional linguists are in general relevant for explaining the role of interaction in IDN and can be found in various incarnations of IDNs and related theoretical work (see INDCOR WP 3, 2023).
#### Embodied Interaction
Streeck, Goodwin, & LeBaron (2011) note the variety of ways in which the organisation of action in human interaction can be investigated. Whereas other disciplines may look at the mental intentions of individual actors or, alternatively, at large, historically shaped social structures, they choose to study "events in which multiple parties are carrying out endogenous courses of action in concert with each other within face-to-face human interaction" (op. cit., p. 1). Put another way -- with a more extensive focus -- by Mazur & Traverso (2022):
"In simple terms (if we dare in this context), interactive processes are complex first and foremost because, if considered as composite systems, they involve a very large number of elements: resources related to languages (syntax, lexicon, words, sounds, etc.) as well as to other semiotic fields (gestures, gaze, face expression, manipulation of objects and artefacts); different senses (sight, hearing, touch, smell); contexts, activities and actions; objectives (that can be local, global, and that evolve as the exchanges unfold); stakes of different levels; participants, to whom are attached numerous possible characterisations, such as ongoing social relations, identities, cultures, emotions, etc."
Some aspects of the above definition are taken into consideration by drama-related forms of IDN (Mateas & Stern, 2005; Peinado et al, 2008; Aylett et al, 2011). With respect to game-oriented IDNs or newer developments, like narratives in the metaverse, embodied interaction is naturally relevant.
#### Cognitive Science
Cognitive science is the interdisciplinary, scientific study of the mind that examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognition here covers the mental action or process of acquiring knowledge and understanding through thought, experience, and the senses. Cognitive scientists study intelligence and behaviour, with a focus on how nervous systems represent, process, and transform information.
Working in robotics, and unaware of the assumptions in interactional linguistics, de Jaegher & Froese (2009) come nevertheless to similar conclusions, namely that there is a relation between interaction and meaning-making. They argue that these two aspects of human agency, namely individual cognition and individual interactions, are interlinked, and if our goal is to model humans' cognitive capacities, both must be taken into account.
Jeannerod (2003, p. 1) explains that we recognize ourselves as different from other people by understanding that we are the agent of a behaviour. This sense of agency "is the way by which the self builds as an entity independent from the external world." It follows that self-recognition depends on distinguishing between our bodies that produce actions and actions produced by other agents. This change in the notion of agency is defined by Ibnelkaid (2019) as "distributed agency", which covers a complexified interactional multimodality, giving rise to a communicative gesture that becomes trans-subjective. Here, the relation to IDN can be seen in the examples involving different types of causality (e.g. mutual, cyclic, spiraling...), but also in the documentation of emergent properties of human interaction.
#### Human-Computer Interaction
Human-computer interaction (HCI) is a multidisciplinary field of study focusing on the design of technology and, in particular, the interaction between humans (the users) and computers (Carroll, 2022; Dix, 2022). Interaction is here considered as a multifaceted concept that covers the interaction (in communication terms verbal, visual, haptic, olfactory) between a human and a machine, between two humans through a machine, between one human and an artificial agent through a machine (e.g. a sales or information chatbot), or between groups of humans and/or agents communicating with each other through a machine (e.g. a massive multiplayer online game, or social media networks). In HCI the terms "interaction" and "interactivity" are closely related; there is little agreement over the meaning of the term "interactivity", but most definitions relate it to the interaction between users and computers and other machines through a user interface.
The Association for Computing Machinery (ACM) defines human-computer interaction as "a discipline that is concerned with the design, evaluation, and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them... Because human-computer interaction studies a human and a machine in communication, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, social psychology, and human factors such as computer user satisfaction are relevant. And, of course, engineering and design methods are relevant." (Hewett et al., 2022).
As IDNs are digital artefacts in a technical system (concerning digital content generation, access and maintenance tools), the interaction concepts of HCI necessarily need to be considered.
#### Interactive Digital Narratives
In the field of interactive digital narratives, Koenitz (2023) distinguishes between two kinds of interactions within an IDN, namely "Interactivity 1" and "Interactivity 2". Interactivity 1 covers the personal interpretation of an artefact and hence refers to the cognitive and interpretative acts that take place when engaging with all texts. Interactivity 2, on the other hand, is the kind typical of digital media, where an interactor plans an action and executes it after having considered all the options made available by the system they are interacting with - with the purpose of seeing the system's reaction to it.
These stimuli are (generally, with minor exceptions) ontologically dissimilar: "the IDN represented by the machine actively responds to the physical inputs coming from the player and the player actively responds to the sensory outputs coming from the computer" (Bellini, 2022). The engines governing IDN require these mutual actions in order to instantiate the narrative in a specific way among the often countless possible ones.
Interactions with an IDN are traditionally considered meaningful when they have an impact on the development of the story, that is, when the interactor is afforded "dramatic agency" (Murray, 2017). Dramatic agency is the ability to make 'meaningful choices' and see their effects (Kolhoff and Nack, 2019; Roth and Koenitz, 2019).
## Part II - Interaction in IDN
IDNs are considered as a system that provides an interplay between programmed discourse strategies and interaction models (System), and means for an interactor to use those (Process) to establish narratives (Product) (Koenitz, 2023), as outlined in Figure 1A. For the interactors, the process is a double hermeneutic circle in which they reflect on both the instantiated narrative path and the possibilities for future interaction (Roth et al, 2018), as outlined in Figure 1B. A similar finding is outlined for the domain of games in (Arjoranta, 2022).
Having outlined that the influence of interaction is distributed over various stakeholders within an IDN system, it is now time to look in more detail at the different forms of interaction that can be applied by each role. We start with the creator, then look at the interactors, and finally at the additional discourse performed on an IDN that can be influenced by various parties.
### The Creator
IDN authoring can be considered as a complicated endeavour that addresses content selection, mode of interaction, audience perception, and narrative generation. Whatever paradigm the authoring process follows, namely a rational (plan-driven) or action-centric (improvised and cyclic) model, there are a set of necessary stages the overall process follows (see for narrative
Figure 1: Koenitz’ SPP model, where A outlines the relation between the 3 sub-parts of the IDN model, and B demonstrates how the three parts interconnect from the point of view of the interactor/player (adapted from Koenitz, 2023 for A, and Roth et al, 2018 for B).
developments (Hardman et al, 2008; Koenitz, 2015; Swartjes & Theune, 2009) and for the development of entertainment software, such as narrative games (de Lope et al., 2017; O'Hagan et al., 2014)). In each of the processes a creator interacts with different sources and potentially other stakeholders in a direct or indirect manner. Inspired by the literature, we consider a five-process action-thinking authoring model, where each process contributes to the system part in the SPP model:
1. Ideation, where the initial ideas about media production are established.
2. Meaning Making, where a creator specifies the actual message(s) to be conveyed to a particular audience for a particular designed context, resulting in processes where communication strategies in form of articulation techniques are designed and related media assets are captured, generated or transformed.
3. Interaction, where the creator establishes the interaction mechanisms through which the aimed-for audience can access the established narrative space (the protostory) and the related representational and analytic means so that the system can adapt to the specific interactor's needs.
4. Validation, where the creator can simulate the established narrative environment and observe that the aimed-for experience can be achieved.
5. Distribution, where the final interaction between end-users and the produced media occurs.
The different processes are interconnected and embedded in an overall cyclic and iterative production methodology, where the sequences of analysis, design and implementation are indistinguishably connected. Creators simultaneously refine the mental image of the design object based on the actual perception of the context in and for which it needs to be designed (the sensemaking-coevolution-implementation framework (Ralph, 2010)), which facilitates the critical rethinking of the perceived idea and results in a new design cycle. The aim is to enable the creator of an IDN for complex issues to address the interactor's information needs, based on his or her knowledge and skills, so that he or she can actively explore various perspectives, details, and causal relations through the narrative. Thus, the creator is the provider of motivated exploration, reflection and experiences over time, as well as an expectation engineer.
The following outline is oriented towards processes performed by creators in different fields of work, such as researchers, journalists, game designers, educators, etc., and hence the processes also need to be understood in the context of the respective workflows of the domains those creators operate in. As we cannot address all domains individually, we aim in this whitepaper for generalized role descriptions. We also treat the single creator similarly to collaborative work, as the creation steps are rather similar; only more synchronization, i.e. interaction between different views, is required when several creators collaborate.
### Ideation
The interaction here is mainly a self-reflected process that links into current social discourses, personal memories, beliefs, goals, and narrative skills, where note taking and content collection are the predominant approaches to forming the idea for the IDN. If the IDN is considered not as a work arising from the creator's own interest but rather as a product for a different stakeholder, then interactions with this organization and the potential audience can happen, e.g. in the form of discussions or observations. The ideation process results in an IDN intent (a plan), which in later stages of the authoring process is taken as the basis against which the creator validates the actual development of the protostory space. Considering how the requirements-gathering process can make or break IDN development, the importance of this development process cannot be overstated; and yet, the process itself needs to be adapted to the working needs of the creator (Boehm, 1988; Callele et al., 2005).
### Meaning Making and Interaction
With respect to the creators' interaction we can distinguish direct and indirect interaction processes. Direct interaction from the point of view of the creator is performed with the authoring tools, the material, the intent, and potential collaborators. Indirectly the creator interacts with the potential audience (see also process 4 - validation), based on the established image, and the stakeholders (i.e. the constituent) through the requirements.
IDN authoring on the level of meaning making needs to be considered as a creative human process that is done in collaboration with a system that establishes potential narratives, where the creator imagines the narrative in multiple iterations, as well as multiple modes, exploring many possible choices and outcomes (protostory). The offered support here is rather service driven, i.e. supporting the finding or the creation of material. This does not mean that a rational, plan-driven model can be excluded, as the engineering of the system requires discrete development sequences,
only that the traditional view of those (i.e. pre-production, production, and post-production, see also (INDCOR WP 2, 2023)) and related tasks is not applicable to the whole design process.
As IDNs are digital artefacts, the narrative engineering process necessitates digital tools. Such academic or community tools are widely available, running on a PC, the web, or cloud infrastructure. Good review papers are (Kybartas & Bidarra, 2017; Shibolet, et al., 2018; Kitromili, et al., 2019; EU Cost action INDCOR, 2022), of which in particular (Shibolet, et al., 2018) is relevant as it provides a categorisation and description framework for IDN authoring tools (9 categories and 38 descriptors for tool analysis and comparison) based on around 300 tools which have been surveyed and classified (see also (INDCOR WP 2, 2023) for a detailed discussion of authoring tools).
Tools, as outlined in the literature, lean more towards the technical side, as they are basically developed to help non-programmers build IDN systems. Their design aims at the development of open navigation, but their user interfaces and underlying narrative technology in the form of templates, modelling, analytics, and rendering mechanisms are oriented toward the sense-making affordances (syntax and semantics) of the main medium they operate on, most often text or visuals (graphics or video). In complexity, they range from facilitating the integration and organization of recorded content up to fully generative, code-based tools (parsing, graphic rendering, etc.), often applied in the domain of narratives in games. The simpler the media and the more reduced the explorative and adaptive capabilities of the envisioned IDN, the more creator-friendly the tool. The price to pay is less narrative expressiveness and no or only rudimentary adaptation towards the information needs and skills of the potential interactor.
### Validation
Validation is an integral part of the design and implementation of IDNs towards content understanding and user agency (i.e. the user front- and back-end activities). Validating the overall experience of the user is desirable, but experience validation is not addressed in most available tools and hence requires intensive interaction of the creator with the created system, in the form of simulations. More details on validation issues can be found in the whitepaper of WG 3 (INDCOR WP 3, 2023).
### Distribution
With the distribution to the actual user population, either directly or via the contractor, the actual interaction of the creator with the IDN or the development environment stops. Naturally, the creator can still interact with the IDN, e.g. to fix problems on the system side, but mostly he or she will turn into an audience member or a commenter, thus interacting with the IDN on a discourse level.
### The Audience
The interactive nature of IDNs allows the audience to influence the progression of the story and so their own experience. The main interaction is hence between the audience and the narrative environment. Indirectly, the audience also interacts with the creator, in the form of using and validating the provided means of interaction (on an HCI level) and of adaptation (the system towards the user on needs and skills).
The individual user of an IDN as well as groups of users interact with an IDN in two ways. First, the audience accesses the IDN on the level of getting information and experience needs satisfied. Here the audience addresses the IDN via front-end processes that reflect the explicit information within a narrative on a moment-to-moment basis. This can best be described through the ability to make meaningful choices and see their effects, which was already discussed in Part I.
At the same time the audience interacts with the IDN via back-end processes that address the building and maintenance of mental models, which facilitate predictions about the still available content, the potential means of interaction, and system adaptation to idiosyncratic needs. Mainly through the relation between the front- and back-end processes, the interaction of a user with an IDN creates a process that is shaped by the actions of the user and so results in different instantiations, depending on the particular narrative created by the user. The narrative meaning then is a cognitive construct, built by the interpreter in reflected response to the narrative, namely how actions and choices lead to certain consequences, and the realisation of the importance of agency in an IDN.
The audience's reaction to and reception of an IDN product thus depends on factors that might drive enjoyment of the product and other factors that might mitigate its impact and overall experience. Autonomy, presence, flow, character identification and believability, curiosity, suspense, interest, enjoyment, meaningfulness, and narrative coherence (making sense and not being confusing) are different elements that the audience can perceive and be rewarded with through interacting with an
IDN as discussed in the analysis of a contemporary product, a harbinger in the field, Bandersnatch (Kolhoff and Nack, 2019).
### The Discourse
In essence, the choice of technology and concept can be difficult, and it is important to recognise that there is a real challenge for the domain in guiding the practitioner to best identify the conceptual and technical means and to recognize the relevant working environment. Here INDCOR already provides some initiatives, like the IDN encyclopaedia (INDCOR Encyclopedia, 2024 forthcoming), the collection of IDNs that address complex issues (INDCOR WP 3, 2023), and a descriptive collection of IDN authoring tools ([https://omeka-s.indcor.eu/s/idn-authoring-tools/item-set/43](https://omeka-s.indcor.eu/s/idn-authoring-tools/item-set/43)).
However, the community has to involve practitioners in a far stronger way than is currently done, so that the development of tools happens collaboratively between research and the actual working field. As pointed out in the section on "The Creator", the variety of needs is acknowledged, but what a good way of addressing this problem would be is still an open issue. Perhaps a sandbox approach might be the direction to go. This means that not one tool for a particular IDN context needs to be generated, but rather an environment should be aimed for where IDN concepts and technologies are mapped onto practitioner profiles that would assist in making the right decisions early on in the development of an IDN (Nack, 2023). This requires the collection of adaptable representation templates, argumentation, memory and user models, and IDN analysis tools to be used by a creator. In other words, the IDN domain needs standardisation not only of concepts but also of engines and potentially working processes, but far more discussion and exploration is necessary to work in this direction. The community can learn here from the game domain.
Similarly to the creator's side, a far better understanding of the audience is required. There is, as outlined in Part I, a good understanding of interaction with respect to cognition, embodied interaction, media linguistics (addressing all modalities), and HCI. However, what that actually means with respect to the different levels of direct and indirect interaction and the related reflection and experience processes for the interactor is still unclear. Which of the identified representational and related analysis processes are applicable for different IDN aims (i.e. education, exploration, information) applied to different types of either homogeneous or rather divergent audiences? Are
effects measurable so that adaptive processes can be implemented? How far would those processes need to be made obvious or hidden to the audience? How do those findings influence the choices made for particular technical modes of interaction?
Interaction with an IDN and cognition mutually enable and constrain each other. Indeed, interaction is deemed a necessary step to enable cognition in general, and even more so in IDN, in which interaction is necessary to instantiate the narrative. Cognition, in turn, is necessary for interaction, as IDNs require the Interactivity 2 processes of planning and execution, and often foresee successful and unsuccessful interaction strategies from which to choose.
The insights of interactional linguistics can open the way to IDN analytics. The two important assumptions mentioned above can prompt two related observations: if social interactions in the real world are the settings where identities and relationships are shaped, similarly the fictional interactions taking place in the IDN storyworld give shape to the identity of the interactor and their relations with the fictional environment and the characters living therein. On the other hand, forms and practices of linguistic interaction are configured and structured by their context of occurrence, and in a similar way interactors modify their interaction strategy depending on the context that is presented to them in the fictional world of the IDN. Thus, it needs to be further explored how structures of interactional linguistics (text, but also visual media) can be utilised to improve the design, development and perception on representational levels.
Studies on embodiment applied to communication further highlight the complexity of embodied interactions. While embodiment could be deemed as significantly weakened in mediated context such as a digital fictional world, a number of the elements forming real-world complex embodied interactions still hold in IDNs. Among these are languages (with syntax, lexicon, words, sounds, etc.) and other physical semiotic resources (particularly in virtual reality, with gestures, gaze, but also in non-VR artefacts with the manipulation of objects and artefacts), different senses (sight, hearing, touch/haptic sense), contexts, activities and actions, objectives, participants, to whom are attached numerous possible characterisations, such as ongoing social relations, identities, cultures, emotions, etc.
## Conclusion
The white paper showed that interaction is a multifaceted concept that, due to its essential role in IDN, adds with its dynamic contextuality to the overall complexity of IDNs. We outlined briefly some of the relevant sub-fields within the multidisciplinary IDN domain, i.e. interactional linguistics, embodied interaction, cognitive psychology, HCI, and showed in what way they can contribute to the further understanding of the domain in itself, and particular advances that can be achieved for IDN related processes, such as authoring, perception and discourse.
The white paper exemplified what has been achieved with respect to the representation and modelling of the presented concepts, and we outlined research directions that INDCOR can still address in the remaining second half of the project duration. It has been made clear, though, that by then mainly the direction of the research path will have been established, and that essential work remains to be done after the project has finished.
|
2305.08880 | Semiparametrically Optimal Cointegration Test | This paper aims to address the issue of semiparametric efficiency for
cointegration rank testing in finite-order vector autoregressive models, where
the innovation distribution is considered an infinite-dimensional nuisance
parameter. Our asymptotic analysis relies on Le Cam's theory of limit
experiment, which in this context takes the form of Locally Asymptotically
Brownian Functional (LABF). By leveraging the structural version of LABF, an
Ornstein-Uhlenbeck experiment, we develop the asymptotic power envelopes of
asymptotically invariant tests for both cases with and without a time trend. We
propose feasible tests based on a nonparametrically estimated density and
demonstrate that their power can achieve the semiparametric power envelopes,
making them semiparametrically optimal. We validate the theoretical results
through large-sample simulations and illustrate satisfactory size control and
excellent power performance of our tests under small samples. In both cases
with and without time trend, we show that a remarkable amount of additional
power can be obtained from non-Gaussian distributions. | Bo Zhou | 2023-05-13T15:44:09Z | http://arxiv.org/abs/2305.08880v1 | # Semiparametrically Optimal Cointegration Test
###### Abstract
This paper aims to address the issue of semiparametric efficiency for cointegration rank testing in finite-order vector autoregressive models, where the innovation distribution is considered an infinite-dimensional nuisance parameter. Our asymptotic analysis relies on Le Cam's theory of limit experiment, which in this context takes the form of Locally Asymptotically Brownian Functional (LABF). By leveraging the structural version of LABF, an Ornstein-Uhlenbeck experiment, we develop the asymptotic power envelopes of asymptotically invariant tests for both cases with and without a time trend. We propose feasible tests based on a nonparametrically estimated density and demonstrate that their power can achieve the semiparametric power envelopes, making them semiparametrically optimal. We validate the theoretical results through large-sample simulations and illustrate satisfactory size control and excellent power performance of our tests under small samples. In both cases with and without time trend, we show that a remarkable amount of additional power can be obtained from non-Gaussian distributions.
**JEL classification:** C12, C14
**Keywords:** cointegration, semiparametric efficiency, limit experiment, LABF.
## 1 Introduction
Cointegration has been a central topic in time series econometrics ever since its concept was introduced by Granger (1981) and Engle and Granger (1987). Cointegration refers to the phenomenon where multiple nonstationary time series have stationary linear combinations, known as cointegrating relationships. Determining
the number of these relationships, or the cointegration rank, is of utmost importance. This inferential problem is often addressed in a finite-order vector autoregressive (VAR) process. Early attempts include, among others, residual-based tests by Engle and Granger (1987) and Phillips and Ouliaris (1990), likelihood ratio tests by Johansen (1988, 1991) and Johansen and Juselius (1990), and principal component tests by Stock and Watson (1988). Since then, extensive literature has been devoted to constructing cointegration rank tests with good power properties. Our paper shares the same goal, particularly investigating to what extent we can exploit testing power from the innovation distributions that deviate from Gaussianity.
The main objective of this paper is to develop semiparametrically optimal cointegration rank tests for a finite-order VAR model written in the ECM form. We assume that the innovations are independently and identically distributed, and we treat the innovation distribution as an infinite-dimensional nuisance parameter. Our analysis relies on Le Cam's asymptotic theory, where the concept of _limit experiment_ -- as the limit of the sequence of experiments of interest (here, the cointegration experiments) -- plays the central role; see, e.g., Le Cam (2012) and van der Vaart (2000). The most recent work using this approach is Hallin et al. (2016) (hereafter referred to as HvdAW), which provides a complete factorization for the cointegration parameter \(\mathbf{\Pi}\) in the model given by (1)-(2) below and characterizes all possible limit experiments. HvdAW shows that the associated limit experiments are of different types, including Local Asymptotic Normality (LAN), Local Asymptotic Mixed Normality (LAMN), and Locally Asymptotically Brownian Functional (LABF), depending on the parameter directions; see Jeganathan (1995) for their definitions. HvdAW focuses on the LAN experiment brought about by the time trend term, while some earlier works, such as Phillips (1991) and Hodgson (1998b, a), concentrate on the LAMN direction. However, the LABF direction remains unexplored, and our aim is to fill this gap in the literature.
Our contribution to the literature is threefold. First, following Zhou et al. (2019)'s structural representation technique for LABF-type experiments, we develop the semiparametric power envelopes of asymptotically invariant tests in both cases with and without a time trend. Specifically, in its structural version as an Ornstein-Uhlenbeck (OU) experiment, the nuisance density perturbation parameter (\(\boldsymbol{\eta}\)) appears as a constant drift, which can be eliminated by taking the associated 'bridge' process. More importantly, we show that the \(\sigma\)-field consisting of
OU processes (for elements unaffected by \(\mathbf{\eta}\)) and OU bridges (for elements affected by \(\mathbf{\eta}\)) is maximally invariant, which provides optimality in the limit. According to the Neyman-Pearson lemma, the corresponding likelihood ratio test is optimal among all invariant tests since every invariant statistic is a function of the maximal invariant. The Asymptotic Representation Theorem then translates the limiting optimality to the sequence of cointegration experiments (see, e.g., van der Vaart (2000, Theorem 15.1)).
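As a stylized scalar illustration of the bridge idea (our simplification; the paper applies the construction to the OU processes of the limit experiment in Section 3): if \(X(u)=\eta u+Z(u)\), \(u\in[0,1]\), with \(Z\) a standard Brownian motion, then the associated bridge

\[B(u):=X(u)-uX(1)=\big(\eta u+Z(u)\big)-u\big(\eta+Z(1)\big)=Z(u)-uZ(1),\qquad u\in[0,1],\]

no longer involves \(\eta\), so any statistic measurable with respect to \(\{B(u):u\in[0,1]\}\) is invariant to the constant-drift density perturbation.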
Our second contribution is to propose feasible tests whose powers can attain the semiparametric power envelopes asymptotically (as shown in Theorem 4.1 and Corollary 4.1). This confirms that the derived semiparametric power envelopes are indeed 'envelopes', rather than just upper bounds, and that our tests are semiparametrically optimal. To construct these tests, we follow the traditional semiparametric inference literature, where the unknown nonparametric part is replaced by its kernel estimates (as in works such as Bickel (1982), Schick (1986), and Klaassen (1987)). To improve finite-sample performance, especially when the sample size is small, we adopt Schick (1987)'s technique and use all samples for our semiparametric statistic without sample splitting. Our Monte Carlo study confirms the validity and optimality of our tests using large-sample simulations and exhibits satisfactory size control and excellent power performance in small-sample scenarios.
We regard the treatment of the linear time trend specification as our third contribution. Following the unit root and cointegration literature, we augment the stochastic term with a deterministic time trend term as in (1). This additive structure, employed by many later works (e.g., Elliott et al. (1996) and Jansson (2008) for unit root testing, and Saikkonen and Lutkepohl (2000), Lutkepohl and Saikkonen (2000), and Boswijk et al. (2015) for cointegration), has several advantages, including the ability to emphasize that the trend is at most linear. This trend specification is one of the main differences between our paper and HvdAW.1 HvdAW employs the traditional trend-in-VAR representation as in (4), making the time trend the
main power source for their test (essentially due to the super-consistency rate \(T^{-3/2}\) introduced by the time trend). In contrast, our paper regards the time trend (local) parameter \(\boldsymbol{\delta}\) as a nuisance parameter to be eliminated (see Remark 2.1 for more detailed discussions). We rely on the "profile likelihood" approach to eliminate this parameter, taking advantage of the simple quadratic structure of the likelihood with respect to \(\boldsymbol{\tau}\). Additionally, we develop a limiting statistic \(\mathcal{L}_{f}^{\boldsymbol{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\) for the semiparametric case, analogous to the \(\Lambda_{p,C}^{GLS}(\bar{C};\bar{C}^{*})\) statistic proposed by Boswijk et al. (2015) for the Gaussian case. The statistic \(\Lambda_{p,C}^{GLS}(\bar{C};\bar{C}^{*})\) embeds, and thus can help compare, many existing Gaussian cointegration tests. Likewise, our semiparametric version \(\mathcal{L}_{f}^{\boldsymbol{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\) enables us to develop semiparametric versions of existing Gaussian tests that handle the time trend specification.
Finally, we discuss extensions that incorporate serially correlated errors and allow for more general reduced rank hypotheses, significantly expanding the applicability of our developed tests. Our discussion is based on existing limit experiment results, mostly provided by HvdAW, particularly their full limit experiment characterization in Proposition A.2 of their online supplementary appendix. Building on these results, we show that the inference for the parameter of interest, \(\mathbf{\Pi}\), is adaptive to both the parameters governing the serial correlation in the errors and those governing the existing cointegrating relationships. Therefore, we 'just' need to consistently estimate these parameters and replace them with their estimates.
The remainder of the paper is structured as follows. Section 2 presents the model setup and assumptions. Section 3 develops the limit experiment, sequentially eliminates the density perturbation and time trend parameters, and derives the corresponding semiparametric power envelopes for the cases with and without a time trend. Then, based on a nonparametrically estimated density, Section 4 proposes feasible semiparametrically optimal tests, whose finite-sample performances are accessed by a Monte Carlo study in Section 5. Section 6 provides discussions on necessarily extensions to expand the empirical applicability of our tests. The proofs of our theoretical results can be found in the supplementary Appendices.
## 2 Model
We consider observations \(\mathbf{y}_{1},\ldots,\mathbf{y}_{T}\in\mathbb{R}^{p}\) generated by the vector auto-regression (VAR) of order one in _error correction form_
\[\mathbf{y}_{t} =\boldsymbol{\mu}+\boldsymbol{\tau}t+\mathbf{x}_{t}, \tag{1}\] \[\Delta\mathbf{x}_{t} =\boldsymbol{\Pi}\mathbf{x}_{t-1}+\boldsymbol{\varepsilon}_{t},\quad t=1,\ldots,T, \tag{2}\]
where \(\Delta\) denotes the first-order difference operator (i.e., \(\Delta\mathbf{x}_{t}=\mathbf{x}_{t}-\mathbf{x}_{t-1}\)), \(\boldsymbol{\mu}\) and \(\boldsymbol{\tau}\) (both in \(\mathbb{R}^{p}\)) are unknown parameters that govern the constant and linear time trend terms, respectively, \(\boldsymbol{\Pi}\in\mathbb{R}^{p\times p}\) is an unknown parameter of interest, and \(\{\boldsymbol{\varepsilon}_{t}\}\) is a \(p\)-dimensional i.i.d. sequence of innovations with density \(f\).
Throughout this paper, we assume \(\mathbf{x}_{0}=\mathbf{0}\) as the initial value condition. However, this assumption is less innocent than it may seem, as noted by Elliott et al. (1996) (ERS), who observed that even asymptotically, the initial observations can carry information. Further investigations into this issue can be found in Muller and Elliott (2003) and Elliott and Muller (2006). That being said, since our paper employs the same local-to-unity asymptotics as ERS, we can relax this condition to the same extent.
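To fix ideas, here is a minimal simulation sketch of the data-generating process (1)-(2) with \(\mathbf{x}_{0}=\mathbf{0}\); the function name, the Gaussian choice of \(f\) (one convenient member of \(\mathfrak{F}_{p}\)), and all parameter values are ours, for illustration only:

```python
import numpy as np

def simulate_ecm(T, Pi, mu, tau, rng, Sigma=None):
    """Draw y_1, ..., y_T from (1)-(2) with x_0 = 0.
    Gaussian innovations are used here as one convenient member of F_p."""
    p = Pi.shape[0]
    Sigma = np.eye(p) if Sigma is None else Sigma
    L = np.linalg.cholesky(Sigma)
    x = np.zeros(p)
    y = np.empty((T, p))
    for t in range(1, T + 1):
        eps = L @ rng.standard_normal(p)      # eps_t ~ N(0, Sigma)
        x = x + Pi @ x + eps                  # Delta x_t = Pi x_{t-1} + eps_t
        y[t - 1] = mu + tau * t + x           # y_t = mu + tau * t + x_t
    return y

rng = np.random.default_rng(0)
T, p = 500, 2
C = np.array([[-5.0, 0.0], [0.0, 0.0]])      # hypothetical local parameter
y = simulate_ecm(T, C / T, np.zeros(p), np.zeros(p), rng)  # Pi = C / T as in (5)
```

Setting \(\boldsymbol{\Pi}=\mathbf{C}/T\) as in the local reparameterization (5) below keeps the simulated paths comparable across sample sizes.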
In this paper, we use the notation \(\mathfrak{F}_{p}\) to represent the family of densities that satisfy the following assumptions, which we impose on \(f\).
**Assumption 1**.:
1. \(f\) _is absolutely continuous with a.e. gradient_ \(\nabla f(\boldsymbol{\varepsilon}_{1})=\big{(}\partial f(\boldsymbol{ \varepsilon}_{1})/\partial\varepsilon_{1,1},\ldots,\partial f(\boldsymbol{ \varepsilon}_{1})/\partial\varepsilon_{1,p}\big{)}^{\prime}\)_._
2. \(\mathrm{E}_{f}[\boldsymbol{\varepsilon}_{t}]=\mathbf{0}\) _and the covariance_ \(\boldsymbol{\Sigma}:=\mathrm{Var}_{f}[\boldsymbol{\varepsilon}_{1}]\) _is positive definite and finite._
3. _The Fisher information_ \(\boldsymbol{J}_{f}:=\mathrm{E}_{f}[\boldsymbol{\ell}_{f}(\boldsymbol{ \varepsilon}_{1})\boldsymbol{\ell}_{f}(\boldsymbol{\varepsilon}_{1})^{\prime}]\)_, where_ \(\boldsymbol{\ell}_{f}(\boldsymbol{\varepsilon}_{1}):=-\nabla f(\boldsymbol{ \varepsilon}_{1})/f(\boldsymbol{\varepsilon}_{1})\) _denotes the location score of_ \(f\)_, is finite._
4. \(f\) _is positive._
The _absolute continuity_ assumption (a) on \(f\) is a mild smoothness condition commonly imposed in the semiparametric literature. This condition is imposed for two reasons: Firstly, it allows us to proceed with the limit experiment approach (Le Cam (2012)) since it implies the _differentiability in quadratic mean (DQM)_ result, which is exactly the right condition needed for our log-likelihood ratio expansion (van der Vaart (2000)).2 Secondly, the absolute continuity assumption enables us to perform
nonparametric estimation of the score function \(\boldsymbol{\ell}_{f}\), which will be used to construct a feasible semiparametrically optimal test. The finite-variance condition (b) guarantees that the Fisher information matrix \(\boldsymbol{J}_{f}\) is nonsingular (Mayer-Wolf (1990, Theorem 2.3)). Together with the finite-Fisher-information condition (c), they ensure the weak convergence of the partial-sum processes of \(\boldsymbol{\varepsilon}_{t}\) and \(\boldsymbol{\ell}_{f}(\boldsymbol{\varepsilon}_{t})\) to Brownian motions. The positive density condition (d) is merely for notational convenience, e.g., when defining the score function \(\boldsymbol{\ell}_{f}\).
As the primary focus of this paper is to address the issue of semiparametric efficiency in the context of cointegration, we begin by considering a simple case of testing the null hypothesis of no cointegration against the alternative of the existence of at least one cointegrating relationship, stated as:
\[H_{0}:\,\mathbf{\Pi}=\mathbf{0}\quad\text{(equivalently, }\,\text{rank}\,(\mathbf{\Pi})=0). \tag{3}\]
In Section 6, we will briefly discuss the extension of our results to more general reduced rank hypotheses on \(\mathbf{\Pi}\), drawing upon existing literature.
_Remark 2.1_.: The choice of time trend specification can have significant implications for the asymptotic results of cointegration tests. In this paper, we adopt the specification from the branch of unit root and cointegration literature which adds a level constant and a linear time trend to the stochastic component, as given in (1). See, for example, Elliott et al. (1996) for unit root testing, and Lutkepohl and Saikkonen (2000) and Boswijk et al. (2015) for cointegration testing. It differs from the specification used in Hallin et al. (2016), which follows the traditional "trend-in-VAR" specification given by
\[\Delta\mathbf{y}_{t}=\mathbf{v}+\mathbf{v}_{1}t+\mathbf{\Pi}\mathbf{y}_{t-1}+\mathbf{ \varepsilon}_{t}. \tag{4}\]
In particular, the authors focus on the case where \(\mathbf{v}_{1}=\mathbf{0}\).
Although model (4) is equivalent to model (1)-(2) under the parameter constraints \(\mathbf{v}=-\boldsymbol{\Pi}\boldsymbol{\mu}+(\mathbf{I}_{p}+\boldsymbol{\Pi})\boldsymbol{\tau}\) and \(\mathbf{v}_{1}=-\boldsymbol{\Pi}\boldsymbol{\tau}\) (see the short derivation after this remark), it may generate quadratic time trends without these constraints. That being said, model (1)-(2) has the advantage of emphasizing that the time trend considered in \(\mathbf{y}_{t}\) is at most linear (see Lutkepohl and Saikkonen (2000) for further discussion). These different time trend specifications lead to distinct asymptotic results. Under model (4), the cointegration test of Hallin et al. (2016) will have asymptotic power that depends on \(\mathbf{v}\). Specifically, the test is more powerful when \(\mathbf{v}\) is larger but has low power when the time trend
is close to zero. In contrast, our test's asymptotic power does not depend on \(\boldsymbol{\mu}\) or \(\boldsymbol{\tau}\), as they are eliminated using the invariance principle.
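For completeness, the parameter constraints stated in Remark 2.1 can be verified by direct substitution (a short derivation we add for the reader, with notation as in (1)-(2)): using \(\mathbf{x}_{t-1}=\mathbf{y}_{t-1}-\boldsymbol{\mu}-\boldsymbol{\tau}(t-1)\),

\[\Delta\mathbf{y}_{t}=\boldsymbol{\tau}+\Delta\mathbf{x}_{t}=\boldsymbol{\tau}+\boldsymbol{\Pi}\big(\mathbf{y}_{t-1}-\boldsymbol{\mu}-\boldsymbol{\tau}(t-1)\big)+\boldsymbol{\varepsilon}_{t}=\underbrace{\big[-\boldsymbol{\Pi}\boldsymbol{\mu}+(\mathbf{I}_{p}+\boldsymbol{\Pi})\boldsymbol{\tau}\big]}_{=\,\mathbf{v}}+\underbrace{(-\boldsymbol{\Pi}\boldsymbol{\tau})}_{=\,\mathbf{v}_{1}}t+\boldsymbol{\Pi}\mathbf{y}_{t-1}+\boldsymbol{\varepsilon}_{t},\]

which is exactly the trend-in-VAR form (4).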
## 3 Semiparametric Power Envelopes
### Preliminaries
Our asymptotic analysis relies on the limit experiment approach. The limit experiment of a sequence of experiments (in this case, cointegration experiments) is defined by the convergence of likelihood ratios under specific local perturbations. To ensure that the likelihood-ratio convergence is neither explosive nor degenerate, we need to localize \(\boldsymbol{\mu}\), \(\boldsymbol{\tau}\), \(\boldsymbol{\Pi}\) and \(f\) with appropriate rates. Such local perturbations are known as the _contiguous_ alternative (see van der Vaart (2000, Chapter 6)). In what follows, we introduce these local reparameterizations separately.
We follow the unit root and cointegration literature in adopting the _local-to-unity_ asymptotics for the key parameter of interest, \(\boldsymbol{\Pi}\). The associated local reparameterization is given by
\[\boldsymbol{\Pi}=\boldsymbol{\Pi}_{\mathbf{C}}^{(T)}=\frac{ \mathbf{C}}{T}, \tag{5}\]
where \(\mathbf{C}\in\mathbb{R}^{p\times p}\) is referred to as the local parameter. The "super-consistency" contiguity rate \(T^{-1}\) here is common in models for nonstationary time series (see Phillips (1987) and Chan and Wei (1988)), including unit root testing, cointegration, and predictive regression with persistent predictors.3 This nonstandard rate \(T^{-1}\) leads to LABF-type experiments, as we show in Proposition 3.2 below.
Footnote 3: For related predictive regression literature, see, e.g., Elliott and Stock (1994), Jansson and Moreira (2006), and Werker and Zhou (2022).
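To preview why this rate produces Brownian-functional limits: under (5) and with \(\mathbf{x}_{0}=\mathbf{0}\), the recursion \(\mathbf{x}_{t}=(\mathbf{I}_{p}+\mathbf{C}/T)\mathbf{x}_{t-1}+\boldsymbol{\varepsilon}_{t}\) yields, by a standard local-to-unity computation (cf. Phillips (1987); the notation \(\mathbf{X}_{\mathbf{C}}\) and \(\mathbf{B}\) below is ours, for this illustration only),

\[\frac{1}{\sqrt{T}}\,\mathbf{x}_{\lfloor uT\rfloor}\;\Rightarrow\;\mathbf{X}_{\mathbf{C}}(u):=\int_{0}^{u}e^{(u-s)\mathbf{C}}\,\mathrm{d}\mathbf{B}(s),\qquad u\in[0,1],\]

where \(\mathbf{B}\) is a Brownian motion with covariance matrix \(\boldsymbol{\Sigma}\) and \(\mathbf{X}_{\mathbf{C}}\) solves the OU-type equation \(\mathrm{d}\mathbf{X}_{\mathbf{C}}(u)=\mathbf{C}\mathbf{X}_{\mathbf{C}}(u)\mathrm{d}u+\mathrm{d}\mathbf{B}(u)\).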
We localize the time trend parameter as follows:
\[\boldsymbol{\tau}=\boldsymbol{\tau}_{\boldsymbol{\delta}}^{(T)}= \boldsymbol{\tau}_{0}+\frac{\boldsymbol{\delta}}{\sqrt{T}}, \tag{6}\]
where \(\boldsymbol{\tau}_{0}\) represents the true value. This local reparameterization is the natural multivariate counterpart of the unit root testing problem, as discussed in Jansson (2008, Section 7). Following that paper, without loss of generality, we assume that \(\boldsymbol{\tau}_{0}\) is zero.
We do not localize the constant term parameter \(\boldsymbol{\mu}\) in our approach since we show in the proof of Proposition 3.2 that it vanishes asymptotically. This result is in line
with the findings of Jansson (2008, Section 7), as the univariate counterpart, and shows that information about \(\boldsymbol{\mu}\) comes only from the first few observations rather than from the data flow.
Finally, we adopt the approach by Zhou et al. (2019) and introduce explicit nonparametric local perturbations to the innovation density \(f\) as follows:
\[f_{\mathbf{\eta}}^{(T)}(\mathbf{\varepsilon}):=f(\mathbf{\varepsilon})\left(1+\frac{\mathbf{ \eta}^{\prime}}{\sqrt{T}}\mathbf{b}(\mathbf{\varepsilon})\right). \tag{7}\]
Here, \(\mathbf{b}:=(b_{1},b_{2},\dots)^{\prime}\) is a vector of functions that govern the perturbations in different directions, and \(\mathbf{\eta}:=(\eta_{1},\eta_{2},\dots)^{\prime}\) is the local parameter that determines the severity of these perturbations. We choose \(\mathbf{b}\) to be a countable orthonormal basis of the separable Hilbert space
\[\mathrm{L}_{2}^{0,f}(\mathbb{R}^{p},\mathcal{B}):=\left\{b\in\mathrm{L}_{2}^{ f}(\mathbb{R}^{p},\mathcal{B})\mid\mathrm{E}_{f}[b(\mathbf{\varepsilon})]=0,\, \mathrm{E}_{f}[\mathbf{\varepsilon}b(\mathbf{\varepsilon})]=0\right\},\]
\(\mathbf{\varepsilon}\in\mathbb{R}^{p}\), where \(\mathrm{L}_{2}^{f}(\mathbb{R}^{p},\mathcal{B})\) denotes the space of Borel-measurable functions \(b:\mathbb{R}^{p}\to\mathbb{R}\) that are square-integrable. Accordingly, we have \(\mathrm{Var}_{f}[b_{k}(\mathbf{\varepsilon})]=1\). The separability of the Hilbert space ensures the existence of such a countable orthonormal basis. We further assume that \(b_{k}\in C_{2,b}(\mathbb{R})\) for all \(k\), meaning that \(b_{k}\)'s are bounded and twice continuously differentiable with bounded derivatives.
We restrict the local perturbation parameter \(\mathbf{\eta}\) to have only finitely many non-zero elements, i.e., \(\mathbf{\eta}\in c_{00}\) where \(c_{00}:=\{(z_{k})_{k\in\mathbb{N}}\in\mathbb{R}^{\mathbb{N}}\,|\,\sum_{k=1}^{ \infty}\mathbbm{1}\,\{z_{k}\neq 0\}<\infty\}\), which is a dense subspace of the parameter space \(\ell_{2}=\{(z_{k})_{k\in\mathbb{N}}\mid\sum_{k=1}^{\infty}z_{k}^{2}<\infty\}\). This restriction does not sacrifice generality but rather helps avoid dealing with the convergence of infinite-dimensional Brownian motions and possibly associated measurability complexities. Notably, when \(\mathbf{\eta}=\mathbf{0}\), we obtain \(f_{\mathbf{\eta}}^{(T)}=f\). We demonstrate in the following proposition that for any \(\mathbf{\eta}\neq\mathbf{0}\), \(f_{\mathbf{\eta}}^{(T)}\) satisfies Assumption 1. The proof is detailed in Appendix B.
**Proposition 3.1**.: _For any fixed \(f\in\mathfrak{F}_{p}\) and \(\mathbf{\eta}\in c_{00}\), there exists \(T^{\prime}\in\mathbb{N}\) such that \(f_{\mathbf{\eta}}^{(T)}\in\mathfrak{F}_{p}\) for all \(T\geq T^{\prime}\)._
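For intuition, here is a univariate (\(p=1\)) numerical sketch, assuming \(f\) is standard normal and taking the single perturbation direction \(b(\varepsilon)\propto\cos(\varepsilon)-\mathrm{E}_{f}[\cos(\varepsilon)]\) (bounded, twice continuously differentiable, mean zero, and uncorrelated with \(\varepsilon\) by symmetry; this choice is ours, not the paper's):

```python
import numpy as np
from scipy.integrate import quad

f = lambda e: np.exp(-e**2 / 2) / np.sqrt(2 * np.pi)  # f: standard normal density

# Perturbation direction b(e) = (cos(e) - m) / s: bounded, C^2 with bounded
# derivatives, E_f[b] = 0, Var_f[b] = 1, and E_f[e * b(e)] = 0 (odd integrand).
m = quad(lambda e: np.cos(e) * f(e), -np.inf, np.inf)[0]            # = exp(-1/2)
s = np.sqrt(quad(lambda e: (np.cos(e) - m) ** 2 * f(e), -np.inf, np.inf)[0])
b = lambda e: (np.cos(e) - m) / s

T, eta = 400, 0.5
f_T = lambda e: f(e) * (1 + (eta / np.sqrt(T)) * b(e))              # density (7)

print("mass:", quad(f_T, -np.inf, np.inf)[0])                       # ~ 1.0
print("mean:", quad(lambda e: e * f_T(e), -np.inf, np.inf)[0])      # ~ 0.0
```

The printed checks illustrate Proposition 3.1: the perturbed \(f_{\eta}^{(T)}\) remains a mean-zero density (and, for \(T\) large enough, stays positive since \(|\eta b|/\sqrt{T}<1\)).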
However, we do not aim to demonstrate that the particular nonparametric form \(f_{\mathbf{\eta}}^{(T)}\) with \(\mathbf{\eta}\in c_{00}\) is capable of generating all possible local perturbations on \(f\) in \(\mathfrak{F}_{p}\). Nonetheless, this concrete perturbation specification does not affect our analysis of semiparametric optimality, particularly the derivation of the semiparametric power envelopes. Essentially, \(f_{\mathbf{\eta}}^{(T)}\) can be regarded as a complex sub-model under which a local power upper bound can be derived. Once a feasible semiparametric test is
constructed that can attain this upper bound (as we will demonstrate in Section 4), it becomes the semiparametric power envelope.
### The limiting experiment
We define \(\mathrm{P}^{(T)}_{\mathbf{\mathrm{C}},\mathbf{\delta},\mathbf{\eta};\mathbf{\mu},f}\) as the law of \(\mathbf{y}_{1},\ldots,\mathbf{y}_{T}\) generated by the error-correction model (1)-(2) with local reparameterizations in (5)-(7). Additionally, we introduce the probability measure of the associated limit experiment, denoted as \(\mathbb{P}_{\mathbf{\mathrm{C}},\mathbf{\delta},\mathbf{\eta}}\), which will be formally introduced later. In the following proposition, we demonstrate that the log-likelihood ratio process \(\mathcal{L}^{(T)}_{f}(\mathbf{\mathrm{C}},\mathbf{\delta},\mathbf{\eta}):=\log\left( \mathrm{dP}^{(T)}_{\mathbf{\mathrm{C}},\mathbf{\delta},\mathbf{\eta};\mathbf{\mu},f}/\mathrm{dP }^{(T)}_{\mathbf{\mathrm{0}},\mathbf{0},\mathbf{0};\mathbf{\mu},f}\right)\) follows the Locally Asymptotically Brownian Functional form introduced by Jeganathan (1995).
**Proposition 3.2**.: _Consider \(f\in\mathfrak{F}_{p}\). Let \(\mathbf{\mathrm{C}}\in\mathbb{R}^{p\times p}\), \(\mathbf{\mu}\in\mathbb{R}^{p}\), \(\mathbf{\delta}\in\mathbb{R}^{p}\), and \(\mathbf{\eta}\in c_{00}\)._
1. _Under_ \(\mathrm{P}^{(T)}_{\mathbf{0},\mathbf{0},\mathbf{0};\boldsymbol{\mu},f}\)_, as_ \(T\to\infty\)_, the log-likelihood ratio is decomposed as_ \[\mathcal{L}^{(T)}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta})=\varDelta^{(T)}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta})-\frac{1}{2}\mathcal{Q}^{(T)}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta})+o_{P}(1),\] (8) _where, with_ \(\mathbf{d}_{\mathbf{C},t}:=\mathbf{I}_{p}-\frac{t}{T}\mathbf{C}\)_,_ \[\varDelta^{(T)}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}):=\frac{1}{T}\sum_{t=2}^{T}(\mathbf{C}\mathbf{y}_{t-1})^{\prime}\boldsymbol{\ell}_{f}(\Delta\mathbf{y}_{t})+\frac{1}{\sqrt{T}}\sum_{t=2}^{T}\left(\mathbf{d}_{\mathbf{C},t}\boldsymbol{\delta}\right)^{\prime}\boldsymbol{\ell}_{f}(\Delta\mathbf{y}_{t})+\frac{1}{\sqrt{T}}\sum_{t=2}^{T}\boldsymbol{\eta}^{\prime}\mathbf{b}(\Delta\mathbf{y}_{t}),\] \[\mathcal{Q}^{(T)}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}):=\frac{1}{T^{2}}\sum_{t=2}^{T}(\mathbf{C}\mathbf{y}_{t-1})^{\prime}\boldsymbol{J}_{f}\mathbf{C}\mathbf{y}_{t-1}+\frac{1}{T}\sum_{t=2}^{T}(\mathbf{d}_{\mathbf{C},t}\boldsymbol{\delta})^{\prime}\boldsymbol{J}_{f}\mathbf{d}_{\mathbf{C},t}\boldsymbol{\delta}+\frac{2}{T^{3/2}}\sum_{t=2}^{T}(\mathbf{C}\mathbf{y}_{t-1})^{\prime}\boldsymbol{J}_{f}\mathbf{d}_{\mathbf{C},t}\boldsymbol{\delta}\] \[\qquad\qquad+\frac{2}{T^{3/2}}\sum_{t=2}^{T}\boldsymbol{\eta}^{\prime}\boldsymbol{J}_{\mathbf{b}f}\mathbf{C}\mathbf{y}_{t-1}+\frac{2}{T}\sum_{t=2}^{T}\boldsymbol{\eta}^{\prime}\boldsymbol{J}_{\mathbf{b}f}\mathbf{d}_{\mathbf{C},t}\boldsymbol{\delta}+\boldsymbol{\eta}^{\prime}\boldsymbol{\eta}.\]
2. _Under_ \(\mathrm{P}^{(T)}_{\mathbf{0},\mathbf{0},\mathbf{0};\boldsymbol{\mu},f}\)_, as_ \(T\to\infty\)_,_ \[\left(\varDelta^{(T)}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}),\,\mathcal{Q}^{(T)}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta})\right)\Rightarrow\left(\varDelta_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}),\,\mathcal{Q}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta})\right),\] _where_ \((\boldsymbol{W}_{\boldsymbol{\varepsilon}},\boldsymbol{W}_{\boldsymbol{\ell}_{f}},\boldsymbol{W}_{\boldsymbol{b}})\) _is a zero-drift Brownian motion with covariance matrix, per unit of time,_ \[\operatorname{Var}\begin{pmatrix}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(1)\\ \boldsymbol{W}_{\boldsymbol{\ell}_{f}}(1)\\ \boldsymbol{W}_{\boldsymbol{b}}(1)\end{pmatrix}=\begin{pmatrix}\boldsymbol{\Sigma}&\mathbf{I}_{p}&\mathbf{0}\\ \mathbf{I}_{p}&\boldsymbol{J}_{f}&\boldsymbol{J}_{f\mathbf{b}}\\ \mathbf{0}&\boldsymbol{J}_{\mathbf{b}f}&\mathbf{I}\end{pmatrix},\] (9) _with_ \(\boldsymbol{J}_{\mathbf{b}f}:=\mathrm{E}_{f}[\mathbf{b}(\boldsymbol{\varepsilon}_{1})\boldsymbol{\ell}_{f}(\boldsymbol{\varepsilon}_{1})^{\prime}]\) _and_ \(\boldsymbol{J}_{f\mathbf{b}}:=\boldsymbol{J}_{\mathbf{b}f}^{\prime}\)_, and where_ \[\varDelta_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}):=\int_{0}^{1}(\mathbf{C}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(u))^{\prime}\mathrm{d}\boldsymbol{W}_{\boldsymbol{\ell}_{f}}(u)+\int_{0}^{1}(\mathbf{d}_{\mathbf{C}}(u)\boldsymbol{\delta})^{\prime}\mathrm{d}\boldsymbol{W}_{\boldsymbol{\ell}_{f}}(u)+\boldsymbol{\eta}^{\prime}\boldsymbol{W}_{\boldsymbol{b}}(1),\] \[\mathcal{Q}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}):=\int_{0}^{1}(\mathbf{C}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(u))^{\prime}\boldsymbol{J}_{f}\mathbf{C}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(u)\mathrm{d}u+\int_{0}^{1}(\mathbf{d}_{\mathbf{C}}(u)\boldsymbol{\delta})^{\prime}\boldsymbol{J}_{f}\mathbf{d}_{\mathbf{C}}(u)\boldsymbol{\delta}\mathrm{d}u\] \[\qquad\qquad+2\int_{0}^{1}(\mathbf{C}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(u))^{\prime}\boldsymbol{J}_{f}\mathbf{d}_{\mathbf{C}}(u)\boldsymbol{\delta}\mathrm{d}u+2\boldsymbol{\eta}^{\prime}\boldsymbol{J}_{\mathbf{b}f}\mathbf{C}\overline{\boldsymbol{W}}_{\boldsymbol{\varepsilon}}+2\boldsymbol{\eta}^{\prime}\boldsymbol{J}_{\mathbf{b}f}\overline{\mathbf{d}}_{\mathbf{C}}\boldsymbol{\delta}+\boldsymbol{\eta}^{\prime}\boldsymbol{\eta},\] _with_ \(\mathbf{d}_{\mathbf{C}}(u):=\mathbf{I}_{p}-u\mathbf{C}\)_,_ \(\overline{\mathbf{d}}_{\mathbf{C}}:=\int_{0}^{1}\mathbf{d}_{\mathbf{C}}(u)\mathrm{d}u=\mathbf{I}_{p}-\mathbf{C}/2\)_, and_ \(\overline{\boldsymbol{W}}_{\boldsymbol{\varepsilon}}:=\int_{0}^{1}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(u)\mathrm{d}u\)_._
3. _Under_ \(\mathbb{P}_{\mathbf{0},\mathbf{0},\mathbf{0}}\)_,_ \(\forall\,\mathbf{C}\in\mathbb{R}^{p\times p}\)_,_ \(\mathbf{\delta}\in\mathbb{R}^{p}\)_, and_ \(\mathbf{\eta}\in c_{00}\)_,_ \(\mathbb{E}\left[\exp(\mathcal{L}_{f}(\mathbf{C},\mathbf{\delta},\mathbf{\eta}))\right]=1\)_._
The proof of part (a) could have essentially followed from a Taylor expansion, assuming twice continuous differentiability of \(f\). Instead, however, we employ the framework of Hallin et al. (2015, Proposition 1), which is built upon the DQM condition and is implied by the absolute continuity assumption. This allows for a broader family of innovation densities \(f\), including distributions such as the double exponential distribution. The proof of Part (b) is based on the functional central limit theorem, the continuous mapping theorem, and an application of Hansen (1992, Theorem 2.1). Part (c) follows from standard stochastic calculus of the Doleans-Dade exponential, once Novikov's condition is verified. The detailed proofs are organized in Appendix B of the supplementary material.
Part (a) and Part (b) demonstrate that the limit experiment, specifically with respect to \(\mathbf{\Pi}\), is LABF as defined by Jeganathan (1995). In other words, the central sequence weakly converges to a stochastic integral whose integrand and integrator processes are correlated. This is evident from the covariance matrix in equation (9), which shows that \(\mathrm{Cov}(\boldsymbol{W}_{\boldsymbol{\varepsilon}}(1),\boldsymbol{W}_{\boldsymbol{\ell}_{f}}(1))=\mathbf{I}_{p}\).
Part (c) allows us to introduce a new collection of probability measures, denoted by \(\mathbb{P}_{\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}}\), through their Radon-Nikodym derivatives w.r.t. \(\mathbb{P}_{\mathbf{0},\mathbf{0},\mathbf{0}}\), given by
\[\frac{\mathrm{d}\mathbb{P}_{\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}}}{\mathrm{d}\mathbb{P}_{\mathbf{0},\mathbf{0},\mathbf{0}}}=\exp\mathcal{L}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}).\]
The measurable space \((\Omega,\mathcal{F})\) is the one on which the Brownian motions \((\boldsymbol{W}_{\boldsymbol{\varepsilon}},\boldsymbol{W}_{\boldsymbol{\ell}_{f}},\boldsymbol{W}_{\boldsymbol{b}})\) are defined, namely \(\Omega:=C^{p}[0,1]\times C^{p}[0,1]\times C^{\infty}[0,1]\) and \(\mathcal{F}:=(\otimes_{p}\mathcal{B}_{C})\otimes(\otimes_{p}\mathcal{B}_{C})\otimes(\otimes_{k=1}^{\infty}\mathcal{B}_{C})\), where \(\mathcal{B}_{C}\) denotes the Borel \(\sigma\)-field on \(C[0,1]\).
Having introduced the necessary ingredients, we can now provide a formal definition of the limit experiment as
\[\mathcal{E}(f):=\left(\Omega,\mathcal{F},\left\{\mathbb{P}_{\mathbf{C},\mathbf{ \delta},\mathbf{\eta}}:\mathbf{C}\in\mathbb{R}^{p\times p},\mathbf{\delta}\in\mathbb{R }^{p},\mathbf{\eta}\in c_{00}\right\}\right).\]
Then, in Le Cam's sense (see, e.g., van der Vaart (2000, Chapter 9)), the sequence of cointegration experiments, denoted by \(\mathcal{E}^{(T)}(f)\), converges to the limit experiment \(\mathcal{E}(f)\) as the sample size \(T\) tends to infinity. In the following proposition, we regard \(\exp\mathcal{L}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta})\) as the Radon-Nikodym derivative and apply Girsanov's Theorem to obtain a structural representation of the limit experiment \(\mathcal{E}(f)\).
**Proposition 3.3**.: _Let \(\mathbf{C}\in\mathbb{R}^{p\times p}\), \(\boldsymbol{\delta}\in\mathbb{R}^{p}\), \(\boldsymbol{\eta}\in c_{00}\), and fix \(f\in\mathfrak{F}_{p}\). The limit experiment \(\mathcal{E}(f)\) associated with the log-likelihood ratio \(\mathcal{L}_{f}(\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta})\) can be described as follows. We observe processes \(\boldsymbol{W}_{\boldsymbol{\varepsilon}}\), \(\boldsymbol{W}_{\boldsymbol{\ell}_{f}}\) and \(\boldsymbol{W}_{\boldsymbol{b}}\), which are generated according to the following stochastic differential equations (SDEs):_
\[\mathrm{d}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(u)= \ \mathbf{C}\boldsymbol{W}_{\boldsymbol{\varepsilon}}(u)\mathrm{d}u+ \mathbf{d}_{\mathbf{C}}(u)\boldsymbol{\delta}\mathrm{d}u+\mathrm{d}\boldsymbol {Z}_{\boldsymbol{\varepsilon}}(u), \tag{11}\] \[\mathrm{d}\boldsymbol{W}_{\boldsymbol{\ell}_{f}}(u)= \ \boldsymbol{J}_{\boldsymbol{f}}\mathbf{C}\boldsymbol{W}_{ \boldsymbol{\varepsilon}}(u)\mathrm{d}u+\boldsymbol{J}_{\boldsymbol{f}} \mathbf{d}_{\mathbf{C}}(u)\boldsymbol{\delta}\mathrm{d}u+\boldsymbol{J}_{ \boldsymbol{b}}\boldsymbol{\eta}\mathrm{d}u+\mathrm{d}\boldsymbol{Z}_{ \boldsymbol{\ell}_{f}}(u),\] (12) \[\mathrm{d}\boldsymbol{W}_{\boldsymbol{b}}(u)= \ \boldsymbol{J}_{\boldsymbol{b}f}\mathbf{C}\boldsymbol{W}_{ \boldsymbol{\varepsilon}}(u)\mathrm{d}u+\boldsymbol{J}_{\boldsymbol{b}f} \mathbf{d}_{\mathbf{C}}(u)\boldsymbol{\delta}\mathrm{d}u+\boldsymbol{\eta} \mathrm{d}u+\mathrm{d}\boldsymbol{Z}_{\boldsymbol{b}}(u), \tag{13}\]
_where \(\boldsymbol{Z}_{\boldsymbol{\varepsilon}}\), \(\boldsymbol{Z}_{\boldsymbol{\ell}_{f}}\) and \(\boldsymbol{Z}_{\boldsymbol{b}}\) are Brownian motions under \(\mathbb{P}_{\mathbf{C},\boldsymbol{\delta},\boldsymbol{\eta}}\) with drift zero and covariance given by (9)._
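To make this structural representation concrete, the following sketch simulates a discretized path of the first SDE (11) by an Euler-Maruyama scheme. It is an illustration only: the function name is ours, and the choice \(\mathrm{Var}(\mathrm{d}\boldsymbol{Z}_{\boldsymbol{\varepsilon}}(u))=\boldsymbol{\Sigma}\,\mathrm{d}u\) is our assumption (the full joint covariance of the driving Brownian motions is given by (9) and is not reproduced here).

```python
import numpy as np

def simulate_w_eps(C, delta, Sigma, n_steps=1000, rng=None):
    """Euler-Maruyama discretization of the SDE (11):
    dW_eps(u) = C W_eps(u) du + d_C(u) delta du + dZ_eps(u),
    with d_C(u) = I_p - u C.  Assumes Var(dZ_eps(u)) = Sigma du."""
    rng = np.random.default_rng(rng)
    p = Sigma.shape[0]
    du = 1.0 / n_steps
    L = np.linalg.cholesky(Sigma)          # so that L @ L.T == Sigma
    W = np.zeros((n_steps + 1, p))
    for i in range(n_steps):
        u = i * du
        drift = C @ W[i] + (np.eye(p) - u * C) @ delta
        W[i + 1] = W[i] + drift * du + np.sqrt(du) * (L @ rng.standard_normal(p))
    return W                               # path on the grid u = 0, 1/n, ..., 1
```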
Next, we will use this structural limit experiment to eliminate the nuisance parameters, \(\boldsymbol{\eta}\) and \(\boldsymbol{\delta}\), sequentially. By doing so, we will derive the semiparametric power envelopes for both cases, without and with a time trend.
### Eliminating \(\boldsymbol{\eta}\) using Brownian bridge
Despite exhibiting LABF behavior with respect to the direction of \(\mathbf{C}\), the limit experiment remains LAN with respect to \(\boldsymbol{\eta}\). This is a critical feature, as it implies that \(\boldsymbol{\eta}\) enters (11)-(13) only through constant drifts. To eliminate these drifts, we can simply "take the bridges" of the affected processes.
We formally define the transformation \(\mathfrak{g}_{\boldsymbol{\eta}}\) as follows:
\[\mathfrak{g}_{\boldsymbol{\eta}}\,:\,[\mathfrak{g}_{\boldsymbol{\eta}}( \boldsymbol{W})](u)=\boldsymbol{W}(u)-\boldsymbol{\eta}u,\quad u\in[0,1] \tag{14}\]
for a process \(\boldsymbol{W}\in D^{\mathbb{N}}[0,1]\). We denote by \(\mathfrak{G}_{\boldsymbol{\eta}}\) the group of \(\mathfrak{g}_{\boldsymbol{\eta}}\) for \(\boldsymbol{\eta}\in c_{00}\). Intuitively, the transformation \(\mathfrak{g}_{\boldsymbol{\eta}}\) adds a constant drift \(u\to-\boldsymbol{\eta}u\) to \(\boldsymbol{W}\). To eliminate such a constant drift, we employ the bridge-taking operator defined as
\[\boldsymbol{B}^{\boldsymbol{W}}(u):=\boldsymbol{W}(u)-u\boldsymbol{W}(1). \tag{15}\]
For a fixed \(\boldsymbol{\eta}\in c_{00}\), we have \(\boldsymbol{B}^{[\mathfrak{g}_{\boldsymbol{\eta}}(\boldsymbol{W})]}(u)=[\mathfrak{g}_{\boldsymbol{\eta}}(\boldsymbol{W})](u)-u[\mathfrak{g}_{\boldsymbol{\eta}}(\boldsymbol{W})](1)=(\boldsymbol{W}(u)-\boldsymbol{\eta}u)-u(\boldsymbol{W}(1)-\boldsymbol{\eta})=\boldsymbol{W}(u)-u\boldsymbol{W}(1)=\boldsymbol{B}^{\boldsymbol{W}}(u)\). This shows that the induced bridge process is an invariant statistic with respect to \(\mathfrak{G}_{\boldsymbol{\eta}}\).
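This invariance can be checked numerically in a few lines; the snippet below is a sketch with arbitrary values, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
u = np.linspace(0.0, 1.0, n + 1)
# discretized Brownian motion on [0, 1], with W(0) = 0
W = np.concatenate([[0.0], np.cumsum(rng.standard_normal(n))]) / np.sqrt(n)
eta = 3.7                                  # arbitrary constant drift

bridge = lambda X: X - u * X[-1]           # the bridge-taking operator (15)

# the bridge of the drifted path coincides with the bridge of the original path
assert np.allclose(bridge(W + eta * u), bridge(W))
```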
Note that in the structural limit experiment described in (11)-(13), the parameter \(\mathbf{\eta}\) appears in \(\mathbf{W}_{\mathbf{\ell}_{f}}\) and \(\mathbf{W_{b}}\), but not in \(\mathbf{W_{\varepsilon}}\). Therefore, to eliminate the constant drifts caused by \(\mathbf{\eta}\), we take the bridges of \(\mathbf{W}_{\mathbf{\ell}_{f}}\) and \(\mathbf{W_{b}}\) while keeping \(\mathbf{W_{\varepsilon}}\) unchanged, resulting in an invariant \(\sigma\)-field given by
\[\mathcal{M}:=\sigma\big{(}\mathbf{W_{\varepsilon}},\mathbf{B_{\mathbf{\ell}_{f}}},\mathbf{B_ {b}}\big{)}, \tag{16}\]
where \(\mathbf{B_{\mathbf{\ell}_{f}}}:=\mathbf{B^{W_{\mathbf{\ell}_{f}}}}\) and \(\mathbf{B_{b}}:=\mathbf{B^{W_{b}}}\). Furthermore, we show below that \(\mathcal{M}\) is _maximally invariant_.
**Theorem 3.1**.: _In the limit experiment \(\mathcal{E}(f)\) modeled by (11)-(13), the \(\sigma\)-field \(\mathcal{M}\) defined in (16) is maximally invariant with respect to the group of transformations \(\mathfrak{G}_{\mathbf{\eta}}\), where \(\mathbf{\eta}\in c_{00}\)._
The proof of Theorem 3.1 is based on the definition of _maximal invariant_ in Section 6.2 of Lehmann and Romano (2006).4 The detailed proof is provided in Appendix B of the supplementary material. The maximal invariance of \(\mathcal{M}\) plays a vital role in our semiparametric optimality study: every invariant statistic (w.r.t. \(\boldsymbol{\eta}\)) is \(\mathcal{M}\)-measurable (see Lehmann and Romano (2006, Theorem 6.2.1)), and so is every invariant test (w.r.t. \(\boldsymbol{\eta}\)). Therefore, by the Neyman-Pearson Lemma, the likelihood ratio test based on \(\mathcal{M}\) is optimal among all invariant tests (w.r.t. \(\boldsymbol{\eta}\)). The log-likelihood ratio of \(\mathcal{M}\) is given by
Footnote 4: Note that \(\sigma(\mathbf{W_{\varepsilon}},\mathbf{W_{\ell_{f}}},\mathbf{W_{b}})=\sigma(\mathbf{W_{ \varepsilon}},\mathbf{W_{b}})\) due to the decomposition \(\mathbf{W_{\ell_{f}}}=\mathbf{\Sigma}^{-1}\mathbf{W_{\varepsilon}}+\mathbf{J}_{fb}\mathbf{W_{b}}\) according to the covariance (9). This fact simplifies the proof.
\[\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{C},\mathbf{\delta})= \,\Delta_{f}^{\mathcal{M}}(\mathbf{C},\mathbf{\delta})-\frac{1}{2} \mathcal{Q}_{f}^{\mathcal{M}}(\mathbf{C},\mathbf{\delta}), \tag{17}\]
where
\[\varDelta_{f}^{\mathcal{M}}(\mathbf{C},\mathbf{\delta})= \int_{0}^{1}\left(\mathbf{C}\mathbf{W_{\varepsilon}}(u)+\mathbf{d_{C}}(u)\mathbf{\delta}\right)^{\prime}\mathrm{d}\big{(}\mathbf{B_{\mathbf{\ell}_{f}}}(u)+\mathbf{\Sigma}^{-1}\mathbf{W_{\varepsilon}}(1)u\big{)}\] \[\mathcal{Q}_{f}^{\mathcal{M}}(\mathbf{C},\mathbf{\delta})= \int_{0}^{1}\left(\mathbf{C}\mathbf{W_{\varepsilon}}(u)+\mathbf{d_{C}}(u)\mathbf{\delta}\right)^{\prime}\mathbf{J}_{f}(\mathbf{C}\mathbf{W_{\varepsilon}}(u)+\mathbf{d_{C}}(u)\mathbf{\delta})\mathrm{d}u\] \[+\left(\mathbf{C}\overline{\mathbf{W}}_{\varepsilon}+\overline{\mathbf{d}}_{\mathbf{C}}\mathbf{\delta}\right)^{\prime}\left(\mathbf{\Sigma}^{-1}-\mathbf{J}_{f}\right)\left(\mathbf{C}\overline{\mathbf{W}}_{\varepsilon}+\overline{\mathbf{d}}_{\mathbf{C}}\mathbf{\delta}\right).\]
The detailed derivation is provided in the supplementary Appendix B.
Up to this point, we have applied the invariance principle to eliminate the nuisance parameter \(\mathbf{\eta}\), leaving us with another nuisance parameter \(\mathbf{\delta}\) to address. In the following subsections, we will sequentially consider two cases: one in which there is no time trend (\(\mathbf{\delta}=0\)), and another in which there is a time trend (\(\mathbf{\delta}\) is unknown).
### Semiparametric power envelope, no-time-trend case
We begin by considering the case of an intercept only, which corresponds to the error-correction model in (1)-(2) with \(\mathbf{\tau}=\mathbf{0}\) (or equivalently, \(\mathbf{\delta}=\mathbf{0}\)). This specification is important in its own right and is perhaps the most commonly used model in applied works. The advantage of the no-time-trend specification is that the associated tests enjoy better power performance, albeit with a potential cost of model misspecification, when researchers do not find obvious evidence of a time trend in their datasets. For further discussions on this issue, we refer to Lutkepohl and Saikkonen (2000) and Saikkonen and Lutkepohl (2000).
We use \(\mathcal{L}_{f}^{\mathbf{\mu}*}\) to denote the log-likelihood ratio associated with the maximal invariant under \(\mathbf{\delta}=\mathbf{0}\), for which we have
\[\mathcal{L}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}}):=\mathcal{L}_{f}^{\mathcal{M}}( \bar{\mathbf{C}},\mathbf{0})=\Delta_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})-\frac{1}{2} \mathcal{Q}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}}), \tag{18}\]
where
\[\Delta_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}}):= \int_{0}^{1}\left(\bar{\mathbf{C}}\mathbf{W}_{\mathbf{\varepsilon}}(u) \right)^{\prime}\mathrm{d}\big{(}\mathbf{B}_{\mathbf{\ell}_{f}}(u)+\mathbf{\Sigma}^{-1} \mathbf{W}_{\mathbf{\varepsilon}}(1)u\big{)}\] \[\mathcal{Q}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}}):= \int_{0}^{1}\left(\bar{\mathbf{C}}\mathbf{W}_{\mathbf{\varepsilon}}(u) \right)^{\prime}\mathbf{J}_{f}\bar{\mathbf{C}}\mathbf{W}_{\mathbf{\varepsilon}}(u)\mathrm{ d}u+\left(\bar{\mathbf{C}}\overline{\mathbf{W}}_{\mathbf{\varepsilon}}\right)^{\prime} \left(\mathbf{\Sigma}^{-1}-\mathbf{J}_{f}\right)\bar{\mathbf{C}}\overline{\mathbf{W}}_{ \mathbf{\varepsilon}}.\]
We define the associated likelihood ratio test by \(\phi_{f,\alpha}^{\mathbf{\mu}*}(\bar{\mathbf{C}}):=\mathbbm{1}\{\mathcal{L}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})>\kappa_{\alpha}^{\mathbf{\mu}}(\bar{\mathbf{C}})\}\), where \(\kappa_{\alpha}^{\mathbf{\mu}}(\bar{\mathbf{C}})\) is the \(1-\alpha\) quantile of \(\mathcal{L}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})\) under \(\mathbb{P}_{\mathbf{0},\mathbf{0},\mathbf{\eta}}\). An upper power bound for \(\mathbf{\eta}\)-invariant tests can then be given as follows:
\[\mathbf{C}\mapsto\pi_{f,\alpha}^{\mathbf{\mu}*}(\bar{\mathbf{C}};\mathbf{C})= \mathbb{E}\left[\phi_{f,\alpha}^{\mathbf{\mu}*}(\bar{\mathbf{C}})\frac{\mathrm{d} \mathbb{P}_{\mathbf{C},\mathbf{0},\mathbf{\eta}}}{\mathrm{d}\mathbb{P}_{\mathbf{0},\mathbf{0}, \mathbf{0}}}\right].\]
Call a test \(\phi^{(T)}\) in \(\mathcal{E}^{(T)}(f)\)_asymptotically \(\mathbf{\eta}\)-invariant_ if it weakly converges, under \(\mathrm{P}_{\mathbf{C},\mathbf{0},\mathbf{\eta};\mathbf{\mu},f}^{(T)}\), to a test \(\phi\) in \(\mathcal{E}(f)\) that is invariant w.r.t. \(\mathbf{\eta}\). The combination of the Neyman-Pearson Lemma and the Asymptotic Representation Theorem (van der Vaart (2000, Chapter 9)) yields the following theorem.
**Theorem 3.2**.: _Assuming that \(\mathbf{\delta}=\mathbf{0}\) is known, let \(f\in\mathfrak{F}_{p}\), \(\mathbf{\mu}\in\mathbb{R}^{p}\), and \(\alpha\in(0,1)\). If a test \(\phi^{(T)}(\mathbf{y}_{1},\ldots,\mathbf{y}_{T})\), where \(T\in\mathbb{N}\), is an asymptotically \(\mathbf{\eta}\)-invariant test of size
\(\alpha\), _i.e.,_\(\limsup_{T\to\infty}\mathrm{E}_{\mathbf{0},\boldsymbol{0},\boldsymbol{\eta}; \boldsymbol{\mu},f}[\phi^{(T)}]\leq\alpha\), _then we have_
\[\limsup_{T\to\infty}\mathrm{E}_{\mathbf{C},\boldsymbol{0},\boldsymbol{\eta}; \boldsymbol{\mu},f}[\phi^{(T)}]\leq\pi_{f,\alpha}^{\boldsymbol{\mu}*}(\mathbf{ C};\mathbf{C}),\quad\forall\,\mathbf{C}\in\mathbb{R}^{p\times p},\, \boldsymbol{\eta}\in c_{00}.\]
The proof of the theorem involves two main steps. First, we use the Neyman-Pearson lemma to establish the upper bound for the power of \(\boldsymbol{\eta}\)-invariant tests in \(\mathcal{E}(f)\) at point \(\mathbf{C}\), which is \(\pi_{f,\alpha}^{\boldsymbol{\mu}*}(\mathbf{C};\mathbf{C})\). This determines the maximum achievable power in the limit experiment. Second, the Asymptotic Representation Theorem states that any test in the sequence of experiments \(\mathcal{E}^{(T)}(f)\) has a representation in the limit experiment \(\mathcal{E}(f)\). Therefore, the best achievable power in \(\mathcal{E}(f)\) is also the best possible power that can be achieved asymptotically in \(\mathcal{E}^{(T)}(f)\). The proof is completed by applying this result to the class of asymptotically \(\boldsymbol{\eta}\)-invariant tests.
### Eliminating \(\boldsymbol{\delta}\) using profile likelihood & semiparametric power envelope for time-trend case
In this subsection, we consider the case where the linear trend parameter \(\boldsymbol{\delta}\) is unknown and treated as a nuisance parameter. To eliminate \(\boldsymbol{\delta}\), observing that the log-likelihood ratio \(\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{C},\boldsymbol{\delta})\) is quadratic in \(\boldsymbol{\delta}\), we use the _profile likelihood_ method. Specifically, \(\boldsymbol{\delta}\) is "profiled out" by
\[\max_{\boldsymbol{\delta}}\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{C},\boldsymbol {\delta})-\max_{\boldsymbol{\delta}}\mathcal{L}_{f}^{\mathcal{M}}(\boldsymbol {0},\boldsymbol{\delta}), \tag{19}\]
which results in a statistic invariant w.r.t. \(\boldsymbol{\delta}\).5 The optimality of this approach for the quadratic-in-nuisance form is established in, e.g., Lehmann and Romano (2006, Problem 6.9), where it is shown that the resulting profile likelihood-based test is the best among invariant tests. For further reference on this method in the unit root testing literature, see Elliott et al. (1996) and Jansson (2008).
Footnote 5: More formally, the obtained statistic is invariant w.r.t. the group of transformations of the form \(\mathbf{y}_{t}\mapsto\mathbf{y}_{t}+\mathbf{c}t\), \(\forall\mathbf{c}\in\mathbb{R}^{p}\).
We split the optimization of (19) into two steps: (i) we derive the maximum likelihood estimate of \(\boldsymbol{\delta}\) by solving the maximization under one alternative value of \(\mathbf{C}\), and (ii) we plug this estimate into the likelihood statistic \(\mathcal{L}_{f}^{\mathcal{M}}\) under another alternative value. This splitting allows us to obtain the semiparametric analogue of the statistic \(\Lambda_{p,C}^{GLS}(\bar{C};\bar{C}^{*})\) introduced in Boswijk et al. (2015, Section 2.2) (BJN). This statistic encompasses existing Gaussian cointegration tests by choosing different values of \(\bar{C}\) and \(\bar{C}^{*}\). By interpreting the choices of \(\bar{C}\) and \(\bar{C}^{*}\) in BJN's \(\Lambda^{GLS}_{p,C}(\bar{C};\bar{C}^{*})\), we contribute to the literature in two ways: we provide insights into how the existing Gaussian cointegration tests for the time-trend case arise from the limiting perspective, and we subsequently propose semiparametrically efficient versions of these tests using our semiparametric analogue.
In detail, we select the reference alternative \(\bar{\mathbf{C}}^{*}\) and derive the maximum likelihood estimate (MLE) of \(\boldsymbol{\delta}\). Since \(\mathcal{L}_{f}^{\mathcal{M}}(\bar{\mathbf{C}}^{*},\boldsymbol{\delta})=\boldsymbol{\delta}^{\prime}\mathbf{A}(\bar{\mathbf{C}}^{*})-\frac{1}{2}\boldsymbol{\delta}^{\prime}\mathbf{B}(\bar{\mathbf{C}}^{*})\boldsymbol{\delta}\) up to terms free of \(\boldsymbol{\delta}\), the first-order condition yields
\[\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}} =\arg\max_{\mathbf{\delta}\in\mathbb{R}^{p}}\mathcal{L}_{f}^{\mathcal{ M}}(\bar{\mathbf{C}}^{*},\mathbf{\delta})\] \[=\mathbf{B}(\bar{\mathbf{C}}^{*})^{-1}\mathbf{A}(\bar{\mathbf{C} }^{*}), \tag{20}\]
where
\[\mathbf{A}(\mathbf{C}):= \int_{0}^{1}\mathbf{d}_{\mathbf{C}}(u)^{\prime}\mathrm{d}\big{(} \mathbf{B}_{\mathbf{\ell}_{f}}(u)+\mathbf{\Sigma}^{-1}\mathbf{W}_{\mathbf{\varepsilon}}(1)u-\mathbf{ J}_{f}\mathbf{C}\mathbf{\widetilde{W}}_{\mathbf{\varepsilon}}(u)u-\mathbf{\Sigma}^{-1} \mathbf{C}\mathbf{\overline{W}}_{\mathbf{\varepsilon}}u\big{)},\] \[\mathbf{B}(\mathbf{C}):= \int_{0}^{1}\mathbf{d}_{\mathbf{C}}(u)^{\prime}\mathbf{J}_{f} \mathbf{d}_{\mathbf{C}}(u)\mathrm{d}u+\overline{\mathbf{d}}_{\mathbf{C}}^{ \prime}\left(\mathbf{\Sigma}^{-1}-\mathbf{J}_{f}\right)\overline{\mathbf{d}}_{\mathbf{ C}}\]
are the linear and quadratic terms in \(\mathbf{\delta}\), respectively. Once we have \(\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}}\), we use another reference alternative \(\bar{\mathbf{C}}\) to construct the profile likelihood ratio as
\[\mathcal{L}_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^ {*}) =\mathcal{L}_{f}^{\mathcal{M}}(\bar{\mathbf{C}},\hat{\mathbf{\delta}} _{\bar{\mathbf{C}}^{*}})-\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{0},\hat{\mathbf{ \delta}}_{\mathbf{0}})\] \[=\varDelta_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^ {*})-\frac{1}{2}\mathcal{Q}_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{ C}}^{*})-\frac{1}{2}\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}\bar{ \mathbf{C}}^{*}}(1)^{\prime}\mathbf{\Sigma}^{-1}\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{ \mathbf{\delta}}\bar{\mathbf{C}}^{*}}(1) \tag{21}\]
with
\[\varDelta_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{ *}):= \int_{0}^{1}\big{(}\bar{\mathbf{C}}\mathbf{W}_{\mathbf{\varepsilon}}^{ \hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}(u)\big{)}^{\prime}\mathrm{d}\big{(} \mathbf{B}_{\mathbf{\ell}_{f}}(u)+\mathbf{\Sigma}^{-1}\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\bm {\delta}}\bar{\mathbf{C}}^{*}}(1)u\big{)},\] \[\mathcal{Q}_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{C}} ^{*}):= \int_{0}^{1}\big{(}\bar{\mathbf{C}}\mathbf{W}_{\mathbf{\varepsilon}}^{ \hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}(u)\big{)}^{\prime}\mathbf{J}_{f}\bar{ \mathbf{C}}\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}( u)\mathrm{d}u\] \[+\big{(}\bar{\mathbf{C}}\mathbf{\overline{W}}_{\mathbf{\varepsilon}}^{ \hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}\big{)}^{\prime}\big{(}\mathbf{\Sigma}^{-1}- \mathbf{J}_{f}\big{)}\bar{\mathbf{C}}\mathbf{\overline{W}}_{\mathbf{\varepsilon}}^{\hat{ \mathbf{\delta}}\bar{\mathbf{C}}^{*}},\]
where \(\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}(u):=\mathbf{W}_{ \mathbf{\varepsilon}}(u)-u\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}}\) is the de-drifted version (of \(\mathbf{W}_{\mathbf{\varepsilon}}\)) with drift estimate \(\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}}\), and \(\mathbf{\overline{W}}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}:= \overline{\mathbf{W}}_{\mathbf{\varepsilon}}-\frac{1}{2}\hat{\mathbf{\delta}}_{\bar{\mathbf{ C}}^{*}}\) is its average.6
Footnote 6: Here the profile log-likelihood \(\mathcal{L}_{f}^{\mathcal{M}}(\bar{\mathbf{C}},\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}})-\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{0},\hat{\mathbf{\delta}}_{\mathbf{0}})\) is derived as the sum of two terms, \(\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{0},\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}})-\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{0},\hat{\mathbf{\delta}}_{\mathbf{0}})=-0.5\,\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}(1)^{\prime}\mathbf{\Sigma}^{-1}\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}}(1)\) and \(\mathcal{L}_{f}^{\mathcal{M}}(\bar{\mathbf{C}},\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}})-\mathcal{L}_{f}^{\mathcal{M}}(\mathbf{0},\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}})=\varDelta_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})-\frac{1}{2}\mathcal{Q}_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\), where \(\hat{\mathbf{\delta}}_{\mathbf{0}}\) denotes the estimate for \(\mathbf{\delta}\) under the chosen alternative \(\bar{\mathbf{C}}^{*}=\mathbf{0}\). By (20), we have \(\hat{\mathbf{\delta}}_{\mathbf{0}}=\mathbf{W}_{\mathbf{\varepsilon}}(1)\). The second term is intuitive and easy to interpret: it is nothing but the likelihood ratio of the \(\sigma\)-field \(\sigma(\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}\bar{\mathbf{C}}^{*}},\mathbf{B}_{\mathbf{\ell}_{f}})\).
Our statistic \(\mathcal{L}_{f}^{\mathcal{T}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\) serves as the semiparametric counterpart of BJN's statistic and reduces to their \(\Lambda_{p,C}^{GLS}(\bar{C};\bar{C}^{*})\) when \(f\) is Gaussian. Specifically, under Gaussianity, the
score function is given by \(\boldsymbol{\ell}_{f}(\boldsymbol{\varepsilon}_{t})=\boldsymbol{\Sigma}^{-1}\boldsymbol{\varepsilon}_{t}\), and the limit experiment in Proposition 3.3 simplifies to (11). Using \(\bar{\mathbf{C}}^{*}\) to estimate \(\boldsymbol{\delta}\) and \(\bar{\mathbf{C}}\) to construct the statistic, we arrive at BJN's \(\Lambda^{GLS}_{p,C}(\bar{C};\bar{C}^{*})\). It is worth noting again that \(\Lambda^{GLS}_{p,C}(\bar{C};\bar{C}^{*})\) embodies asymptotic representations of existing Gaussian cointegration tests. In a similar vein, in Section 4, we will build on our \(\mathcal{L}_{f}^{\boldsymbol{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\) to derive their semiparametric versions.
We conclude this section by presenting the semiparametric power envelope for the case of a linear time trend. We let \(\bar{\mathbf{C}}^{*}=\bar{\mathbf{C}}\) and define the likelihood ratio test as \(\phi_{f,\alpha}^{\mathbf{\tau}*}(\bar{\mathbf{C}}):=\mathbbm{1}\{\mathcal{L}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}})>\kappa_{\alpha}^{\mathbf{\tau}}(\bar{\mathbf{C}})\}\), where \(\kappa_{\alpha}^{\mathbf{\tau}}(\bar{\mathbf{C}})\) is the \(1-\alpha\) quantile of \(\mathcal{L}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}})\). The power function is given by
\[\mathbf{C}\mapsto\pi_{f,\alpha}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\mathbf{C})= \mathbb{E}\left[\phi_{f,\alpha}^{\mathbf{\tau}*}(\bar{\mathbf{C}})\frac{\mathrm{d} \mathbb{P}_{\mathbf{C},\mathbf{\delta},\mathbf{\eta}}}{\mathrm{d}\mathbb{P}_{\mathbf{0 },\mathbf{0},\mathbf{0}}}\right].\]
We refer to a test \(\phi^{(T)}\) in \(\mathcal{E}^{(T)}(f)\) as _asymptotically \((\mathbf{\delta},\mathbf{\eta})\)-invariant_ if it weakly converges to a test in \(\mathcal{E}(f)\) that is invariant w.r.t. both \(\mathbf{\eta}\) and \(\mathbf{\delta}\). Using reasoning analogous to Theorem 3.2, we provide the semiparametric power envelope of asymptotically \((\mathbf{\delta},\mathbf{\eta})\)-invariant tests for the time trend case in the following theorem.
**Theorem 3.3**.: _Assuming that \(\mathbf{\delta}\) is unknown, let \(f\in\mathfrak{F}_{p}\), \(\mathbf{\mu}\in\mathbb{R}^{p}\), and \(\alpha\in(0,1)\). If a test \(\phi^{(T)}(\mathbf{y}_{1},\ldots,\mathbf{y}_{T})\), where \(T\in\mathbb{N}\), is an asymptotically \((\mathbf{\delta},\mathbf{\eta})\)-invariant test of size \(\alpha\), that is, \(\limsup_{T\to\infty}\mathrm{E}_{\mathbf{0},\mathbf{\delta},\mathbf{\eta};\mathbf{\mu},f}[\phi^{(T)}]\leq\alpha\), then we have_
\[\limsup_{T\to\infty}\mathrm{E}_{\mathbf{C},\mathbf{\delta},\mathbf{\eta};\mathbf{\mu},f}[ \phi^{(T)}]\leq\pi_{f,\alpha}^{\mathbf{\tau}*}(\mathbf{C};\mathbf{C}),\quad\forall \,\mathbf{C}\in\mathbb{R}^{p\times p},\,\mathbf{\delta}\in\mathbb{R}^{p},\,\mathbf{ \eta}\in c_{00}.\]
## 4 Semiparametric Inference
This section proposes semiparametrically optimal cointegration tests based on the asymptotic results above. We first employ \(\mathcal{L}_{f}^{\mathbf{\mu}*}(\mathbf{C})\) (resp. \(\mathcal{L}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\)) to derive (asymptotic) semiparametric counterparts of some existing Gaussian cointegration tests for the case without time trend (resp. the case with time trend) in Section 4.1 (resp. Section 4.2). Then, in Section 4.3, we construct feasible semiparametric cointegration tests primarily by replacing \(\mathbf{B}_{\mathbf{\ell}_{f}}\) by its finite-sample counterpart, which is built upon a nonparametric estimate of the score function \(\mathbf{\ell}_{f}\).
### Semiparametric counterparts of existing tests (no-time-trend case)
When there is no time trend (\(\mathbf{\delta}=\mathbf{0}\)), we follow the likelihood ratio approach of Johansen (1991) and Saikkonen and Luukkonen (1997) and profile out the chosen alternative \(\bar{\mathbf{C}}\). Specifically, we maximize the likelihood ratio statistic \(\mathcal{L}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})\) in (18) w.r.t. \(\bar{\mathbf{C}}\in\mathbb{R}^{p\times p}\), and obtain the _semiparametric Johansen trace statistic_
\[\Lambda_{f}^{Johansen}:=\mathrm{Tr} \left(\int_{0}^{1}\mathbf{W}_{\mathbf{\varepsilon}}(u)\mathrm{d}\big{(}\mathbf{B}_{\mathbf{\ell}_{f}}(u)+\mathbf{\Sigma}^{-1}\mathbf{W}_{\mathbf{\varepsilon}}(1)u\big{)}^{\prime}\right)^{\prime}\] \[\times\left(\int_{0}^{1}\mathbf{W}_{\mathbf{\varepsilon}}(u)\mathbf{J}_{f}\mathbf{W}_{\mathbf{\varepsilon}}(u)^{\prime}\mathrm{d}u+\overline{\mathbf{W}}_{\mathbf{\varepsilon}}\left(\mathbf{\Sigma}^{-1}-\mathbf{J}_{f}\right)\overline{\mathbf{W}}_{\mathbf{\varepsilon}}^{\prime}\right)^{-1}\] \[\times\left(\int_{0}^{1}\mathbf{W}_{\mathbf{\varepsilon}}(u)\mathrm{d}\big{(}\mathbf{B}_{\mathbf{\ell}_{f}}(u)+\mathbf{\Sigma}^{-1}\mathbf{W}_{\mathbf{\varepsilon}}(1)u\big{)}^{\prime}\right), \tag{22}\]
which no longer requires a specification for \(\bar{\mathbf{C}}\). See Jansson and Nielsen (2012) for a comprehensive study of this operation for unit root testing. Extending their conclusion to our multivariate case, we argue that the resulting \(\Lambda_{f}^{Johansen}\)-based test attains \(\pi_{f,\alpha}^{\mathbf{\mu}*}(\mathbf{C};\mathbf{C})\) pointwise at a random alternative point.
Under the Gaussianity of the true density, i.e., \(f=\phi\), we have \(\mathbf{B}_{\mathbf{\ell}_{f}}=\mathbf{\Sigma}^{-1}\mathbf{B}_{\mathbf{\varepsilon}}\) and \(\mathbf{J}_{f}=\mathbf{\Sigma}^{-1}\), and the semiparametric statistic \(\Lambda_{f}^{Johansen}\) simplifies to the original _Gaussian Johansen trace statistic_ as follows:
\[\Lambda_{\phi}^{Johansen}:= \mathrm{Tr}\,\left(\int_{0}^{1}\mathbf{W}_{\mathbf{\varepsilon}}(u)\mathbf{ \Sigma}^{-1}\mathrm{d}\mathbf{W}_{\mathbf{\varepsilon}}(u)^{\prime}\right)^{\prime}\] \[\times\left(\int_{0}^{1}\mathbf{W}_{\mathbf{\varepsilon}}(u)\mathbf{\Sigma}^{ -1}\mathbf{W}_{\mathbf{\varepsilon}}(u)^{\prime}\mathrm{d}u\right)^{-1}\left(\int_{0}^ {1}\mathbf{W}_{\mathbf{\varepsilon}}(u)\mathbf{\Sigma}^{-1}\mathrm{d}\mathbf{W}_{\mathbf{ \varepsilon}}(u)^{\prime}\right). \tag{23}\]
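For intuition, the Gaussian statistic (23) can be evaluated on data by replacing \(\boldsymbol{W}_{\boldsymbol{\varepsilon}}\) with the scaled partial sums of the increments and the integrals with Riemann-Itô sums. The sketch below is ours; in particular, we read the \(\boldsymbol{\Sigma}^{-1}\) sandwich in (23) as whitening by \(\boldsymbol{\Sigma}^{-1/2}\), which leaves the trace unchanged.

```python
import numpy as np

def johansen_gaussian_stat(dy):
    """Riemann-sum sketch of the Gaussian trace statistic (23).
    `dy` is the (T-1) x p array of observed increments Delta y_t."""
    T, p = dy.shape
    evals, evecs = np.linalg.eigh(np.cov(dy, rowvar=False))
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T          # Sigma^{-1/2}
    dV = (dy / np.sqrt(T)) @ white                            # increments of V = Sigma^{-1/2} W
    V = np.vstack([np.zeros(p), np.cumsum(dV, axis=0)[:-1]])  # left endpoints V(u-)
    A = V.T @ dV                                              # approximates int V dV'
    Q = (V.T @ V) / T                                         # approximates int V V' du
    return float(np.trace(A.T @ np.linalg.solve(Q, A)))
```

Null critical values for trace statistics of this type are typically obtained by simulating under the null hypothesis.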
### Semiparametric counterparts of existing tests (time-trend case)
When dealing with a linear time trend (i.e., unknown \(\mathbf{\delta}\)), we consider the GLS trace test proposed by Saikkonen and Lutkepohl (2000) (SL test) (see also Lutkepohl and Saikkonen (2000)). In this case, we choose \(\bar{\mathbf{C}}^{*}=\mathbf{0}\) for the trend parameter estimate \(\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}}\), which leads to \(\hat{\mathbf{\delta}}_{\bar{\mathbf{C}}^{*}}=\hat{\mathbf{\delta}}_{\mathbf{0}}=\mathbf{W}_{\bm {\varepsilon}}(1)\). Consequently, \(\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}_{\mathbf{C}^{*}}}(u)=\mathbf{W}_{\bm {\varepsilon}}(u)-u\mathbf{W}_{\mathbf{\varepsilon}}(1)=\mathbf{B}_{\mathbf{\varepsilon}}(u)\) and \(\mathbf{W}_{\mathbf{\varepsilon}}^{\hat{\mathbf{\delta}}_{\mathbf{C}^{*}}}(1)=\mathbf{0}\). The likelihood ratio statistic is then given by:
\[\mathcal{L}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\mathbf{0})=\int_{0}^{1}\big{(}\bar{ \mathbf{C}}\mathbf{B}_{\mathbf{\varepsilon}}(u)\big{)}^{\prime}\mathrm{d}\mathbf{B}_{\mathbf{ \ell}_{f}}(u)-\frac{1}{2}\int_{0}^{1}\big{(}\bar{\mathbf{C}}\mathbf{B}_{\mathbf{ \varepsilon}}(u)\big{)}^{\prime}\mathbf{J}_{f}\big{(}\bar{\mathbf{C}}\mathbf{B}_{\mathbf{ \varepsilon}}(u)\big{)}\mathrm{d}u.\]
By maximizing \(\mathcal{L}_{f}^{\tau*}(\bar{\mathbf{C}};\mathbf{0})\) w.r.t. \(\bar{\mathbf{C}}\), we obtain the _semiparametric SL trace statistic_ defined as
\[\Lambda_{f}^{SL}\,:= \,\mathrm{Tr}\,\left(\int_{0}^{1}\mathbf{B}_{\mathbf{\varepsilon}}(u) \mathrm{d}\mathbf{B}_{\mathbf{\ell}_{f}}(u)^{\prime}\right)^{\prime}\] \[\qquad\times\left(\int_{0}^{1}\mathbf{B}_{\mathbf{\varepsilon}}(u)\mathbf{J} _{f}\mathbf{B}_{\mathbf{\varepsilon}}(u)^{\prime}\mathrm{d}u\right)^{-1}\left(\int_{0 }^{1}\mathbf{B}_{\mathbf{\varepsilon}}(u)\mathrm{d}\mathbf{B}_{\mathbf{\ell}_{f}}(u)^{\prime} \right). \tag{24}\]
When the true density is Gaussian (i.e., \(f=\phi\)), we have \(\mathbf{B}_{\mathbf{\ell}_{f}}=\mathbf{\Sigma}^{-1}\mathbf{B}_{\mathbf{\varepsilon}}\) and \(\mathbf{J}_{f}=\mathbf{\Sigma}^{-1}\). In this case, the semiparametric SL trace statistic \(\Lambda_{f}^{SL}\) reduces to the original _Gaussian SL trace statistic_, which is given by
\[\Lambda_{\phi}^{SL}\,:= \,\mathrm{Tr}\,\left(\int_{0}^{1}\mathbf{B}_{\mathbf{\varepsilon}}(u) \mathrm{d}\big{[}\mathbf{\Sigma}^{-1}\mathbf{B}_{\mathbf{\varepsilon}}(u)\big{]}^{\prime} \right)^{\prime}\] \[\qquad\times\left(\int_{0}^{1}\mathbf{B}_{\mathbf{\varepsilon}}(u)\mathbf{ \Sigma}^{-1}\mathbf{B}_{\mathbf{\varepsilon}}(u)^{\prime}\mathrm{d}u\right)^{-1}\left( \int_{0}^{1}\mathbf{B}_{\mathbf{\varepsilon}}(u)\mathrm{d}\big{[}\mathbf{\Sigma}^{-1}\mathbf{ B}_{\mathbf{\varepsilon}}(u)\big{]}^{\prime}\right). \tag{25}\]
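A matching sketch for the Gaussian SL statistic (25) changes only one step relative to the Johansen sketch above: the whitened partial sums are replaced by their bridges (15), which is precisely what removes the unknown linear-trend drift. Again, the names and discretization choices are ours.

```python
import numpy as np

def sl_gaussian_stat(dy):
    """Riemann-sum sketch of the Gaussian SL trace statistic (25)."""
    T, p = dy.shape
    evals, evecs = np.linalg.eigh(np.cov(dy, rowvar=False))
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T      # Sigma^{-1/2}
    V = np.cumsum((dy / np.sqrt(T)) @ white, axis=0)      # whitened partial sums
    u = np.arange(1, T + 1) / T
    B = np.vstack([np.zeros(p), V - u[:, None] * V[-1]])  # bridge path, B(0) = B(1) = 0
    dB = np.diff(B, axis=0)
    A = B[:-1].T @ dB                                     # approximates int B dB'
    Q = (B[:-1].T @ B[:-1]) / T                           # approximates int B B' du
    return float(np.trace(A.T @ np.linalg.solve(Q, A)))
```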
_Remark 4.1_.: Alternatively, one can consider the statistic proposed by Boswijk et al. (2015), which corresponds to the case where \(\bar{\mathbf{C}}^{*}=\bar{\mathbf{C}}\).7 By taking the maximum of \(\mathcal{L}_{f}^{\tau*}(\bar{\mathbf{C}};\bar{\mathbf{C}})\) w.r.t. \(\bar{\mathbf{C}}\), one obtains the _semiparametric BJN statistic_
Footnote 7: After substantial rearrangement, one can show that \(\mathcal{L}_{f}^{\tau*}(\bar{\mathbf{C}};\bar{\mathbf{C}})=\mathcal{L}_{f}^{\mu*}(\bar{\mathbf{C}})+\frac{1}{2}\mathcal{L}_{f}^{d}(\bar{\mathbf{C}})-\frac{1}{2}\mathcal{L}_{f}^{d}(\mathbf{0})\), where \(\mathcal{L}_{f}^{d}(\mathbf{C})=\varDelta_{f}^{d}(\mathbf{C})^{\prime}\mathcal{Q}_{f}^{dd}(\mathbf{C})^{-1}\varDelta_{f}^{d}(\mathbf{C})\), with \(\varDelta_{f}^{d}(\mathbf{C}):=\int_{0}^{1}\mathbf{d}_{\mathbf{C}}(u)^{\prime}\mathrm{d}\big{(}\mathbf{B}_{\mathbf{\ell}_{f}}(u)+\mathbf{\Sigma}^{-1}\mathbf{W}_{\mathbf{\varepsilon}}(1)u-\mathbf{J}_{f}\mathbf{C}\widetilde{\mathbf{W}}_{\mathbf{\varepsilon}}(u)u-\mathbf{\Sigma}^{-1}\bar{\mathbf{C}}\overline{\mathbf{W}}_{\mathbf{\varepsilon}}u\big{)}\), \(\mathcal{Q}_{f}^{dd}(\mathbf{C}):=\int_{0}^{1}\mathbf{d}_{\mathbf{C}}(u)^{\prime}\mathbf{J}_{f}\mathbf{d}_{\mathbf{C}}(u)\mathrm{d}u+\overline{\mathbf{d}}_{\mathbf{C}}^{\prime}\left(\mathbf{\Sigma}^{-1}-\mathbf{J}_{f}\right)\overline{\mathbf{d}}_{\mathbf{C}}\), and \(\widetilde{\mathbf{W}}_{\mathbf{\varepsilon}}(u):=\mathbf{W}_{\mathbf{\varepsilon}}(u)-\overline{\mathbf{W}}_{\mathbf{\varepsilon}}\).
Similarly, when \(f\) is Gaussian, the statistic \(\Lambda_{f}^{BJN}\) reduces to its Gaussian counterpart introduced in Section 2.2 of their paper.
### Semiparametrically optimal cointegration tests
This subsection proposes feasible cointegration tests whose powers can achieve the power bounds developed earlier, establishing that these power bounds are sharp (globally in \(f\)) and our tests are semiparametrically optimal (pointwise in \(\mathbf{C}\)).
Our tests are constructed using a nonparametric estimate of the unknown score function \(\mathbf{\ell}_{f}\). For this estimation, one could consider the sample splitting technique, which requires fairly minimal conditions on \(f\) (see Bickel (1982), Schick (1986), and Drost et al. (1997)). However, to improve the finite-sample performance, especially when the sample size is moderately small, we follow the approach of Schick (1987) which uses the full sample without sample splitting. Unlike other works along
this line that require the symmetric density assumption (e.g., Kreiss (1987) for the ARMA model and Ling and McAleer (2003) for the ARMA-GARCH model), our test statistic construction is based on the framework of Koul and Schick (1997), which does not need any additional condition on \(f\).
Our estimate for \(\boldsymbol{\ell}_{f}\) is constructed as follows:
\[\hat{\boldsymbol{\ell}}_{f}(\boldsymbol{\varepsilon})=-\frac{ \nabla\hat{f}(\boldsymbol{\varepsilon})}{\hat{f}(\boldsymbol{\varepsilon})+b_ {T}}, \tag{26}\]
where \(\hat{f}\) is the usual kernel density estimator given by
\[\hat{f}(\boldsymbol{\varepsilon})=\frac{1}{Ta_{T}^{p}}\sum_{t=2 }^{T}K\left(\frac{\boldsymbol{\varepsilon}-\Delta\mathbf{y}_{t}}{a_{T}}\right). \tag{27}\]
Here \(K(\boldsymbol{\varepsilon})=k(\varepsilon_{1})\cdots k(\varepsilon_{p})\) is the kernel function, and \(a_{T}\) and \(b_{T}\) are positive sequences converging to zero. We impose the following mild assumptions.
**Assumption 2**.:
1. _The marginal kernel function_ \(k(\cdot)\) _is bounded, symmetric, continuously differentiable with_ \(\int_{\mathbb{R}}r^{2}k(r)dr<\infty\) _and, for some positive constant_ \(C\)_,_ \(|\dot{k}(r)|/k(r)\leq C\) _for all_ \(r\in\mathbb{R}\)_._
2. _The sequences_ \(\langle a_{T}\rangle\) _and_ \(\langle b_{T}\rangle\) _satisfy_ \(a_{T}\to 0\)_,_ \(b_{T}\to 0\)_, and_ \(Ta_{T}^{4}b_{T}^{2}\to\infty\)_._
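As a concrete illustration of (26)-(27), a minimal implementation follows. We use the standard logistic marginal kernel, anticipating the choice made in the simulations of Section 5 (it satisfies Assumption 2 since \(|\dot{k}(r)/k(r)|=|\tanh(r/2)|\leq 1\)); the function and argument names are ours.

```python
import numpy as np

def score_estimate(eps_grid, increments, a_T, b_T):
    """Sketch of the score estimate (26) built from the kernel density
    estimate (27), with the standard logistic marginal kernel.
    `increments` stands in for the Delta y_t ((T-1) x p); `eps_grid`
    is an m x p array of evaluation points."""
    m, p = eps_grid.shape
    u = (eps_grid[:, None, :] - increments[None, :, :]) / a_T  # m x (T-1) x p
    ea = np.exp(-np.abs(u))
    K = ea / (1.0 + ea) ** 2                 # logistic density; k(u) = k(|u|)
    prod = K.prod(axis=2)                    # product kernel K(u) = k(u_1)...k(u_p)
    # density estimate (27); we average over the T-1 residuals rather than
    # divide by T, an O(1/T) difference
    f_hat = prod.mean(axis=1) / a_T ** p
    # k'(u) = -tanh(u/2) k(u), so the gradient of f_hat is
    grad = -(prod[:, :, None] * np.tanh(u / 2)).mean(axis=1) / a_T ** (p + 1)
    return -grad / (f_hat + b_T)[:, None]    # score estimate (26)
```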
Using the nonparametric estimate \(\hat{\boldsymbol{\ell}}_{f}\), we construct estimators for \(\boldsymbol{B}_{\boldsymbol{\ell}_{f}}\) and \(\boldsymbol{J}_{f}\) as follows:
\[\widehat{\boldsymbol{B}}_{\boldsymbol{\ell}_{f}}^{(T)}(u):=\frac {1}{\sqrt{T}}\sum_{t=2}^{\lfloor uT\rfloor}\hat{\boldsymbol{\ell}}_{f}^{(T)}( \Delta\mathbf{y}_{t})-\frac{\lfloor uT\rfloor}{T^{3/2}}\sum_{t=2}^{T}\hat{ \boldsymbol{\ell}}_{f}^{(T)}(\Delta\mathbf{y}_{t}),\ \ u\in[0,1], \tag{28}\]
and
\[\widehat{\boldsymbol{J}}_{f}:=\frac{1}{T-1}\sum_{t=2}^{T}\hat{\boldsymbol{\ell}}_{f}(\Delta\mathbf{y}_{t})\hat{\boldsymbol{\ell}}_{f}(\Delta\mathbf{y}_{t})^{\prime}. \tag{29}\]
We also define
\[\boldsymbol{W}_{\boldsymbol{\varepsilon}}^{(T)}(u):=\frac{1}{ \sqrt{T}}\sum_{t=2}^{\lfloor uT\rfloor}\Delta\mathbf{y}_{t}, \tag{30}\]
and assume that \(\widehat{\boldsymbol{\Sigma}}\) is some consistent estimate of \(\boldsymbol{\Sigma}\). With these tools in hand, we can define \(\widehat{\mathcal{L}}_{f}^{\boldsymbol{\mu}\star}(\mathbf{C})\) as the feasible version of the likelihood ratio statistic \(\mathcal{L}_{f}^{\boldsymbol{\mu}\star}(\mathbf{C})\) by replacing \(\boldsymbol{W}_{\boldsymbol{\varepsilon}}\), \(\boldsymbol{B}_{\boldsymbol{\ell}_{f}}\), \(\boldsymbol{\Sigma}\), and \(\boldsymbol{J}_{f}\) in the latter with their finite-sample counterparts \(\boldsymbol{W}_{\boldsymbol{\varepsilon}}^{(T)}\), \(\widehat{\boldsymbol{B}}_{\boldsymbol{\ell}_{f}}^{(T)}\), \(\widehat{\boldsymbol{\Sigma}}\), and \(\widehat{\boldsymbol{J}}_{f}\), respectively. In a similar way, we define \(\widehat{\mathcal{L}}_{f}^{\boldsymbol{\tau}\ast}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{\ast})\) as the feasible version of \(\mathcal{L}_{f}^{\boldsymbol{\tau}\ast}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{\ast})\). The following theorem summarizes the consistency results for these nonparametric estimators. The proof is presented in Appendix B.
**Theorem 4.1**.:
1. _Assuming that_ \(f\in\mathfrak{F}_{p}\) _and Assumption_ 2 _holds, let_ \(\widehat{\mathbf{B}}_{\mathbf{\ell}_{f}}^{(T)}\) _and_ \(\widehat{\mathbf{J}}_{f}\) _be defined as in (_28_)-(_29_). Then, as_ \(T\to\infty\) _under_ \(\mathrm{P}^{(T)}_{\mathbf{C},\mathbf{0};f}\)_, we have_ \[\widehat{\mathbf{B}}_{\mathbf{\ell}_{f}}^{(T)}\Rightarrow\mathbf{B}_{\mathbf{\ell}_{f}}\ \ {\rm and }\ \ \widehat{\mathbf{J}}_{f}\stackrel{{ p}}{{\to}}\mathbf{J}_{f}.\] (31)
2. _Let_ \(\boldsymbol{W}_{\boldsymbol{\varepsilon}}^{(T)}\) _be defined by (_30_) and let_ \(\widehat{\mathbf{\Sigma}}\) _be some consistent estimator for_ \(\mathbf{\Sigma}\) _such that_ \(\widehat{\mathbf{\Sigma}}\stackrel{{ p}}{{\to}}\mathbf{\Sigma}\)_. Then, under_ \(\mathrm{P}^{(T)}_{\mathbf{C},\mathbf{\delta},\mathbf{\eta};\mathbf{\mu},f}\) _and as_ \(T\to\infty\)_,_ \[\widehat{\mathcal{L}}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})\Rightarrow\mathcal{L}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})\ \ {\rm and}\ \ \widehat{\mathcal{L}}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\Rightarrow\mathcal{L}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*}),\] (32) _where_ \(\mathcal{L}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})\) _and_ \(\mathcal{L}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}}^{*})\) _are defined respectively in (_18_) and (_21_), whose behaviors are characterized by Proposition_ 3.3_._
In the proof, a key step is to address the bias, denoted as \(\mathbf{\zeta}\), that arises when estimating \(\mathbf{\ell}_{f}\) due to the lack of a \(\sqrt{T}\)-unbiased estimator for \(\mathbf{\ell}_{f}\) when \(f\) is not further restricted, such as being symmetric (see Klaassen (1987)). However, this bias is canceled out by \(\widehat{\mathbf{B}}_{\mathbf{\ell}_{f}}^{(T)}\) as shown in the following calculation:
\[\frac{1}{\sqrt{T}}\sum_{t=2}^{\lfloor uT\rfloor}\hat{\mathbf{\ell}}_{f}^{(T)}(\Delta\mathbf{y}_{t})-\frac{\lfloor uT\rfloor}{T^{3/2}}\sum_{t=2}^{T}\hat{\mathbf{\ell}}_{f}^{(T)}(\Delta\mathbf{y}_{t})\] \[=\frac{1}{\sqrt{T}}\sum_{t=2}^{\lfloor uT\rfloor}(\mathbf{\zeta}+\tilde{\mathbf{\ell}}_{f}^{(T)}(\Delta\mathbf{y}_{t}))-\frac{\lfloor uT\rfloor}{T^{3/2}}\sum_{t=2}^{T}(\mathbf{\zeta}+\tilde{\mathbf{\ell}}_{f}^{(T)}(\Delta\mathbf{y}_{t}))\] \[=\frac{1}{\sqrt{T}}\sum_{t=2}^{\lfloor uT\rfloor}\tilde{\mathbf{\ell}}_{f}^{(T)}(\Delta\mathbf{y}_{t})-\frac{\lfloor uT\rfloor}{T^{3/2}}\sum_{t=2}^{T}\tilde{\mathbf{\ell}}_{f}^{(T)}(\Delta\mathbf{y}_{t}),\]
where \(\tilde{\mathbf{\ell}}_{f}^{(T)}\) denotes the debiased version of \(\hat{\mathbf{\ell}}_{f}^{(T)}\). Notably, just as \(\mathbf{B}_{\mathbf{\ell}_{f}}\) eliminates the density perturbation \(\mathbf{\eta}\) in the limit, its finite-sample counterpart \(\widehat{\mathbf{B}}_{\mathbf{\ell}_{f}}^{(T)}\) eliminates the nonparametric estimation bias in the sequence, effectively addressing the bias issue in the estimation of \(\mathbf{\ell}_{f}\).
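To fix ideas, the finite-sample ingredients (28)-(30) can be assembled as follows. The helper below is our own sketch; it uses the sample-average form of (29), and it makes explicit that (28) is the bridge construction (15) applied to the partial sums of the estimated scores.

```python
import numpy as np

def central_ingredients(dy, ell_hat):
    """Finite-sample counterparts (28)-(30).  `dy` holds the increments
    Delta y_t for t = 2..T (shape (T-1) x p) and `ell_hat` the estimated
    scores evaluated at those increments."""
    Tm1, p = dy.shape
    T = Tm1 + 1
    S = np.cumsum(ell_hat, axis=0) / np.sqrt(T)   # partial sums of the scores
    u = np.arange(2, T + 1) / T                   # the grid values floor(uT)/T
    B_hat = S - u[:, None] * S[-1]                # (28): a bridge of the score sums
    J_hat = (ell_hat.T @ ell_hat) / Tm1           # (29), sample-average form
    W = np.cumsum(dy, axis=0) / np.sqrt(T)        # (30)
    return B_hat, J_hat, W
```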
As a direct consequence of Theorem 4.1, the following corollary demonstrates that the power upper bounds \(\pi_{f,\alpha}^{\mathbf{\mu}*}(\mathbf{C};\mathbf{C})\) and \(\pi_{f,\alpha}^{\mathbf{\tau}*}(\mathbf{C};\mathbf{C})\) are globally sharp in \(f\in\mathfrak{F}_{p}\).
**Corollary 4.1**.: _Let \(f\in\mathfrak{F}_{p}\) and assume that Assumption 2 holds. Define likelihood ratio tests as \(\hat{\phi}_{\alpha}^{\mathbf{\mu}*}(\bar{\mathbf{C}}):=\mathbb{1}\{\widehat{ \mathcal{L}}_{f}^{\mathbf{\mu}*}(\bar{\mathbf{C}})>\kappa_{\alpha}^{\mathbf{\mu}}(\bar{ \mathbf{C}})\}\) and \(\hat{\phi}_{\alpha}^{\mathbf{\tau}*}(\bar{\mathbf{C}}):=\mathbb{1}\{\widehat{ \mathcal{L}}_{f}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\bar{\mathbf{C}})>\kappa_{ \alpha}^{\mathbf{\tau}}(\bar{\mathbf{C}})\}\). Then, under \(\mathrm{P}^{(T)}_{\mathbf{C},\mathbf{\delta},\mathbf{\eta};\mathbf{\mu},f}\), we have_
\[\lim_{T\to\infty}\mathrm{E}\big{[}\hat{\phi}_{\alpha}^{\mathbf{\mu}*}(\bar{ \mathbf{C}})\big{]}=\pi_{f,\alpha}^{\mathbf{\mu}*}(\bar{\mathbf{C}};\mathbf{C})\ \ {\rm and}\ \ \lim_{T\to\infty}\mathrm{E}\big{[}\hat{\phi}_{\alpha}^{\mathbf{\tau}*}(\bar{ \mathbf{C}})\big{]}=\pi_{f,\alpha}^{\mathbf{\tau}*}(\bar{\mathbf{C}};\mathbf{C}),\]
_for all \(\bar{\mathbf{C}},\mathbf{C}\in\mathbb{R}^{p\times p}\)._
To avoid the necessity of selecting a reference alternative \(\mathbf{C}\), we adopt the approach used in the aforementioned limiting Johansen test \(\Lambda_{f}^{Johansen}\) and the limiting SL test \(\Lambda_{f}^{SL}\). In other words, we construct the feasible versions of these tests by replacing \(\mathbf{W}_{\mathbf{\varepsilon}}\), \(\mathbf{B}_{\mathbf{\ell}_{f}}\), \(\mathbf{\Sigma}\), and \(\mathbf{J}_{f}\) with their finite-sample counterparts \(\mathbf{W}_{\mathbf{\varepsilon}}^{(T)}\), \(\widehat{\mathbf{B}}_{\mathbf{\ell}_{f}}^{(T)}\), \(\widehat{\mathbf{\Sigma}}\), and \(\widehat{\mathbf{J}}_{f}\), respectively. These feasible versions are based on the estimate \(\hat{f}\) and are denoted as \(\hat{\Lambda}_{f}^{Johansen}\) and \(\hat{\Lambda}_{f}^{SL}\), respectively.
_Remark 4.2_.: In addition to the kernel estimation method used in this paper, an alternative approach for density estimation is the semi-nonparametric (SNP) method proposed by Gallant and Nychka (1987). For the SNP method employed for the cointegration testing problem, see Boswijk and Lucas (2002).
## 5 Simulations
In this section, we conduct a Monte Carlo study to validate the asymptotic results using large-sample simulations and evaluate the small-sample performance of our proposed semiparametrically optimal cointegration tests above. Specifically, we compare our tests, which are based on the statistics \(\Lambda_{f}^{Johansen}\) for the case without time trend, and \(\Lambda_{f}^{SL}\) for the case with time trend, to their original Gaussian versions, which are based on the statistics \(\Lambda_{\phi}^{Johansen}\) and \(\Lambda_{\phi}^{SL}\), respectively.
We consider a two-dimensional VAR model in error correction form (1)-(2) with i.i.d. innovations \(\mathbf{\varepsilon}_{t}\) of density \(f\in\mathfrak{F}_{2}\). Following the asymptotic local power analysis, we set the parameter of interest to be \(\mathbf{\Pi}=\mathbf{C}/T\) as in (5). We specify the local parameter \(\mathbf{C}\) using the form
\[\mathbf{C}=c\begin{pmatrix}1&1\\ 0&0\end{pmatrix},\]
where \(c\in(-\infty,0]\). Thus, \(c=0\) corresponds to the null hypothesis of \(r=0\), while \(c<0\) corresponds to the alternative hypothesis of \(r=1\), recalling that \(r\) denotes the rank of \(\mathbf{\Pi}\). We consider \(T=2500\) for the large-sample case and \(T=250\) for the small-sample case. We base all results on \(20,000\) independent replications, and set the significance level to be \(5\%\) throughout this section.
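The data-generating process just described can be sketched as follows. The intercept and trend are set to zero here for brevity, and the Student-t innovations are drawn with the usual normal-over-chi-square construction; these implementation details are our assumptions.

```python
import numpy as np

def simulate_ecm(T, c, df=3, rng=None):
    """One sample path of the bivariate error-correction DGP of this
    section: Delta y_t = Pi y_{t-1} + eps_t with Pi = C/T and
    C = c * [[1, 1], [0, 0]]; eps_t i.i.d. multivariate Student-t_df."""
    rng = np.random.default_rng(rng)
    Pi = (c / T) * np.array([[1.0, 1.0], [0.0, 0.0]])
    z = rng.standard_normal((T, 2))
    eps = z / np.sqrt(rng.chisquare(df, size=(T, 1)) / df)   # shared denominator
    y = np.zeros((T + 1, 2))
    for t in range(1, T + 1):
        y[t] = y[t - 1] + Pi @ y[t - 1] + eps[t - 1]
    return np.diff(y, axis=0)    # the increments Delta y_t used by the tests
```

Rejection rates such as those reported below are then obtained by applying a test statistic to many such replications (20,000 in our study) and comparing against null critical values.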
We estimate the covariance matrix \(\mathbf{\Sigma}\) using the following estimator:
\[\widehat{\mathbf{\Sigma}}=\frac{1}{T-1}\sum_{t=2}^{T}\left(\Delta\mathbf{y}_{t}-\overline{\Delta\mathbf{y}}\right)\left(\Delta\mathbf{y}_{t}-\overline{\Delta\mathbf{y}}\right)^{\prime},\]
where \(\overline{\Delta{\bf y}}=\sum_{t=2}^{T}\Delta{\bf y}_{t}/(T-1)\). To construct the density estimator in (27), which is used to obtain \(\boldsymbol{\ell}_{f}\) in (26) and subsequently \(\widehat{\boldsymbol{B}}_{\boldsymbol{\ell}_{f}}^{(T)}\) and \(\widehat{\boldsymbol{J}}_{f}\) as defined in (28)-(29), we choose standard Logistic kernels for \(k(\cdot)\) and set \(b_{T}=0\). Note that the choice of \(b_{T}=0\) violates Assumption 2-(b), but we made this choice for simplicity and concreteness as the qualitative results appeared to be more sensitive to the choice of \(a_{T}\) than to the choice of \(b_{T}\). Following Silverman's rule of thumb (see Silverman (1986)), we set the bandwidth \(a_{T}\) for each dimension \(i=1,\ldots,p\) to be
\[a_{i,T}=\left[\frac{4}{T(p+2)}\right]^{\frac{2}{p+4}}\hat{\sigma}_{i}^{2},\]
where \(\hat{\sigma}_{i}^{2}\) is the \(i\)-th diagonal element of \(\widehat{\boldsymbol{\Sigma}}\).
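In code, the covariance estimator and the per-dimension bandwidths read as follows; this is a direct transcription of the two displays above (names are ours, and \(b_{T}=0\) as in the text).

```python
import numpy as np

def sigma_and_bandwidths(dy):
    """Covariance estimate and Silverman-type bandwidths of this section.
    `dy` is the (T-1) x p array of increments Delta y_t."""
    Tm1, p = dy.shape
    T = Tm1 + 1
    centered = dy - dy.mean(axis=0)
    Sigma_hat = centered.T @ centered / Tm1                 # (T-1) denominator
    sig2 = np.diag(Sigma_hat)                               # the sigma_i^2
    a_T = (4.0 / (T * (p + 2))) ** (2.0 / (p + 4)) * sig2   # formula as printed
    return Sigma_hat, a_T
```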
Figure 1 displays the large-sample performances (\(T=2500\)) of the Johansen test (labeled Johansen-\(\phi\)) and the \(\hat{f}\)-based Johansen test (labeled Johansen-\(\hat{f}\)) for the case without a time trend (upper panel), and the SL test (labeled SL-\(\phi\)) and the \(\hat{f}\)-based SL test (labeled SL-\(\hat{f}\)) for the case with a time trend (bottom panel). We consider two different distributions: the Multivariate Student-\(t_{3}\) distribution
Figure 1: Semiparametric power envelope (black solid) and large-sample (\(T=2500\)) rejection rates of the Johansen test (blue dotted) and the \(\hat{f}\)-based Johansen test (red dash-dot) under true density \(f=student\)\(t_{3}\) (left panel) and \(f=Gaussian\) (right panel).
(left panel) and the Gaussian distribution (right panel). For the case without a time trend and under Gaussian \(f\), both Johansen-type tests exhibit similar powers that are close to the Gaussian power envelope. Under the Student-\(t_{3}\) distribution, the \(\hat{f}\)-based Johansen test outperforms the original Johansen test by a considerable margin. For instance, at \(-c=10\), the rejection rate of Johansen-\(\phi\) is slightly below 60%, while that of Johansen-\(\hat{f}\) is nearly 90%. This demonstrates that our semiparametrically efficient Johansen test can effectively utilize the information contained in the innovation distribution, which is particularly advantageous when the true density deviates significantly from Gaussian, a setting in which the original test loses power.
Furthermore, it is worth noting that a small discrepancy may be observed between the power of the Johansen-\(\hat{f}\) test and the semiparametric power envelope. This slight reduction in power can be attributed to the finite-sample nature of the data, even with a relatively large sample size, and the nonparametric estimation of the density \(f\). Nevertheless, this gap tends to decrease as the sample size
Figure 2: Small-sample (\(T=250\)) rejection rates of the Johansen test (blue dotted) and the \(\hat{f}\)-based Johansen test (red dash-dot) under true density \(f=student\)\(t_{3}\) (left panel) and \(f=Gaussian\) (right panel).
increases towards infinity, and the power of the Johansen-\(\hat{f}\) test converges to the semiparametric power envelope. These results are also applicable to the time trend case for the corresponding SL-\(\phi\) and SL-\(\hat{f}\) tests, which provides numerical evidence that our proposed \(\hat{f}\)-based tests are semiparametrically optimal.
Figure 2 presents the small-sample performances with a sample size of \(T=250\), corresponding to the scenarios depicted in Figure 1. These results confirm that all four tests under evaluation, namely Johansen-\(\phi\) and Johansen-\(\hat{f}\) for the no-time-trend case, and SL-\(\phi\) and SL-\(\hat{f}\) for the time-trend case, maintain satisfactory size control even with a relatively small sample size. Furthermore, we observe that the power properties observed in the large-sample case carry over to the small-sample case. Notably, the \(\hat{f}\)-based tests provide significant power improvement over their Gaussian counterparts when \(f\) follows a Student-\(t_{3}\) distribution, even with a small sample size. However, when \(f\) is Gaussian, the \(\hat{f}\)-based tests may exhibit slightly lower power compared to their Gaussian counterparts, mainly due to the need for nonparametric density estimation.
Figure 3: Large-sample (\(T=2500\), left panel) and small-sample (\(T=250\), right panel) rejection rates of the Johansen test (blue dotted) and the \(\hat{f}\)-based Johansen test (red dash-dot) under true density \(f=skewed\)\(t_{4}\).
We extend our investigation to scenarios with asymmetric distributions, as illustrated in Figure 3. Specifically, we examine the performance of the four tests (Johansen-\(\phi\), Johansen-\(\hat{f}\), SL-\(\phi\), and SL-\(\hat{f}\)) under the Skewed-\(t_{4}\) distribution in both large (\(T=2500\), left panel) and small (\(T=250\), right panel) samples for both the no-time-trend case (upper panel) and the time-trend case (bottom panel). Our results confirm that the previously reported findings, including satisfactory size control and the remarkable power improvement of the \(\hat{f}\)-based tests over their Gaussian counterparts, apply to the asymmetric distribution case as well. Moreover, additional simulations unreported here indicate that the \(\hat{f}\)-based tests gain increasingly more power as the Skewed-\(t_{4}\) distribution becomes more skewed (i.e., moves further away from Gaussian).
## 6 Discussions on extensions
The preceding sections have focused on a simple case for semiparametric efficiency, which assumed no serial correlation in the error term and considered only the null hypothesis of no cointegrating relationship (i.e., \(H_{0}:\,r=0\)). To broaden the applicability of our tests, we briefly discuss whether our results remain valid when we relax the assumption of no serial correlation and consider a more general reduced rank hypothesis.8
Footnote 8: In the Gaussian case, Boswijk et al. (2015) have addressed the results of relaxing these two assumptions in their Sections 2.3 and 2.4.
We consider a generalized model where the observations \(\mathbf{y}_{t}=(y_{1,t},\ldots,y_{p,t})^{\prime}\) are generated as follows:
\[\mathbf{y}_{t} =\boldsymbol{\mu}+\boldsymbol{\tau}t+\mathbf{x}_{t}, \tag{33}\] \[\Delta\mathbf{x}_{t} =\boldsymbol{\Pi}\mathbf{x}_{t-1}+\sum_{j=1}^{k-1}\boldsymbol{ \Gamma}_{j}\Delta\mathbf{x}_{t-j}+\boldsymbol{\varepsilon}_{t}, \tag{34}\]
where, in addition, the parameter \(\boldsymbol{\Gamma}=\{\boldsymbol{\Gamma}_{1},\ldots,\boldsymbol{\Gamma}_{k-1}\}\in\mathbb{R}^{p\times(k-1)p}\), with (known and finite) lag order \(k\), governs the lag terms. We are interested in testing the hypothesis
\[H_{0}:r=r_{0}\ \ \text{against}\ \ H_{1}:r>r_{0},\]
where \(r_{0}\in\mathbb{N}_{0}\) and \(r_{0}<p\).
Our discussion is based on the results in HvdAW, specifically their 'complete' version of the limit experiment for cointegration provided in their online supplementary appendix (Proposition A.2). Following HvdAW, we consider the following localized parameterizations:
* For \(\mathbf{\Gamma}\), we use \[\mathbf{\Gamma}=\mathbf{\Gamma}_{\mathbf{G}}^{(T)}=\mathbf{\Gamma}_{0}+\frac{\mathbf{G}}{ \sqrt{T}},\] (35) where the local parameter \(\mathbf{G}=\{\mathbf{G}_{1},\ldots,\mathbf{G}_{k-1}\}\in\mathbb{R}^{p\times(k- 1)p}\)
* For \(\mathbf{\Pi}\), building upon the rank factorization \(\mathbf{\Pi}=\mathbf{\alpha}_{0}\mathbf{\beta}_{0}^{\prime}\), where \(\mathbf{\alpha}_{0}\) and \(\mathbf{\beta}_{0}\) are full-rank \(p\times r_{0}\) matrices, we consider \[\mathbf{\Pi}=\mathbf{\Pi}_{\mathbf{A},\mathbf{B},\mathbf{C}}^{(T)}=\mathbf{\alpha}\mathbf{\beta}^{\prime}+\frac{\mathbf{\alpha}_{\perp}\mathbf{C}\mathbf{\alpha}_{\perp}^{\prime}}{T},\] and \[\mathbf{\alpha}=\mathbf{\alpha}_{\mathbf{A}}^{(T)}=\mathbf{\alpha}_{0}+\frac{\mathbf{A}}{\sqrt{T}}\quad\text{and}\quad\mathbf{\beta}=\mathbf{\beta}_{\mathbf{B}}^{(T)}=\mathbf{\beta}_{0}+\frac{\mathbf{\beta}_{\perp}\mathbf{B}^{\prime}}{T}\] (36) where \(\mathbf{\alpha}_{\perp}\) and \(\mathbf{\beta}_{\perp}\) are chosen \(p\times(p-r_{0})\) matrices of rank \(p-r_{0}\) satisfying \(\mathbf{\alpha}^{\prime}\mathbf{\alpha}_{\perp}=\mathbf{0}_{r_{0}\times(p-r_{0})}\) and \(\mathbf{\beta}^{\prime}\mathbf{\beta}_{\perp}=\mathbf{0}_{r_{0}\times(p-r_{0})}\), and \(\mathbf{C}\in\mathbb{R}^{(p-r_{0})\times(p-r_{0})}\).
With these local alternatives, our inference problem becomes testing the null hypothesis of \(\mathbf{C}=\mathbf{0}\) (since \(r=r_{0}\) if and only if \(\mathbf{C}=\mathbf{0}\)), while treating \(\mathbf{G}\), \(\mathbf{A}\) and \(\mathbf{B}\) as nuisance parameters.
We incorporate our generalized model into the framework of Hallin et al. (2016, Proposition A.2). Recall from above that the key difference between our model and theirs is the specification of the trend term, with our model described by equations (1)-(2), while theirs is described by equation (4). Our case corresponds to theirs when their parameter \(\mu=0\) (and our additional time trend term \(\mathbf{\mu}+\mathbf{\tau}t\) can be handled in a similar manner as described in Section 3.5). As a result, all local perturbations associated with \(\mu\), specifically their parameters \(b\), \(d\), and \(\mu\) itself, are absent in our model. While HvdAW focus on the parameter of interest \(d\), which enjoys a "super-consistency" rate \(T^{-3/2}\) and the traditional LAN result, our paper focuses on the parameter \(\mathbf{C}\) (corresponding to their \(D\)), which leads to the LABF result and was left uninvestigated in their work.
Another difference between HvdAW and our present paper is that the former assumes \(f\) is elliptical, while we do not make this assumption. The elliptical density assumption in HvdAW is specifically used for their test construction based on the
pseudo-Mahalanobis distance-based rank statistic. However, it is not necessary for the limit experiment approach to proceed. Therefore, it is reasonable to conjecture that their limit experiment has the same structure without this distribution shape restriction. In the following, we will translate their results by replacing their \(W_{\epsilon}\) and \(W_{\phi}\) with our \(\mathbf{W}_{\mathbf{\varepsilon}}\) and \(\mathbf{W}_{\mathbf{\ell}_{f}}\), respectively, which are both \(p\)-dimensional Brownian motions that play the role of the limits of the partial-sum processes of the innovations and the score functions.
To save space, we will not present a full exposition of the limit experiment, nor provide a rigorous proof for it (which could be done similarly to Proposition 3.2). Instead, we only list the limiting central sequences with respect to \((\operatorname{vec}\mathbf{C})^{\prime}\), \((\operatorname{vec}\mathbf{A})^{\prime}\), \((\operatorname{vec}\mathbf{B})^{\prime}\), and \((\operatorname{vec}\mathbf{G})^{\prime}\), respectively, as follows (borrowed from Hallin et al. (2016, Proposition A.2)):
\[\Delta_{\mathbf{C}} =\int_{0}^{1}\big{(}\mathbf{\beta}^{\prime}_{\perp}\mathbf{D}\mathbf{W}_ {\mathbf{\varepsilon}}(u)\otimes\mathbf{I}_{p-r_{0}}\big{)}\mathrm{d}\big{(}\mathbf{ \alpha}^{\prime}_{\perp}\mathbf{W}_{\mathbf{\ell}_{f}}\big{)}(u),\] \[\Delta_{\mathbf{A}} =\big{(}\mathbf{\beta}^{\prime}\otimes\mathbf{I}_{p}\big{)}\mathbf{W}_{1 }(1),\] \[\Delta_{\mathbf{B}} =\int_{0}^{1}\big{(}\mathbf{\beta}^{\prime}_{\perp}\mathbf{D}\mathbf{W}_ {\mathbf{\varepsilon}}(u)\otimes\mathbf{I}_{r_{0}}\big{)}\mathrm{d}\big{(}\mathbf{ \alpha}^{\prime}\mathbf{W}_{\mathbf{\ell}_{f}}\big{)}(u),\] \[\Delta_{\mathbf{G}} =\mathbf{W}_{2}(1),\]
where \(\mathbf{D}=\mathbf{D}_{\mathbf{\Gamma},\mathbf{\Pi}}:=\mathbf{\beta}_{\perp}\left(\mathbf{ \alpha}^{\prime}_{\perp}(\mathbf{I}_{p}-\sum_{j=1}^{k-1}\mathbf{\Gamma}_{j})\mathbf{ \beta}_{\perp}\right)^{-1}\mathbf{\alpha}^{\prime}_{\perp}\), and \(\mathbf{W}_{1}\) and \(\mathbf{W}_{2}\) are Brownian motions defined in the same space of \((\mathbf{W}_{\mathbf{\varepsilon}},\mathbf{W}_{\mathbf{b}},\mathbf{W}_{\mathbf{\ell}_{f}})\).
According to the covariance analysis of HvdAW (as discussed above their Lemma A.1), it can be shown that \(\mathbf{W}_{2}\) (referred to as \(W_{\Delta X\otimes\phi}\) in HvdAW) is uncorrelated with \(\mathbf{W}_{\mathbf{\varepsilon}}\) and \(\mathbf{W}_{\mathbf{\ell}_{f}}\) when their \(\mu\) is zero. This can be explained by observing that the score associated with \(\mathbf{W}_{2}\) is constructed by multiplying \(\mathbf{\ell}_{f}(\Delta\mathbf{x}_{t})\) with the lagged increments, \(\Delta\mathbf{x}_{t-1},\ldots,\Delta\mathbf{x}_{t-k+1}\), which is independent of any function of \(\Delta\mathbf{x}_{t}\), and has mean zero under the null hypothesis.9 As a result, \(\Delta_{\mathbf{C}}\) is independent of \(\Delta_{\mathbf{G}}\), which implies that the inference for \(\mathbf{C}\) is adaptive with respect to \(\mathbf{G}\). In other words, the parameters \(\mathbf{\Gamma}\) that control the serial correlation in the error term can be treated "as if" they are known, and one can simply replace them with their consistent estimates. Unreported simulation results with ARMA errors confirm this conjecture (these results are available upon request).
Footnote 9: It is worth noting that this property is also shared by our univariate counterpart, Zhou et al. (2019), where \(\mathbf{W}_{2}\) here corresponds to the \(W_{\Gamma}\) there, and the latter is uncorrelated with the univariate counterparts of \(\mathbf{W}_{\mathbf{\varepsilon}}\) and \(\mathbf{W}_{\mathbf{\ell}_{f}}\) as described in (3.8) of their paper.
The adaptivity result also applies to \(\mathbf{\alpha}\) and \(\mathbf{\beta}\), as it can easily be shown that \(\Delta_{\mathbf{C}}\) is independent of both \(\Delta_{\mathbf{A}}\) (due to \(\mathbf{\beta}^{\prime}\mathbf{\beta}_{\perp}=\mathbf{0}\)) and \(\Delta_{\mathbf{B}}\) (due to \(\mathbf{\alpha}^{\prime}\mathbf{\alpha}_{\perp}=\mathbf{0}\)). Hence, by the same reasoning, one can simply substitute these parameters with their consistent estimates to achieve feasible inference for \(\mathbf{C}\) without sacrificing asymptotic efficiency. In summary, for this general reduced-rank null hypothesis, one can follow these steps: (i) estimate \(\mathbf{\alpha}\) and \(\mathbf{\beta}\) based on the original data \(\mathbf{y}_{t}\), (ii) obtain an estimator \(\hat{\mathbf{\alpha}}_{\perp}\) for \(\mathbf{\alpha}_{\perp}\) using the orthogonality of \(\mathbf{\alpha}\) and \(\mathbf{\alpha}_{\perp}\) together with the estimates from step (i), and (iii) apply our proposed semiparametrically optimal tests to the transformed data \(\hat{\mathbf{\alpha}}_{\perp}^{\prime}\mathbf{y}_{t}\). This procedure is similar to the one outlined in Boswijk et al. (2015, Section 2.3), except that we use our proposed tests in step (iii).
|
2302.03972 | A FPGA-based architecture for real-time cluster finding in the LHCb
silicon pixel detector | This article describes a custom VHDL firmware implementation of a
two-dimensional cluster-finder architecture for reconstructing hit positions in
the new vertex pixel detector (VELO) that is part of the LHCb Upgrade. This
firmware has been deployed to the existing FPGA cards that perform the readout
of the VELO, as a further enhancement of the DAQ system, and will run in real
time during physics data taking, reconstructing VELO hit coordinates
on-the-fly at the LHC collision rate. This pre-processing allows the first
level of the software trigger to accept an 11% higher rate of events, as the
ready-made hit coordinates accelerate the track reconstruction and consume
significantly less electrical power. It additionally allows the raw pixel data
to be dropped at the readout level, thus saving approximately 14% of the DAQ
bandwidth. Detailed simulation studies have shown that the use of this
real-time cluster finding does not introduce any appreciable degradation in the
tracking performance in comparison to a full-fledged software implementation.
This work is part of a wider effort aimed at boosting the real-time processing
capability of HEP experiments by delegating intensive tasks to dedicated
computing accelerators deployed at the earliest stages of the data acquisition
chain. | G. Bassi, L. Giambastiani, K. Hennessy, F. Lazzari, M. J. Morello, T. Pajero, A. Fernandez Prieto, G. Punzi | 2023-02-08T10:08:34Z | http://arxiv.org/abs/2302.03972v3 | # A FPGA-based architecture for real-time cluster finding in the LHCb silicon pixel detector
###### Abstract
This article describes a custom VHDL firmware implementation of a two-dimensional cluster-finder architecture for reconstructing hit positions in the new vertex pixel detector (VELO) that is part of the LHCb Upgrade. This firmware has been deployed to the existing FPGA cards that perform the readout of the VELO, as a further enhancement of the DAQ system, and will run in real time during physics data taking, reconstructing VELO hit coordinates on-the-fly at the LHC collision rate. This pre-processing allows the first level of the software trigger to accept an 11% higher rate of events, as the ready-made hit coordinates accelerate the track reconstruction and consume significantly less electrical power. It additionally allows the raw pixel data to be dropped at the readout level, thus saving approximately 14% of the DAQ bandwidth. Detailed simulation studies have shown that the use of this real-time cluster finding does not introduce any appreciable degradation in the tracking performance in comparison to a full-fledged software implementation. This work is part of a wider effort aimed at boosting the real-time processing capability of HEP experiments by delegating intensive tasks to dedicated computing accelerators deployed at the earliest stages of the data acquisition chain.
Clustering, Connected Component Labelling, FPGA, LHCb, VHDL
## I Introduction
The LHCb experiment has collected data over the past decade, during Run 1 and Run 2 of the LHC, and recently underwent a major upgrade for the current Run 3. In addition to replacing most of the subdetectors, the front-end electronics and data-acquisition system were completely renewed [1], to read out and process the complete information of the detector at the full LHC beam crossing rate of 40 MHz (30 MHz averaged over the LHC cycle). This change is motivated by the needs of the LHCb physics program, which requires the collection of low transverse momentum events that need high-level processing to be distinguished from background events [2]. This evolution takes a heavy computing toll on the new real-time processing system, motivating the deployment of innovative features, with a general trend of increasing customisation, parallelization, and early data pre-processing. A new trigger system [1, 3] was designed to allow the experiment to collect data effectively at an instantaneous luminosity of \(2\times 10^{33}\) cm\({}^{-2}\)s\({}^{-1}\), five times higher than during Run 2, corresponding to a bandwidth of about 32 Tb/s. The subsequent event-building stage and software high-level-trigger (HLT) processing lead to a data storage flow of 80 Gb/s.
The triggering process is divided into two main stages, named HLT1 and HLT2. The HLT1 uses an array of GPU servers to perform a faster event reconstruction, with the sole purpose of reducing the event rate, while retaining as much signal as possible, to a level acceptable for HLT2. The HLT2, based on an array of CPU servers, performs a complete reconstruction of events with offline-level quality, which is permanently stored for subsequent analysis. To perform its function effectively, the HLT1 needs to perform a nearly complete event reconstruction. First, it finds track segments in the VErtex LOcator detector (VELO), attaching to them hits from the further tracking stations upstream and downstream of the magnet to obtain complete tracks; then, the positions of the primary vertices of the proton-proton (\(pp\)) collisions are found, as well as those of displaced vertices that constitute the main signature of heavy flavour particle decays.
The feasibility of implementing several parts of this sequence in a specialised architecture, using programmable digital electronics co-processors (FPGAs), has been studied with the aim of achieving a faster and cheaper reconstruction, especially with a view to future runs, moving parts of it in front of the event-building stage [4].
In this article, we address the very first step in the HLT1 event reconstruction, that is, the search for clusters of active pixels in the VELO [5]. Grouping contiguous pixels in clusters is a conceptually simple but computationally demanding task, due to the two-dimensional (2D) geometry and the large number of pixels of the VELO detector (approximately 40 million). In the preliminary version of the HLT1, designed to run entirely on CPUs, this task alone consumed 17% of the time required by the complete HLT1 reconstruction sequence. Here we address this issue and describe an efficient architecture for this functionality, requiring a very modest amount of FPGA resources while providing the throughput and the performance required for its use within the LHCb DAQ system. The core ideas underlying the design of this architecture are based on studies of a FPGA-based track-finding system, performed within the INFN-RETINA R&D project [4]. The overall structure of our algorithm and its main
building blocks are rather general, and can be applied to any pixel detector. A baseline version is available for download from a public code repository [6]. However, the LHCb version contains specific features tailored to the VELO detector.
## II Format and features of VELO data
The Run 3 VELO is a silicon pixel detector consisting of 26 layers, 19 downstream and 7 upstream of the nominal point of \(pp\) collisions. Each layer consists of two modules, each read by a dedicated readout card. A module is made of four sensors, each of which is bump-bonded to three VeloPix [7] ASICs (chips), as shown in Fig. 1. The VELO front-end data arrive at the LHCb readout cards via optical links as aggregated groups of 4\(\times\)2 pixels, named SuperPixels (SPs), with binary response. Data are deserialized, decoded and sent to the data processing stage. SuperPixels are output by the detector without a well-defined time ordering, and data from different LHC beam crossings (separated by 25 ns) may arrive unsynchronized and mixed over time. The first step of the VELO data-processing firmware reorders the SPs, making sure that SPs coming from the same proton bunch crossing (event) are grouped together [8], before data are sent to the clustering stage. Reconstructed clusters are then formatted into LHCb event fragments and sent to the PCIe bus. Figure 2 shows a schematic view of the firmware [8] of the custom PCIe cards (TELL40 [9]) that perform the readout of the VELO. TELL40 cards are used as readout units for each subdetector within LHCb. Each TELL40 card carries an Altera Arria-10 GX 1150 FPGA with 1150k logic elements.
The clustering firmware was designed to take as its input the list of all active SPs found in a given event, and to produce a list of reconstructed clusters, each with the local \((x,y)\) coordinates of its centroid. In addition, it provides the detailed shape of the pixel cluster, as well as some flags indicating cluster quality. These additional quantities are not required by the HLT1 reconstruction, but are computed to allow the HLT2 to perform a fully optimised reconstruction of tracks, despite the lack of the original raw pixel data.
The size of clusters generated by individual charged particles crossing the VELO layers is less than or equal to 4 pixels in 96% of the cases, whereas larger clusters are mostly the product of merged hits or secondary emissions (\(\delta\)-rays, etc.). The distribution of cluster sizes as predicted by the LHCb Upgrade simulation (MC) [10] is shown in Fig. 3.
## III Core Architecture
The distribution in Fig. 3 implies that clusters produced by a single particle hit are often contained within a single SP word. In those cases, the reconstruction of the cluster can be performed through a look-up table, and it is therefore advantageous to perform an initial pre-processing of SPs to separate these occurrences from the others, and to send them to two distinct parallel processing blocks. The separation is performed by comparing the 2D position of each SP with that of all other SPs of the same sensor in the same event. Each SP is then flagged as "isolated" if none of its eight SP neighbours has any active pixels. The LHCb simulation predicts that isolated SPs will account for 53% of the total number of SPs at nominal Run 3 luminosity conditions.
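In software terms, the isolation test amounts to the following behavioural sketch; this is a Python model of the definition just given, not of the pipelined firmware logic (described in the isolation-flagging subsection of Sect. V).

```python
def flag_isolated(superpixels):
    """Given the (row, col) coordinates of the active SPs of one sensor and
    one event, return the subset whose eight neighbouring SPs are all empty
    (the isolated SPs)."""
    active = set(superpixels)
    isolated = set()
    for (r, c) in active:
        neighbours = {(r + dr, c + dc)
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                      if (dr, dc) != (0, 0)}
        if not (neighbours & active):
            isolated.add((r, c))
    return isolated

# (0, 0) and (0, 1) are mutual neighbours, while (5, 5) is isolated.
print(flag_isolated({(0, 0), (0, 1), (5, 5)}))  # -> {(5, 5)}
```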
Fig. 1: Illustration of the basic constituents of a VELO layer [5]. Red dots mark the origin, pixel (0,0), of the local Cartesian coordinate system of each sensor (see Sect. III). As an example, the axis orientation is displayed in both sensors of the upper module.
Fig. 2: Schematics of the TELL40 firmware. A detailed description can be found in Ref. [8].
Fig. 3: Distribution of the number of pixels per cluster.
The centroid of clusters within isolated SPs is directly calculated by means of a look-up table (LUT). Each of the possible pixel configurations within a SP is linked to the pre-calculated centre of mass of the corresponding reconstructed cluster(s). This LUT-based reconstruction allows an extremely fast processing of isolated SPs, with a very limited amount of logic resources. It is possible for up to two distinct clusters to be present within a single SP. The firmware correctly handles this case as well, generating two independent clusters in output.
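The principle of the LUT can be sketched in a few lines of Python. Note that the bit-to-pixel mapping assumed below (bit \(b\) at row \(b \bmod 4\), column \(b\,\mathrm{div}\,4\)) is an illustrative choice rather than the actual VeloPix mapping, and that the sketch returns a single centre of mass instead of splitting the rare two-cluster configurations.

```python
def centroid(hitmap):
    """Centre of mass of the active pixels of an 8-bit SP hitmap
    (assumed mapping: bit b -> row b % 4, column b // 4)."""
    pix = [(b % 4, b // 4) for b in range(8) if hitmap >> b & 1]
    if not pix:
        return None
    return (sum(r for r, _ in pix) / len(pix),
            sum(c for _, c in pix) / len(pix))

# Pre-compute the 256-entry table once; at run time a single lookup
# resolves each isolated SP, mirroring the firmware LUT.
LUT = [centroid(h) for h in range(256)]
print(LUT[0b00000011])  # two active pixels in the first column -> (0.5, 0.0)
```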
The algorithm for non-isolated SPs requires instead the concurrent processing of multiple SPs. This part of the processing is performed at the level of individual pixels, dropping the SP-based formatting of the data. Each detector pixel is mapped to a cell within an active bit matrix, set to 1 or 0 according to the pixel status. The bit matrix has a built-in logic, capable of recognising certain predetermined patterns, signalling the presence of a cluster corner at a certain pixel position. Since more than 96% of the reconstructed clusters are made of no more than four contiguous pixels, the most efficient choice for the patterns is an "L"-shaped sequence of inactive pixels with two different configurations of active pixels, with the cluster candidate contained in a 3\(\times\)3 pixel grid (Fig. 4). If one of the patterns is matched, the cluster candidate is recognised in the grid (green pixels in figure), as well as the anchor pixel (blue pixel in figure), positioned in one of the corners of the grid depending on the orientation of the sensor. The presence of a cluster corner is checked simultaneously on every bit of the matrix, and the check is completed in a single clock cycle. On the next clock cycle, the first found cluster is extracted from the matrix, and on the following clock it is pushed into the output FIFO.
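A simplified software model of the pattern matcher is sketched below. The real firmware checks the two orientation-dependent patterns of Fig. 4 on every bit of the matrix in parallel within one clock cycle; this sketch instead scans sequentially and uses a single assumed pattern, with the anchor in the top-left corner of the 3\(\times\)3 grid and the "L" of inactive pixels just above and to the left of it.

```python
import numpy as np

def find_candidates(m):
    """Scan a zero-padded binary pixel matrix for cluster candidates:
    a pixel (r, c) is taken as an anchor when it is active and the
    L-shaped border above and to the left of its 3x3 grid is empty."""
    found = []
    rows, cols = m.shape
    for r in range(1, rows - 2):
        for c in range(1, cols - 2):
            border = list(m[r - 1, c - 1:c + 3]) + list(m[r:r + 3, c - 1])
            if m[r, c] and not any(border):
                found.append(((r, c), m[r:r + 3, c:c + 3].copy()))
    return found

mat = np.zeros((8, 14), dtype=int)  # 6x12 core surrounded by zero-fixed edges
mat[3:5, 4:6] = 1                   # a single 2x2 cluster
(anchor, grid), = find_candidates(mat)
print(anchor, grid.sum())           # -> (3, 4) 4
```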
This highly parallelized mechanism is key to the successful operation of the architecture at the extremely high throughput levels required by the LHC collision rate. However, the amount of FPGA logic resources needed to implement a complete bit matrix map of the approximately 40 million pixels of the VELO detector would be excessive. To overcome this problem, a sparse representation of the bit matrix is adopted, breaking it down into a set of small matrices of fixed size, that get dynamically allocated only for the regions of the detector that contain active pixels. After some optimization studies, accounting for the expected detector occupancy, the average size of reconstructed clusters1, and computational requirements, the size of each small matrix has been chosen to cover the same area as 3\(\times\)3 SPs, that is, 6\(\times\)12 individual pixels (Fig. 5). In order to reconstruct cluster candidates having an anchor pixel lying near the edge, each matrix is surrounded by edges of registers fixed at zero, as shown in Fig. 6. These edges are not used during the filling process, but are necessary to determine the 3\(\times\)3 cluster candidate when there are active pixels at the edge of the 3\(\times\)3 matrix. An example of such a configuration is shown in Fig. 6b. The width of the edges is determined by the VELO sensor number, the allowed patterns, and the cluster candidate topology (Fig. 4).
Footnote 1: The expected occupancy of the VELO sensors is around 0.125% [5] in the regions closest to the beam pipe. A cluster is made of 2 pixels on average.
To allow the allocation of matrices to proceed in real time without any delay, matrices are organised in a sequential chain for each VELO sensor, with SP data flowing continuously along the chain at the same rate as they are fed into the clustering block. All matrices are initialised as empty. When a SP arrives at an empty matrix, it fills the center of the matrix and it defines the physical location of the matrix inside the VELO detector, as well as the set of coordinates of the other SPs that can fill it. The allocated position of the matrix is checked against the coordinates of every SP going through the chain. If a SP belongs to the region inside the matrix, it is used to fill its appropriate location, otherwise it moves forward along the input line. Eventually, every SP gets stored in some matrix of the chain. An explanatory diagram illustrating the mechanism is shown in Fig. 5.
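The chain allocation can be modelled behaviourally as follows; this Python sketch expresses the region in SP units (a 3\(\times\)3-SP area centred on the first SP) and omits the two parallel input lines and the back-pressure of the firmware.

```python
class ChainMatrix:
    """One element of the chain: the first SP it receives defines a 3x3-SP
    region centred on that SP; later SPs falling inside the region are
    absorbed, all others are passed further down the chain."""
    def __init__(self):
        self.origin = None                  # top-left SP of the claimed region
        self.sps = []

    def offer(self, sp):
        r, c = sp
        if self.origin is None:             # empty matrix: initialise
            self.origin = (r - 1, c - 1)    # the incoming SP becomes the centre
            self.sps.append(sp)
            return True
        r0, c0 = self.origin
        if 0 <= r - r0 < 3 and 0 <= c - c0 < 3:
            self.sps.append(sp)             # SP belongs to this region
            return True
        return False                        # move forward along the chain

def fill_chain(sps, n_matrices=20):
    chain = [ChainMatrix() for _ in range(n_matrices)]
    overflow = []                           # later resolved as if isolated
    for sp in sps:
        if not any(m.offer(sp) for m in chain):
            overflow.append(sp)
    return chain, overflow

chain, extra = fill_chain([(10, 10), (10, 11), (40, 7)])
print(chain[0].sps, chain[1].sps, extra)  # [(10, 10), (10, 11)] [(40, 7)] []
```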
When the input flow of SPs has ended, data from each matrix is copied in a single clock cycle to a twin matrix (pattern recognition matrix) where cluster finding is performed. In this way, the input matrix is ready to accept data from the next event immediately. The pattern recognition of all potential cluster candidates in this twin matrix is then performed, and the local coordinates of the centroid, with respect to the anchor pixel position, of each found cluster are determined using a LUT. The absolute position of the cluster candidate is obtained as the sum of three vectors of coordinates: the position of the matrix with respect to the sensor, the position of the anchor pixel with respect to the matrix, and the position of the reconstructed cluster with respect to the anchor pixel.
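In code, this final coordinate assembly reduces to a plain vector sum (names below are illustrative only):

```python
def cluster_position(matrix_in_sensor, anchor_in_matrix, centroid_from_lut):
    """Absolute position = matrix offset within the sensor + anchor-pixel
    offset within the matrix + LUT centroid relative to the anchor pixel."""
    return tuple(m + a + f for m, a, f in
                 zip(matrix_in_sensor, anchor_in_matrix, centroid_from_lut))

print(cluster_position((128, 300), (2, 5), (0.5, 0.25)))  # -> (130.5, 305.25)
```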
The clustering algorithm described has several parameters that can be tuned to optimise its performance in terms of speed, efficiency and quality of the reconstruction. The shape and size of the matrix are determined by how non-isolated SPs are arranged together, whereas the distribution of the number of SPs with neighbours per event sets the number of matrices that need to be instantiated. The implementation of the above algorithm as FPGA firmware does not allow the number of matrices to be dynamically adjusted to cope with the variable number of non-isolated SPs per event. However, we determined from LHCb simulations that a fixed number of 20 matrices per VELO sensor is sufficient to ensure that less than 0.1% of the SPs overrun the matrix chain at the nominal Run 3 instantaneous luminosity (\(2\times 10^{33}\) cm\({}^{-2}\)s\({}^{-1}\)), and this number was adopted for the final implementation. SuperPixels exceeding this limit are not discarded. Instead, partial information is extracted from them, by resolving them via a LUT as if they were isolated. This approach avoids inefficiencies, at the expense of a slight increase in the number of split clusters, since more than one cluster ends up being reconstructed from a single group of neighboring pixels when they happen to be spread over multiple SPs [11, 12]. These clusters are flagged in the output as "non-isolated", to allow the reconstruction algorithms in the HLT to deal with them properly.
Fig. 4: Pixel patterns used to identify a cluster candidate. The patterns are optimised for the sensor mounting orientation. See Refs. [11, 12] for further details.
Fig. 5: Sketch of the matrix filling mechanism for non-isolated SPs. SPs with the same colour (label) are neighbours with active pixels. The blue SP (B) fills the first matrix in the line that is already populated with one of its neighbours. The green SP (G) does not belong to any of the already populated matrices, so it moves forward. The orange SP (O) has reached a non-initialised matrix, so it fills the centre.
The LHCb experiment also foresees collecting data for heavy-ion collisions [13]. A modified version of the clustering architecture will be used to cope with the higher number of SPs (around six times larger than in \(pp\) collisions). Due to the limited amount of FPGA resources, the same matrices will be used multiple times to accommodate all SPs. The cluster reconstruction of a heavy-ion event requires more time than the \(pp\) case, given the higher number of SPs; however, thanks to the much lower interaction frequency (50 kHz), the firmware can easily provide the necessary throughput also in this case.
## IV Physics performance
The FPGA cluster-finding architecture was designed with the intent of replacing the raw pixel data with reconstructed hit coordinates at the detector readout level. Except when running in debug mode (which preserves the full original information together with the cluster data), the raw pixel data are discarded and cannot be recovered at any later stage. Extensive simulation studies were therefore performed to assess the effect of using the FPGA-reconstructed clusters on the physics performance of the LHCb reconstruction, both at the HLT1 and HLT2 stage. This was compared to the alternative scenario in which the VELO hits are reconstructed by a full-fledged software reconstruction within the HLT system, free from all the constraints imposed by the FPGA architecture, and from the severe throughput requirements of operating at pre-build level (30 MHz vs. 0.17 MHz, where a farm of about 170 GPUs is assumed for HLT1).
For the sake of generality, comparisons are made to a CPU-based clustering algorithm that is free from implementation-specific constraints2. The key differences between the FPGA and CPU algorithms that can potentially affect the reconstruction performance are the cluster finding mechanism, the maximum cluster size in the FPGA algorithm (limited to a 3\(\times\)3 pixel grid), and the constraints of the FPGA matrix filling scheme. They can potentially lead to inefficiencies, cluster splitting, or incomplete reconstruction of some clusters. An example of partial cluster reconstruction is illustrated in Fig. 7a, where the red pixel is left out of the reconstructed cluster. The shift of the reconstructed hit position may lead to a degradation of the precision of the reconstruction of the particle trajectory, or even to a loss of efficiency if the associated track is not reconstructed at all. Figure 7b shows an example of cluster splitting, where the algorithm finds two clusters, with a pixel in common, from six contiguous active pixels.
Footnote 2: The actual HLT1 implementation at LHCb is GPU-based, but its performance is indistinguishable from the CPU version we take as reference.
To perform the studies, a faithful software simulation of the FPGA-based clustering algorithm has been produced. This simulation was integrated within the official LHCb software simulation, and a CPU-FPGA comparison was performed on a sample of 50,000 \(pp\) bunch crossings3, at an instantaneous luminosity of \(2\times 10^{33}\)\(\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\), corresponding to a total of \(10.4\times 10^{7}\) SPs and \(7.1\times 10^{7}\) clusters, generated at the foreseen LHCb-Upgrade running conditions with a centre of mass energy of 14 TeV. The efficiency in reconstructing VELO clusters is defined as the ratio between the number of MC hits matched to a cluster and the total number of MC hits. Only the MC hits that produce enough charge in the detector to activate at least one pixel are considered. A MC hit is matched to a cluster if they share at least one pixel. The efficiency in reconstructing VELO clusters of the FPGA-based algorithm is about 99.8%, and almost indistinguishable from that of the CPU algorithm, as illustrated in Fig. 8. Here, the efficiency of reconstructing clusters from tracks that can be reconstructed using information from VELO hits only is shown as a function of the pseudo-rapidity (\(\eta\)) and momentum of the tracks. The overall FPGA cluster inefficiency, with respect to the CPU algorithm, is below 0.1% within the LHCb geometrical acceptance (\(2<\eta<5\)).
Fig. 6: Matrix edges and pattern orientations (a) for sensors 0 and 3 and (b) for sensors 1 and 2.
Fig. 7: Example of corner cases of the FPGA clustering algorithm: (a) partial cluster reconstruction and (b) cluster splitting.
Footnote 3: This corresponds to an average number of \(pp\) interactions per bunch crossing \(\nu=7.6\).
The quality of the reconstructed clusters is also studied by looking at the distributions of cluster residuals. The residual is defined as the distance between the position of the reconstructed cluster and the true coordinates of the hit generated by the passage of the particle on the associated detector layer. A comparison between residual distributions of reconstructed clusters, between the CPU and FPGA algorithm, is shown in Fig. 9. Distributions are plotted over the \(x\) coordinate, in the LHCb global reference frame4. The two distributions are indistinguishable in the core, with very small differences in the tails. It is also checked that most of the non-reconstructed hits are of inferior quality, sitting in the tails of the resolution curve.
Footnote 4: Similar results are obtained for the \(y\) coordinate.
Extensive studies are also performed to measure the quality of the full track reconstruction when FPGA VELO clusters are used. The trajectories of the charged particles traversing the tracking system are reconstructed from hits in some of the three tracking detectors, that is, the VELO, the Upstream Tracker (UT) placed upstream of the magnet, and the SciFi detector placed downstream of the magnet [14]. Tracks reconstructed using only hits from the VELO detector are called VELO tracks.5 VELO tracks with \(2<\eta<5\) can also have hits in the SciFi detector, and optionally in the UT. These tracks are called "long tracks". As they traverse the whole magnetic field of the LHCb detector, they have the most precise measurement of the momentum and are therefore key for physics analyses. Table I shows a comparison between the CPU- and FPGA-based reconstruction performances for VELO tracks and for the VELO segment of long tracks. It also reports the relative fraction of clone reconstructed tracks with respect to the total number of tracks in the category they belong to, and the relative fraction of ghost reconstructed tracks with respect to the total number of tracks. A clone is defined as any additional reconstructed track matching an already truth-matched Monte Carlo track, whereas a ghost is a reconstructed track not associated to any true Monte Carlo track [15]. The efficiencies and clone fractions are almost indistinguishable when comparing CPU and FPGA algorithms for VELO and long tracks, not displaying any perceptible systematic difference. The fractions of ghost tracks differ at the per-mille level. This difference is due to tracks in the pseudorapidity region below 1.5. These tracks graze VELO sensors at a very low angle, and produce very large clusters. For this reason, the position of the particle hitting the detector and creating the cluster is unlikely to be accurately measured whatever the clustering algorithm.
Footnote 5: VELO tracks can also have \(\eta<2\), in which case they are used only for the primary vertex reconstruction.
The quality of the reconstructed tracks is also studied in terms of momentum, primary vertex and impact parameter resolutions, as shown in Fig. 10.
Fig. 8: Efficiency in reconstructing clusters as a function of (top) the pseudo-rapidity and (bottom) the momentum of the associated tracks for the CPU- and FPGA-based algorithm. Clusters from tracks that can be reconstructed using only information from VELO hits are shown. The blue histograms show (top) the pseudo-rapidity and (bottom) the momentum distributions of the tracks.
Fig. 9: Cluster residual distributions for the CPU and FPGA based clustering algorithms. Distributions are normalized to unity.
In conclusion, all studies have shown that FPGA-reconstructed clusters lead to a quality of track reconstruction that is effectively indistinguishable from the software reconstruction.
## V Implementation details and integration
Given the indistinguishable performance of the FPGA-based clustering algorithm with respect to the software-based one, the LHCb collaboration decided to integrate the cluster finder architecture within the TELL40 cards that perform the readout of the VELO, exploiting spare FPGA resources not utilised by the readout firmware. The VELO time-ordering firmware, plus the common LHCb firmware, takes up about 44% of the FPGA's logic resources and 64% of its M20K memory blocks.
Each VELO TELL40 receives data from a single VELO module. Two independent and identical parallel processing chains are implemented in the FPGAs, each of which receives and processes data from one VELO half-module [8]. The clustering architecture, with all needed ancillary logic, is integrated as a self-contained block at the end of each chain, and it has, therefore, two identical instances running in parallel, in analogy with the readout firmware (Fig. 11). The output of the clustering is transmitted out of the readout card via its PCIe interface to the host server, which assembles the data information from different subdetectors for each event.
The clustering architecture is itself composed of several units, each devoted to a specific task (Fig. 11). Firstly, a decoding and flagging stage splits data into separate streams, while flagging isolated SPs. Secondly, a pair of switch blocks sends data to the cluster processing blocks. Reconstructed clusters are then finally encoded with the chosen output format. A back-pressure mechanism is implemented throughout the pipeline: each processing block sends a "ready" signal to the previous unit when it is capable of receiving data.
A detailed description of the firmware implementation, its integration and commissioning within the LHCb data acquisition, can be found in Ref. [12].
### _Clock domains_
Each unit in an instance of the clustering architecture writes its output to a FIFO that is read by the subsequent unit. The purpose of the FIFOs is twofold. Firstly, they allow buffering and flow control between the clustering units. In addition, FIFOs allow data synchronisation between different clock domains. In our application, the decoder and the encoder stages run on a 250 MHz clock, whereas the switch and clustering processing block use a 350 MHz clock. These values have been chosen to ensure that the system as a whole can provide a throughput in excess of 30 MHz (see Sect. VI), while still respecting the timing constraints due to internal signal propagation in all of its parts.
### _Data formats_
Active SPs are encoded by 32-bit words. Each word contains the pixel hitmap (8 bits), the SP position inside the sensor (15 bits) and the sensor identifier within the sensor pair (1 bit). Each VELO sensor is made of 256\(\times\)768 pixels. Each SP is composed of 4\(\times\)2 pixels, such that 6 bits are needed to specify the SP row whereas 9 bits are required for the column. One extra bit is needed to identify the source sensor, as each data chain receives SPs from two sensors.
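A software model of this decoding could look as follows; the exact bit ordering within the 32-bit word is not specified in the text, so the field offsets below (hitmap in the low byte, then column, row and sensor bit) are an assumption made for illustration.

```python
def decode_sp(word):
    """Unpack a 32-bit SuperPixel word (assumed layout: bits 0-7 hitmap,
    8-16 SP column, 17-22 SP row, 23 sensor identifier)."""
    hitmap = word & 0xFF
    col    = (word >> 8)  & 0x1FF  # 9 bits: SP column within the sensor
    row    = (word >> 17) & 0x3F   # 6 bits: SP row within the sensor
    sensor = (word >> 23) & 0x1    # which of the two sensors on the chain
    return hitmap, row, col, sensor

word = (1 << 23) | (10 << 17) | (300 << 8) | 0b00000110
print(decode_sp(word))  # -> (6, 10, 300, 1)
```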
Fig. 10: Track reconstruction resolutions for the CPU and FPGA based clustering algorithms: (top) momentum resolution as a function of the momentum, (middle) primary vertex resolution along the beam direction as a function of the number of tracks in the reconstructed primary vertex and (bottom) impact parameter resolution along the horizontal direction as a function of the inverse of the transverse momentum. Impact parameter resolutions are fitted with a linear function. The blue histograms show the distributions of the (top) momentum of the reconstructed tracks, (middle) number of reconstructed tracks per primary vertex and (bottom) inverse of the transverse momentum of the reconstructed tracks.
Clusters are encoded in 32-bit output words, as sketched in Fig. 12. Of these, 22 bits are used to specify the position of the cluster centroid, with 18 bits specifying the position of the pixel where the cluster centroid is located (Integer column and Integer row), and an additional 4 bits are used to specify the position of the centroid within the pixel, in units of \(1/4\) of a pixel (Frac col and Frac row). Analogously to SP data, 1 bit is used to identify the sensor (ID). Eight additional bits are used to encode a cluster-topology identifier (Topology ID) and the reconstruction-quality flags (Flags). The topology identifier is used to distinguish cluster topologies that share the same centroid position within the pixel, so that the full cluster topology can be retrieved. If the cluster is reconstructed from an isolated SP (bit 30 = 1 in Fig. 12 top), six bits are used to store the topology identifier, whereas five bits are needed to store the identifier for clusters reconstructed through the matrices (bit 30 = 0 in Fig. 12 bottom). The cluster topology information is used both for monitoring purposes and for the ultimate optimisation of tracking performance, as the uncertainty associated with the 2D position of a cluster depends on its topology. The reconstruction-quality flags make it possible to distinguish between: clusters from isolated SPs, clusters reconstructed inside a matrix, and clusters built from SPs overflowing the maximum number of instantiated matrices (which are arbitrarily treated as isolated). For clusters reconstructed within matrices, the word contains two additional quality flags, which specify whether a cluster was fully contained in the \(3\times 3\) grid, and whether the grid touched the boundary of the host matrix (which potentially means that the reconstructed cluster is a fragment of a larger cluster).
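The field packing can be modelled as below; the description fixes the field widths but not their exact positions within the word, so the offsets chosen here (fractions in the lowest bits, then integer column and row, sensor bit, topology bits) are an assumption for illustration only.

```python
def decode_cluster(word):
    """Unpack a 32-bit cluster word (assumed order, low to high: 2-bit
    fractional column, 2-bit fractional row, 10-bit integer column,
    8-bit integer row, 1-bit sensor ID, 7-bit topology/flag field,
    bit 30 = isolated-SP flag, bit 31 reserved).  Fractions are in
    units of 1/4 pixel, as stated in the text."""
    frac_col = (word & 0x3) / 4.0
    frac_row = ((word >> 2) & 0x3) / 4.0
    int_col  = (word >> 4)  & 0x3FF
    int_row  = (word >> 14) & 0xFF
    sensor   = (word >> 22) & 0x1
    topo     = (word >> 23) & 0x7F
    isolated = bool((word >> 30) & 0x1)
    return (int_row + frac_row, int_col + frac_col, sensor, topo, isolated)

w = (1 << 30) | (5 << 23) | (1 << 22) | (100 << 14) | (640 << 4) | (1 << 2) | 2
print(decode_cluster(w))  # -> (100.25, 640.5, 1, 5, True)
```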
### _Input-output interfaces_
Our architecture block requires a "valid" signal to confirm the validity of the current input data word. Additional start of package (SOP) and end of package (EOP) signals allow to separate data coming from different events. The SOP signal is received with the first word of every event, whereas the EOP is generated together with the last input word. The "valid", SOP and EOP signals are also present on the output side, where clusters are transmitted.
### _Decoder block_
Input data to the clustering firmware arrive grouped in 256-bit words, each carrying 8 SP words. The first block is a decoder, which splits the 256-bit words into eight 32-bit streams. The decoder is also responsible for converting the SOP-EOP protocol to the EndEvent (EE) protocol used within the clustering architecture: a 32-bit EE word is interposed between SPs of different events in all the 8 data streams. Each EE word carries a specific flag to distinguish it from SPs, and an event identifier (5 bits), that can be used during subsequent data processing to cross-check data synchronisation.
### _Isolation flagging_
Within the decoder block, SPs are flagged with an isolation bit. The flagging process includes five steps: read, buffer, load, flag and write, arranged in a pipeline as shown in Fig. 13.
First, all SPs of a given event are read and stored into registers. The maximum number of SPs that can be stored in the read registers is not dynamically adjustable. It has been set to 144, based on the distribution of the expected number of SPs in the most crowded VELO module, requiring more than 98% of the LHCb simulated events to be accommodated into the read registers. In events where the number of SPs exceeds the size of the read registers, SPs are not sent to the flagging process but are instead bypassed and sent directly to the matrix chains. This causes a local slowdown of the entire chain, as the matrices need to reconstruct clusters from a high number of SPs. This effect is included in the measurement of the average throughput of the entire system. In addition, as the number of input SPs increases, the fraction of SPs overflowing the number of available matrices gets higher, increasing the number of split clusters. The corresponding increase in split clusters is also accounted for in the evaluation of the reconstruction performance.
Fig. 11: Basic blocks of the clustering architecture. VELO data are received as 256-bit words, each containing 8 SPs. A "Data Valid" signal states whether the incoming data are valid. Start of package (SOP) and end of package (EOP) signals delimit the start and the end of the data corresponding to each event. The clustering block sends a ready signal to the previous architecture component when it is ready to accept data. The "decoder and isolation flagging" splits the 256-bit bus into eight 32-bit wide buses, each containing one SP. It also flags SPs that do not have any active neighbour SPs (isolation flag). A pair of switches arrange SPs by sensor (S0/S1) and by isolation flag (IF). The "clustering isolated" and "clustering matrices" blocks reconstruct clusters, which are encoded back into 256-bit words by means of an encoder.
Fig. 12: Data formats for (top) clusters reconstructed from isolated SPs and (bottom) clusters reconstructed from non-isolated SPs. Bit 31 (Res) is reserved for internal use.
As soon as all SPs of one event have been received, the content of the read registers is copied to the buffer. This data exchange decouples the reading and flagging operations, allowing SPs of one event to be read in while the flagging of the previous event is still ongoing. The flagging process compares the coordinates of each SP to the ones of the other SPs in the same event. A status vector is used to store the isolation flag for each SP: if two SPs are found to be neighbours, the corresponding bits in the status vector are set to 1. SP comparisons are not all performed in a single clock cycle. On each clock, the load block extracts two subsets of 16 SPs each from the buffer (Fig. 13). For each SP in the two subsets it also computes the set of coordinates to be matched by the neighbours by one-unit additions and subtractions of the coordinates of the SP row and column. The two SP subsets, together with the coordinates of the neighbours, are passed to the flag block that performs the \(16\times 16\) comparisons on the two subsets. For each SP of the first subset, the flag block checks if the SP row is equal to one of the rows of the SPs in the second subset or to the row above or below; the same check is performed on columns. If both the row and the column checks yield a positive result, the two SPs are flagged as neighbours, and the corresponding bits in the status vector are set to 1. On each clock cycle, the load block selects a different pair of SP subsets from the buffer sending them to the flag block, until all possible combinations of 16-SPs subsets have been checked. The described architecture allows reusing the same logic resources while updating the SP subsets to be flagged at each clock cycle. To perform the comparisons between \(n\) 16-SP blocks, \(n(n+1)/2\) clock cycles are needed.
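The scheduling of the comparisons can be modelled as follows; iterating over all unordered pairs of blocks, including each block paired with itself, gives exactly the \(n(n+1)/2\) cycle count quoted above.

```python
from itertools import combinations_with_replacement

def flag_neighbours(sps, block_size=16):
    """Behavioural model of the flag stage: compare blocks of SPs pairwise
    and mark every SP that has a neighbour (status 1 = not isolated)."""
    blocks = [sps[i:i + block_size] for i in range(0, len(sps), block_size)]
    status = {sp: 0 for sp in sps}
    n_cycles = 0
    for a, b in combinations_with_replacement(range(len(blocks)), 2):
        n_cycles += 1                        # one clock cycle per block pair
        for (r1, c1) in blocks[a]:
            for (r2, c2) in blocks[b]:
                if (r1, c1) != (r2, c2) and abs(r1 - r2) <= 1 and abs(c1 - c2) <= 1:
                    status[(r1, c1)] = status[(r2, c2)] = 1
    return status, n_cycles                  # n blocks -> n*(n+1)/2 cycles

status, cycles = flag_neighbours([(0, 0), (0, 1), (5, 5)], block_size=2)
print(status, cycles)  # {(0, 0): 1, (0, 1): 1, (5, 5): 0} 3
```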
The number of parallel comparisons performed for every clock cycle is the result of a trade-off between resource usage and throughput, and is based on the constraints of its use within the LHCb experiment.
As soon as all comparisons have been completed, the contents of the flagging registers and the status vector are copied to the write block, thus decoupling the flag and write processes. The write block is responsible for adding the isolation flag to the SP words and for sending flagged data to the next component, the switch. The data exchange within the read-buffer-load-flag-write pipeline is regulated by back-pressure: if a component cannot accept the data of an event because it is still processing the previous event, the control unit keeps the previous component on hold.
The decoder block, including flagging and bypass, uses 7% of the logic and 1% of the M20K memories available in an Arria 10 FPGA.
### _Switch block_
The cluster processing chain receives SPs from both sensors of a VELO half-module. The switch, placed after the decoder, arranges SPs by sensor and by isolation flag, feeding them to the appropriate cluster-reconstructing blocks. Each of the two switching units shown in Fig. 11 performs a \(4\to 4\) switching, allowing every input data word to be directed to any of the four output streams according to its flags, regardless of the origin input stream. The basic switch constituents are the splitter and the merger (Fig. 14). The former has one input and two outputs, and it sends input data to one of the two outputs according to their isolation flag or origin sensor. The latter has two inputs and one output, and it routes two inputs into a single output line. Two splitters and two mergers combine to form a \(2\to 2\) dispatcher. To implement a switch with \(2^{n}\) inputs/outputs, \(N\) 2-way dispatchers connected together are needed, where
\[N(n)=2N(n-1)+2^{(n-1)}.\]
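Unrolling the recurrence (with \(N(1)=1\) for a single 2-way dispatcher) gives the closed form \(N(n)=n\,2^{n-1}\): 1 dispatcher for a 2\(\times\)2 switch, 4 for the 4\(\times\)4 unit of Fig. 14, 12 for an 8\(\times\)8 switch, and so on. A two-line check:

```python
def n_dispatchers(n):
    return 1 if n == 1 else 2 * n_dispatchers(n - 1) + 2 ** (n - 1)

print([n_dispatchers(n) for n in (1, 2, 3, 4)])  # [1, 4, 12, 32] = n * 2**(n-1)
```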
The block diagram of the splitter is shown in Fig. 15. The splitter is based on a finite state machine (FSM). The next state is determined by the R0 register state, the arrival of valid input data and the hold state of the following processing block. On the arrival of valid input data, the FSM decides between sending it directly to the output and storing it in the R0 register, based on the input hold signal. In the latter case, a latch enable (LE) write signal is sent to the register. A multiplexer controlled by the FSM routes data to the output. If a SP is received, then one of the two valid signals is set to 1, according to the routing scheme (isolation flag or sensor). If an EE signal arrives, it is sent to both outputs. The input hold signal determines whether data can be sent to the output. An output hold is generated as long as the R0 register is full, since no more data can be accepted as input, given the possibility of an input hold signal assertion.
Fig. 13: Block diagram of the isolation flagging.
Fig. 14: Block diagram of a 4 to 4 switching unit.
The block diagram of the merger is shown in Fig. 16. As for the splitter, a FSM determines if input data can be sent directly to the output or must be stored in appropriate registers (R0 and R1). If an EE word arrives on one of the inputs, it is stored until a second EE word arrives on the other input. The two EE words are then compared and, if their event IDs match, a single EE word is output; otherwise, a sync error signal is set to 1.
### _Cluster reconstruction_
All isolated SPs, identified by the switch, are sent to the corresponding clustering block, and are resolved by means of a LUT, as shown in Fig. 17.
The LUT reconstructs the cluster centroid from the active-pixel hitmap extracted from the SP word. The cluster word is built by combining the LUT output with the original SP row and column. If two different clusters are reconstructed inside an isolated SP, a bit is set to 1, and the two outputs are combined by a merger block (Fig. 16). Reconstructed clusters are then sent to the output FIFO.
The reconstruction of clusters from non-isolated SPs requires two different processing steps. Input data are first sent and distributed in the matrix chain, and then, when matrices have been filled with SPs, the actual reconstruction of the clusters takes place, as shown in Fig. 18. In order to ensure a high throughput, each matrix receives data from two parallel input lines. Each input line is combined with a hold signal, which is propagated backwards through the whole chain to control the data flow by back-pressure to avoid data loss. As the first SP populates a matrix, a set of coordinates is calculated and stored, to be matched with all further SPs arriving at the same matrix. The initialization of an empty matrix is done using only one of the two input lines, since only a single SP can enter the center of the matrix at a time. A second SP coming simultaneously from the other parallel line would need the coordinates of free slots to fill the matrix, which cannot be immediately available due to timing constraints. For this reason, input line 0 (Fig. 18) has priority over input line 1 during a matrix initialization. In order to keep a good load balancing, input lines are swapped when going from one matrix to the next: line 0 of a matrix feeds line 1 of the next matrix and vice-versa. When EE words have arrived on both input lines, the content of the matrix is moved to the cluster finder block. An error is raised if two different EE signals are detected.
Fig. 15: Splitter block diagram. R0 and State are registers, MUX is a multiplexer and FSM is a finite state machine that manages hold, valid, control and latch enable (LE) write signals.
Fig. 16: Merger block diagram. R0, R1 and State are registers, MUX is a multiplexer and FSM is a finite state machine that manages hold, valid, control and latch enable (LE) write signals.
Fig. 17: Cluster reconstruction of isolated SPs by means of a LUT.
Fig. 18: SP distribution in a matrix chain. Clusters are reconstructed through the cluster finder block and merged into a FIFO.
During the second step, the cluster finder block processes the content of the corresponding filled matrix. Figure 19 shows the logic of how clusters are reconstructed, starting from the matrix pixel content. Each pixel in a matrix checks if it belongs to one of the L-shaped patterns of the algorithm through the pixel checker block. This process is performed in parallel at full speed for each pixel in each matrix. When a pattern match is found, an anchor pixel in the matrix is identified. As a consequence, the bit in the pixel flag vector corresponding to the position of the anchor pixel in the matrix is set to 1. An encoder reads the pixel flag vector content and passes the addresses of all found anchor pixels to a multiplexer, one at a time. The multiplexer extracts the 3\(\times\)3 cluster candidate corresponding to the address received from the encoder. As soon as an anchor pixel has been processed and the corresponding cluster candidate found, the decoder block receives the pixel address from the encoder and resets the corresponding bit to zero in the pixel flag vector. The reset operation is performed by means of the pixel flush signal. For each cluster, a word containing the matrix coordinates, the anchor pixel position and the 3\(\times\)3 cluster candidate is written to the matrix FIFO. A merger reads the cluster candidates from all the matrix FIFOs and sends them to a LUT, which computes the centroid of each cluster (Fig. 18). The cluster position is obtained by combining the matrix position in the detector, the anchor-pixel position in the matrix and the LUT output. The cluster words are then saved into a FIFO that contains all the clusters from non-isolated SPs of a VELO sensor that do not overflow the matrix chain. The two data lines at the end of the matrix chain which carry overflow SPs are merged into a single line. Overflow SPs are reconstructed as if they were isolated by means of a LUT, and the reconstructed clusters are stored into a FIFO.
### _Encoder_
The last processing block of the clustering architecture is devoted to encoding the eight separate 32-bit data streams into a single 256-bit bus, to comply with the required output format. The encoder architecture has been designed as a trade-off between speed and bandwidth optimisation. The encoder is required to output a 256-bit word at each clock cycle, to maintain a throughput larger than 30 MHz. Given the speed constraint, the SP packing performed by the encoder is not optimal in each event, interleaving zero-padded words in between 256-bit words to match the output width. To build the complete 8-to-1 encoder, seven 2-to-1 encoders are instantiated, as shown in Fig. 20. The 2-to-1 encoder block puts together two input data lines (N + N bits) into a single output (2N bits) by means of buffer registers (R0, R1 and R3) and a control FSM (Fig. 21). If two cluster words are received and no hold signal is asserted by the subsequent block, the two words are packed together and sent out. If a single cluster is received, it is stored in the R3 register and matched with the next input cluster. If a hold signal is received, the incoming cluster is stored in the R0 or R1 register depending on its input line. In case an odd number of words is received within an event, a zero-padded word is added to match the 2N output width. When two EE signals are received, they are compared and, if they match, sent out. An error signal is generated otherwise.
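Its packing behaviour can be modelled as below; this is a behavioural Python sketch that leaves out the FSM, the hold signals and the EE-word checks of the firmware.

```python
def pack_pairs(words, width=32):
    """Behavioural model of a 2-to-1 encoder: pack a stream of N-bit words
    belonging to one event into 2N-bit words, zero-padding the last output
    word when the event contains an odd number of inputs."""
    out = []
    for i in range(0, len(words), 2):
        lo = words[i]
        hi = words[i + 1] if i + 1 < len(words) else 0  # zero padding
        out.append((hi << width) | lo)
    return out

print([hex(w) for w in pack_pairs([0x1, 0x2, 0x3])])
# -> ['0x200000001', '0x3']  (odd count: high half of last word is padded)
```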
### _Monitoring and error handling_
The clustering architecture has several blocks whose behaviour affects the functioning of the entire data processing chain. A monitoring procedure is, therefore, implemented to probe each block throughout the whole reconstruction process to ensure correct data handling. Between each pair of blocks in the diagram illustrated in Fig. 11, a FIFO is inserted as a buffering element. It decouples the data writing process of the previous input block from the data reading of the subsequent output block, absorbing local processing rate fluctuations.
The occupancy levels of all interposed FIFOs, as well as their maxima over a certain time interval, are periodically read to check for, and diagnose, possible slowdowns of any processing blocks. The fraction of SPs overflowing the matrix chain is also monitored. Each processing block is also equipped with an error-checking logic, which monitors two types of errors. The first type corresponds to a data loss, occurring when a block receives valid data at its input and the register in which data should be written is already full. The second type occurs when mismatching EE signals are received, indicating a loss of synchronisation in the input data. In both cases, a signal is generated and an error word is output, containing a code to trace back the origin of the error for debugging purposes. A reset signal needs to be sent to the clustering logic and memories to recover from both error types.
Fig. 19: Cluster finder block diagram and its data flow.
Fig. 20: Structure of an 8-to-1 encoder built from 2-to-1 encoders.
## VI FPGA resource usage and throughput
The clustering architecture was initially compiled and tested standalone on a Stratix V based prototyping board [16]. The FPGA device mounted on the prototyping board has a similar amount of logic and memory resources, and a similar clock speed, to the Arria 10 carried by the TELL40 readout boards. During the test, the firmware was fed with simulated SP data from RAM memories that are read in a loop. The clusters reconstructed in hardware were compared to the output of the high-level C++ simulation of the algorithm, run on the same set of input SPs. The quality of the reconstruction and the reliability of the measurements were verified.
The firmware can process events with up to an average of 32 SPs per VELO half-module, using a 350 MHz clock rate. This condition is met for the whole VELO detector, where the average occupancy is 26 SPs per event, near the nominal interaction point. An average event processing rate of 38.9 MHz is measured on minimum-bias LHC collision events, in the most crowded VELO module. The measurement is also performed on \(pp\) collisions with higher than average track multiplicity, containing reconstructible \(B_{s}^{0}\rightarrow\phi\phi\) decays, as a sample of typical data that the LHCb DAQ would select and save on permanent storage. The measured throughput of 30.9 MHz is still higher than the average LHC bunch crossing rate, and ensures that even a random fluctuation leading to the occurrence of several high-occupancy events in a row poses no risk of clogging the pipeline. The clustering firmware is therefore expected to run safely throughout the entire Run 3 physics data taking.
Compiling the entire VELO firmware within the Arria 10 allows the measurement of the amount of resources needed to perform clustering in real time. The clustering firmware requires roughly 31% of the logic and 11% of the memory of an Arria 10 chip to process an entire VELO module. After standalone validation, the clustering firmware was combined and fully integrated with the readout firmware to build the complete VELO readout firmware. Additional features were added in the integration process, such as the handling of global LHCb control signals, the response to errors, and an optional bypass that allows both the SP and the cluster data to be output for debugging purposes. SPs are then fed to the LHCb simulation, which outputs software-based clusters that are then compared to firmware-based ones. The bypass option will be enabled periodically, or in case of a need to debug, during data taking to check that the firmware is reconstructing clusters correctly. The final overall chip occupancy turns out to be about 75%. Some tuning was required to fix timing violations occurring due to the large fraction of resource usage and to the complex connectivity of the design. The complete firmware was then compiled and loaded on the LHCb readout boards, and successfully tested within the DAQ system by means of signal injection in the detector front-end. The firmware has also been tested on the detector read-out boards using an internal front-end generator within the firmware, capable of generating input data at the nominal data rate (64 Gb/s). At the time of this writing, the firmware is fully commissioned and has started to take physics data in LHC Run 3.
The FPGA power consumption of all VELO TELL40s is measured at the nominal event rate of 30 MHz (Fig. 22). For comparison, the same measurements are performed using the firmware without the cluster finding block, outputting SPs instead of clusters. The average power consumption of a TELL40 card when processing an event rate of 30 MHz with the readout firmware only is 6.1 W; this increases to a total of 8.6 W for the full firmware, including the clustering block. The same measurements are repeated for different values of the input event rate, showing a very slow increase of power consumption with the input rate.
Fig. 21: 2-to-1 encoder block diagram. R0, R1, R3 and State are registers, MUX0, MUX1 and MUX3 are multiplexers and FSM is a finite state machine that manages hold, valid and latch enable (LE) write signals.
Fig. 22: Power consumption of individual VELO TELL40 cards (top) processing data at an event input rate of 30 MHz. The average value over the 52 cards is also superimposed as a horizontal line. Average power consumption over all 52 cards as a function of the input event rate (bottom). Measurements using the firmware without the cluster finding block (outputting SPs instead of clusters) are also reported for comparison.
## VII Summary and Conclusions
A novel two-dimensional clustering architecture was developed, implemented in the VHDL language, and integrated in the LHCb readout FPGA cards. The architecture exploits the principles developed within the INFN-RETINA R&D project [4] for real-time track reconstruction, and effectively represents its first processing stage.
This firmware proved capable of directly processing every event at the 30 MHz LHC crossing rate (a total flow of 5 Tb/s) without time-multiplexing or buffering of any sort, in a manner that serves the needs of an actual high energy physics experiment. The physics performance of the algorithm was extensively studied and shown to be effectively indistinguishable from software clustering algorithms. The sparse-matrix technique adopted in its implementation proved successful in handling large detectors (order of 40 million pixels) with a modest amount of logic and memory resources. This allowed its insertion into the existing LHCb readout hardware, for use in the Run 3 physics data taking. This is a significant advancement over the previous state of the art in HEP. The previous best-performing cluster-finding system implemented in FPGAs ran at about 8 MHz, without centroid determination, and required the deployment of about 10 parallel firmware copies [17].
Moving the VELO clustering reconstruction from the HLT1 sequence6 to the FPGA readout cards leads to a measurable throughput improvement. Without accounting for isolation flagging, for which no software implementation is available for comparison, the present cluster finder firmware allows a saving of about 11% of the computing power of the LHCb HLT1 full reconstruction sequence, allowing a corresponding increase of the LHCb data-taking rate. As a further advantage, a reduction of the VELO data size of approximately 14% was obtained, which saves resources both in the DAQ chain and in permanent data storage. In addition, the FPGA implementation consumes significantly less electrical power than its GPU analogue. From the data in Fig. 22, it follows that the set of 52 VELO TELL40s requires about 130 W of power to perform cluster reconstruction of the entire VELO, while the GPU implementation would require about 6 kW (again not including isolation flagging)7.
Footnote 6: Details on the GPU-based VELO clustering reconstruction can be found in Ref. [18].
Footnote 7: The power needed to perform cluster reconstruction on GPUs is estimated by multiplying the GPU power usage (230 W) by the number of GPUs (236) required to process a 30 MHz input event rate and by the fraction of time spent in cluster reconstruction (11%). This is also in agreement with the measurements presented in Ref. [19].
In a broader perspective, this work can be seen as a special case of Connected Component Labelling with Center of Gravity calculation (COG), a computation that often occurs in image processing systems with the purpose of identifying connected sets of pixels belonging to the same visual feature. The main difference is the modest size of the features of our interest, which we could contain within a 3\(\times\)3 matrix, and their sparseness, which makes our problem somewhat simpler. However, this greater simplicity comes with a 'frame rate' requirement (30 MHz) that is orders of magnitude larger than typical image processing rates (\(<\)1 kHz). In fact, a CPU implementation of the same VELO clustering task discussed in this paper exists, inspired by some algorithms in use in image processing problems, appropriately revisited to exploit the smallness of the components and their sparseness [20].
In recent years, this type of image processing task has also increasingly been moved from CPUs to dedicated FPGA firmware to achieve greater speed and efficiency, and it may be interesting to compare those solutions to the present work. As an illustrative example, we take the FPGA implementation described in Ref. [21]. There, frames of 640\(\times\)480 pixels are processed at a rate of 730 Hz by a Zynq AP-SOC 7045 FPGA, running a 225 MHz clock, without COG. This system compares well with our case, where each of the 104 instances of our firmware processes a matrix of 512\(\times\)768 pixels, and our clock frequency and resource usage are also quite similar 8. However, our frame rate is larger by a huge factor of nearly \(10^{5}\). This difference is likely due to the sequential structure of the image processing firmwares, which proceed by a raster scan rather than by a massively parallel calculation; but it is definitely also a consequence of the greater simplicity of our problem in terms of cluster size and occupancy. In fact, cases of FPGA-based CCL implementations that reach a throughput comparable to that of our architecture are based on breaking down the image into smaller parts that are analysed in parallel and later coalesced [22]; an approach that bears some resemblance to our use of sparse matrices.
Footnote 8: The Arria 10 FPGA mounted on TELL40 cards has a capacity of 1150k logic elements whereas the Zynq 7045 FPGA has 350k logic cells. A single instance of the clustering firmware requires about 15% of the available logic on the chip, while for the studies reported in Ref. [21] we assume a typical usage of about 50% of the total resources.
However, all the above examples assume that the image data arrives as an ordered sequence of pixels, and they do not provide detailed topology analysis of the found clusters, so they could not be straightforwardly applied to our problem. Conversely, the small size of the components addressed by our system may not be of interest in general image processing applications; nevertheless, it cannot be excluded that some of the ideas described in this article could find use in image processing tasks, at least in some specific instances.
## Acknowledgement
We are thankful for the support of the LHCb Real-Time Analysis group, within which this project was developed. Special thanks go to the LHCb VELO group for the tight collaboration, support, and integration coordination, without
which this work would not have been possible. The authors would also like to thank the LHCb computing and simulation teams for providing the simulated samples used in the paper.
|
2308.07597 | A Statistical Framework and Analysis for Perfect Radar Pulse Compression | Perfect radar pulse compression coding is a potential emerging field which
aims at providing rigorous analysis and fundamental limit radar experiments. It
is based on finding non-trivial pulse codes, which we can make statistically
equivalent, to the radar experiments carried out with elementary pulses of some
shape. A common engineering-based radar experiment design, regarding
pulse-compression, often omits the rigorous theory and mathematical
limitations. In this work our aim is to develop a mathematical theory which
coincides with understanding the radar experiment in terms of the theory of
comparison of statistical experiments. We review and generalize some properties
of the It\^{o} measure. We estimate the unknown i.e. the structure function in
the context of Bayesian statistical inverse problems. We study the posterior
for generalized $d$-dimensional inverse problems, where we consider both
real-valued and complex-valued inputs for posteriori analysis. Finally this is
then extended to the infinite dimensional setting, where our analysis suggests
the underlying posterior is non-Gaussian. | Neil K. Chada, Petteri Piiroinen, Lassi Roininen | 2023-08-15T06:56:37Z | http://arxiv.org/abs/2308.07597v1 | # A statistical framework and analysis for perfect radar pulse compression
###### Abstract.
Perfect radar pulse compression coding is a potential emerging field which aims at providing rigorous analysis and fundamental limits for radar experiments. It is based on finding non-trivial pulse codes which we can make statistically equivalent to the radar experiments carried out with elementary pulses of some shape. A common engineering-based radar experiment design, regarding pulse compression, often omits the rigorous theory and mathematical limitations. In this work our aim is to develop a mathematical theory which coincides with understanding the radar experiment in terms of the theory of comparison of statistical experiments. We review and generalize some properties of the Ito measure. We estimate the unknown, i.e. the structure function, in the context of Bayesian statistical inverse problems. We study the posterior for generalized \(d\)-dimensional inverse problems, where we consider both real-valued and complex-valued inputs for posterior analysis. Finally, this is extended to the infinite dimensional setting, where our analysis suggests the underlying posterior is non-Gaussian.
Key words and phrases:Pulse compression, radar experiments, statistical estimation, comparison of experiments 2020 Mathematics Subject Classification: 94A12, 86A22, 60G35, 62M99
## 1. Introduction
Developing a mathematical theory of comparison of statistical measurements [30] is crucial for understanding the fundamental limits of radar experiments [14, 15, 21, 31, 29]. In the specific field of radar coding, one is interested in studying modulation patterns of transmitted radar signals. We are interested in pulse compression coding of incoherent scatter radar experiments [19, 32], where coding schemes play a crucial role in achieving a high range resolution. Pulse codes are a common approach to modelling the underlying target function, and can be thought of as pulses of concentrated length with constant amplitude and phase. General pulse codes of length \(L\), which can be viewed as waveforms, can be expressed as
\[\epsilon(t)=\sum_{n=1}^{L}\int\phi_{n}\delta(t-nl-\tau)b(\tau,l)d\tau, \tag{1.1}\]
where \(\delta(\cdot)\) denotes a Dirac delta function and \(b(\cdot)\) is a boxcar function. Pulses can be represented through their phase and amplitude, which has motivated various pulse codes. Arguably the most common examples are binary phase codes, which admit a constant amplitude and two phases \(\phi\in\{-1,1\}\) (see also the code sketch following this paragraph). Other examples of codes include Barker codes [1] and alternating codes [1, 11, 17]. Usually these pulses are compressed in such a way as to allow for longer pulses with low peak power, illustrated in Figure 1. The accuracy of the estimated target function, i.e. the scattering function as used in radar modelling, depends strongly on the pulse compression design. There is a rich literature on coding techniques, see e.g. [8, 9, 11, 14, 35], that discusses how to best optimize radar experiments with various compression techniques and assumptions. Given the complexity of these experiments it is important to understand, through a mathematical and statistical framework, how we can best formulate these experiments and gain understanding from them.
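As a concrete illustration of such binary phase codes (a minimal sketch of ours, not taken from the cited references; the baud length and sampling resolution are assumptions), the well-known length-13 Barker code and its boxcar-modulated waveform can be written down directly:

```python
import numpy as np

# Length-13 Barker code: a standard binary phase code with phases in {-1, +1}.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

def pulse_code(phases, baud_len=1.0, samples_per_baud=100):
    """Sample a piecewise-constant (boxcar) pulse code epsilon(t)."""
    t = np.arange(len(phases) * samples_per_baud) * (baud_len / samples_per_baud)
    envelope = np.repeat(phases.astype(float), samples_per_baud)
    return t, envelope

t, eps = pulse_code(barker13)  # eps is the transmitted envelope epsilon(t)
```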
Given the level of uncertainty that can arise within radar coding, a useful way to tackle these issues is through a statistical understanding. The work of Lehtinen [11] first considered this problem by modelling the scattering measurements within the signal as a statistical inverse problem [7, 33]. In other words, we can characterize our signal through noisy measurements. In this work an important assumption was made regarding the signal, namely that it is normally distributed. This assumption was made both for practical purposes and because many signals exhibit a pulse form similar to a Gaussian density or kernel. Since this initial development there have been a number of papers looking to extend these results in a more rigorous fashion. Much of the current literature has considered a comparison of statistical measurements. This has led to various pieces of work which have adapted ideas from Le Cam theory, notably the work by Piiroinen et al. [13, 21, 26]. Another fundamental question that has been considered in this context is how one can optimize the baud length of the radar. The baud length can be described as the time step which is used to discretize the radar signal. Numerically this was tested in the work of [12], which looked at the simple case of optimizing the baud length to minimize the posterior variance. This was shown only in the context of specific targets.
Our motivation behind this work is to bridge the gap between the various communities in radar coding, namely by deriving a first simplified Bayesian statistical analysis for perfect radar pulse compression [25]. In particular we aim to build upon the current theory and develop a better understanding of statistical properties through characterizing a posterior distribution of the radar signal. The underlying mathematics of the posterior signal and its properties pose intriguing questions, such as whether it is itself Gaussian, how it behaves, and how to understand this in high and infinite dimensions. These questions will act as the motivation behind this work.
### Contributions
The following bullet points summarize the contributions of this work.
* To the best of our knowledge this is the first paper deriving a statistical framework, and analysis, for the theory of perfect radar pulse compression. Our framework will be largely based on the notion and generalization of Ito measures to scattering functions.
* We aim to analyze perfect radar pulse compression in a Bayesian setting. This motivates studying and understanding statistical properties of our scattering function. We aim to form a posterior distribution of this scattering function. To this end, we first consider a \(d\)-dimensional case, where \(d<\infty\). Related to this, we also provide a result showing when the posterior
Figure 1. Left: Long pulse with limited power. Right: Short pulse with more power through pulse compression.
variances of two different signals coincide. This will be considered for both real-valued and complex-valued settings.
* To conclude our analysis we consider the \(d\)-dimensional setting for \(d=\infty\), where we show that our underlying posterior is non-Gaussian, following an inverse Wishart distribution. Here we use the notion of rapidly decreasing functions for our function-space setting, to characterize the posterior.
* We discuss and review a number of key open questions which are still very much at the core of this field. These problems are motivated through both a mathematical and an engineering perspective. Many of these questions follow on from the results obtained in this work.
### Outline
Our work will be split into the following sections: we begin Section 2 with a review of radar signaling, and in particular pulse compression. Section 3 will be dedicated to understanding the posterior distribution of the signal defined through the previous section, and highlights our main results. Appendices A and B will be devoted to the analysis of the \(d\)-dimensional and infinite-dimensional cases, which ultimately yields the proof of our main theorem. Finally, in Section 4, we review and discuss a number of questions still to be answered, while concluding our findings.
## 2. Radar coding
In radar modelling, a signal of interest \(\mu\) commonly corresponds to a transmitted code, which can be defined as an uninterrupted sine wave multiplied with some envelope \(\epsilon^{q}(t)\), through the following integral equation
\[z^{q}(t)=\int_{r\in\mathbb{R}^{3}}\epsilon^{q}(t-S(r))\mu^{q}(d^{3}r)+\sqrt{T}\xi^{q}(t). \tag{2.1}\]
The notation \(S(r)\) denotes the total travel time of the signal from the transmission through the scattering point \(r\) to the receiver. This implies that (2.1) sums up all elementary scatterings, taking into account the phase of the signal. The final term is related to thermal noise, where \(T\) denotes the temperature and \(\xi(t)\sim\mathcal{N}(0,I)\) is complex additive Gaussian white noise. While (2.1) holds for a wide class of transmissive and receptive antennas, in this work we consider a slightly different model. For simplicity we will assume that we have a mono-static single-beam radar. To be more precise, if the back-and-forth signal time along the beam is denoted by \(r\), then \(S(r)=r\) and we describe the signal model as a different integral equation
\[z^{q}(t)=\int_{0}^{\infty}\epsilon^{q}(t-r)\mu^{q}(dr)+\sqrt{T}\xi^{q}(t). \tag{2.2}\]
The covariance structure of the scattering measure in (2.2) can be expressed through the formal equation

\[\langle\mu(dr),\overline{\mu(dr^{\prime})}\rangle=X(r)\varphi(r-r^{\prime})\,dr\,dr^{\prime},\]

where \(X(r)\) is known as the structure function or simply the target density, and the corresponding covariance of the measurement can be explicitly determined as

\[\langle z^{q}(t),\overline{z^{q}(t^{\prime})}\rangle =\int_{r\geq 0}\int_{r^{\prime}\geq 0}\epsilon^{q}(t-r)\overline{\epsilon^{q}(t^{\prime}-r^{\prime})}X(r)\varphi(r-r^{\prime})\,dr\,dr^{\prime}+T\varphi(t-t^{\prime})\] \[=\int X(r)\epsilon^{q}(t-r)\overline{\epsilon^{q}(t^{\prime}-r)}\,dr+T\varphi(t-t^{\prime})\] \[=\int A_{tt^{\prime}}X(r)\,dr+T\varphi(t-t^{\prime}),\]
where \(A_{tt^{\prime}}=\epsilon^{q}(t-r)\overline{\epsilon^{q}(t^{\prime}-r)}\). Both (2.1) and (2.2) assume that we have a time-incoherent signal, whereas if the signal were time dependent it would be modified to
\[z^{q}(t)=\int_{0}^{\infty}\epsilon^{q}(t-r)\mu^{q}(dr;t)+\sqrt{T}\xi^{q}(t), \tag{2.3}\]
so that now \(t\) can be treated as either the scattering time or the reception time. However, as already stated, our focus will be on analyzing signals which are time-incoherent. Our quantity of interest in this model is the signal denoted by \(\mu(\cdot)\). In radar signaling this unknown, which we are aiming to estimate, is known as an incoherent scattering target. A fundamental question that arises is how best to estimate or model the underlying signal. In order to answer this question, below we provide some useful definitions to give an understanding of quantities and concepts in pulse compression.
We will introduce a specific way of modeling the signal, which we refer to as an Ito measure. Throughout we will assume that our unknown takes the form of a Gaussian distribution.
**Definition 2.1**.: _Assume we have two measurements defined as_
\[(m_{1}^{q})_{q=1}^{N_{q}} =\epsilon_{1}*(S(\sigma)^{q})_{q=1}^{N_{q}}+(\xi_{1}^{q})_{q=1}^{N_{q}}, \tag{2.4}\] \[(m_{2}^{q})_{q=1}^{N_{q}} =\epsilon_{2}*(S(\sigma)^{q})_{q=1}^{N_{q}}+(\xi_{2}^{q})_{q=1}^{N_{q}}, \tag{2.5}\]
_where \(\xi_{1}\sim\xi_{2}\) are identically distributed complex Gaussian noise terms, \(\epsilon\) is a transmitted waveform and \(S(\sigma)\) is the scattering function such that \(S(\sigma)\sim\mathcal{N}(0,\sigma)\)._
We now provide some definitions related to the notion of an Ito measure, which represents our signal mathematically. A physical interpretation is that it is the spatially incoherent scatter.
**Definition 2.2**.: _Let \(\mathcal{B}_{0}\) be the Borel field of \(\mathcal{D}\subset\mathbb{R}^{n}\). A random set function \(\mu:\mathcal{B}_{0}\to L^{2}(\Omega,\mathcal{F},\mathbb{P})\) is called an Ito measure on \(\mathcal{D}\) with a structure measure \(X\) if_
1. \(X\) _is a_ \(\sigma\)_-finite Borel measure on_ \(\mathcal{D}\)_._
2. \(\mu(\emptyset)=0\)_._
3. _For all pairwise disjoint sets_ \(B_{k}\in\mathcal{B}_{0}\) _it holds that_ \[\mu\big{(}\cup_{k=1}^{\infty}B_{k}\big{)}=\sum_{k=1}^{\infty}\mu(B_{k}).\]
4. \(\mathbb{E}[\mu(B_{1})\overline{\mu(B_{2})}]=X(B_{1}\cap B_{2})\)_._
**Definition 2.3**.: _An Ito measure \(\mu(\cdot)\) is a signal with the following satisfying properties:_
1. _Complexity:_ \(\mu(\cdot)\in\mathbb{C}\) _given the phase and amplitude of the signal._
2. _Additivity:_ \(\mu(B_{1}\cup B_{2})=\mu(B_{1})+\mu(B_{2})\)_, if_ \(B_{1}\cap B_{2}=\emptyset\)_._
3. _Gaussian: the measure is taken as a mean zero Gaussian measure with covariance structure_ \(C\) _i.e._ \(\mu\sim\mathcal{N}(0,C)\)_._
4. _Incoherence:_ \(\mathbb{E}[\mu(B_{1})\overline{\mu(B_{2})}]=X(B_{1}\cap B_{2})=0\) _if_ \(B_{1}\cap B_{2}=\emptyset\)_._
**Definition 2.4**.: _Given a transmitted waveform of the form_
\[\epsilon(t)=\sum_{j}\epsilon_{j}\phi(t-j\triangle t), \tag{2.6}\]
_such that \(\epsilon_{j}\in\mathbb{C}\) with the index \(j\in\mathbb{Z}\). \(\triangle t\) is known as the baud length and \(\phi\) is the pulse form. We can define its corresponding Fourier transform as_
\[\hat{\epsilon}^{q}(t)=\int\epsilon(s)\exp(ist)ds. \tag{2.7}\]
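For a discretized waveform of the form (2.6), the Fourier transform (2.7) can be approximated numerically; the sketch below (ours; the specific code \(\epsilon_{j}\), pulse form and grid are assumed placeholders) uses an FFT on a uniform grid and also exposes the moduli \(|\hat{\epsilon}|\) that appear in Theorem 3.2:

```python
import numpy as np

# Sample epsilon(t) from (2.6) on a uniform grid and approximate (2.7) by an FFT.
fs = 100                        # samples per baud (assumed)
baud = 1.0                      # baud length (assumed)
code = np.array([1, -1, 1, 1])  # hypothetical code epsilon_j
phi = np.ones(fs)               # boxcar pulse form phi

eps = np.concatenate([c * phi for c in code])
eps_hat = np.fft.fft(eps) * (baud / fs)   # discrete approximation of (2.7)
freq = np.fft.fftfreq(eps.size, d=baud / fs)
moduli = np.abs(eps_hat)                  # |eps_hat|, as used in Theorem 3.2
```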
Our final definition is related to the scattering function.
**Definition 2.5**.: _We say the scattering function \(S(\sigma)\), or its structure \(\sigma\), are a set of admissible priors if they satisfy the conditions_
1. _The state space of the random variable_ \(S(u)=S(\omega,u)\) _is_ \(L^{1}(\mathbb{R}^{N})\) _for every_ \(u\in\mathcal{U}\) _where_ \(\mathcal{U}\) _is the state space of the structure_ \(\sigma\)_._
2. _The mapping_ \(S(\sigma)(\omega)=S(\omega,\sigma(\omega))\) _is a random variable._
## 3. Posterior analysis
In this section we provide a statistical analysis of signals arising from perfect radar pulse compression. In particular, the focus will be on understanding the posterior distribution of \(\sigma\) when we assume a fully Gaussian system, in terms of the prior and the noise level. The derived analysis forms a basis for the higher- and infinite-dimensional settings treated in the appendices.
All definitions of what is meant by densities and conditioning of the generalised random variables are reviewed in the Appendix. By the posterior distribution we mean the regular conditional distribution of the generalised random variable given the data random variable, and when it possesses a density we will refer to its density. Specifically, the characteristic functions are defined in Appendix A.1, and densities are defined in Appendix A.2.
We now present the main theorem of the paper, which is the characterization of the posterior variance related to the scattering function. This is given through the following result, whose proof is divided into multiple results in the Appendix.
**Theorem 3.1**.: _Assume the prior distribution of \(|\,\sigma\,|^{2}\) is interpretable as an affine transform of an inverse Wishart distribution; then the posterior distribution of \(|\,\sigma\,|^{2}\) is a generalized limit of affine transforms of inverse Wishart distributions of the same type._
The above result in Theorem 3.1 highlights that the underlying posterior of \(|\,\sigma\,|^{2}\) is not Gaussian, even though the signal itself is modelled as Gaussian. The proof follows from the results obtained in Appendix B, which extends the analysis conducted in Appendix A.
For the infinite dimensional setting the underlying spaces are taken to be the rapidly decreasing functions, or Schwartz functions. These are defined by \(\mathscr{S}(\mathcal{C}^{n})\) (or the compactly supported test functions \(\mathscr{D}(\Omega)\)) and their dual spaces \(\mathscr{S}^{\prime}(\mathcal{C}^{n})\) of tempered distributions (or the distributions \(\mathscr{D}^{\prime}(\Omega)\)).
Our next main result is related to characterizing a relationship between two signals in terms of the posterior variance of the scattering function. This is provided through the following theorem.
**Theorem 3.2**.: _Suppose the prior covariance structure \(X(\sigma)\) is a complex Gaussian1 with constant \(|\,\sigma\,|^{2}>0\). If the moduli of the Fourier transforms of the transmitted waveforms coincide, i.e. if_
Footnote 1: see Appendix A
\[|\,\widehat{\epsilon}_{1}\,|=|\,\widehat{\epsilon}_{2}\,|,\]
as Schwartz distributions2, then the posterior variances \(\operatorname{var}(\left|\,\sigma\,\right|^{2}\left|\,m_{1}\right)\) and \(\operatorname{var}(\left|\,\sigma\,\right|^{2}\left|\,m_{2}\right)\) of the \(\sigma\) given \(m_{1}\) and \(m_{2}\) are equal, i.e._
Footnote 2: in most cases just as function pointwise for almost every point
\[\operatorname{var}(\left|\,\sigma\,\right|^{2}\left|\,m_{1}\right)= \operatorname{var}(\left|\,\sigma\,\right|^{2}\left|\,m_{2}\right).\]
Proof.: We show in Appendix A that if the covariance structure corresponds to a constant multiplier, then

\[\operatorname{var}(m_{j}\,|\,|\sigma|^{2})=\phi\mapsto|\phi|^{2}\bigl{(}|\sigma|^{2}\,|A_{j}|^{2}+T\bigr{)},\]

where \(|A_{j}|^{2}=|\widehat{\epsilon}_{j}|^{2}\). Using the assumption of equal moduli of the Fourier transforms of the transmitted waveforms, we see that

\[\operatorname{var}(m_{1}\,|\,|\sigma|^{2})=\operatorname{var}(m_{2}\,|\,|\sigma|^{2}),\]
and therefore by the results of Appendix A, we obtain that this implies that the conditional characteristic functions
\[J_{m_{1}\,|\,|\,\sigma\,|^{2}}=J_{m_{2}\,|\,|\,\sigma\,|^{2}},\]
as generalized functions. Since the prior was constant, we obtain that the conditional densities of \(|\,\sigma\,|^{2}\) given \(m_{1}\) and \(m_{2}\) both follow the same inverse Wishart distribution under the same spatial discretisation. Therefore, they have the same discretisation limits and hence their posterior variances coincide.
## 4. Conclusion & Discussion
Pulse compression has been a cornerstone of modern applied mathematics, incorporating tools from information theory, Fourier analysis and harmonic analysis. Recently, statistical methodologies have gained interest, most notably for enabling some form of uncertainty quantification. This paper's motivation lies exactly here: our aim is to understand perfect pulse compression through a statistical framework. What we showed was that, through the introduction of Ito measures where we assume our signal is distributed according to a Gaussian, we were able to characterize a posterior distribution of the signal \(\sigma\). As our results suggest, the resulting posterior is indeed non-Gaussian, specifically an inverse Wishart distribution. This was achieved through analysis in both the finite-dimensional setting and infinite dimensions, where we introduced Gaussian measures and the concept of Schwartz functions for our function-space setting.
As this is the first instance of understanding perfect radar pulse compression in a statistical manner, there are numerous directions to take for future work. One direction to consider is to understand the relationship between different pulses. To do so one can consider using various probabilistic metrics for Gaussian measures. A natural one to consider is the Kullback-Leibler divergence, which has been analyzed in infinite dimensions [22, 23, 34]. However, given that this is not an actual metric per se, one could consider extensions to the Wasserstein distance and also the Le Cam distance [3], which has been used for statistical experiments.
Another, more applied, direction is to consider a better way to model the pulses, as usually they take the form of boxcar functions or piecewise constant functions, where imposing Gaussian [18] modeling assumptions can hinder performance. Recent work has shown that \(\alpha\)-stable processes [5] can be used in their place, enabling edge-preserving inversion. This would imply the prior random field has the particular form
\[U(x)=\int_{[0,1]^{d}}f(x,x^{\prime})M(dx^{\prime}),\;x\in[0,1]^{d},\]
where
\[f(x,x^{\prime})=\begin{cases}1\text{ when }x_{i}^{\prime}\leq x_{i}\text{ for all }i=1,\dots,d\\ 0\text{ otherwise,}\end{cases}\]
and \(M\) is a symmetric \(\alpha\)-stable random measure. An example of non-Gaussian \(\alpha\)-stable processes are Cauchy processes [4, 16, 24, 27, 28], which have already been tested within inverse problems. This could be a natural direction for using more advanced non-Gaussian priors.
More specific to pulse compression, an important question to quantify is the relationship between the pulses and the temperature \(T\), specifically what occurs in the limit \(T\to 0\). For the case of \(T=0\), let us assume the code is modeled as a boxcar of width \(a>0\) and unit \(L^{2}\) norm
\[\epsilon(t)=\epsilon_{a}(t)=a^{-1/2}\chi_{[0,a)}(t).\]
Then choosing \(a=1/2N\) results in the following expression for the signal
\[z^{q}(n/N+t) =\int_{0}^{1}\epsilon_{1/2N}(n/N+t-r(\text{mod }1))\,\mu^{q}(dr)+ \sqrt{T}\xi^{q}\] \[\text{ with }0\leq t<1/2N\text{ and }n=0\dots N-1,\]
are all mutually independent and equally informative measurements of \(\sigma\), each separately adding the same amount of information to \(\sigma\), independent of \(N\). It follows that the posterior variance of \(\sigma\) approaches \(0\) when \(N\longrightarrow\infty\). This is in contradiction with a rather common engineering understanding saying that increasing radar power (equivalent to decreasing additive noise) will give no extra benefit after some level is reached. One will naturally benefit by choosing increasingly narrow pulses as extra power becomes available.
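To make the scaling explicit, the following heuristic Gaussian-approximation argument (ours, not taken from the cited works) can be used: if the \(N\) measurements are conditionally independent given \(\sigma\) and each carries the same Fisher information \(I_{0}>0\) about \(\sigma\), then

\[\operatorname{var}(\sigma\,|\,m_{1},\dots,m_{N})\approx\bigl{(}\operatorname{var}(\sigma)^{-1}+NI_{0}\bigr{)}^{-1}\longrightarrow 0,\qquad N\longrightarrow\infty,\]

which is exactly the vanishing posterior variance described above.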
However, for the case of \(T>0\) with \(T\) close to \(0\), given what is explained above it seems plausible that the optimal radar code might be a narrow pulse. If true, then the width would approach \(0\) as \(T\longrightarrow 0\).
**Conjecture 4.1**.: _For each \(T\) it is possible to find an optimal code \(\epsilon_{T}(t)\) so that_
\[\lim_{T\longrightarrow 0}\sqrt{T}\epsilon_{T}(t/T),\]
_defines a well-defined limiting shape: a fundamental typical shape of an optimal radar baud._
Related to this, a final direction to consider is to quantify whether the optimal code discussed in the above conjecture is unique or not. This could of course be related to how one defines the prior form, or the scattering function. These and other directions will be considered for future work.
## Acknowledgements
The authors thank Dr. Markku S. Lehtinen for helpful discussions and directions for the paper. NKC is supported by an EPSRC-UKRI AI for Net Zero Grant: "Enabling CO2 Capture And Storage Projects Using AI" (Grant EP/Y006143/1).
## Appendix A Finite-dimensional analysis
In this appendix we consider a generalized setting, namely the \(d\)-dimensional case. For our analysis we will consider four separate cases, namely: (i) real-valued Gaussian random vectors, (ii) complex-valued Gaussian random vectors, (iii) real-valued white noise and (iv) complex-valued white noise. In order to do so we recall a number of key definitions which we will use for our analysis. Our analysis will be based on computing means and covariances through moment generating functions.
**Definition A.1**.: _(Gaussian random vector) Assume \(X:=(X_{1},\ldots,X_{n})\) is a real finite dimensional random vector. We say \(X\) is a Gaussian random vector if it can be expressed in the form_
\[X=\mu+AY,\]
_where \(\mu\in\mathbb{R}^{n}\), \(A\in\mathbb{R}^{n\times k}\) and \(Y=(Y_{1},\ldots,Y_{k})\) is a vector of independent standard Gaussian random variables. Such a vector \(Y\) is called standard multinormal random vector or discrete real white noise vector._
**Definition A.2**.: _(Complex Gaussian random vector) Assume \(X=(X_{1},\ldots,X_{n})\) is a complex finite dimensional random vector. We say \(X\) is a complex Gaussian random vector if it can be expressed in the form_
\[X=\mu+AY,\]
_where \(\mu\in\mathbb{C}^{n}\), \(A\in\mathbb{C}^{n\times k}\) and \(Y=(Y_{1},\ldots,Y_{k})\) is a discrete complex white noise vector. We say a complex random vector \(Y=Y_{R}+iY_{I}\in\mathbb{C}^{k}\) is a discrete complex white noise, if \(\sqrt{2}\,(Y_{R},Y_{I})\) is a discrete \(2k\)-dimensional real white noise (so that each real component of \(Y_{R}\) and \(Y_{I}\) has variance \(\frac{1}{2}\))._
Let us list some properties that hold for complex and real Gaussian vectors.
**Proposition A.3**.: _Assume that \(X\in\mathbb{K}^{k}\) is a \(k\)-dimensional complex (\(\mathbb{K}=\mathbb{C}\)) or real (\(\mathbb{K}=\mathbb{R}\)) Gaussian random vector. Suppose \(A\in\mathbb{K}^{n\times k}\) and \(\mu\in\mathbb{K}^{n}\). Then \(Z=\mu+AX\) is a \(\mathbb{K}\)-Gaussian random vector with \(\mathbb{K}\)-expectation_
\[\mathbb{E}(Z)=\mu+\mathbb{E}(X),\]
_and its \(\mathbb{K}\)-covariance matrix is_
\[\mathrm{Cov}(Z)=A\mathrm{Cov}(X)A^{\prime},\]
_where \(A^{\prime}=A^{\top}\), when \(\mathbb{K}=\mathbb{R}\) and \(A^{\prime}=\overline{A}^{\top}\), when \(\mathbb{K}=\mathbb{C}\)_
Proof.: The expectation of \(Z\) is defined as the mapping \(\phi\mapsto\mathbb{E}(Z^{\prime}\phi)\). Since \(X=\lambda+BY\) for some \(\mathbb{K}\)-white noise vector \(Y\), we have
\[Z^{\prime}\phi=\mu^{\prime}\phi+(A\lambda)^{\prime}\phi+Y^{\prime}B^{\prime}A ^{\prime}\phi.\]
Since the expectation of \(Y\) is a zero mapping, we see that
\[\mathbb{E}(Z)=\mu+(A\lambda),\]
where \(\mu\) is identified with the mapping \(\phi\mapsto\mu^{\prime}\phi\). When \(\mu=0\) and \(A\) is identity, this gives also that
\[\mathbb{E}(X)=\lambda,\]
so the first claim follows.
The \(\mathbb{K}\)-covariance of \(Z\) is defined as the covariance of \(W=Z-\mathbb{E}(Z)=ABY\), which is in turn the mapping
\[\phi\mapsto\mathbb{E}(W^{\prime}\phi)^{\prime}(W^{\prime}\phi)=\mathbb{E}( \phi^{\prime}WW^{\prime}\phi).\]
Since
\[W^{\prime}\phi=Y^{\prime}(AB)^{\prime}\phi,\]
we have
\[(W^{\prime}\phi)^{\prime}(W^{\prime}\phi)=\phi^{\prime}ABYY^{\prime}B^{\prime}A^{\prime}\phi.\]

This implies, since the covariance of \(Y\) is a \(\mathbb{K}\)-identity operator, that
\[\mathrm{Cov}(Z)=AB\mathrm{Cov}(Y)B^{\prime}A^{\prime}=ABB^{\prime}A^{\prime}.\]
Again, when \(A\) is an identity, this gives that the covariance of \(X\) is \(BB^{\prime}\) so the latter claim follows.
**Remark A.4**.: _Note that this proof generalizes immediately to infinite dimensional setting, as we will see in the succeeding section. The reason that the covariance of \(Y\) is an identity in both real and complex case is the following._
_When \(\mathbb{K}=\mathbb{R}\) this is well-known; for \(\mathbb{K}=\mathbb{C}\) we can argue as follows. For any complex vector \(z\), the quantity \(z^{\prime}z=z_{R}^{\top}z_{R}+z_{I}^{\top}z_{I}\) is real-valued. Now let \(X\) and \(Z\) be the real and imaginary parts of \(Y^{\prime}\phi\)._

\[X=(Y^{\prime}\phi)_{R} =((Y_{R}+iY_{I})^{\prime}(\phi_{R}+i\phi_{I}))_{R}=Y_{R}^{\top}\phi_{R}+Y_{I}^{\top}\phi_{I},\] \[Z=(Y^{\prime}\phi)_{I} =((Y_{R}+iY_{I})^{\prime}(\phi_{R}+i\phi_{I}))_{I}=Y_{R}^{\top}\phi_{I}-Y_{I}^{\top}\phi_{R},\]
_so both are \(\mathbb{R}\)-linear transformations of real Gaussian random vector \((Y_{R},Y_{I})\). Therefore, the expectation of \((X,Z)\) is \(\mathbb{E}[(X,Z)]=0\) and the variance of \(X\) is_
\[\mathrm{var}(X)=B\mathrm{Cov}((Y_{R},Y_{I}))B^{\top}=\frac{1}{2}BB^{\top},\]
_where the matrix \(B\) is_
\[B=\left(\phi_{R}^{\top}\quad\phi_{I}^{\top}\right),\]
_therefore we have \(\mathrm{var}(X)=\frac{1}{2}\phi^{\prime}\phi\). We can similarly verify that \(\mathrm{var}(Z)=\frac{1}{2}\phi^{\prime}\phi\). Since \(\mathbb{E}(Y^{\prime}\phi)^{\prime}(Y^{\prime}\phi)=\mathrm{var}(X)+\mathrm{var}(Z)=\phi^{\prime}\phi\), we see that the covariance of \(Y\) is the complex identity. We used the real version to make the calculation easier._
### Characteristic functions
Since complex Gaussian random vectors are defined as affine transformations of complex Gaussian white noise, and the \(k\)-dimensional complex Gaussian white noise is isomorphic with a scaled \(2k\)-dimensional real white noise, we can define the characteristic function via the following idea.
If \(Y\) is a discrete \(k\)-dimensional complex white noise, then \(\widetilde{Y}=\sqrt{2}\,(Y_{R},Y_{I})\) is a discrete \(2k\)-dimensional real white noise and, writing \(\widetilde{\phi}=(\phi_{R},\phi_{I})/\sqrt{2}\), its characteristic function is

\[J_{\widetilde{Y}}(\widetilde{\phi})=\mathbb{E}\exp(i(\widetilde{\phi}^{\top}\widetilde{Y}))=\mathbb{E}\exp(i(\phi_{R}^{\top}Y_{R}+\phi_{I}^{\top}Y_{I}))=\mathbb{E}\exp(i\mathrm{Re}(Y^{\prime}\phi)),\]
where again \(\mathrm{Re}(\cdot)\) denotes the real part.
**Definition A.5**.: _(Characteristic function of complex Gaussian random vector) Assume \(X:=(X_{1},\ldots,X_{n})\) is a complex finite dimensional random vector. The function_
\[J_{X}(\phi)=\mathbb{E}\exp(i\mathrm{Re}(X^{\prime}\phi)),\]
_for \(\phi\in\mathbb{C}^{n}\), is called the characteristic function of the complex Gaussian random vector \(X\)._
Note that, via the isomorphism, the characteristic function fully determines the distribution [2].
**Proposition A.6**.: _The characteristic function of discrete \(k\)-dimensional complex white noise \(Y\) is_
\[J_{Y}(\phi)=\exp(-\frac{1}{4}\phi^{\prime}\phi)=\exp(-\frac{1}{4}|\phi|^{2}),\]
_where \(|\phi|^{2}=\phi^{\prime}\phi=|\phi_{1}|^{2}+\cdots+|\phi_{k}|^{2}\)._
Proof.: This follows with a straightforward computation. The \(\mathbb{C}\)-covariance \(\mathrm{Cov}(Y)\) of \(Y=Y_{R}+iY_{I}\) is by definition \(\frac{1}{2}I_{\mathbb{C}}\), so \(Y_{R}\) and \(Y_{I}\) are independent and \(\mathrm{Cov}(Y_{R})=\mathrm{Cov}(Y_{I})=\frac{1}{2}I_{\mathbb{R}}\). Therefore
\[J_{Y}(\phi) =\mathbb{E}\exp(iY_{R}^{\top}\phi_{R})\mathbb{E}\exp(iY_{I}^{ \top}\phi_{I})=\exp(-\frac{1}{4}\phi_{R}^{\top}\phi_{R})\exp(-\frac{1}{4}\phi_ {I}^{\top}\phi_{I})\] \[=\exp(-\frac{1}{4}|\phi|^{2}).\]
**Proposition A.7**.: _The characteristic function of \(X=AY+\mu\), where \(Y\) is \(k\)-dimensional complex white noise, \(A\in\mathbb{C}^{n\times k}\) and \(\mu\in\mathbb{C}^{n}\) is_
\[J_{X}(\phi)=\exp(i\mathrm{Re}(\mu^{\prime}\phi)-\frac{1}{4}\phi^{\prime}\Sigma \phi),\]
_where \(\Sigma=AA^{\prime}\) is a self-adjoint matrix in \(\mathbb{C}^{n\times n}\)._
Proof.: Since \(i\mathrm{Re}(X^{\prime}\phi)=i\mathrm{Re}(\mu^{\prime}\phi)+i\mathrm{Re}((AY)^{\prime}\phi)\), we may assume that \(\mu=0\) without restriction. Since \((AY)^{\prime}\phi=Y^{\prime}A^{\prime}\phi=Y^{\prime}\psi\), where \(\psi=A^{\prime}\phi\), the previous proposition gives that

\[J_{X}(\phi)=J_{Y}(\psi)=\exp(-\frac{1}{4}\psi^{\prime}\psi)=\exp(-\frac{1}{4}(A^{\prime}\phi)^{\prime}A^{\prime}\phi)=\exp(-\frac{1}{4}\phi^{\prime}\Sigma\phi),\]
which proves the claim.
**Corollary A.8**.: _The characteristic function of a complex Gaussian vector \(X\) is_
\[J_{X}(\phi)=\exp(i\mathrm{Re}(\mathbb{E}(X)^{\prime}\phi)-\frac{1}{2}\phi^{ \prime}\mathrm{Cov}(X)\phi),\]
_and the expectation and the complex covariance fully determine the distribution._
Proof.: This follows from previous results and the fact that \(\mathrm{Cov}(Y)=\frac{1}{2}I\) for the complex white noise.
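The normalization conventions above are easy to check numerically; the following Monte Carlo sketch (ours; the dimension, sample size and seed are arbitrary) verifies \(J_{Y}(\phi)=\exp(-\frac{1}{4}|\phi|^{2})\) for discrete complex white noise whose real and imaginary parts have variance \(\frac{1}{2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_samples = 3, 200_000

# Discrete complex white noise: real and imaginary parts independent, variance 1/2.
Y = (rng.standard_normal((n_samples, k))
     + 1j * rng.standard_normal((n_samples, k))) / np.sqrt(2)

phi = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# Empirical E exp(i Re(Y' phi)) against the closed form exp(-|phi|^2 / 4).
empirical = np.mean(np.exp(1j * (Y.conj() @ phi).real))
theory = np.exp(-0.25 * np.vdot(phi, phi).real)
print(abs(empirical - theory))  # small, up to Monte Carlo error
```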
### Densities for complex Gaussian vectors
By stating the density of the complex Gaussian vector \(X\) we mean the non-negative function \(f\geq 0\) such that
\[\mathbb{P}(X\in A)=\int_{\mathbb{C}^{n}}[\,x\in A\,]f(x)\,\mathrm{d}x,\]
where the integral is understood as a Lebesgue (volume) integral on \(\mathbb{R}^{2n}\). Note that not every complex Gaussian vector has a density in this sense. However, every non-zero complex Gaussian vector is supported on a \(\mathbb{C}\)-affine subspace (potentially of lower dimension) of \(\mathcal{C}^{n}\), and relative to that subspace it has a density. The complex white noise itself has a density in this sense.
In order to extend this to other complex Gaussian vectors, we first consider the orthogonal and unitary transformations. These are given through the following propositions.
**Proposition A.9**.: _The density function of discrete \(k\)-dimensional complex white noise \(Y\) is_
\[f_{Y}(z)=\pi^{-k}\ \exp(-z^{\prime}z)=\pi^{-k}\ \exp(-|z|^{2}),\]
_for every \(z\in\mathbb{C}^{k}\)._
Proof.: \(Y\) is isomorphic to the \(\mathbb{R}^{2k}\)-dimensional scaled white noise \((Y_{R},Y_{I})\), which has a density on \(\mathbb{R}^{2k}\) since it is a vector of \(2k\) independent Gaussian random variables with zero mean and variance \(\frac{1}{2}\). Therefore

\[f_{(Y_{R},Y_{I})}(z_{R},z_{I}) =\prod_{j=1}^{k}(2\pi(1/2))^{-\frac{1}{2}}(2\pi(1/2))^{-\frac{1}{2}}\exp\Bigg{(}-\frac{(z_{R})_{j}^{2}+(z_{I})_{j}^{2}}{2\cdot(\frac{1}{2})}\Bigg{)}\] \[=\pi^{-k}\exp(-(z_{R}^{\top}z_{R}+z_{I}^{\top}z_{I}))\] \[=\pi^{-k}\exp(-z^{\prime}z).\]
**Proposition A.10**.: _Suppose \(U\in\mathbb{C}^{k\times k}\) is unitary and \(Y\) is a \(k\)-dimensional Gaussian random vector with a density. Then \(X=UY\) also has a density, and it is given by_
\[f_{X}(z)=f_{Y}(U^{\prime}z),\]
_for every \(z\in\mathbb{C}^{k}\)._
Proof.: This follows from the isomorphism and the general transformation rule, since \(U^{\prime}\) is the inverse matrix of \(U\) and the Jacobian determinant of the isomorphic copy of \(U^{\prime}\) is identically one, since
\[\mathcal{J}_{\mathbb{C}}(U^{\prime})=\det\begin{pmatrix}U_{R}&-U_{I}\\ U_{I}&U_{R}\end{pmatrix}=\det(U_{R}^{\top}U_{R}+U_{I}^{\top}U_{I})=\det((U^{ \prime}U)_{R})=1.\]
**Proposition A.11**.: _Suppose \(U\in\mathbb{C}^{k\times k}\) is a diagonal matrix \(U=\operatorname{diag}(\lambda_{1},\dots,\lambda_{k})\) and \(Y\) is a discrete \(k\)-dimensional complex white noise. Then \(X=UY\) has a density if and only if the determinant \(D=\lambda_{1}\cdots\lambda_{k}\neq 0\). In this case it is given by_

\[f_{X}(z)=|D|^{-2}f_{Y}(U^{-1}z),\]
_for every \(z\in\mathbb{C}^{k}\)._
Proof.: Let us first assume \(D\neq 0\). In this case \(X_{j}=\lambda_{j}Y_{j}\) for each \(j=1,\dots,k\). Moreover, the random variables \(X_{1},\dots,X_{k}\) are independent. This implies that each \(X_{j}\) has a density function and the joint density is the product of the densities.
Each \(Y_{j}=\lambda_{j}^{-1}X_{j}\), and this map is isomorphic to a \(2\)-dimensional real linear transformation; therefore,

\[f_{X_{j}}(z_{j})=|\lambda_{j}^{-1}|^{2}f_{Y_{j}}(z_{j}/\lambda_{j})=|\lambda_{j}|^{-2}f_{Y_{j}}(z_{j}/\lambda_{j}).\]

The isomorphism enters in the first identity, since the Jacobian determinant of the real representation of \(\lambda_{j}^{-1}\) is

\[\det\begin{pmatrix}(\lambda_{j}^{-1})_{R}&-(\lambda_{j}^{-1})_{I}\\ (\lambda_{j}^{-1})_{I}&(\lambda_{j}^{-1})_{R}\end{pmatrix}=|\lambda_{j}^{-1}|^{2}.\]
The claim follows by taking the products.
If \(D=0\), then at least one of the \(\lambda_{j}\)'s is zero. Without loss of generality, we can for simplicity assume that \(\lambda_{1}=0\). Then \(X=(0,\lambda_{2}Y_{2},\dots,\lambda_{k}Y_{k})\) and hence \(X\) is supported on a hypersurface of at most \(k-1\) complex dimensions. This already implies that the density cannot exist.
**Proposition A.12**.: _Suppose \(A\in\mathbb{C}^{n\times n}\) is a matrix, \(Y\) is a discrete \(n\)-dimensional complex white noise and \(\mu\in\mathcal{C}^{n}\). The complex Gaussian vector \(X=AY+\mu\) has a density if and only if \(A\) is invertible. When \(A\) is invertible, it is given by_

\[f_{X}(z)=\pi^{-n}|\det(B)|^{-1}\exp(-(z-\mu)^{\prime}B^{-1}\ (z-\mu)),\]
_for every \(z\in\mathbb{C}^{n}\), where \(B=AA^{\prime}\)._
Proof.: Without restriction, we can assume \(\mu=0\). The matrix \(B\) is self-adjoint, since \(B^{\prime}=(AA^{\prime})^{\prime}=AA^{\prime}=B\), so it has a spectral decomposition \(B=U\Lambda U^{\prime}\) and a self-adjoint square root \(\sqrt{B}:=U\sqrt{\Lambda}U^{\prime}\), i.e. \((\sqrt{B})^{\prime}=\sqrt{B}\) and \((\sqrt{B})^{2}=B\). Note that \(\det(\Lambda)=\det(B)\), so invertibility is encoded in the diagonal matrix.
Let \(Z=\sqrt{B}Y\). The characteristic function of \(Z\) is
\[J_{Z}(\phi) =\exp(-\frac{1}{4}\phi^{\prime}\sqrt{B}(\sqrt{B})^{\prime}\phi)\] \[=\exp(-\frac{1}{4}\phi^{\prime}B\phi)=\exp(-\frac{1}{4}\phi^{ \prime}AA^{\prime}\phi)\] \[=J_{X}(\phi),\]
so \(Z\) and \(X\) are identically distributed. Therefore, \(X\) has a density exactly when \(Z\) has a density, and in that case \(f_{X}=f_{Z}\). Moreover, since \(Z=U\sqrt{\Lambda}U^{\prime}Y\), we see that \(Z\) and \(U\sqrt{\Lambda}Y\) are identically distributed. This shows that
\[f_{\sqrt{\Lambda}Y}(z)=\pi^{-n}|D|^{-1}\exp(-(\sqrt{\Lambda}^{-1}z)^{\prime}( \sqrt{\Lambda}^{-1}z))=\pi^{-n}|D|^{-1}\exp(-(z^{\prime}\Lambda^{-1}z)),\]
where \(D=\det(B)\) and thus
\[f_{Z}(z)=\pi^{-n}|D|^{-1}\exp(-((U^{\prime}z)^{\prime}\Lambda^{-1}U^{\prime}z) )=\pi^{-n}|D|^{-1}\exp(-(z^{\prime}B^{-1}z)),\]
which proves the claim.
Now one can write the previous result directly with the general transformation rule, but then the calculation of the determinant is more involved since we cannot use the independence.
**Corollary A.13**.: _If the covariance of a complex Gaussian \(n\)-dimensional vector \(X\) is invertible, then \(X\) has a density which is given by_
\[f_{X}(z)=(2\pi)^{-n}\,\left(\det(\operatorname{Cov}(X))\right)^{-1/2}\exp(-\frac{1}{2}(z-\mathbb{E}(X))^{\prime}\operatorname{Cov}(X)^{-1}(z-\mathbb{E}(X))),\]

_for every \(z\in\mathcal{C}^{n}\), where the determinant is taken of the real \(2n\times 2n\) representation of \(\operatorname{Cov}(X)\)._
Proof.: When \(X\) is discrete \(n\)-dimensional complex white noise, the \(\operatorname{Cov}(X)=I_{\mathbb{C}}/2\), so \((\det(\operatorname{Cov}(X)))^{-1/2}=2^{n}\) and therefore
\[\pi^{-n}=(2\pi)^{-n}(\det(\operatorname{Cov}(X)))^{-1/2},\]
and
\[\exp(-z^{\prime}z)=\exp(-\frac{1}{2}z^{\prime}\operatorname{Cov}(X)^{-1}z),\]
so the claim holds for the discrete complex white noise. The remaining case follows from the previous proposition.
## Appendix B Infinite-dimensional analysis
In this Appendix we extend the results of the previous section to the infinite dimensional case, where the underlying spaces are taken to be the rapidly decreasing functions \(\mathscr{S}(\mathcal{C}^{n})\) (or the compactly supported test functions \(\mathscr{D}(\Omega)\)) and their dual spaces \(\mathscr{S}^{\prime}(\mathcal{C}^{n})\) of tempered distributions (or the distributions \(\mathscr{D}^{\prime}(\Omega)\)). In particular, this can be done on the spaces of linear operators \(L(\mathscr{S}(\mathcal{C}^{n}),\mathscr{S}^{\prime}(\mathcal{C}^{n}))\) between the dual spaces. For the time being we will denote these as \(\mathcal{X}_{\mathbb{C}}\) and \(\mathcal{X}_{\mathbb{C}}^{\prime}\), only to indicate that these are \(\mathcal{C}\)-linear vector spaces with enough regularity in the topology that we can rigorously define the concepts. In particular, this appendix concludes the proof of Theorem 3.1.
We define a Gaussian random object on \(\mathcal{X}_{\mathbb{C}}\) as a generalized Gaussian random variable \(X\,:(\Omega,\mathscr{F},\mathbb{P})\to(\mathcal{X}_{\mathbb{C}}^{\prime},\mathscr{B}(\mathcal{X}_{\mathbb{C}}^{\prime}))\) via
\[\omega\mapsto(\phi\mapsto\langle\,\phi\,,\,X(\omega)\,\rangle_{\mathcal{X}_{ \mathbb{C}}\times\mathcal{X}_{\mathbb{C}}^{\prime}}).\]
We will drop the spaces from the dual pairing for simplicity. We define the complex Gaussian white noise \(Y\) on the underlying structure such that for every finite collection of "test functions" \(\phi_{1},\dots,\phi_{n}\) the random object
\[Z:=(\langle\,\phi_{1}\,,\,\overline{Y}\,\rangle,\dots,\langle\,\phi_{n}\,,\, \overline{Y}\,\rangle),\]
is a complex Gaussian vector in \(n\) \(\mathbb{C}\)-dimensions. Moreover, the \(\mathbb{C}\)-expectation \(\mathbb{E}(Z)\) of \(Z\) is (isomorphic to) the zero vector and \(\operatorname{Cov}(Z)\) is isomorphic to the \(\mathbb{C}^{n\times n}\)-matrix
\[\left(\tfrac{1}{2}\langle\,\phi_{j}\,,\,\overline{\iota\phi_{i}}\,\rangle \right)_{i,j},\]
where \(\iota\colon\mathcal{X}\to\mathcal{X}^{\prime}\) is the natural embedding of the "test function" space into its dual space. In order to proceed we first need to "mimic" the definitions, but in infinite dimensions.
**Definition B.1**.: _Suppose \(X\) is a \(\mathcal{X}^{\prime}\)-valued random object. It has an expectation \(\mathbb{E}X\in\mathcal{X}^{\prime}\) if the following system of equations makes sense and has a unique solution_
\[\langle\,\phi\,,\,\overline{\mathbb{E}(X)}\,\rangle=\mathbb{E}\langle\,\phi\,, \,\overline{X}\,\rangle,\]
_for every \(\phi\in\mathcal{X}\)._
**Definition B.2**.: _Suppose \(X\) is a \(\mathcal{X}^{\prime}\)-valued random object. It has a covariance \(\operatorname{Cov}(X)\in L(\mathcal{X},\mathcal{X}^{\prime})\), if it has an expectation, the following system of equations makes sense and has a unique solution_
\[\langle\,\phi\,,\,\overline{\operatorname{Cov}(X)\phi}\,\rangle=\mathbb{E}| \langle\,\phi\,,\,\overline{W}\,\rangle|^{2},\]
_for every \(\phi\in\mathcal{X}\) and where \(W=X-\mathbb{E}X\)._
**Definition B.3**.: _Suppose \(X\) is a \(\mathcal{X}^{\prime}\)-valued random object. The characteristic function of \(X\) is a mapping \(J_{X}\colon\mathcal{X}\to\mathbb{C}\) given by_
\[J_{X}(\phi)=\mathbb{E}\exp(i\mathrm{Re}(\langle\,\phi\,,\,\overline{X}\, \rangle)).\]
We can verify that the complex white noise \(Y\) has expectation \(0\in\mathcal{X}^{\prime}\) and that its covariance is \(\operatorname{Cov}(Y)=\frac{1}{2}\iota\), which we will later (with slight abuse of notation) call \(\frac{1}{2}I\) even though it is not an identity in the strict sense, as an identity would preserve the space. We can define the general complex Gaussian object on \(\mathcal{X}^{\prime}\) exactly as before.
**Definition B.4**.: _(Complex Gaussian object) Assume \(X\) is a \(\mathcal{X}^{\prime}\)-valued random object. We say \(X\) is a complex Gaussian random object if it can be expressed in the form_
(B.1) \[X=\mu+AY,\]
_where \(\mu\in\mathcal{X}^{\prime}\), \(A\in L(\mathcal{Y}^{\prime},\mathcal{X}^{\prime})\) and \(Y\) is a \(\mathcal{Y}^{\prime}\)-valued complex white noise._
The main results generalize nearly verbatim, as the following propositions show.
**Proposition B.5**.: _Assume that \(X\) is a \(\mathcal{X}^{\prime}\)-valued complex Gaussian object. Suppose \(A\in L(\mathcal{X}^{\prime},\mathcal{Z}^{\prime})\) and \(\mu\in\mathcal{Z}^{\prime}\). Then \(Z=\mu+AX\) is a \(\mathcal{Z}^{\prime}\)-Gaussian random object with expectation_
\[\mathbb{E}(Z)=\mu+\mathbb{E}(X).\]
_It has covariance_
\[\operatorname{Cov}(Z)=A\operatorname{Cov}(X)A^{\prime},\]
_where \(A^{\prime}=\overline{A}^{*}\in L(\mathcal{Z},\mathcal{X})\)._
**Proposition B.6**.: _The characteristic function of a complex Gaussian \(\mathcal{X}^{\prime}\)-valued random object is_
\[J_{X}(\phi)=\exp(i\mathrm{Re}(\langle\,\phi\,,\,\overline{\mathbb{E}(X)}\, \rangle)-\tfrac{1}{2}\langle\,\phi\,,\,\overline{\operatorname{Cov}(X)\phi}\, \rangle),\]
_and the expectation and the complex covariance fully determine the distribution._
### Connection to radar equation
Let us recall the radar equation (2.2), here written on the unit interval as
\[z^{q}(t)=\int_{0}^{1}\epsilon^{q}(t-r)\,\mu^{q}(dr)+\sqrt{T}\xi^{q}(t).\]
In order to be precise, this should be understood as a cyclic convolution
\[z^{q}=\epsilon_{q}*\mu^{q}+\sqrt{T}\xi^{q},\]
where, _given the covariance structure_ of \(\mu^{q}\), the objects \(z^{q},\mu^{q},\xi^{q}\in\mathcal{X}^{\prime}\) are complex Gaussian \(\mathcal{X}^{\prime}\)-valued random objects and \(\mathcal{X}^{\prime}=\mathscr{D}^{\prime}(\mathbb{T};\mathcal{C})\), the \(\mathbb{T}\) standing for the torus formed out of the interval \([0,1]\).
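A discretization of this cyclic-convolution model is straightforward to simulate; the sketch below (ours; the transmission code, the toy structure function and the noise temperature are assumed placeholders) realizes \(z=\epsilon*\mu+\sqrt{T}\xi\) on a uniform grid of the torus via FFTs:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 256            # grid points on the torus [0, 1)
T = 0.05           # assumed noise temperature

eps = np.zeros(M)
eps[:8] = 1.0 / np.sqrt(8)   # short boxcar transmission code, unit L^2 norm

# Incoherent scatter: independent complex Gaussians shaped by a toy structure function X(r).
X = np.abs(np.sin(np.pi * np.arange(M) / M))
mu = np.sqrt(X) * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
xi = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)

# Cyclic convolution via the FFT, plus thermal noise.
z = np.fft.ifft(np.fft.fft(eps) * np.fft.fft(mu)) + np.sqrt(T) * xi
```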
More precisely, we assume that the covariance of \(\mu^{q}\) is known to be \(X\); then \(\mu^{q}\,|\,X\) is a complex Gaussian \(\mathcal{X}^{\prime}\)-valued random object with zero mean and _random but given_ covariance \(X\). Writing \(A_{q}\eta=\epsilon_{q}*\eta\) we see that, _provided_ the convolution makes sense, \(A_{q}\) is a linear mapping from \(\mathcal{X}^{\prime}\) to \(\mathcal{X}^{\prime}\). Therefore, the conditional characteristic function of \(z^{q}\) is
\[J_{z^{q}\,|\,X}(\phi)=\exp(-\tfrac{1}{2}\langle\,\phi\,,\,\overline{A_{q}XA_{q }^{\prime}\phi}\,\rangle-\frac{T}{2}|\phi|^{2}).\]
Note that this is an extension of the simplified model. In order to proceed, we assume that the covariance operator is parametrized. More specifically,
\[X=X(\sigma^{2})=\phi\mapsto\sum_{j=1}^{N}\sigma_{j}^{2}\langle\,\phi\,,\, \iota\chi_{j}\,\rangle\iota\chi_{j},\]
where \(\{\chi_{j}\}_{j=1}^{N}\) form a periodic, smooth partition of unity normalized in the \(L^{2}\)-sense.
This turns the bilinear form in the characteristic function into a bilinear matrix form. This corresponds to the idea that the autocovariance function is "piecewise constant", with \(\chi_{j}\) acting like a smooth indicator function. We will assume that the set \(\{\chi_{j}\}_{j=1}^{N}\) is known and the parameter vector \(\sigma^{2}=(\sigma_{1}^{2},\ldots,\sigma_{N}^{2})\) is the unknown replacing the full covariance operator \(X\).
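Concretely, a discretization of this parametrization might look as follows (a sketch of ours; the particular bump shape, grid and weights are assumptions, and the resulting partition is only approximate):

```python
import numpy as np

M, N = 512, 8                 # grid points, number of patches
t = np.arange(M) / M

def bump(center, width=1.0 / N):
    """Smooth, compactly supported periodic bump playing the role of chi_j."""
    d = np.minimum(np.abs(t - center), 1.0 - np.abs(t - center))  # periodic distance
    out = np.zeros_like(t)
    inside = d < width
    out[inside] = np.exp(-1.0 / (1.0 - (d[inside] / width) ** 2))
    return out

chi = np.stack([bump((j + 0.5) / N) for j in range(N)])
chi /= np.sqrt((chi ** 2).mean(axis=1, keepdims=True))   # L^2 normalization

sigma2 = np.linspace(0.5, 2.0, N)   # hypothetical parameter vector sigma^2

def X_apply(phi):
    """Discrete action of X(sigma^2): sum_j sigma_j^2 <phi, chi_j> chi_j."""
    coeffs = (chi * phi).mean(axis=1)      # quadrature for <phi, chi_j>
    return (sigma2 * coeffs) @ chi

Xphi = X_apply(np.cos(2 * np.pi * t))      # example application to a test vector
```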
For this special case, the conditional characteristic (given \(\sigma^{2}\)) is
\[J_{z^{q}\,|\,\sigma^{2}}(\phi)=\exp(-\tfrac{1}{2}\langle\,\phi\,,\,\overline{ A_{q}X(\sigma^{2})A_{q}^{\prime}\phi}\,\rangle-\frac{T}{2}|\phi|^{2}).\]
With a straightforward calculation (recalling that \(A_{q}\eta=\epsilon_{q}*\eta\) is understood as a mapping \(\mathcal{X}^{\prime}\to\mathcal{X}^{\prime}\) and its dual as a mapping \(\mathcal{X}\to\mathcal{X}\)), we see that
\[\langle\,\phi\,,\,\overline{A_{q}X(\sigma^{2})A_{q}^{\prime}\phi}\,\rangle= \sum_{j=1}^{N}\sigma_{j}^{2}|\langle\,\phi\,,\,A_{q}\iota\chi_{j}\,\rangle|^{ 2}.\]
Using \(\phi=\phi_{1}\pm\phi_{2}\) and summing up the previous identity implies
\[\langle\,\phi_{1}\,,\,\overline{A_{q}X(\sigma^{2})A_{q}^{\prime}\phi_{2}}\,\rangle=\sum_{j=1}^{N}\sigma_{j}^{2}\langle\,\phi_{1}\,,\,A_{q}\iota\chi_{j}\,\rangle\overline{\langle\,\phi_{2}\,,\,A_{q}\iota\chi_{j}\,\rangle}.\]
Therefore, if we use a finite-dimensional complex Gaussian vector
\[Y_{q}=(z^{q}(\phi_{1}),\ldots,z^{q}(\phi_{M})),\]
as a discrete observation from the measurement device, then
\[J_{Y_{q}\,|\,\sigma^{2}}(\phi)=\exp(i\mathrm{Re}(\mathbb{E}(Y_{q}\,|\,\sigma^{ 2})^{\prime}\phi)-\frac{1}{2}\phi^{\prime}\mathrm{Cov}(Y_{q}\,|\,\sigma^{2}) \phi).\]
Linearity implies that
\[\mathbb{E}(Y_{q}\,|\,\sigma^{2})=0,\]
therefore the entries of the conditional covariance are

\[\operatorname{Cov}(Y_{q}\,|\,\sigma^{2})_{ij}=\mathbb{E}\big{(}\langle\,\phi_{i}\,,\,\overline{z^{q}}\,\rangle\langle\,\overline{\phi_{j}}\,,\,z^{q}\,\rangle\big{)}.\]
Using complex polarization, namely by calculating
\[\mathbb{E}|\langle\,(\phi_{i}+\rho\phi_{j})\,,\,\overline{z^{q}}\,\rangle|^{2 }=\langle\,(\phi_{i}+\rho\phi_{j})\,,\,\overline{(A_{q}X(\sigma^{2})A_{q}^{ \prime}+T)(\phi_{i}+\rho\phi_{j})}\,\rangle,\]
for \(\rho\in\{\,1,-1,i,-i\,\}\) we find that
\[\mathbb{E}(\langle\,\phi_{i}\,,\,\overline{z^{q}}\,\rangle\langle\,\overline{\phi_{j}}\,,\,z^{q}\,\rangle) =\langle\,\phi_{i}\,,\,\overline{(A_{q}X(\sigma^{2})A_{q}^{\prime}+T)\phi_{j}}\,\rangle\] \[=\sum_{k=1}^{N}\sigma_{k}^{2}\langle\,\phi_{i}\,,\,A_{q}\iota\chi_{k}\,\rangle\overline{\langle\,\phi_{j}\,,\,A_{q}\iota\chi_{k}\,\rangle}+T\phi_{j}^{\prime}\phi_{i}\] \[=\sum_{k=1}^{N}\sigma_{k}^{2}\phi_{j}^{\prime}\overline{A_{q}\iota\chi_{k}(\overline{A_{q}\iota\chi_{k}})^{\prime}}\phi_{i}+T\phi_{j}^{\prime}\phi_{i}\] \[=\phi_{j}^{\prime}\big{(}\sum_{k=1}^{N}\sigma_{k}^{2}\overline{A_{q}\iota\chi_{k}(\overline{A_{q}\iota\chi_{k}})^{\prime}}+T\big{)}\phi_{i}.\]
Interpreting this generalized covariance operator as a complex covariance operator of a complex Gaussian vector, the density of \(Y_{q}\,|\,\sigma^{2}\), viewed as a function of \(\sigma^{2}\), is seen to be proportional to an affine transform of the inverse Wishart distribution. The density function of the inverse Wishart distribution \(\mathcal{W}^{-1}(\Psi,\nu)\) is proportional to
\[|\sigma^{2}|^{-(\nu+p+1)/2}\exp(-\frac{1}{2}\mathrm{tr}(\Psi\sigma^{-2})),\]
where \(\Psi\) is a positive definite \(p\times p\) scale matrix and \(\nu>p-1\) is the degrees of freedom. The matrix \(\Psi\) is constructed from the observations (the actions of the measurement on the test functions). If we write this out explicitly we can obtain a representation for the posterior distribution of the covariance of the signal. In the special case of Theorem 3.2, the \(|\sigma|^{2}\) is constant and we can use a special smooth partition of unity obtained from a single \(\chi_{1}\), so that all the others are periodic translates of it, \(\chi_{j}=\tau^{j}(\chi_{1})\), with \(\tau^{j}\) representing the \(j^{\mathrm{th}}\) iterate of the single translate operation, rescaled to correspond to the discretization of the measured signal. Moreover, since translations commute with convolutions, we see that the covariance operator for the discretization is that of the following quadratic form
(B.2) \[\varphi\mapsto\int_{\mathbb{T}}|\,\sigma\,|^{2}\,|\widehat{\epsilon}_{j}\,|^{ 2}(t)|\,\widehat{\varphi}(t)\,|^{2}\mathrm{d}t.\]
where \(\mathbb{T}\) denotes the one-dimensional torus that is isomorphic with the half-open interval \([0,2\pi)\) just as in the equation (2.7) in Definition 2.4.
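For readers who wish to experiment with this distribution family numerically, the unnormalized density quoted above matches SciPy's inverse Wishart implementation up to the normalizing constant (a sketch of ours; \(\Psi\), \(\nu\) and the test point are arbitrary choices):

```python
import numpy as np
from scipy.stats import invwishart

p, nu = 2, 5
Psi = np.array([[2.0, 0.3], [0.3, 1.0]])   # positive definite scale matrix
S = np.array([[1.5, 0.2], [0.2, 0.8]])     # test point (positive definite)

# Unnormalized density |S|^{-(nu+p+1)/2} exp(-tr(Psi S^{-1})/2), as quoted above.
unnorm = (np.linalg.det(S) ** (-(nu + p + 1) / 2)
          * np.exp(-0.5 * np.trace(Psi @ np.linalg.inv(S))))

ratio = invwishart.pdf(S, df=nu, scale=Psi) / unnorm
# `ratio` is the normalizing constant; it does not depend on the test point S.
```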
|
2303.13690 | Picosecond X-ray Imaging of Shockwaves with Non-Rankine-Hugoniot
Behavior | The first-known observation of plasma-induced cavitation bubbles and
expanding shockwaves in liquid during plasma initiation timescales reveals
deviation from expected Rankine-Hugoniot shock behavior due to coupled
shock-cavitation dynamics, imaged using megahertz-framerate picosecond X-ray
imaging. The imaging target features an inexpensive benchtop-scale pulsed
plasma device used to generate well-timed spark discharges in ambient liquid
heptane at an unprecedented repetition rate ($>$3/min) compared with more
commonly used dynamic targets. These shockwaves are relatively weak (Mach
number $\leq$ 1.4) compared with X-ray-imaged shockwaves in prior literature,
advancing the resolution and sensitivity limits of this high-speed imaging
diagnostic. Phase contrast imaging (PCI) has facilitated enhanced quantitative
analysis of the expanding shocks in this work, via comparison to thermodynamic
models and a Fresnel-Kirchhoff diffraction model. | Christopher S. Campbell, Mirza Akhter, Samuel Clark, Kamel Fezzaa, Zhehui Wang, David Staack | 2023-03-23T21:52:45Z | http://arxiv.org/abs/2303.13690v3 | # Ultrafast X-ray Phase Contrast Imaging of High Repetition Rate Shockwaves
###### Abstract
High-repetition-rate plasma-induced shockwaves in liquid have been observed using ultrafast X-ray phase contrast imaging (PCI) for the first time. Using a laser-triggered nanosecond-pulsed plasma device in heptane at ambient conditions, it is demonstrated that these well-timed weak shocks can be generated at an unprecedented repetition rate (\(>\)3 per minute), significantly faster than that of more commonly used dynamic targets (exploding wire, gas gun). This simple portable target can easily be adapted to study discharges in different media (water, oils, solids) at comparably high repetition rates and over a wide range of possible input energies. Compared to previously PCI-imaged shocks, these shocks are relatively weak (1 \(<\) Mach number \(<\) 1.4), which advances the resolution and sensitivity limits of this high-speed imaging diagnostic. Numerical solutions of a Fresnel-Kirchhoff diffraction model are used to estimate post-shock thermodynamic conditions, the results of which show good agreement with expectations based on Rankine-Hugoniot normal shock thermodynamic relations.
In the fields of high-speed X-ray science and synchrotron radiation, the maximum achievable repetition rate of a dynamic target of interest is an important figure of merit when pursuing efficient use of limited beamtime. However, many of the dynamic processes of most interest feature destructible devices, requiring complete or partial reassembly of the target after each imaging event [1; 2; 3], which severely limits the repetition rate. It would therefore be beneficial to develop a target which requires minimal to no maintenance between events, without compromising phenomena of interest such as high instantaneous power density, high mass density gradients, high pressure and temperature gradients, and supersonic behavior/shockwaves. In this letter we present such a target, a pulsed power device submerged in ambient liquid heptane which can produce well-timed nanosecond-pulsed spark discharges. This target was taken to the Advanced Photon Source (APS) for ultrafast phase-contrast imaging (PCI) experiments, the results of which are presented herein.
Of particular interest in this subset of imaging results is the presence of a visible expanding shock front generated by the spark discharge event, which to the best of our knowledge represents one of the weakest shock fronts ever imaged using PCI (Ma \(\approx\) 1.2). While a sufficiently strong shock would be easily visible using less sensitive imaging techniques due to its relatively high mass density ratio, the fact that such a weak shock front is still observable in this work highlights the superior sensitivity of this implementation of PCI to very subtle dynamic phenomena, while still revealing the limits of current techniques and the path forward for the next generation of ultrafast imaging. The high repetition rate (\(>\)3 events/minute), low cost (\(<\)US$100k), and portability of this imaging target make it quite attractive to those fields interested in events of similar timescales and power densities (e.g. ICF, dynamic compression, shock physics), but which rely on apparatuses which are either immovable or have a prohibitively slow event repetition rate. This target has the potential to open new opportunities for such fields to benefit from the superior imaging capabilities of the APS and similar user facilities.
The pulsed power device and high-voltage circuit used in this work to generate the submerged spark discharge event are similar to those used in our prior work [4; 5], with this implementation consisting of two electrodes between which a well-timed submerged spark discharge occurs (Figure 1). The event of interest dissipates approximately 100mJ into nanosecond-timescale plasma processes (light, sound, chemistry, shockwaves) in the target over a pulse duration of 100ns, implying an instantaneous power of roughly 1MW. Assuming an approximate discharge cross section of 5\(\upmu\)m during peak current across a gap of 0.5mm, we estimate a peak energy density of 15GJ/kg, within two orders of magnitude (albeit at a lower instantaneous power) of the 1TJ/kg implied by recent hotspot energy and mass results from the National Ignition Facility [6]. The X-ray imaging method consists of a 128-frame Shimadzu HPV-X2 camera (3\(\upmu\)m/pixel), used to image a scintillator placed 46cm from the imaging target (see Figure 1). This setup is capable of a 6.5MHz X-ray framerate, made possible by the APS's 24-singlet standard operating mode [7].
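The quoted peak energy density follows from simple geometry; here is a back-of-envelope check of ours (the ambient heptane density is an assumed handbook value, not taken from the text):

```python
import numpy as np

E = 0.1        # deposited energy [J] (~100 mJ)
d = 5e-6       # discharge channel diameter [m]
gap = 0.5e-3   # electrode gap [m]
rho = 684.0    # liquid heptane density [kg/m^3] (assumed ambient value)

volume = np.pi * (d / 2) ** 2 * gap
mass = rho * volume
print(f"peak energy density ~ {E / mass / 1e9:.0f} GJ/kg")  # ~15 GJ/kg
```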
See Figure 2 for selected sequential PCI frames from a single spark discharge event. Additionally, see Figure 3 for a compilation of frames from multiple similar events in which the shock is visible in frame, sorted by frame time relative to the instant of plasma initiation. The estimated speed of this shock is shown in Figure 4 to be \(1.45\pm 0.13\)km/s for this dataset, corresponding to a Mach number in ambient heptane (\(v_{\text{sound}}=1.129\)km/s) of \(1.28\pm 0.13\). The transverse profile of these shock images is consistent with expected PCI for a step discontinuity in density.
Also apparent via comparison to this linear trend is the slight negative concavity of the data. This suggests a shock speed which decreases with time, which is consistent with the time-dependent shock speed found by linearly fitting to subsamples of the full dataset (within 100ns of a given instant in time). While at first glance the time dependency from Taylor-von Neumann-Sedov blast wave theory (proportional to \(t^{0.4}\) for spherically expanding shocks [8] and \(t^{0.5}\) for cylindrically expanding shocks [9]) would presumably serve as a physically-grounded model to fit to this data, the plasma-induced shock front imaged here violates two of the main assumptions required by the Taylor-von Neumann-Sedov theory: instantaneous energy input with the shock originating from a zero-radius point or line (compare to Section SM.II), and negligible ambient pressure (\(p_{\text{post-shock}}\gg p_{\text{ambient}}\)). By the time that the shock becomes visible to this diagnostic (earliest measured shock image at 45ns after initiation), the post-shock pressure has decreased drastically, resulting in near-linear position vs. time data. For this analysis it was decided that a phenomenological quadratic fit (green curve on Figure 4) would be appropriate, since it requires the fewest assumptions but still captures this apparent negative concavity.
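The structure of that analysis can be sketched as follows; the radius-time points below are synthetic placeholders (the 85 manual measurements are not reproduced here), so only the procedure, a global quadratic fit plus windowed linear fits, is meaningful.

```python
import numpy as np

# Sketch of the trajectory analysis: global quadratic fit plus local
# linear fits in 100 ns windows to expose a slowly decaying shock speed.
# The data are synthetic stand-ins, not the measured dataset.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(45.0, 450.0, 85))      # frame times, ns
v0, dec = 1.45, 0.0008                         # toy speed (km/s) and decay
r = v0 * t - 0.5 * dec * t**2 + rng.normal(0.0, 5.0, t.size)  # radii, um

quad = np.polyfit(t, r, 2)                     # phenomenological quadratic

for t0 in (100.0, 200.0, 300.0, 400.0):
    sel = np.abs(t - t0) < 100.0               # 100 ns subsample window
    v = np.polyfit(t[sel], r[sel], 1)[0]       # um/ns == km/s
    print(f"t = {t0:3.0f} ns : local v_shock ~ {v:.2f} km/s")

print(f"quadratic-fit speed at 200 ns: "
      f"{np.polyval(np.polyder(quad), 200.0):.2f} km/s")
```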
The radiographic attenuation contrast for such a weak shock is quite low; in our imaging target, the ambient X-ray path consists of a 6mm-thick layer of heptane, the shock has a characteristic size of about 100\(\upmu\)m within the field of view, and the expected density within the post-shock region is approximately 1.2 times that of ambient, implying a maximum possible attenuation contrast of 0.02% in this case, which is well below the level of detectability in this experiment. However, the diffraction-induced contrast enhancement and edge detection features of PCI cause this shock to be visible above background noise as localized maxima and minima in brightness, with the maxima occurring on the side of the discontinuity with lower density, and the minima on the side with higher density. This type of diffraction is the essence of PCI and is governed by the Fresnel-Kirchhoff integral [10], modified for cylindrically symmetric geometries in
Figure 1: Simplified schematic of the target and field of view for X-ray imaging.
Figure 2: Selected frames from a single spark discharge event in heptane for which the plasma-induced shock is visible, with timestamps measured relative to spark initiation. Left and right columns show contrast-enhanced raw images and corresponding background-subtracted frames respectively; the average of all 128 frames from the event was used as the background. Note the location of the shock front visible at t=79ns and t=232ns, denoted by red arrows.
Equation 2 to ease computation:
\[g_{\text{out}}(x^{\prime},y^{\prime}) =\frac{e^{2\pi iz/\lambda}}{i\lambda z}\iint g_{\text{in}}(x,y)e^{ \frac{i\pi}{\lambda z}((x^{\prime}-x)^{2}+(y^{\prime}-y)^{2})}dxdy \tag{1}\] \[=\frac{e^{2\pi iz/\lambda}}{\sqrt{i\lambda z}}\int g_{\text{in}}(x )e^{\frac{i\pi}{\lambda z}(x^{\prime}-x)^{2}}dx \tag{2}\]
where \(g_{\text{in}}\) and \(g_{\text{out}}\) represent the complex-valued electric field at the target and the imaging plane respectively. In this work, \(z=46\)cm, and all complex index of refraction data is from [11]. This model can now be fit to experiment (Figure 5, analyzing the fifth frame from Figure 3), constituting a measurement technique to estimate post-shock density. See Section SM.III for a more complete derivation of Equation 2 and further explanation of how the computational model was implemented, and also refer to the similar model from our prior work [4].
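A minimal numerical sketch of Eq. (2) for a toy object is shown below; the wavelength and the projected phase step are illustrative assumptions rather than experimental values, and the FFT-based evaluation (with its periodic-wraparound caveat) is just one convenient way to compute the convolution.

```python
import numpy as np

# Sketch of the 1-D Fresnel propagator of Eq. (2) for a toy object: a
# weak step in projected phase, mimicking the shock edge. The constant
# global phase exp(2*pi*i*z/lambda) is dropped (irrelevant for intensity).
lam = 5e-11                    # ~25 keV X-rays, m (illustrative)
z = 0.46                       # target-to-scintillator distance, m
n = 8192
x = np.linspace(-200e-6, 200e-6, n, endpoint=False)   # target plane, m
dx = x[1] - x[0]

g_in = np.exp(1j * 0.05 * (x > 0))   # unit amplitude, 0.05 rad phase step

# Fresnel kernel, applied as a circular convolution via FFT; the window
# is kept wide so wraparound stays far from the edge of interest.
kernel = np.exp(1j * np.pi * x**2 / (lam * z)) / np.sqrt(1j * lam * z)
g_out = np.fft.ifft(np.fft.fft(g_in)
                    * np.fft.fft(np.fft.ifftshift(kernel))) * dx

I = np.abs(g_out) ** 2               # detected intensity (background ~ 1)
edge = I[n // 2 - 300 : n // 2 + 300]
print(f"fringe max/min near the edge: {edge.max():.4f} / {edge.min():.4f}")
```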
Alongside this X-ray diffraction method for estimating the density change across the shock from experimental data, it is also possible to relate \(\rho_{2}\) and \(v_{\text{shock}}\) by solving the system of Rankine-Hugoniot thermodynamic shock relations in heptane: \(\rho_{1}v_{1}=\rho_{2}v_{2}\) (mass), \(p_{1}+\rho_{1}v_{1}^{2}=p_{2}+\rho_{2}v_{2}^{2}\) (momentum), \(h_{1}+\frac{1}{2}v_{1}^{2}=h_{2}+\frac{1}{2}v_{2}^{2}\)
Figure 4: Plot of cylindrical shock positions/radii, relative to the axis of the plasma, compiling data from eighty-five different PCI frames for which the shock was visible (Figure 3), some of which came from more than one frame of a single event. Shock position measurement was performed manually for each of the 85 frames in which a shock was visible. The twenty measurements for each frame were then used to determine uncertainty (two standard deviations away from the average). The solid line (red) shows the least-squares linear fit to these points (1.45 km/s), with the two red dotted lines assuming shock speeds of 90% and 110% of that linear fit to roughly illustrate the inherent shock speed uncertainty. Blue circles indicate shock speed sub-estimates calculated by fitting to subsamples of the full set (within 100ns of that sub-estimate’s position on the horizontal axis). The green curve shows a quadratic fit to the position vs. time data, used for later analysis.
Figure 3: Frames from selected PCI heptane spark events in which a shock front was visible, sorted by frame time relative to spark initiation. Each frame is duplicated across both columns; the right column includes annotations which indicate the contour of the shock using red splines.
(energy), and a tabulated equation of state for heptane [12]. See Section SM.I for a derivation of this system of equations. Implicitly solving this system results in a unique relationship between \(\rho_{2}\) and \(v_{\text{shock}}\), shown in Figure 6 with a red curve. However, measured values for \(\rho_{2}\) and \(v_{\text{shock}}\) (from the diffraction model and quadratic fit to Figure 4) tend to reside above this red curve, suggesting a higher-density post-shock region than what is implied by this thermodynamic model. We attribute this to the fact that the thermodynamic model ignores the cavitation bubble which follows the expanding shock, clearly visible in Figures 2 and 3. This bubble interface is generated at a relatively small radius (\(<10\upmu\)m) and high energy density, and expands outward with a large amount of inertia, compressing the post-shock region to a higher density. This is shown with green asterisks in Figure 6, generated by assuming that this compression has no effect on the speed of the shock; although this still does not result in a model that sufficiently matches experiment, it represents the maximum post-shock density \(\rho_{2}\) which could be justified by the data. The original thermodynamic model (red curve) represents the minimum bound on \(\rho_{2}\), since it completely neglects this compression effect. In reality, the higher \(\rho_{2}\) caused by this compression effect should increase the shock speed, though the full nature of this effect is not explored here; to fully investigate this effect, a multiphysics model would be necessary which couples together the dynamics of the cavitation bubble (Rayleigh-Plesset), the liquid post-shock region (Navier-Stokes), and the jump conditions which govern the shock (Rankine-Hugoniot).
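The structure of this implicit solve can be sketched as follows. The tabulated heptane equation of state of Ref. [12] is not reproduced here, so a stiffened-gas closure tuned to heptane's ambient density and sound speed stands in for it; the parameters \(\Gamma\) and \(p_{\infty}\) below are assumptions for illustration only.

```python
from scipy.optimize import fsolve

# Sketch: Rankine-Hugoniot jump conditions for a given shock speed.
# A stiffened-gas EOS is a stand-in for the tabulated heptane EOS;
# Gamma and p_inf are illustrative assumptions, tuned so the ambient
# sound speed c^2 = Gamma (p + p_inf) / rho matches heptane.
rho1, p1, c1 = 680.0, 1.013e5, 1129.0      # ambient heptane state
Gamma = 7.0
p_inf = rho1 * c1**2 / Gamma - p1

def enthalpy(p, rho):
    # stiffened gas: h = Gamma (p + p_inf) / ((Gamma - 1) rho)
    return Gamma * (p + p_inf) / ((Gamma - 1.0) * rho)

def residuals(unknowns, v_shock):
    rho2, p2, v2 = unknowns                # post-shock state, shock frame
    v1 = v_shock                           # upstream speed in shock frame
    return [rho1 * v1 - rho2 * v2,                            # mass
            p1 + rho1 * v1**2 - (p2 + rho2 * v2**2),          # momentum
            enthalpy(p1, rho1) + 0.5 * v1**2
            - (enthalpy(p2, rho2) + 0.5 * v2**2)]             # energy

v_shock = 1450.0                           # m/s, from the quadratic fit
rho2, p2, v2 = fsolve(residuals, x0=[1.1 * rho1, 1e8, 0.9 * v_shock],
                      args=(v_shock,))
print(f"rho2/rho1 = {rho2 / rho1:.3f}, p2 = {p2 / 1e6:.0f} MPa")
```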
The similarity of this work to the submerged exploding wire PCI experiments by Yanuka [13] at the European Synchrotron Radiation Facility (ESRF) allows us to directly compare figures of merit for the two different implementations of PCI and of cylindrically expanding shock events, the most dramatic differences being peak instantaneous power and total event energy; for Yanuka, these values were approximately 1GW and 300J, respectively (recall 1MW and 100mJ for this work). The fact that the shock front images presented in this work are still visible with such a small event energy demonstrates the superior sensitivity and utility of PCI as a diagnostic for observing and analyzing propagating shocks.
In summary, we present successful observation of weak shocks in liquid heptane; this work constitutes a feat in imaging sensitivity and resolution and serves as strong evidence that further utilization of PCI for shock imaging in different media and phases of matter is likely to prove fruitful. The plasma-based method of shock generation used here exhibits an order of magnitude increase in repetition rate (multiple events per minute) over conventional dynamic targets in similar experiments (e.g. exploding wire), and can easily be increased to well over 1Hz given a sufficient data acquisition scheme. The ability to quickly generate large datasets is potentially useful for machine learning applications; the eighty-five shock fronts cataloged in Figure 4 could conceivably be used to train a deep learning model which could then rapidly
Figure 5: Illustration of a cutline extraction algorithm which uses a spline fit to convert the two-dimensional X-ray frame of the shock front (a) into a one-dimensional plot (b). The image analyzed here corresponds to the fifth image from Figure 3 (t=234ns). The black solid line in (b) represents the average cutline (dotted lines show upper and lower quartiles), and the red line shows the best simulated shock front PCI profile with a post-shock density of \(\rho_{2}=0.799^{+0.116}_{-0.0578}\) g/cc (\(\rho_{2}/\rho_{1}=1.176^{+0.171}_{-0.084}\)).
Figure 6: Hugoniot states in the \(\rho_{2}\)–\(v_{\text{shock}}\) space, showing how values estimated from this work’s PCI data and diffraction model (black) compare to normal shock thermodynamic relations in heptane both with (green) and without (red) the cavitation bubble compression effect. The dashed portion of the red curve indicates extrapolation of the heptane equation of state. The blue datapoint corresponds with the particular X-ray diffraction model fit result from Figure 5.
compute shock parameters such as position and density. A wide range of possible parameter sweeps is achievable using this target, either by changing the input energy via choice of charging capacitor or by simply taking advantage of the stochasticity of the phenomena of interest to automatically vary parameters such as imaging delay time, shock shape, or breakdown voltage. By exchanging the heptane for alternate discharge media (e.g. water, mineral oil, ice, plastics, rock), shock propagation can be studied in these materials without compromising repetition rate. Future work will push the PCI sensitivity limit further, while at the same time continuing to develop a quantitative analysis toolkit for shock imaging as well as PCI in general.
###### Acknowledgements.
The authors would like to acknowledge the support of the U.S. DOE NNSA. Los Alamos National Laboratory is managed by Triad National Security, LLC for the U.S. Department of Energy's NNSA. This document has been approved for release under No. LA-UR-23-22175. This research used resources of the Advanced Photon Source, a U.S. Department of Energy (DOE) Office of Science user facility operated for the DOE Office of Science by Argonne National Laboratory under Contract No. DE-AC02-06CH11357.
|
2307.01763 | On evolution kernels of twist-two operators | The evolution kernels that govern the scale dependence of the generalized
parton distributions are invariant under transformations of the
$\mathrm{SL}(2,\mathrm R)$ collinear subgroup of the conformal group. Beyond
one loop the symmetry generators, due to quantum effects, differ from the
canonical ones. We construct the transformation which brings the {\it full}
symmetry generators back to their canonical form and show that the eigenvalues
(anomalous dimensions) of the new, canonically invariant, evolution kernel
coincide with the so-called parity respecting anomalous dimensions. We develop
an efficient method that allows one to restore an invariant kernel from the
corresponding anomalous dimensions. As an example, the explicit expressions for
NNLO invariant kernels for the twist two flavor-nonsinglet operators in QCD and
for the planar part of the universal anomalous dimension in $\mathcal{N}=4$ SYM are
presented. | Yao Ji, Alexander Manashov, Sven-Olaf Moch | 2023-07-04T15:05:28Z | http://arxiv.org/abs/2307.01763v2 | # On evolution kernels of twist-two operators
###### Abstract
The evolution kernels that govern the scale dependence of the generalized parton distributions are invariant under transformations of the \(\mathrm{SL}(2,\mathrm{R})\) collinear subgroup of the conformal group. Beyond one loop the symmetry generators, due to quantum effects, differ from the canonical ones. We construct the transformation which brings the _full_ symmetry generators back to their canonical form and show that the eigenvalues (anomalous dimensions) of the new, canonically invariant, evolution kernel coincide with the so-called parity respecting anomalous dimensions. We develop an efficient method that allows one to restore an invariant kernel from the corresponding anomalous dimensions. As an example, the explicit expressions for NNLO invariant kernels for the twist two flavor-nonsinglet operators in QCD and for the planar part of the universal anomalous dimension in \(\mathcal{N}=4\) SYM are presented.
evolution kernels, DVCS, conformal symmetry, generalized parton distribution +
Footnote †: preprint: TUM–HEP–1461/23, MPP–2023–137, DESY–23–091
## I Introduction
The study of deeply-virtual Compton scattering (DVCS) gives one access to the generalized parton distributions [1; 2; 3] (GPDs) that encode the information on the transverse position of quarks and gluons in the proton in dependence on their longitudinal momentum. In order to extract the GPDs from experimental data one has to know, among other things, their scale dependence. The latter is governed by the renormalization group equations (RGEs) or, equivalently, evolution equations for the corresponding twist two operators. Essentially the same equations govern the scale dependence of the ordinary parton distribution functions (PDFs) in the Deep Inelastic Scattering (DIS) process. In DIS one is interested in the scale dependence of forward matrix elements of the local twist-2 operators and therefore can neglect the operator mixing problem between local operators from the operator product expansion (OPE). In the nonsinglet sector, there is only one operator for a given spin/dimension. The anomalous dimensions of such operators are known currently with three-loop accuracy [4; 5] and first results at four loops are becoming available [6; 7]. In contrast, the DVCS process corresponds to non-zero momentum transfer from the initial to the final state and, as a consequence, the total derivatives of the local twist-two operators have to be taken into consideration. All these operators mix under renormalization and the RGE has a matrix form. The DIS anomalous dimensions appear as the diagonal entries of the anomalous dimension matrix, which, in general, has a triangular form.
It was shown by Dieter Müller [8; 9] that the off-diagonal part of the anomalous dimension matrix is completely determined by a special object, the so-called conformal anomaly. Moreover, in order to determine the off-diagonal part of the anomalous dimension matrix with \(\ell\)-loop accuracy, it is enough to calculate the conformal anomaly at one loop less. This technique was used to reconstruct all relevant evolution kernels/anomalous dimension matrices in QCD at two loops [10; 11; 12].
A similar approach, but based on the analysis of QCD at the critical point in non-integer dimensions, was developed in refs. [13; 14; 15]. It was shown that the evolution kernels in \(d=4\) in the \(\overline{\mathrm{MS}}\)-like renormalization scheme inherit the symmetries of the critical theory in \(d=4-2\epsilon\) dimensions. As expected, the symmetry generators deviate from their canonical form. Corrections to the generators have a rather simple form if they are written in terms of the evolution kernel and the conformal anomaly. It was shown in ref. [16] that by changing a renormalization scheme one can get rid of the conformal anomaly term in the generators bringing them into the so-called "minimal" form. Beyond computing the evolution kernels, the conformal approach has also been employed to calculate the NNLO coefficient (hard) functions of vector and axial-vector contributions in DVCS [17; 18], the latter in agreement with a direct Feynman diagram calculation [19]. Moreover, the conformal technique is also applicable to computing kinematic higher-power corrections in two-photon processes as was recently shown in refs. [20; 21].
In this paper we construct a similarity transformation that brings the full quantum generators back to the canonical form. Correspondingly, the transformed evolution kernel is invariant under the canonical \(\mathrm{SL}(2,\mathrm{R})\) transformation. Moreover, we will show that the eigenvalues of this kernel are given by the so-called parity respecting anomalous dimension, \(f(N)\)[22; 23], which is related to the PDF anomalous dimension spectrum \(\gamma(N)\) as
\[\gamma(N)=f\left(N+\bar{\beta}(a)+\frac{1}{2}\gamma(N)\right), \tag{1}\]
where \(\bar{\beta}(a)=\beta(a)/a\) with \(\beta(a)\) being the QCD beta function. The strong coupling \(\alpha_{s}\) is normalized as \(a=\alpha_{s}/(4\pi)\). We develop an effective approach to restore the canonically invariant kernel from its eigenvalues \(\gamma(N)\). As an example, we present explicit expressions for three-loop invariant kernels in QCD and \(\mathcal{N}=4\) supersymmetric Yang-Mills (SYM) theory. The answers are given by linear combinations of harmonic polylogarithms [24], up to weight four in QCD and up to weight three in \(\mathcal{N}=4\) SYM. We also compare our exact result with the approximate expression for the three-loop kernels in QCD given in ref. [16].
The paper is organized as follows: in section II we describe the general structure of the evolution kernels of twist-two operators. In section III we explain how to effectively recover the evolution kernel from the known anomalous dimensions and present our results for the invariant kernels in QCD and \(\mathcal{N}=4\) SYM. Sect. IV contains the concluding remarks. Some technical details are given in the Appendices.
## II Kernels & Symmetries
We are interested in the scale dependence of the twist-two light-ray flavor nonsinglet operator [25]
\[\mathcal{O}(z_{1},z_{2})=[\bar{q}(z_{1}n)\gamma_{+}[z_{1}n,z_{2}n]q(z_{2}n)]_ {\overline{\rm MS}}, \tag{2}\]
where \(n^{\mu}\) is an auxiliary light-like vector, \(n^{2}=0\), \(z_{1,2}\) are real numbers, \(\gamma_{+}=n^{\mu}\gamma_{\mu}\) and \([z_{1}n,z_{2}n]\) stands for the Wilson line ensuring gauge invariance, and the subscript \(\overline{\rm MS}\) denotes the renormalization scheme. This operator can be viewed as the generating function for local operators, \(\mathcal{O}^{\mu_{1}\ldots\mu_{N}}\) that are symmetric and traceless in all Lorentz indices \(\mu_{1}\ldots\mu_{N}\).
The renormalized light-ray operator (2) satisfies the RGE
\[\Big{(}\mu\partial_{\mu}+\beta(a)\partial_{a}+\mathbb{H}(a)\Big{)}\mathcal{O} (z_{1},z_{2})=0, \tag{3}\]
where \(\beta(a)\) is the \(d\)-dimensional beta function
\[\beta(a)=-2a\big{(}\epsilon+\beta_{0}a+\beta_{1}a^{2}+O(a^{3})\big{)}, \tag{4}\]
\(\beta_{0}=\frac{11}{3}N_{c}-\frac{2}{3}n_{f}\), etc., and \(\mathbb{H}(a)=a\mathbb{H}_{1}+a^{2}\mathbb{H}_{2}+\ldots\) is an integral operator in \(z_{1},z_{2}\).
It follows from the invariance of the classical QCD Lagrangian under conformal transformations that the one-loop kernel \(\mathbb{H}_{1}\) commutes with the _canonical_ generators of the collinear conformal subgroup, \(S_{0},S_{\pm}\),
\[S_{-}=-\partial_{z_{1}}-\partial_{z_{2}}\,,\] \[S_{0}=z_{1}\partial_{z_{1}}+z_{2}\partial_{z_{2}}+2\,,\] \[S_{+}=z_{1}^{2}\partial_{z_{1}}+z_{2}^{2}\partial_{z_{2}}+2z_{1} +2z_{2}\,. \tag{5}\]
This symmetry is preserved beyond one loop, albeit two of the generators, \(S_{0}\) and \(S_{+}\), receive quantum corrections, \(S_{\alpha}\mapsto\widetilde{S}_{\alpha}(a)=S_{\alpha}+\Delta S_{\alpha}(a)\). The explicit form of these corrections can be found in ref. [15].
It is quite useful to bring the generators to the following form using the similarity transformation [16],
\[\mathbb{H}(a)=e^{-X(a)}\mathrm{H}(a)e^{X(a)}\,,\] \[\widetilde{S}_{\alpha}(a)=e^{-X(a)}\mathrm{S}_{\alpha}(a)e^{X(a)}\,, \tag{6}\]
where \(X(a)=aX_{1}+a^{2}X_{2}+\ldots\) is an integral operator known up to terms of \(O(a^{3})\)[11; 16]. This transformation can be thought of as a change in a renormalization scheme.
The shift operator \(\mathrm{S}_{-}\) is not modified and hence identical to \(S_{-}\) in Eq. (5), and the quantum corrections to \(\mathrm{S}_{0}\) and \(\mathrm{S}_{+}\) come only through the evolution kernel
\[\mathrm{S}_{0}(a)=S_{0}+\bar{\beta}(a)+\frac{1}{2}\mathrm{H}(a)\,, \tag{7a}\] \[\mathrm{S}_{+}(a)=S_{+}+(z_{1}+z_{2})\left(\bar{\beta}(a)+\frac{1 }{2}\mathrm{H}(a)\right)\,, \tag{7b}\]
where \(\bar{\beta}(a)=\beta_{0}a+\beta_{1}a^{2}+\cdots\) is the beta function in four dimensions, cf. Eq. (1). The form of the generator \(\mathrm{S}_{0}(a)\) is completely fixed by the scale invariance of the theory, while Eq. (7b) is the "minimal" ansatz consistent with the commutation relation \([\mathrm{S}_{+},\mathrm{S}_{-}]=2\mathrm{S}_{0}\). Since the operator \(\mathrm{H}(a)\) commutes with the generators, \([\mathrm{H}(a),\mathrm{S}_{\alpha}(a)]=0\), its form is completely determined by its spectrum (anomalous dimensions). However, since the generators do not have the simple form of Eq. (5), one still needs a way to recover the operator from its spectrum.
To this end we construct a transformation which brings the generators \(\mathrm{S}_{\alpha}(a)\) to the canonical form \(S_{\alpha}\), Eq. (5). Let us define an operator \(\mathrm{T}(\mathrm{H})\):
\[\mathrm{T}(\mathrm{H})=\sum_{n=0}^{\infty}\frac{1}{n!}\mathrm{L}^{n}\left(\bar {\beta}(a)+\frac{1}{2}\mathrm{H}(a)\right)^{n}\,, \tag{8}\]
where \(\mathrm{L}=\ln z_{12}\), \(z_{12}\equiv z_{1}-z_{2}\). Recall that \(z_{1},z_{2}\) are real variables, so for \(z_{12}<0\) it is necessary to choose a specific branch of the logarithm function. Although this choice is irrelevant for further analysis we chose the \(+i0\) recipe for concreteness, i.e., \(\mathrm{L}=\ln(z_{12}+i0)\). It can be shown that the operator \(\mathrm{T}(\mathrm{H})\) intertwines the symmetry generators \(\mathrm{S}_{\alpha}(a)\) and the canonical generators, \(S_{\alpha}\). Namely,
\[\mathrm{T}(\mathrm{H})\,\mathrm{S}_{\alpha}(a)=S_{\alpha}\,\mathrm{T}(\mathrm{H }), \tag{9}\]
see Appendix A for details. Let us also define a new kernel \(\widehat{\mathrm{H}}\) as
\[\mathrm{T}(\mathrm{H})\,\mathrm{H}(a)=\widehat{\mathrm{H}}(a)\,\mathrm{T}( \mathrm{H}). \tag{10}\]
It follows from Eqs. (9), (10) that the operator \(\widehat{\mathrm{H}}\) commutes with the canonical generators in Eq. (5)
\[[S_{\alpha},\widehat{\mathrm{H}}(a)]=0. \tag{11}\]
The problem of restoring a canonically invariant operator \(\widehat{\mathrm{H}}(a)\) from its spectrum is much easier than that for the operator \(\mathrm{H}(a)\) and will be discussed in the next section. It can be shown that the inverse of \(\mathrm{T}(\mathrm{H})\) takes the form
\[\mathrm{T}^{-1}(\mathrm{H})=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\mathrm{L}^{n} \left(\bar{\beta}(a)+\frac{1}{2}\widehat{\mathrm{H}}(a)\right)^{n}\,, \tag{12}\]
see Appendix A. Further, it follows from Eq. (10) that
\[\mathrm{H}(a) =\mathrm{T}^{-1}(\mathrm{H})\,\widehat{\mathrm{H}}(a)\,\mathrm{T }(\mathrm{H})\] \[=\widehat{\mathrm{H}}(a)+\sum_{n=1}^{\infty}\frac{1}{n!}\mathrm{T }_{n}(a)\left(\bar{\beta}(a)+\frac{1}{2}\mathrm{H}(a)\right)^{n}\,. \tag{13}\]
The operators \(\mathrm{T}_{n}\) are defined by the recursion
\[\mathrm{T}_{n}(a)=[\mathrm{T}_{n-1}(a),\mathrm{L}] \tag{14}\]
with the boundary condition \(\mathrm{T}_{0}(a)=\widehat{\mathrm{H}}(a)\). The \(n\)-th term in the sum in Eq. (13) is of order \(\mathcal{O}(a^{n+1})\) so that one can easily work out an approximation for \(\mathrm{H}(a)\) with arbitrary precision, e.g.,
\[\mathrm{H}(a) =\widehat{\mathrm{H}}(a)+\mathrm{T}_{1}(a)\left(1+\frac{1}{2} \mathrm{T}_{1}(a)\right)\left(\bar{\beta}(a)+\frac{1}{2}\widehat{\mathrm{H}}( a)\right)\] \[+\frac{1}{2}\mathrm{T}_{2}(a)\left(\bar{\beta}(a)+\frac{1}{2} \widehat{\mathrm{H}}(a)\right)^{2}+\mathcal{O}(a^{4})\,. \tag{15}\]
It can be checked that this expression coincides with that obtained in ref. [16, Eq. (3.9)] 1.
Footnote 1: The notations adopted here and in ref. [16] differ slightly. To facilitate a comparison we note that the operators \(\mathrm{T}_{n}\) defined here satisfy the equation \([S_{+},\mathrm{T}_{n}]=n[\mathrm{T}_{n-1},z_{1}+z_{2}]\).
The evolution kernel \(\widehat{\mathrm{H}}(a)\) can be realized as an integral operator. It acts on a function of two real variables as follows
\[\widehat{\mathrm{H}}(a)\,f(z_{1},z_{2})=\!Af(z_{1},z_{2})+\!\int_{+}h(\tau)f(z _{12}^{\alpha},z_{21}^{\beta}), \tag{16}\]
where \(A\) is a constant, \(z_{12}^{\alpha}\equiv z_{1}\bar{\alpha}+z_{2}\alpha\), \(\bar{\alpha}\equiv 1-\alpha\), and
\[\int_{+}\equiv\int_{0}^{1}d\alpha\int_{0}^{\bar{\alpha}}d\beta. \tag{17}\]
\(\tau=\alpha\beta/\bar{\alpha}\bar{\beta}\) is called the conformal ratio. The weight function \(h(\tau)\) in Eq. (16) depends only on this particular combination of the variables \(\alpha,\beta\) as a consequence of the invariance property of \(\widehat{\mathrm{H}}\), Eq. (11).
It is easy to find that the operators \(\mathrm{T}_{n}\) take the form
\[\mathrm{T}_{n}(a)f(z_{1},z_{2})=\!\int_{+}\ln^{n}(1-\!\alpha-\!\beta)h(\tau)f(z _{12}^{\alpha},z_{21}^{\beta})\,, \tag{18}\]
which again agrees with the results of ref. [16]. Note that this expression does not depend on the choice of the branch of the logarithm defining the function \(\mathrm{L}=\ln z_{12}\) in Eq. (8); see Appendix A for more discussion.
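For \(n\geq 1\), Eq. (18) has a simple spectral meaning on the power functions \(z_{12}^{N-1}\) used in the next section: each factor of \(\ln(1-\alpha-\beta)\) differentiates the eigenvalue of the kernel \(h\) once with respect to \(N\), since \(\partial_{N}(1-\alpha-\beta)^{N-1}=\ln(1-\alpha-\beta)\,(1-\alpha-\beta)^{N-1}\). This observation (ours, not spelled out in the text) is easy to spot-check numerically for the simplest weight \(h=1\), whose eigenvalue under Eq. (28) below is \(1/(N(N+1))\):

```python
import numpy as np
from scipy.integrate import dblquad

# Spot check: with h = 1, the n = 1 insertion of ln(1-alpha-beta) in
# Eq. (18) must reproduce d/dN of the eigenvalue 1/(N(N+1)).
def t1_eigenvalue(N):
    val, _ = dblquad(
        lambda beta, alpha:
            np.log(1.0 - alpha - beta) * (1.0 - alpha - beta) ** (N - 1),
        0, 1, lambda a: 0.0, lambda a: 1.0 - a)
    return val

for N in (1.0, 2.0, 3.0):
    exact = -(2 * N + 1) / (N * (N + 1)) ** 2   # d/dN [1/(N(N+1))]
    print(N, t1_eigenvalue(N), exact)
```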
## III Anomalous dimensions vs kernels
First of all, let us establish a connection between the eigenvalues of the operators \(\mathrm{H}\) and \(\widehat{\mathrm{H}}\). Since both of them are integral operators of the functional form in Eqs. (16), (18), both operators are diagonalized by functions of the form \(\psi_{N}(z_{1},z_{2})=(z_{1}-z_{2})^{N-1}\), where \(N\) is an arbitrary complex number. One may worry that the continuation of the function \(\psi_{N}\) to negative \(z_{12}\) is not unique and requires special care, but this does not matter for our analysis. Indeed, \(z_{12}^{\alpha}-z_{21}^{\beta}=(1-\alpha-\beta)z_{12}\) with \(\alpha+\beta<1\); therefore the operators do not mix the regions \(z_{12}\gtrless 0\). For definiteness, let us suppose that
\[\psi_{N}(z_{1},z_{2})=\theta(z_{12})z_{12}^{N-1}. \tag{19}\]
Let \(\gamma(N),\widehat{\gamma}(N)\) be eigenvalues (anomalous dimensions) of the operators \(\mathrm{H}\), \(\widehat{\mathrm{H}}\) corresponding to the function \(\psi_{N}\), respectively,
\[\mathrm{H}(a)\psi_{N} =\gamma(N)\psi_{N}, \tag{20}\] \[\widehat{\mathrm{H}}(a)\psi_{N} =\widehat{\gamma}(N)\psi_{N}. \tag{21}\]
The anomalous dimensions \(\gamma(N),\widehat{\gamma}(N)\) are analytic functions of \(N\) in the right complex half-plane, \(\mathrm{Re}(N)>0\). For integer even (odd) \(N\), \(\gamma(N)\) gives the anomalous dimensions of the local (axial-)vector operators 2.
Footnote 2: As usual, one has to consider the operators of definite parity, \(\mathcal{O}_{\pm}(z_{1},z_{2})=\mathcal{O}(z_{1},z_{2})\mp\mathcal{O}(z_{2},z_{1})\); then the functions \(\gamma_{\pm}(N)\) give the anomalous dimensions of local operators for even and odd \(N\), respectively.
Now let us note that the operator \(\mathrm{T}(\mathrm{H})\) acts on \(\psi_{N}\) as follows
\[\mathrm{T}(\mathrm{H})\psi_{N}(z_{1},z_{2}) =\sum_{n=0}^{\infty}\frac{\mathrm{L}^{n}}{n!}\left(\bar{\beta}(a )+\frac{1}{2}\gamma(N)\right)^{n}\!\psi_{N}(z_{1},z_{2})\] \[=z_{12}^{\bar{\beta}(a)+\frac{1}{2}\gamma(N)}\psi_{N}(z_{1},z_{2})\] \[=\psi_{N+\bar{\beta}+\frac{1}{2}\gamma(N)}(z_{1},z_{2}). \tag{22}\]
Thus, it follows from Eq. (13) that the anomalous dimensions \(\gamma(N)\) and \(\widehat{\gamma}(N)\) satisfy the relation (cf. also Eq. (1))
\[\gamma(N)=\widehat{\gamma}\left(N+\bar{\beta}(a)+\frac{1}{2}\gamma(N)\right). \tag{23}\]
This relation appeared first in refs. [22; 23] as a generalization of the Gribov-Lipatov reciprocity relation [26; 27]. It was shown that the asymptotic expansion of the function \(\widehat{\gamma}(N)\) for large \(N\) is invariant under the reflection \(N\to-N-1\), see e.g., refs. [28; 29; 30; 22]. This property strongly restricts the harmonic sums which can appear in the perturbative expansion of the anomalous dimension \(\widehat{\gamma}(N)\)[29]. Explicit expressions for \(\widehat{\gamma}(N)\) are known at four loops in QCD [6] and at seven loops in \(\mathcal{N}=4\) SYM, see refs. [31; 32; 33; 34; 29].
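Since the shift of the argument in Eq. (23) is \(O(a)\), the relation can be solved for \(\gamma(N)\) by a simple fixed-point iteration. The sketch below illustrates this; the input \(\widehat{\gamma}(N)\) is a toy profile with the correct \(\ln N\) growth (the numerical coefficients are illustrative, not the QCD values).

```python
from mpmath import mp, digamma, euler

mp.dps = 30

# Sketch: invert Eq. (23), gamma(N) = ghat(N + beta_bar + gamma(N)/2),
# by fixed-point iteration. ghat is a toy cusp-plus-constant profile.
a = 0.02                     # coupling a = alpha_s/(4 pi), illustrative
Gamma_cusp = 4.0 * a         # toy cusp anomalous dimension
A = -3.0 * a                 # toy constant term
beta_bar = 9.0 * a           # toy beta_0 * a

def S1(N):
    return digamma(N + 1) + euler      # S_1(N) = psi(N+1) - psi(1)

def ghat(N):
    return 2 * Gamma_cusp * S1(N) + A

def gamma(N, iters=50):
    g = ghat(N)                        # zeroth iterate
    for _ in range(iters):             # contraction: correction is O(a)
        g = ghat(N + beta_bar + g / 2)
    return g

for N in (2, 4, 8):
    print(N, gamma(N))
```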
### Kernels from anomalous dimensions
For large \(N\) the anomalous dimension \(\widehat{\gamma}(N)\) grows as \(\ln N\). This term enters with a coefficient \(2\Gamma_{\rm cusp}(a)\) where \(\Gamma_{\rm cusp}(a)\) is the so-called cusp anomalous dimension [35; 36] known to the four-loop order in QCD [37; 38] and in \({\cal N}=4\) SYM [37]. Thus, we write \(\widehat{\gamma}(N)\) in the following form
\[\widehat{\gamma}(N)=2\Gamma_{\rm cusp}(a)S_{1}(N)+A(a)+\Delta \widehat{\gamma}(N)\,, \tag{24}\]
where \(S_{1}(N)=\psi(N+1)-\psi(1)\) is the harmonic sum responsible for the \(\ln N\) behavior at large \(N\), and \(A(a)\) is a constant term. The remaining term, \(\Delta\widehat{\gamma}(N)\), vanishes at least as \(O(1/(N(N+1)))\) at large \(N\). The constant \(A(a)\) is exactly the same as the one appearing in Eq. (16). The first term in Eq. (24) comes from a special \({\rm SL}(2,\mathbb{R})\) invariant kernel
\[\widehat{\cal H}f=\int_{0}^{1}\frac{d\alpha}{\alpha}\Big{\{}2f(z_ {1},z_{2})-\bar{\alpha}\big{(}f(z_{12}^{\alpha},z_{2})+f(z_{1},z_{21}^{\alpha} )\big{)}\Big{\}}, \tag{25}\]
which in momentum space gives rise to the so-called plus-distribution. The eigenvalues of this kernel are \(2S_{1}(N)\) (\(\widehat{\cal H}z_{12}^{N-1}=2S_{1}(N)z_{12}^{N-1}\)). It corresponds to a singular contribution of the form \(-\delta_{+}(\tau)\) to the invariant kernel \(h(\tau)\); see ref. [16, Eq. (2.19)] for details. Thus the evolution kernel can generally be written as
\[\widehat{\rm H}=\Gamma_{\rm cusp}(a)\widehat{\cal H}+A(a)+\Delta \widehat{\rm H}\,. \tag{26}\]
Here \(\Delta\widehat{\rm H}\) is an integral operator,
\[\Delta\widehat{\rm H}f(z_{1},z_{2})=\int_{+}h(\tau)f(z_{12}^{ \alpha},z_{21}^{\beta})\,, \tag{27}\]
where the weight function \(h(\tau)\) is a regular function of \(\tau\in(0,1)\). The eigenvalues of \(\Delta\widehat{\rm H}\) are equal to \(\Delta\widehat{\gamma}(N)\) and are given by the following integral
\[\Delta\widehat{\gamma}(N)=\int_{+}h(\tau)(1-\alpha-\beta)^{N-1}\,. \tag{28}\]
The inverse transformation takes the form [14]
\[h(\tau)=\int_{C}\frac{dN}{2\pi i}(2N+1)\Delta\widehat{\gamma}(N)P_{N}\left( \frac{1+\tau}{1-\tau}\right), \tag{29}\]
where \(P_{N}\) are the Legendre functions. The integration path \(C\) goes along a line parallel to the imaginary axis, \({\rm Re}(N)>0\), such that all poles of \(\Delta\widehat{\gamma}(N)\) lie to the left of this line. Some details of the derivation can be found in Appendix B.
One can hardly hope to evaluate the integral (29) in a closed form for an arbitrary function \(\Delta\widehat{\gamma}(N)\). However, as was mentioned before, the anomalous dimensions \(\Delta\widehat{\gamma}(N)\) in quantum field theory are rather special functions. Most of the terms in the perturbative expansion of \(\Delta\widehat{\gamma}(N)\) have the following form
\[\eta^{k}(N)\,\Omega_{\vec{m}}(N),\qquad\qquad\eta^{k}(N)\,\Omega _{1}^{p}(N) \tag{30}\]
where \(\eta(N)=1/(N(N+1))\), and the functions \(\Omega_{\vec{m}}(N)=\Omega_{m_{1},\ldots,m_{p}}(N)\) are the parity respecting harmonic sums [29] (\(\Omega_{\vec{m}}(N)\sim\Omega_{\vec{m}}(-N-1)\) for \(N\to\infty\)). We will assume that the sums \(\Omega_{\vec{m}}(N)\) are "subtracted", i.e. \(\Omega_{\vec{m}}(N)\to 0\) as \(N\to\infty\). The second structure occurs only for \(k>0\), since \(\Omega_{1}(N)=S_{1}(N)\) grows as \(\ln N\) for large \(N\).
Since all \({\rm SL}(2,\mathbb{R})\) invariant operators share the same eigenfunctions, the product of two invariant operators \(H_{1}\) and \(H_{2}\), \(H_{1}H_{2}(=H_{2}H_{1})\) with eigenvalues \(H_{1}(N)\) and \(H_{2}(N)\) respectively, has eigenvalues \(H_{1}(N)H_{2}(N)\). One can use this property to reconstruct an operator with the eigenvalue (30).
First, we remark that the operator with the eigenvalues \(\eta(N)\) (we denote it as \({\cal H}_{+}\)) has, as follows from Eq. (28), a very simple weight function, \(h_{+}(\tau)=1\). This can also be derived from Eq. (29). Since \(P_{N}(x)=P_{-N-1}(x)\), the integral in Eq. (29) vanishes for the integration path \({\rm Re}(N)=-1/2\) due to the antisymmetry of the integrand. Therefore, the integral (29) can be evaluated by the residue theorem 2
Footnote 2: This trick allows one to calculate the integral (29) for any function \(\Delta\widehat{\gamma}(N)\) with _exact_ symmetry under \(N\to-1-N\) reflection.
\[h_{+}(\tau)=\frac{2N+1}{N+1}P_{N}\left(\frac{1+\tau}{1-\tau} \right)\Big{|}_{N=0}=1. \tag{31}\]
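This statement is easy to confirm by brute force: the sketch below evaluates Eq. (28) for \(h_{+}(\tau)=1\) by direct two-dimensional quadrature (a slow but transparent check, not an efficient method).

```python
from scipy.integrate import dblquad

# Brute-force check of Eq. (28): the constant weight h_+(tau) = 1 has
# eigenvalues eta(N) = 1/(N(N+1)).
def eigenvalue(h, N):
    # integrate h(tau) (1-alpha-beta)^(N-1) over 0 < beta < 1-alpha < 1
    val, _ = dblquad(
        lambda beta, alpha:
            h(alpha * beta / ((1 - alpha) * (1 - beta)))
            * (1 - alpha - beta) ** (N - 1),
        0, 1, lambda a: 0.0, lambda a: 1.0 - a)
    return val

for N in (1, 2, 3.5):
    print(N, eigenvalue(lambda t: 1.0, N), 1 / (N * (N + 1)))
```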
Let us consider the product \(H_{2}=H_{+}\,H_{1}(=H_{1}\,H_{+})\), where \(H_{1}\) is an integral operator with the weight function \(h_{1}(\tau)\). Then the weight function \(h_{2}(\tau)\) of the operator \(H_{2}\) is given by the following integral
\[h_{2}(\tau)=\int_{0}^{\tau}\frac{ds}{\overline{s}^{2}}\ln(\tau/s)h_{1}(s), \tag{32}\]
see Appendix B for details. Thus the contribution to the anomalous dimension of type (30) can be evaluated with the help of this formula if the weight function corresponding to the harmonic sums \(\Omega_{\vec{m}}\) is known.
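As a consistency check of Eq. (32), start from \(h_{1}(\tau)=1\) (eigenvalues \(\eta(N)\)): a short integration by parts suggests that the convolution then evaluates to \(-\ln(1-\tau)=\mathrm{H}_{1}(\tau)\) (our own small computation), and the resulting kernel must have eigenvalues \(\eta(N)^{2}\). Both statements are verified numerically in the sketch below.

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Eq. (32) with h1(s) = 1; eigenvalues should square, eta(N) -> eta(N)^2.
def h2(tau):
    val, _ = quad(lambda s: np.log(tau / s) / (1.0 - s) ** 2, 0.0, tau)
    return val

# Pointwise: the convolution appears to reduce to -ln(1-tau) = H_1(tau).
for tau in (0.2, 0.5, 0.8):
    print(tau, h2(tau), -np.log1p(-tau))

# Eigenvalues under Eq. (28): slow quadrature, fine for a spot check.
def eigenvalue(h, N):
    val, _ = dblquad(
        lambda beta, alpha:
            h(alpha * beta / ((1 - alpha) * (1 - beta)))
            * (1 - alpha - beta) ** (N - 1),
        0, 1, lambda a: 0.0, lambda a: 1.0 - a)
    return val

for N in (1, 2):
    print(N, eigenvalue(lambda t: -np.log1p(-t), N),
          (1 / (N * (N + 1))) ** 2)
```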
We also give an expression for another product of the operators: \(H_{2}=\widehat{\cal H}H_{1}\),
\[h_{2}(\tau)=-\ln\tau\,h_{1}(\tau)+2\bar{\tau}\int_{0}^{\tau} \frac{ds}{\bar{s}}\frac{h_{1}(\tau)-h_{1}(s)}{(\tau-s)}\,, \tag{33}\]
which appears to be useful in the calculations as well.
### Recurrence procedure
Let us consider the integral (29) with \(\Delta\widehat{\gamma}=\Omega_{\vec{m}}\),
\[h_{\vec{m}}(\tau)=\int_{C}\frac{dN}{2\pi i}(2N+1)\Omega_{\vec{m}}(N)P_{N}\left( z\right), \tag{34}\]
where \(z=(1+\tau)/(1-\tau)\). Using a recurrence relation for the Legendre functions
\[(2N+1)P_{N}(z)=\frac{d}{dz}\Big{(}P_{N+1}(z)-P_{N-1}(z)\Big{)} \tag{35}\]
we obtain
\[h_{\vec{m}}(\tau)=-\frac{d}{dz}\int_{C}\frac{dN}{2\pi i}P_{N}(z)F_{\vec{m}}(N), \tag{36}\]
where
\[F_{\vec{m}}(N)=\Big{(}\Omega_{\vec{m}}(N+1)-\Omega_{\vec{m}}(N-1)\Big{)}. \tag{37}\]
It is easy to see that the function \(F_{\vec{m}}(N)\) has negative parity under the \(N\to-N-1\) transformation and can be represented in the form
\[F_{m_{1},\ldots,m_{p}}(N)=\sum_{k=2}^{p}r_{k}(N)\Omega_{m_{k},\ldots,m_{p}}(N) +r(N), \tag{38}\]
where \(r_{k}(N)\) are rational functions of \(N\). The harmonic sums \(\Omega_{m_{k},\ldots,m_{p}}(N)\) in Eq. (38) can be either of positive or negative parity. Therefore the coefficient \(r_{k}(N)\) accompanying the positive parity function \(\Omega_{m_{k},\ldots,m_{p}}(N)\) has the form \(r_{k}(N)=(2N+1)P_{k}(\eta)\), where \(P_{k}\) is some polynomial, while \(r_{k}=P_{k}(\eta)\) for the harmonic sums of negative parity. The free term has the form \(r(N)=(2N+1)P(\eta)\). Together, they give \(F_{m_{1},\ldots,m_{p}}(N)\) negative parity. For example, for the harmonic sum \(\Omega_{1,3}\) (see Appendix C for a definition), one gets
\[F_{1,3}(N)=(2N+1)\eta\underline{\left(\Omega_{3}+\zeta_{3}-\eta^{2}-\frac{1}{2}\eta^{3}\right)}, \tag{39}\]
while for the harmonic sum \(\Omega_{2,2}\)
\[F_{2,2}(N)=(2N+1)\frac{1}{2}\eta^{3}(3+\eta)+\underline{\eta(2+\eta)\Omega_{2 }}. \tag{40}\]
Note the reappearance of the common factor \((2N+1)\) in the first case, (39). This implies that, up to the derivative \(d/dz\), the integral (36) has the form (29). Hence, if the kernel corresponding to the underlined terms in Eq. (39) is known, the kernel corresponding to \(\Omega_{1,3}\) can be easily obtained. Thus the problem of finding the invariant kernel with the eigenvalues \(\Omega_{1,3}(N)\) is reduced to the problem of finding the kernel with the eigenvalues \(\Omega_{3}(N)\) (\(\Omega_{1,3}\mapsto\Omega_{3}\)).
However, as is seen from our second example, not all parity preserving harmonic sums share this property. Indeed, the underlined term on the right hand side (rhs) of Eq. (40) does not have the factor \((2N+1)\). Hence, these transformations do not help to solve the problem for \(\Omega_{2,2}\).
It is easy to see that the above recurrence procedure works only if all the harmonic sums \(\Omega_{m_{k},\ldots,m_{p}}\) appearing in Eq. (38) are of positive parity. It was proven in ref. [29, Theorem 2] that any harmonic sum, \(\Omega_{\vec{m}}\), with all indices \(\vec{m}\) positive odd or negative even has positive parity (see Appendix C for explicit examples of the harmonic sums satisfying these conditions). Therefore, the rhs of Eq. (38) only contains harmonic sums of the same type. Thus the invariant kernels corresponding to the harmonic sums of positive parity can _always_ be calculated recursively, using Eqs. (36), (38) and (32), (33). Crucially, only such harmonic sums appear in the anomalous dimensions \(\widehat{\gamma}(N)\) in QCD and \(\mathcal{N}=4\) SYM. All convolution integrals (32) and (33) can in turn be systematically calculated with the packages HyperInt [39] or PolyLogTools [40].
The explicit expressions for the kernels corresponding to the lowest harmonic sums are given in Appendix C for reference.
### Invariant kernels: QCD
Below we give an explicit expression for the invariant kernel of the twist-two flavor-nonsinglet operator in QCD. We will not split the operator \(\mathcal{O}(z_{1},z_{2})\) into positive (negative) parity operators. The evolution operator still takes the form (26), with \(\Delta\widehat{\mathrm{H}}\) given by the following integral
\[\Delta\widehat{\mathrm{H}}f(z_{1},z_{2})=\int_{+}(h(\tau)+\bar{h}(\tau)P_{12}) f(z_{12}^{\alpha},z_{21}^{\beta})\,, \tag{41}\]
where \(P_{12}\) is a permutation operator, \(P_{12}f(z_{1},z_{2})=f(z_{2},z_{1})\)4. For (anti)symmetric functions \(f(z_{1},z_{2})\) the operator (41) takes a simpler form (27) with the kernel \(h\pm\bar{h}\).
Footnote 4: In order to avoid possible misunderstandings we write it down explicitly, \(P_{12}f(z_{12}^{\alpha},z_{21}^{\beta})=f(z_{21}^{\alpha},z_{12}^{\beta})\).
Our expression for the constant term \(A(a)\) agrees with the constant term \(\chi\) given in ref. [16, Eq. (5.5)], \(A=\chi-2\Gamma_{\mathrm{cusp}}\). For completeness, we provide explicit expressions for the constant \(A=aA_{1}+a^{2}A_{2}+a^{3}A_{3}+\cdots\),
\[A_{2} =C_{F}\left[n_{f}\left(\frac{16}{3}\zeta_{2}+\frac{2}{3}\right)-N_{c }\left(\frac{52}{3}\zeta_{2}+\frac{43}{6}\right)+\frac{1}{N_{c}}\left(24\zeta_{3 }-12\zeta_{2}+\frac{3}{2}\right)\right]\,,\] \[A_{3} =C_{F}\bigg{[}n_{f}^{2}\left(\frac{32}{9}\zeta_{3}-\frac{160}{27} \zeta_{2}+\frac{34}{9}\right)+n_{f}N_{c}\left(-\frac{256}{15}\zeta_{2}^{2}+ \frac{8}{9}\zeta_{3}+\frac{2492}{27}\zeta_{2}-17\right)+\frac{n_{f}}{N_{c}} \left(\frac{232}{15}\zeta_{2}^{2}-\frac{136}{3}\zeta_{3}+\frac{20}{3}\zeta_{2} -23\right)\] \[\qquad\quad+N_{c}^{2}\left(-80\zeta_{5}+\frac{616}{15}\zeta_{2}^{ 2}+\frac{266}{9}\zeta_{3}-\frac{5545}{27}\zeta_{2}+\frac{847}{18}\right)+ \left(-120\zeta_{5}-16\zeta_{2}\zeta_{3}-\frac{124}{15}\zeta_{2}^{2}+\frac{10 48}{3}\zeta_{3}-\frac{356}{3}\zeta_{2}+\frac{209}{4}\right)\] \[\qquad\quad+\frac{1}{N_{c}^{2}}\left(120\zeta_{5}+16\zeta_{2} \zeta_{3}-\frac{144}{5}\zeta_{2}^{2}-34\zeta_{3}-9\zeta_{2}-\frac{29}{4}\right) \bigg{]}\,, \tag{42}\]
where \(C_{F}=(N_{c}^{2}-1)/(2N_{c})\) is the quadratic Casimir in the fundamental representation of \(SU(N_{c})\) and we take \(T_{F}=1/2\). Note that we are adopting a different color basis compared to ref. [16].
The explicit expressions for the cusp anomalous dimensions \(\Gamma_{\rm cusp}(a)=a\Gamma_{\rm cusp}^{(1)}+a^{2}\Gamma_{\rm cusp}^{(2)}+a^{3}\Gamma_{\rm cusp}^{(3)}\) up to three loops are provided in Eq. (D.3). Finally, we give the results for the kernels \(h(\bar{h})(a)=\sum_{k}a^{k}h_{k}(\bar{h}_{k})\). Explicit one- and two-loop expressions are known [16; 14], but for completeness we give them here
\[h_{1}=-4C_{F}\,,\qquad\qquad\bar{h}_{1}=0\,, \tag{43}\]
and
\[h_{2} =C_{F}\bigg{\{}n_{f}\frac{88}{9}+N_{c}\left(-2\text{H}_{1}+8\zeta _{2}-\frac{604}{9}\right)\] \[\quad+\frac{1}{N_{c}}\left(-8\Big{(}\text{H}_{11}+\text{H}_{2} \Big{)}+2\left(1-\frac{4}{\tau}\right)\text{H}_{1}\right)\bigg{\}}\,,\] \[\bar{h}_{2} =-\frac{8C_{F}}{N_{c}}\left(\text{H}_{11}+\tau\,\text{H}_{1} \right)\,, \tag{44}\]
where \(\text{H}_{\vec{m}}=\text{H}_{\vec{m}}(\tau)\) are the harmonic polylogarithms (HPLs) [24]. The three-loop expression¶ is more involved:
Footnote ¶: A file with our main results can be obtained from the preprint server [http://arXiv.org](http://arXiv.org) by downloading the source. Furthermore, they are available from the authors upon request.
\[h_{3} =C_{F}\bigg{\{}-\frac{64}{9}n_{f}^{2}+n_{f}N_{c}\frac{8}{3} \bigg{[}\text{H}_{3}-\text{H}_{110}-\text{H}_{20}+\text{H}_{12}+\frac{1}{ \tau}\text{H}_{2}-\frac{1}{\tau}\text{H}_{10}-\frac{19}{12}\text{H}_{1}+8 \zeta_{3}-\frac{32}{3}\zeta_{2}+\frac{5695}{72}\bigg{]}\] \[\quad+\frac{n_{f}}{N_{c}}\frac{16}{3}\bigg{[}3\zeta_{3}-\frac{75} {16}+\text{H}_{3}+\text{H}_{21}+\text{H}_{12}+\text{H}_{111}+\left(\frac{16} {3}+\frac{1}{\tau}\right)\left(\text{H}_{2}+\text{H}_{11}\right)+\left(\frac{3 1}{24}+\frac{10}{3\tau}\right)\text{H}_{1}\bigg{]}\] \[\quad+N_{c}^{2}4\bigg{[}\text{H}_{13}+\text{H}_{112}-\text{H}_{12 0}-\text{H}_{1110}+2\text{H}_{4}-2\text{H}_{30}-2\text{H}_{210}+2\text{H}_{2 2}+\left(\frac{8}{3}-\frac{2}{\tau}\right)\left(\text{H}_{20}-\text{H}_{3}+ \text{H}_{110}-\text{H}_{12}\right)\] \[\quad-\frac{5}{4}\Big{(}\text{H}_{10}+\text{H}_{11}\Big{)}+\frac{ 2}{3\tau}\Big{(}\text{H}_{10}-\text{H}_{2}\Big{)}-\frac{5}{2}\text{H}_{0}+ \left(\frac{115}{72}+\zeta_{2}+\frac{1}{\tau}\right)\text{H}_{1}-\frac{44}{5} \zeta_{2}^{2}-\frac{22}{3}\zeta_{3}+\frac{436}{9}\zeta_{2}-\frac{4783}{27}\bigg{]}\] \[\quad+16\left[\text{H}_{4}-\text{H}_{30}+\text{H}_{13}+\text{H}_{ 121}-\frac{3}{2}\text{H}_{120}+\frac{3}{2}\text{H}_{22}+\frac{3}{2}\text{H}_{ 112}+2\text{H}_{31}+2\text{H}_{1111}+3\text{H}_{211}-\frac{1}{2}\text{H}_{1110 }-\left(\frac{1}{\tau}+1\right)\text{H}_{20}\right.\] \[\quad-\left(\frac{11}{6}-\frac{1}{\tau}\right)\text{H}_{3}-\text{H}_ {110}+\left(-\frac{37}{12}+\frac{3}{2\tau}\right)\text{H}_{12}-\left(\frac{7}{3 }-\frac{2}{\tau}\right)\text{H}_{21}+\left(-\frac{43}{12}+\frac{3}{\tau} \right)\text{H}_{111}+\left(\frac{13}{8}+\frac{1}{2}\zeta_{2}\right)\text{H}_{10}\] \[\quad-\left(\frac{1}{2}\zeta_{2}+\frac{127}{9}+\frac{11}{6\tau} \right)\text{H}_{2}-\left(\frac{899}{72}+\frac{1}{3\tau}\right)\text{H}_{11}+ \left(\zeta_{2}-1\right)\text{H}_{0}+\left(\frac{7}{4}\zeta_{2}-\frac{143}{36}- \frac{1}{\tau}\left(\frac{1}{2}\zeta_{2}+\frac{67}{9}\right)\right)\text{H}_{ 1}+\frac{5}{2}\zeta_{2}-\frac{47}{24}\bigg{]}\] \[\quad+\frac{8}{N_{c}^{2}}\bigg{[}\text{H}_{4}-\text{H}_{30}-\text{ H}_{210}+\text{H}_{112}-\text{H}_{1111}-2\,\text{H}_{120}+2\,\text{H}_{13}+2\text{H}_{31}-2 \text{H}_{1110}-2\text{H}_{211}+3\text{H}_{121}\] \[\quad-\left(\frac{1}{2}+\frac{1}{\tau}\right)\left(\text{H}_{2 0}-\text{H}_{3}+\text{H}_{110}\right)+2\left(1+\frac{1}{\tau}\right)\text{H}_{ 21}+\left(\frac{3}{2}-\frac{2}{\tau}\right)\text{H}_{111}+\left(\frac{7}{8}+ \frac{3}{2\tau}\right)\text{H}_{10}-\left(\zeta_{2}-\frac{1}{2}+\frac{3}{2 \tau}\right)\text{H}_{2}\] \[\quad+\left(\frac{11}{8}-\zeta_{2}\right)\text{H}_{11}-\frac{11}{4 }\,\text{H}_{0}+\left(\zeta_{2}-\frac{107}{16}-\frac{\zeta_{2}}{\tau}- \frac{1}{2\tau}\right)\text{H}_{1}+\frac{7}{2}\bigg{]}\bigg{\}} \tag{45}\]
and
\[\bar{h}_{3} = -8C_{F}\Biggl{\{}-\frac{2n_{f}}{3N_{c}}\left[\mathrm{H}_{111}+ \mathrm{H}_{110}+\tau\,\mathrm{H}_{10}+\left(\frac{16}{3}+\tau\right)\mathrm{H} _{11}+\left(\frac{1}{2}+\frac{10}{3}\tau\right)\mathrm{H}_{1}+\frac{1}{2}\right] \tag{46}\] \[+\mathrm{H}_{120}+\mathrm{H}_{22}-\mathrm{H}_{1110}-\mathrm{H}_{ 112}-2\mathrm{H}_{121}+2\mathrm{H}_{211}-4\mathrm{H}_{1111}+\tau\mathrm{H}_{2 0}+\left(\frac{13}{6}-\tau\right)\mathrm{H}_{110}+\left(\frac{1}{2}-2\tau \right)\mathrm{H}_{12}\] \[+\left(\frac{5}{2}-2\tau\right)\!\mathrm{H}_{21}+\left(\frac{43}{ 6}-6\tau\right)\mathrm{H}_{111}-\left(\zeta_{2}-\frac{13}{6}\tau\right)\mathrm{ H}_{10}-\left(3+\zeta_{2}+\frac{3}{2}\tau\right)\!\mathrm{H}_{2}+\left(\frac{236}{9 }+\frac{2}{3}\tau\right)\mathrm{H}_{11}-\zeta_{2}\tau\mathrm{H}_{0}\] \[+\left(\frac{53}{6}+\zeta_{2}+3\zeta_{3}+\frac{134}{9}\tau+\zeta _{2}\tau\right)\mathrm{H}_{1}+\frac{11}{6}+3\left(\zeta_{2}+\zeta_{3}-\frac{1} {2}\right)\tau\] \[+\frac{1}{N_{c}^{2}}\biggl{[}\mathrm{H}_{1111}-\mathrm{H}_{22}- \mathrm{H}_{211}+3\mathrm{H}_{120}+3\mathrm{H}_{112}-3\mathrm{H}_{1110}+4 \mathrm{H}_{121}+3\,\tau\,\mathrm{H}_{20}+3\,\left(\frac{1}{2}-\tau\right)\, \mathrm{H}_{110}-\left(\frac{7}{2}-4\tau\right)\,\mathrm{H}_{12}\] \[+\left(\frac{1}{2}+4\tau\right)\,\mathrm{H}_{21}+\left(-\frac{3}{ 2}+2\tau\right)\,\mathrm{H}_{111}-3\,\left(\zeta_{2}-\frac{1}{2}\tau\right)\, \mathrm{H}_{10}+\left(\zeta_{2}-2-\frac{3}{2}\tau\right)\,\mathrm{H}_{2}+2 \bigl{(}\zeta_{2}-1\bigr{)}\,\mathrm{H}_{11}-3\zeta_{2}\,\tau\,\mathrm{H}_{0}\] \[+\left(5+2\zeta_{2}+3\zeta_{3}+\zeta_{2}\tau\right)\mathrm{H}_{1} +3\tau\,\Bigl{(}\zeta_{3}-\frac{1}{2}\Bigr{)}\biggr{]}\Biggr{\}}.\]
The kernels are smooth functions of \(\tau\) except for the endpoints \(\tau=0\) and \(\tau=1\). For \(\tau\to 1\) the three-loop kernel functions behave as \(\sum_{0\leq k\leq 4}\sum_{m>0}r_{km}\bar{\tau}^{m}\ln^{k}\bar{\tau}\). For small \(\tau\), which determines the large-\(N\) asymptotics of the anomalous dimensions, the kernels (for each color structure) have the form \(\sum_{k\geq 0}(a_{k}+b_{k}\ln\tau)\tau^{k}\). We note here that the reciprocity property of the anomalous dimension is equivalent to the statement that the small-\(\tau\) expansion of the kernels does not involve non-integer powers of \(\tau\), namely \(h(\tau)\sim\sum_{m,k\geq 0}a_{mk}\tau^{m}\ln^{k}\tau\).
Below we compare our exact three-loop results with the approximate expressions constructed in ref. [16]. The approximate expressions reproduce the asymptotic behaviors of the exact kernels at both \(\tau\to 0,1\). We therefore subtract the logarithmically divergent pieces (see Eqs. (15) and (16) for explicit expressions) from both the exact and the approximated expressions to highlight their (small) deviations as shown in Figs. 1 and 2. For illustrative purposes, we plot the planar contribution (\(C_{F}N_{c}^{2}\) and \(C_{F}\) in \(h_{3}\) and \(\bar{h}_{3}\) respectively) and the subsubplanar contribution (\(C_{F}/N_{c}^{2}\)). The former is numerically dominant and generates the leading contribution in the large-\(N_{c}\) limit, whereas the latter shows the worst-case scenario for the previous approximation using a simple HPL function ansatz. The errors of the other color structures all fall between the planar and subsubplanar cases and hence are numerically small.
### Invariant kernels: \(\mathcal{N}=4\) SYM
In this section we present the invariant kernels for the universal anomalous dimensions of planar \(\mathcal{N}=4\) SYM; see, e.g., refs. [41; 31] for expressions up to NNLO. They are rather short, so we quote them here. We use the parametrization (24), where \(\Gamma_{\mathrm{cusp}}(a)\) can be found in ref. [37] and the constant term \(A(a)\) is
\[A(a) =-24a^{2}\zeta_{3}+32a^{3}\bigl{(}\zeta_{2}\zeta_{3}+5\zeta_{5} \bigr{)}+O(a^{4}), \tag{47}\]
where \(a=\frac{N_{c}g_{\mathrm{YM}}^{2}}{16\pi^{2}}\), and
\[\Delta\widehat{\gamma}(N) =-a^{2}16\Bigl{(}\Omega_{3}-2\,\Omega_{-2,1}+2\,\Omega_{1}\, \Omega_{-2}\Bigr{)}\] \[\quad+a^{3}64\Bigl{(}\Omega_{5}+2\,\Omega_{3,-2}-8\,\Omega_{1,1,- 2,1}+2\Omega_{1,-4}\] \[\quad+\Omega_{1}(\Omega_{-4}+\Omega_{-2}^{2}+\zeta_{2}\Omega_{-2}) -2\zeta_{2}\Omega_{-2,1}\Bigr{)}. \tag{48}\]
For the kernels we find \(h_{1}=\bar{h}_{1}=0\),
\[h_{2}=8\frac{\bar{\tau}}{\tau}{\rm H}_{1}\,,\qquad\qquad\bar{h}_{2}=-8\bar{\tau}{ \rm H}_{1} \tag{49}\]
and
\[h_{3} =-16\frac{\bar{\tau}}{\tau}\Big{(}4\,{\rm H}_{111}+{\rm H}_{21}+{ \rm H}_{12}+{\rm H}_{110}\Big{)}\,, \tag{50}\] \[\bar{h}_{3} =16\bar{\tau}\Big{(}4{\rm H}_{111}+3\big{(}{\rm H}_{21}+{\rm H}_{ 12}\big{)}-{\rm H}_{110}+{\rm H}_{20}-\zeta_{2}{\rm H}_{0}\Big{)}.\]
These expressions are extremely simple in comparison with the expressions in QCD of the same order. Let us notice that the two-loop kernels contain only HPLs of weight one with the three-loop kernels involving HPLs of weight three, while in QCD the corresponding kernels require HPLs of weight two and four, respectively. Note also that the kernel \(h\) is proportional to the factor \(\bar{\tau}/\tau\) and the kernel \(\bar{h}\) to the factor \(\bar{\tau}\). It would be interesting to see if these properties persist in higher loops.
Figure 1: Comparison of two distinct color contributions (\(C_{F}N_{c}^{2}\) and \(C_{F}/N_{c}^{2}\)) in the exact (black solid) and approximated (red dashed) three-loop kernel \(h_{3}\) (see ref. [16] for explicit expressions of the latter). The inset curves show the relative percentage errors (\((1-h_{3,{\rm appr}}^{(c)}/h_{3,{\rm exact}}^{(c)})\times 100\%\)) of the approximation.
## IV Summary
We have constructed a transformation that brings the evolution kernels of twist-two operators to the canonically conformal invariant form. The eigenvalues of these kernels are given by the parity respecting anomalous dimensions. We have developed a recurrence procedure that allows one to restore the weight functions of the corresponding kernels. It is applicable to a subset of the harmonic sums (with positive odd and negative even indices). It is interesting to note that precisely such harmonic sums appear in the expressions for the reciprocity respecting anomalous dimensions.
We have calculated the three-loop invariant kernels in QCD and in \(\mathcal{N}=4\) SYM (in the planar limit). In QCD this was the last missing piece needed to obtain the three-loop evolution kernels for the flavor-nonsinglet twist-two operators in a fully analytic form; see ref. [16].
In the case of \(\mathcal{N}=4\) SYM the lowest order expressions for the kernels are rather simple and exhibit some regularities, \(h\sim\bar{\tau}/\tau\), \(\bar{h}\sim\bar{\tau}\). It would be interesting to check if these properties survive at higher loops. We expect that at \(\ell\) loops the kernels \(h^{(\ell)}(\tau)\) will be given by linear combinations (up to common prefactors) of HPLs of weight \(2\ell-3\) with positive indices. Therefore going over to the invariant kernel can lead to a more compact representation of the anomalous dimensions than representing the anomalous dimension spectrum \(\gamma(N)\) in terms of harmonic sums. The much smaller function basis in terms of HPLs (\(\bar{\tau}/\tau\,H_{\vec{m}}\) and \(\bar{\tau}H_{\vec{m}}\)) opens the possibility of extracting the analytical expressions of the higher-order evolution kernels from minimal numerical input through the PSLQ algorithm.
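As a toy illustration of that last step (with a manufactured input, not one of our kernels), mpmath's PSLQ routine recovers integer coefficients of a constant known only numerically over a small basis of zeta values.

```python
from mpmath import mp, zeta, pslq

mp.dps = 40

# Manufactured "numerical input": u = 3 zeta(3) - 7 zeta(2) + 5.
u = 3 * zeta(3) - 7 * zeta(2) + 5

# PSLQ finds integers c with c0*u + c1*zeta(2) + c2*zeta(3) + c3 = 0.
relation = pslq([u, zeta(2), zeta(3), mp.mpf(1)], maxcoeff=10**4)
print(relation)   # [1, 7, -3, -5] up to an overall sign,
                  # i.e. u = 3*zeta(3) - 7*zeta(2) + 5
```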
###### Acknowledgements.
We are grateful to Vladimir M. Braun and Gregory P. Korchemsky for illuminating discussions and comments on the manuscript. This work is supported by Deutsche Forschungsgemeinschaft (DFG) through the Collaborative Research Center TRR110/2, grant 409651613 (Y.J.), and the Research Unit FOR 2926, project number 40824754 (S.M.).
## Appendix A
In this appendix, we describe in detail the derivations of some of the equations presented in Section II. Let us start with Eq. (9). For the generator \(\mathrm{S}_{-}(a)=S_{-}\) the statement is trivial. Next, making use of Eq. (8) for the operator \(\mathrm{T}(\mathrm{H})\) and taking into account that \(\mathrm{H}(a)\) commutes with the generators \(\mathrm{S}_{\alpha}(a)\), one can write the left hand side (lhs) of Eq. (9) in the form
\[\sum_{n=0}^{\infty}\frac{1}{n!}\mathrm{L}^{n}\mathrm{S}_{\alpha}(a)\mathrm{X}^ {n}\,, \tag{10}\]
where \(\mathrm{X}=\bar{\beta}(a)+\frac{1}{2}\mathrm{H}(a)\). Using the representation (7) for the generators and taking into account that \([S_{0},\mathrm{L}]=1\) and \([S_{+},\mathrm{L}]=z_{1}+z_{2}\) (we recall that \(\mathrm{L}=\ln z_{12}\)) one obtains
\[\mathrm{L}^{n}\mathrm{S}_{0} = S_{0}\mathrm{L}^{n}-n\mathrm{L}^{n-1}+\mathrm{L}^{n}X,\] \[\mathrm{L}^{n}\mathrm{S}_{+} = S_{+}\mathrm{L}^{n}+(z_{1}+z_{2})\left(-n\mathrm{L}^{n-1}+ \mathrm{L}^{n}X\right). \tag{11}\]
Substituting these expressions back into Eq. (10) one finds that the contributions of the last two terms on the rhs of Eq. (11) cancel each other. Hence Eq. (10) takes the form
\[S_{\alpha}\sum_{n=0}^{\infty}\frac{1}{n!}\mathrm{L}^{n}\mathrm{X}^{n}=S_{ \alpha}\mathrm{T}(\mathrm{H}), \tag{12}\]
that finally results in Eq. (9).
Let us now show that the inverse to \(\mathrm{T}(\mathrm{H})\) has the form (12). The product \(\mathcal{I}=\mathrm{T}^{-1}(\mathrm{H})\mathrm{T}(\mathrm{H})\) can be written as
\[\mathcal{I}=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\mathrm{L}^{n}\left(\bar{ \beta}(a)+\frac{1}{2}\widehat{\mathrm{H}}(a)\right)^{n}\mathrm{T}(\mathrm{H}). \tag{13}\]
Moving \(\mathrm{T}(\mathrm{H})\) to the left with help of the relation (10) and then using Eq. (8) for \(\mathrm{T}(\mathrm{H})\) one gets (\(\mathrm{X}=\bar{\beta}(a)+\frac{1}{2}\mathrm{H}(a)\))
\[\mathcal{I}=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{n!}\mathrm{L}^{n}\mathrm{T}( \mathrm{H})\mathrm{X}^{n}=\sum_{n,k=0}^{\infty}\frac{(-1)^{n}}{n!k!}\mathrm{L }^{n+k}\mathrm{X}^{n+k}=1\,.\]
Finally, we consider the product of operators \(\mathrm{T}\) with a differently defined function \(\mathrm{L}\). Namely, let us take \(\mathrm{T}_{\pm}(\mathrm{H})\equiv\mathrm{T}(\mathrm{L}_{\pm},\mathrm{H})\), where \(L_{\pm}=\ln(z_{12}\pm i0)\) so that \(\mathrm{L}_{+}-\mathrm{L}_{-}=2\pi i\,\theta(z_{2}-z_{1})\). In order to calculate the product \(\mathrm{U}=\mathrm{T}_{+}(H)\mathrm{T}_{-}(H)\) one proceeds as before: use expansion (8) for \(\mathrm{T}_{+}(H)\), move \(\mathrm{T}_{-}(H)\) to the left and then expand it into a power series. It yields
\[\mathrm{U}=\sum_{n,k=0}^{\infty}\frac{(-1)^{n}}{n!k!}\mathrm{L}_{+}^{n} \mathrm{L}_{-}^{k}\widehat{\mathrm{X}}^{n+k}\,, \tag{14}\]
where \(\widehat{\mathrm{X}}=\bar{\beta}(a)+\frac{1}{2}\widehat{\mathrm{H}}(a)\). Using \(L_{+}=L_{-}+2\pi i\,\theta(z_{2}-z_{1})\), one gets for the sum in Eq. (14)
\[\mathrm{U}=\sum_{m=0}^{\infty}\frac{(2\pi i\theta)^{m}}{m!}\widehat{\mathrm{X }}^{m}(a)=1-\theta\left(1-e^{2\pi i\left(\bar{\beta}+\frac{1}{2}\widehat{ \mathrm{H}}\right)}\right),\]
where \(\theta\equiv\theta(z_{2}-z_{1})\). Since \(S_{0,+}\theta(z_{21})\sim z_{21}\delta(z_{21})=0\) one concludes that \(\mathrm{U}\) commutes with the canonical generators \(S_{\alpha}\) and hence \(\mathrm{U}\widehat{\mathrm{H}}=\widehat{\mathrm{H}}\mathrm{U}\).
## Appendix B
Let us check that the kernel \(h(\tau)\) given by Eq. (29) has the eigenvalues \(\Delta\widehat{\gamma}(N)\). First, after some algebra, the integral in Eq. (28) can be brought to the following form
\[\Delta\widehat{\gamma}(N)=\int_{1}^{\infty}dt\,h\left(\frac{t-1}{t+1}\right)\,Q _{N}(t)\,, \tag{38}\]
where \(Q_{N}(t)\) is the Legendre function of the second kind [42]. Inserting \(h\) in the form of Eq. (29) into Eq. (38) one gets
\[\int_{C}\frac{dN^{\prime}}{2\pi i}(2N^{\prime}+1)\Delta\gamma(N^{\prime})\int _{1}^{\infty}dt\,P_{N^{\prime}}(t)\,Q_{N}(t)\,. \tag{39}\]
The \(t\)-integral of the product of the two Legendre functions gives [42]
\[\left((N-N^{\prime})(N+N^{\prime}+1)\right)^{-1}. \tag{40}\]
Then, closing the integration contour in the right half-plane, one evaluates the \(N^{\prime}\) integral with the residue theorem at \(N^{\prime}=N\), yielding the desired lhs of Eq. (38).
Finally, in order to verify Eq. (32) one can check that the integral (38) with the kernel \(h_{2}\), \(\Delta\widehat{\gamma}_{2}(N)\), is equal to \(\Delta\widehat{\gamma}_{1}(N)/(N(N+1))\). The simplest way to do it is to substitute the Legendre function in the form
\[Q_{N}(t)=-\partial_{t}(1-t^{2})\partial_{t}Q_{N}(t)/N/(N+1), \tag{41}\]
and perform integration by parts.
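For \(N=1\) this relation can be verified directly from the closed form \(Q_{1}(t)=\frac{t}{2}\ln\frac{t+1}{t-1}-1\); the following minimal sympy check (our own sketch, not part of the original derivation) confirms Eq. (41):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
N = 1
# Closed form of Q_1 on (1, inf)
Q1 = t / 2 * sp.log((t + 1) / (t - 1)) - 1
# Eq. (41): Q_N = -d/dt[(1 - t^2) dQ_N/dt] / (N (N + 1)), i.e. Legendre's equation
rhs = -sp.diff((1 - t**2) * sp.diff(Q1, t), t) / (N * (N + 1))
print(sp.simplify(rhs - Q1))  # prints 0
```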
## Appendix C
In this appendix, we collect the harmonic sums and the corresponding kernels that we have used. We split them into two parts: the first one includes the harmonic sums \(\Omega_{m_{1},\ldots,m_{k}}\) such that \(\prod_{i}^{k}\text{sign}(m_{i})=1\):
\[\Omega_{3}=S_{3}-\zeta_{3},\] \[\Omega_{3,1}=S_{3,1}-\frac{1}{2}S_{4}-\frac{3}{10}\zeta_{2}^{2},\] \[\Omega_{-2,-2}=S_{-2,-2}-\frac{1}{2}S_{4}+\frac{1}{2}\zeta_{2}S_{-2}+\frac{1}{8}\zeta_{2}^{2},\] \[\Omega_{1,3,1}=S_{1,3,1}-\frac{1}{2}S_{1,4}-\frac{1}{2}S_{4,1}+\frac{1}{4}S_{5}-\frac{3}{10}\zeta_{2}^{2}S_{1}+\frac{3}{4}\zeta_{5},\] \[\Omega_{-2,-2,1}=S_{-2,-2,1}-\frac{1}{2}S_{4,1}-\frac{1}{2}S_{-2,-3}+\frac{1}{4}\zeta_{3}S_{-2}+\frac{5}{16}\zeta_{5},\] \[\Omega_{5}=S_{5}-\zeta_{5}. \tag{42}\]
Here \(S_{\vec{m}}\) are the harmonic sums with argument \(N\). We define the sums of negative signature, \(\prod_{i}^{k}\text{sign}(m_{i})=-1\), with an additional sign factor:
\[\Omega_{-2}=(-1)^{N}\left[S_{-2}+\frac{\zeta_{2}}{2}\right],\] \[\Omega_{-2,1}=(-1)^{N}\left[S_{-2,1}-\frac{1}{2}S_{-3}+\frac{1}{4}\zeta_{3}\right],\] \[\Omega_{1,-2,1}=(-1)^{N}\left[S_{1,-2,1}-\frac{1}{2}S_{1,-3}-\frac{1}{2}S_{-3,1}+\frac{1}{4}S_{-4}+\frac{1}{4}\zeta_{3}S_{1}-\frac{1}{80}\zeta_{2}^{2}\right],\] \[\Omega_{-4,1}=(-1)^{N}\left[S_{-4,1}-\frac{1}{2}S_{-5}+\frac{11}{8}\zeta_{5}-\frac{1}{2}\zeta_{2}\zeta_{3}\right],\] \[\Omega_{3,-2}=(-1)^{N}\left[S_{3,-2}-\frac{1}{2}S_{-5}+\frac{1}{2}\zeta_{2}S_{3}+\frac{9}{8}\zeta_{5}-\frac{3}{4}\zeta_{2}\zeta_{3}\right],\] \[\Omega_{1,1,-2,1}=(-1)^{N}\left[S_{1,1,-2,1}-\frac{1}{2}S_{1,1,-3}-\frac{1}{2}S_{1,-3,1}-\frac{1}{2}S_{2,-2,1}+\frac{1}{4}S_{2,-3}+\frac{1}{4}S_{-4,1}+\frac{1}{4}S_{1,-4}-\frac{1}{8}S_{-5}+\frac{1}{4}\zeta_{3}S_{1,1}-\frac{1}{80}\zeta_{2}^{2}S_{1}-\frac{1}{8}\zeta_{3}S_{2}+\frac{1}{8}\zeta_{5}-\frac{1}{16}\zeta_{2}\zeta_{3}\right],\] \[\Omega_{1,-4}=(-1)^{N}\left[S_{1,-4}-\frac{1}{2}S_{-5}+\frac{7}{20}\zeta_{2}^{2}S_{1}-\frac{11}{8}\zeta_{5}+\frac{1}{2}\zeta_{2}\zeta_{3}\right]. \tag{43}\]
These combinations of harmonic sums are generated by the following kernels,
\[\mathcal{H}_{3}=-\frac{1}{2}\frac{\bar{\tau}}{\tau}\text{H}_{1},\] \[\mathcal{H}_{3,1}=\frac{1}{4}\frac{\bar{\tau}}{\tau}\left(\text{H}_{11}+\text{H}_{10}\right),\] \[\mathcal{H}_{-2,-2}=\frac{1}{4}\frac{\bar{\tau}}{\tau}\text{H}_{11},\] \[\mathcal{H}_{1,3,1}=-\frac{1}{8}\frac{\bar{\tau}}{\tau}\left(\text{H}_{20}+\text{H}_{110}+\text{H}_{21}+\text{H}_{111}\right),\] \[\mathcal{H}_{-2,-2,1}=\frac{1}{8}\frac{\bar{\tau}}{\tau}\left(\text{H}_{12}-\text{H}_{110}\right),\] \[\mathcal{H}_{5}=-\frac{1}{2}\frac{\bar{\tau}}{\tau}\left(\text{H}_{11}+\text{H}_{12}\right) \tag{44}\]
and
\[\mathcal{H}_{-2}=\frac{1}{2}\bar{\tau},\] \[\mathcal{H}_{-2,1}=-\frac{1}{4}\bar{\tau}(\text{H}_{1}+\text{H}_{0}),\] \[\mathcal{H}_{1,-2,1}=\frac{1}{8}\bar{\tau}\left(\text{H}_{10}+ \text{H}_{11}\right),\] \[\mathcal{H}_{-4,1}=-\frac{1}{4}\bar{\tau}\left(\text{H}_{21}+ \text{H}_{20}+\text{H}_{111}+\text{H}_{110}\right),\] \[\mathcal{H}_{3,-2}=-\frac{1}{4}\bar{\tau}\left(\text{H}_{21}+ \text{H}_{111}\right),\] \[\mathcal{H}_{1,1,-2,1}=-\frac{1}{16}\bar{\tau}\,\left(\text{H}_{ 111}+\text{H}_{110}\right),\] \[\mathcal{H}_{1,-4}=-\frac{1}{4}\bar{\tau}\left(\text{H}_{12}+ \text{H}_{111}\right), \tag{45}\]
where all HPLs have argument \(\tau\). These functions serve as a basis and more complicated structures can be generated as products of \(\Omega_{\vec{m}}\).
## Appendix D
Here we give the small (\(\tau\to 0\)) and large (\(\tau\to 1\)) expansions of the invariant kernels \(h_{3},\bar{h}_{3}\). By \(h_{3}^{(A)}\) (\(\bar{h}_{3}^{(A)}\)) we denote the function which appears in the expression for \(h_{3}\) (\(\bar{h}_{3}\)) with the color factor \(C_{F}\times A\). We will keep the logarithmically enhanced and constant terms in both limits. The former are subtracted from both the exact and approximated three-loop kernels to obtain the two figures (Figs. 1 and 2). At \(\tau\to 0\) one gets
\[h_{3}^{(n_{f}N_{c})} =\frac{5839}{27}-\frac{256}{9}\zeta_{2}+\frac{64}{3}\zeta_{3}- \frac{8}{3}\ln\tau\,,\] \[h_{3}^{(n_{f}/N_{c})} =-\frac{17}{9}+16\zeta_{3}\,,\] \[\bar{h}_{3}^{(n_{f}/N_{c})} =\frac{8}{3}\,,\] \[h_{3}^{(N_{c}^{2})} =-\frac{18520}{27}-\frac{88}{3}\zeta_{3}-\frac{176}{5}\zeta_{2}^ {2}+\frac{1744}{9}\zeta_{2}-\frac{46}{3}\ln\tau\] \[h_{3}^{(N_{c}^{0})} =-\frac{1186}{9}+32\zeta_{2}+(-32+16\zeta_{2})\ln\tau\,,\] \[h_{3}^{(N_{c}^{-2})} =24-8\zeta_{2}-18\ln\tau\,,\] \[\bar{h}_{3}^{(N_{c}^{0})} =-\frac{44}{3}\,,\] \[\bar{h}_{3}^{(N_{c}^{-2})} =-48\tau\left(\zeta_{2}+\zeta_{3}+\frac{1}{4}-\zeta_{2}\ln\tau \right)\,, \tag{15}\]
and for \(\tau\to 1\) one obtains
\[h_{3}^{(n_{f}N_{c})} =\frac{5695}{27}-\frac{208}{9}\zeta_{2}+\frac{64}{3}\zeta_{3}+ \left(-\frac{16}{3}\zeta_{2}+\frac{38}{9}\right)\ln\bar{\tau}\,,\] \[h_{3}^{(n_{f}/N_{c})} =\frac{304}{9}\zeta_{2}+16\zeta_{3}-25-\left(\frac{16}{3}\zeta_{ 2}+\frac{74}{3}\right)\ln\bar{\tau}\] \[\quad+\frac{152}{9}\ln^{2}\bar{\tau}-\frac{8}{9}\ln^{3}\bar{\tau}\,,\] \[\bar{h}_{3}^{(n_{f}/N_{c})} =\frac{16}{3}\left(\frac{1}{2}-\zeta_{2}+\zeta_{3}\right)+\left( \frac{16}{3}\zeta_{2}-\frac{184}{9}\right)\ln\bar{\tau}\] \[\quad+\frac{152}{9}\ln^{2}\bar{\tau}-\frac{8}{9}\ln^{3}\bar{\tau}\,,\] \[h_{3}^{(N_{c}^{2})} =-\frac{72}{5}\zeta_{2}^{2}+\frac{1741}{9}\zeta_{2}-\frac{88}{3} \zeta_{3}-\frac{19132}{27}\] \[\quad+\left(\frac{4}{3}\zeta_{2}-\frac{187}{18}\right)\ln\bar{ \tau}+\left(-\frac{5}{2}+4\zeta_{2}\right)\ln^{2}\bar{\tau}\,,\] \[h_{3}^{(N_{c}^{0})} =\frac{136}{5}\zeta_{2}^{2}-\frac{2170}{9}\zeta_{2}+80\zeta_{3}- \frac{94}{3}\] \[\quad+\left(-\frac{32}{3}\zeta_{2}-24\zeta_{3}+\frac{548}{3} \right)\ln\bar{\tau}\] \[\quad+\left(16\zeta_{2}-\frac{923}{9}\right)\ln^{2}\bar{\tau}+ \frac{14}{9}\ln^{3}\bar{\tau}+\frac{4}{3}\ln^{4}\bar{\tau}\,,\] \[h_{3}^{(N_{c}^{-2})} =-28\zeta_{2}^{2}-27\zeta_{2}+56\zeta_{3}+28\] \[\quad+\left(\frac{115}{2}-12\zeta_{2}-40\zeta_{3}\right)\ln\bar{\tau}\] \[\quad+\left(8\zeta_{2}+\frac{11}{2}\right)\ln^{2}\bar{\tau}+\frac{ 2}{3}\ln^{3}\bar{\tau}-\frac{1}{3}\ln^{4}\bar{\tau}\,,\] \[\bar{h}_{3}^{(N_{c}^{0})} =-\frac{136}{5}\zeta_{2}^{2}+\frac{88}{3}\zeta_{2}-\frac{136}{3} \zeta_{3}-\frac{8}{3}\] \[\quad+\left(-\frac{16}{3}\zeta_{2}+\frac{1708}{9}\right)\ln\bar{\tau}\] \[\quad-\frac{968}{9}\ln^{2}\bar{\tau}+\frac{14}{9}\ln^{3}\bar{ \tau}+\frac{4}{3}\ln^{4}\bar{\tau}\,,\] \[\bar{h}_{3}^{(N_{c}^{-2})} =-\frac{216}{5}\zeta_{2}^{2}+40\zeta_{2}+8\zeta_{3}+12\] \[\quad+(40\zeta_{2}-64\zeta_{3}+40)\ln\bar{\tau}\] \[\quad-8(4\zeta_{2}-1)\ln^{2}\bar{\tau}+\frac{2}{3}\ln^{3}\bar{ \tau}-\frac{1}{3}\ln^{4}\bar{\tau}\,. \tag{16}\]
Here we quote the cusp anomalous dimensions up to three loops for reference [4; 35; 36],
\[\Gamma_{\rm cusp}^{(1)} =4C_{F}\,,\] \[\Gamma_{\rm cusp}^{(2)} =C_{F}\left[N_{c}\left(\frac{536}{9}-16\zeta_{2}\right)-\frac{40} {9}n_{f}\right]\,,\] \[\Gamma_{\rm cusp}^{(3)} =C_{F}\bigg{[}N_{c}^{2}\left(\frac{176}{5}\zeta_{2}^{2}+\frac{88}{ 3}\zeta_{3}-\frac{1072}{9}\zeta_{2}+\frac{490}{3}\right)\] \[\quad+N_{c}n_{f}\left(-\frac{64}{3}\zeta_{3}+\frac{160}{9}\zeta_{2 }-\frac{1331}{27}\right)\] \[\quad+\frac{n_{f}}{N_{c}}\left(-16\zeta_{3}+\frac{55}{3}\right)- \frac{16}{27}n_{f}^{2}\bigg{]}\,. \tag{17}\]
|
2303.03483 | In-Storage Domain-Specific Acceleration for Serverless Computing | While (1) serverless computing is emerging as a popular form of cloud
execution, datacenters are going through major changes: (2) storage
disaggregation in the system infrastructure level and (3) integration of
domain-specific accelerators in the hardware level. Each of these three trends
individually provides significant benefits; however, when combined the benefits
diminish. Specifically, the paper makes the key observation that for serverless
functions, the overhead of accessing disaggregated persistent storage
overshadows the gains from accelerators. Therefore, to benefit from all these
trends in conjunction, we propose Domain-Specific Computational Storage for
Serverless (DSCS-Serverless). This idea contributes a serverless model that
leverages a programmable accelerator within computational storage to conjugate
the benefits of acceleration and storage disaggregation simultaneously. Our
results with eight applications show that integrating a comparatively small
accelerator within the storage (DSCS-Serverless) that fits within its power
constraints (15 Watts) significantly outperforms a traditional disaggregated
system that utilizes the NVIDIA RTX 2080 Ti GPU (250 Watts). Further, the work
highlights that disaggregation, serverless model, and the limited power budget
for computation in storage require a different design than the conventional
practices of integrating microprocessors and FPGAs. This insight is in contrast
with current practices of designing computational storage that are yet to
address the challenges associated with the shifts in datacenters. In comparison
with two such conventional designs that use either a quad-core ARM A57 or a
Xilinx FPGA, DSCS-Serverless provides 3.7x and 1.7x end-to-end application
speedup, 4.3x and 1.9x energy reduction, and 3.2x and 2.3x higher cost
efficiency, respectively. | Rohan Mahapatra, Soroush Ghodrati, Byung Hoon Ahn, Sean Kinzer, Shu-ting Wang, Hanyang Xu, Lavanya Karthikeyan, Hardik Sharma, Amir Yazdanbakhsh, Mohammad Alian, Hadi Esmaeilzadeh | 2023-03-06T20:28:37Z | http://arxiv.org/abs/2303.03483v2 | # Domain-Specific Computational Storage for Serverless Computing
###### Abstract
While (1) serverless computing is emerging as a popular form of cloud execution, datacenters are going through major changes: (2) storage disaggregation in the system infrastructure level and (3) integration of domain-specific accelerators in the hardware level. Each of these three trends individually provides significant benefits; however, when combined, the benefits diminish. Specifically, the paper makes the key observation that for serverless functions, the overhead of accessing disaggregated persistent storage overshadows the gains from accelerators. Therefore, to benefit from all these trends in conjunction, we propose Domain-Specific Computational Storage for Serverless (_DSCS-Serverless_). This idea contributes a serverless model that leverages a programmable accelerator within computational storage to conjugate the benefits of acceleration and storage disaggregation simultaneously. Our results with eight applications show that integrating a comparatively small accelerator within the storage (_DSCS-Serverless_) that fits within its power constraints (15 Watts) significantly outperforms a traditional disaggregated system that utilizes the NVIDIA RTX 2080 Ti GPU (250 Watts). Further, the work highlights that disaggregation, the serverless model, and the limited power budget for computation in storage require a different design than the conventional practices of integrating microprocessors and FPGAs. This insight is in contrast with current practices of designing computational storage that are yet to address the challenges associated with the shifts in datacenters. In comparison with two such conventional designs that use either a quad-core ARM A57 or a Xilinx FPGA, _DSCS-Serverless_ provides 3.7\(\times\) and 1.7\(\times\) end-to-end application speedup, 4.3\(\times\) and 1.9\(\times\) energy reduction, and 3.2\(\times\) and 2.3\(\times\) higher cost efficiency, respectively.
+
Footnote †: Work done while pursuing PhD at University of California, San Diego.
## I Introduction
(1) Serverless computing is emerging as a prevalent form of cloud execution that has been adopted across different market sectors [1, 2, 3, 4, 5, 6, 7, 8]. This adoption is backed by public cloud services such as AWS Lambda [9], Google Cloud Functions [10], and Azure Serverless Computing [11]. The popularity of serverless is driven by its ease of programming, pay-as-you-go pricing model, and freedom from managing the cloud execution environment [9, 10, 11].
Besides this shift in the cloud-native application development, the datacenter is going through major changes: (2) storage disaggregation in the system infrastructure level [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], and the (3) integration of domain-specific accelerators [24, 25, 26, 27] at the hardware architecture level. Disaggregation is enabled by the increase in network bandwidth to hundreds of Gbps and reduction in latency to single-digit microseconds [16, 18, 20, 23]. Disaggregation has shown promising results in resource utilization, elasticity, and failure mitigation in datacenters [28, 29, 17, 30]. While the improvements in networking are making storage disaggregation a viable solution, the failure of Dennard scaling [31] and the rise of dark silicon [32, 33, 34] have ignited a golden age of domain-specific accelerators [35]. Accelerators [36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91] have made their way into the datacenters of major cloud providers including Amazon [26], Google [24, 92], Meta [93, 94], and Microsoft [95, 96].
The trend towards serverless has coincided with these two structural changes in infrastructure and hardware. Each of these trends individually provides significant benefits, but collectively they pose interesting challenges. On the one hand, the gains from domain-specific accelerators can potentially expand serverless usecases [97, 98, 99, 100] and/or potentially improve their speed and efficiency. On the other hand, serverless functions operate on ephemeral data and they need to read their inputs from, and write their outputs to, persistent storage for every
Fig. 1: This paper explores _DSCS-Serverless_ at the conjunction of three different trends in cloud computing: (1) serverless functions in the programming level; (2) storage disaggregation in the system infrastructure level; and (3) domain-specific architectures in the hardware level.
invocation [101, 102, 103, 104]. This paper makes the observation that for serverless applications with disaggregated storage, the overhead of moving input and output data from remote storage limits the benefits from acceleration. The gain will be limited since current accelerators are inherently designed to myopically focus on the compute. They are not meant to deal with the significant data movement cost in serverless functions, which also involves further networking when the storage is disaggregated. Observing these insights, as shown in Figure 1, the paper **explores the confluence of the three trends and provides a pathway towards utilizing accelerators for serverless computing on storage-disaggregated datacenters.** We propose **D**omain-**S**pecific **C**omputational **S**torage for **S**erverless computing, or _DSCS-Serverless_ for short.
This idea contributes a serverless model that leverages a relatively small programmable accelerator within computational storage to conjugate the benefits of disaggregation and acceleration simultaneously. The proposed model does not advocate moving heavy compute back to the storage but takes _a more balanced approach_ by integrating a rather small accelerator to mitigate the communication overheads when applicable. In other words, some of the remote storage nodes will have a tiny accelerator closely coupled to the storage media. These programmable accelerators specifically target a domain of functions and accelerate workloads that fall into that domain.
However, placing compute near the storage comes with challenges, especially when it is a domain-specific accelerator.
**Tight power constraints.** While every accelerator has its own Power, Performance, and Area (PPA), the storage device imposes a strict upper bound on the power budget (<25 watts [104]). Further, this power budget is divided amongst the flash and the accelerator. As such, one of the primary challenges is to architect a near-storage accelerator that not only covers a broad range of applications, but also adheres to the tight design constraints. We explore using various compute platforms (ARM CPU, low-power GPU, FPGA, and Domain-Specific Accelerator) near the storage for a domain of serverless applications while abiding by the constraints imposed by the storage. Considering the constraints, we perform a Pareto design space exploration that examines more than 650 configurations.
**System software stack changes.** Serverless functions use frameworks such as OpenFaaS [105] and Kubernetes [106] for deployment and orchestration. The challenge here is how to minimally change the system stack and frameworks to integrate _DSCS-Serverless_ such that serverless functions can be offloaded seamlessly. Furthermore, the changes should not interfere with the normal execution with disaggregated storage. To identify if a serverless function can utilize _DSCS-Serverless_, we use software hints provided at function deployment time. We further develop an OpenCL-based runtime that can be used from within the serverless functions to utilize the near-storage accelerator.
The paper makes the following contributions:
* _The observation that with disaggregated storage, the overhead of moving data from remote storage limits the benefits from acceleration in serverless functions._ The results show that integrating a comparatively small accelerator within the storage that fits within its power constraints significantly outperforms traditional disaggregated systems that even use high-end GPUs.
* _The DSCS-Serverless serverless model that leverages a relatively small programmable accelerator within computational storage to opportunistically accelerate a domain of serverless functions. DSCS-Serverless_ does not aim to move heavy compute back to the storage but shows that taking a more balanced approach by integrating a rather small accelerator can unlock significant gains for a domain of serverless functions amenable to acceleration.
* _The work highlights that disaggregation, the serverless model, and the limited power budget for computation in storage require a design different from the conventional practice of integrating microprocessors and FPGAs._ This insight is in contrast with current practices of designing computational storage, which are yet to address the challenges associated with the shifts in datacenters with respect to disaggregation.
We choose the machine learning domain to design a programmable accelerator that can accelerate algorithms such as linear regression, image classification, object detection, semantic segmentation, neural machine translation, vision transformers, etc., and showcase an implementation of _DSCS-Serverless_. This is because machine learning services are commonly deployed as serverless pipelines [107, 108, 109, 110, 111, 112, 113, 114, 115] and need to operate under tight Service Level Objective (SLO) and cost budget constraints. We evaluate the system through a rigorous study with eight real-world, latency-critical, _end-to-end_ applications inspired by AWS Lambda case studies [116, 117, 118, 119, 108, 112, 114, 115] and model the applications as a sequence of serverless functions on AWS. _DSCS-Serverless_ performs better than existing computational storage solutions that either use microprocessors [119, 120] (3.7\(\times\) end-to-end application speedup, 4.3\(\times\) energy reduction, and 3.2\(\times\) higher cost efficiency) or FPGAs [121, 122] (1.7\(\times\) end-to-end application speedup, 1.9\(\times\) energy reduction, and 2.3\(\times\) higher cost efficiency). Evaluations show that integrating a comparatively small accelerator for _DSCS-Serverless_ significantly outperforms a traditional disaggregated system that utilizes the NVIDIA RTX 2080 Ti GPU. In comparison, _DSCS-Serverless_ achieves 2.7\(\times\) end-to-end speedup, 4.2\(\times\) energy reduction, and 3.0\(\times\) higher cost efficiency.
## II Background and Motivation
This section describes a typical execution flow in serverless computing and characterizes benchmarks on AWS [9] to identify the bottlenecks of the current serverless computing platform.
### _Life of Serverless Functions_
In serverless computing, applications are broken down into functions with the following phases:
**Deployment.** Figure 2 depicts a widely used serverless pipeline for a machine learning application that consists of three functions:
Data Pre-Processing, Machine Learning Inference, and Post-Processing & Notification Service [108, 112, 113, 115, 123]. During deployment, the application provider configures metadata constraints (timeout, trigger mechanisms, hardware requirements, etc.) in a YAML file for each of the functions. Since serverless functions are stateless, the provider also allocates persistent storage (such as AWS S3) that is used by the functions to read and store data. The provider deploys the application, modeled as a chain of functions, to serverless frameworks [9, 10, 105, 124].
**Invocation.** An application is launched when a user's request (with data) arrives at the storage and triggers an event. The serverless framework (AWS Lambda [9] or OpenFaaS [105]), based on the deployment constraints in the function's YAML file, schedules the function to a compute node. The compute node initiates a data read using an RPC to the persistent storage node. At the storage node, this RPC invokes a system call that reads data from the physical storage over PCIe. The data is then serialized [125], converted to network packets, and transmitted to the compute node. After function execution, the output data (ephemeral or not) is written back to the persistent storage following steps similar to those discussed above for reading the data. Moreover, if the function utilizes a specialized domain-specific accelerator (DSA) such as a GPU, ASIC, or FPGA at the compute node, the compute node has to initiate a data transfer (e.g., cudaMemcpy for GPUs [126]) to the DSA device's memory, generally over PCIe [97, 127, 128]. Overall, these steps are expensive for serverless functions with strict Service Level Objective requirements [103, 129] since they involve RPCs [101, 130], system calls [131], and I/Os [103].
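To make the costs in this flow concrete, the following minimal Python sketch of a serverless handler marks where the RPC, serialization, and I/O overheads arise. The bucket/key names and the `run_inference` body are hypothetical placeholders; boto3's `get_object`/`put_object` are the standard AWS S3 calls.

```python
import boto3

s3 = boto3.client("s3")

def run_inference(data: bytes) -> bytes:
    return data  # placeholder for the actual function body

def handler(event, context):
    # 1. Remote read: network RPC + (de)serialization + storage-node I/O
    obj = s3.get_object(Bucket="app-bucket", Key=event["input_key"])
    data = obj["Body"].read()
    # 2. Compute: the only part a conventional accelerator speeds up
    result = run_inference(data)
    # 3. Remote write: the same RPC/serialization/I/O costs again
    s3.put_object(Bucket="app-bucket", Key=event["output_key"], Body=result)
    return {"status": "ok"}
```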
### _Characterization of Serverless Functions_
As demonstrated above, there are various components that contribute to the end-to-end latency of serverless functions. We profile the benchmarks (see Table I) on AWS EC2 instances using the methodology described in Section VI-A.
**Computation vs. Communication.** Figure 4 shows the compute, communication (network + I/O), and system stack overheads of launching the function using OpenFaaS [105] and Kubernetes [106]. We observe that the latency to access the storage makes up a significant portion of the end-to-end latency (on average > 55%). The average time spent reading and writing data to the remote storage is greater than the time it takes to perform the computation. In fact, Credit Risk Assessment, Asset Damage Detection, and Content Moderation consist of more than 70% communication. Our analysis is commensurate with prior studies [132, 133] that demonstrate \(\approx\)75% communication overhead. This communication overhead is naturally expected because of the serverless function execution flow discussed earlier (refer to Section II-A).
**Communication in storage disaggregated datacenters.** Figure 5 shows the cumulative density function for the read and
Fig. 4: Runtime latency breakdown for application modeled as serverless functions deployed on AWS EC2 with remote S3 storage.
Fig. 5: Cumulative Density Function for (a) reading inputs and (b) writing outputs from/to AWS S3 for different data sizes.
Fig. 3: Traditional serverless computing system and _DSCS-Serverless_ that augments some storage drives with a tiny domain specific accelerator (DSA).
Fig. 2: Serverless computing workflow for an end-to-end application. The application consists of three functions exchanging data via disaggregated persistent storage.
write latency of remote S3 storage across a range of benchmarks. The results show that both read and write latencies suffer from long tails. The average difference between the median and the 99\({}^{\text{th}}\) percentile latency is 110% and 75% (of the median) for read and write accesses, respectively. The higher average latency difference for reads is due to the larger input data sizes compared to the outputs (refer to Table I). This long tail latency is primarily because of the disaggregation mechanism, which increases the network communication overhead. Our analyses of the tail latency of serverless functions are commensurate with prior studies [134, 135, 136, 101]. Indeed, recent work has devised solutions to mitigate network latency for microservices or serverless functions through RPC acceleration [130], the QUIC protocol [137], and communication bypassing/fused functions [102, 103].
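As a sketch of how these tail metrics are computed, the snippet below quantifies the gap between the median and the 99th percentile; the lognormal samples are synthetic stand-ins for measured per-request latencies, and the 110%/75% figures quoted above correspond to this \((p99-p50)/p50\) gap.

```python
import numpy as np

# Synthetic stand-in for measured per-request S3 read latencies (ms)
rng = np.random.default_rng(0)
lat = rng.lognormal(mean=3.0, sigma=0.5, size=10_000)

p50, p99 = np.percentile(lat, [50, 99])
print(f"p50 = {p50:.1f} ms, p99 = {p99:.1f} ms, "
      f"tail inflation = {(p99 - p50) / p50:.0%}")
```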
**Domain specific acceleration.** Accelerators have been integrated into major cloud providers including Google [92, 24], Amazon [26], Meta [93], and Microsoft [96]. The improved efficiency of these accelerators has unlocked additional use cases in serverless computing [100, 99, 97]. However, these accelerators commonly target computation efficiency alone. From Figure 4, we observe that _with disaggregated storage, the overhead of moving input and output data limits the benefits from acceleration in serverless applications_. As such, the overall benefits of the current paradigm of acceleration in a disaggregated system are strictly capped by Amdahl's Law [138].
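As a back-of-the-envelope illustration of this cap, assuming the roughly 70% communication share observed above, Amdahl's Law bounds the end-to-end speedup achievable by accelerating the compute alone:

```python
# Amdahl bound on end-to-end speedup when only the compute fraction is
# accelerated; 0.70 reflects the ~70% communication share observed above.
def amdahl_bound(comm_fraction: float, compute_speedup: float = float("inf")) -> float:
    return 1.0 / (comm_fraction + (1.0 - comm_fraction) / compute_speedup)

print(amdahl_bound(0.70))        # ~1.43x even with an infinitely fast accelerator
print(amdahl_bound(0.70, 10.0))  # ~1.37x with a 10x faster accelerator
```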
### _Opportunities in Near-Data Computational Storage in Serverless Computing_
To relieve the communication overhead, we can revisit the conventional wisdom of _computing close to the data storage_ in the scope of serverless systems. Employing near-data computation can notably reduce the network and I/O latency overhead. Such an approach is naturally beneficial because it effectively relocates the system bottleneck towards the computation element, for which we can employ domain-specific accelerators. This paper, _DSCS-Serverless_, sets out to explore this new model for serverless computing. In this model, we leverage computational storage as a scaffold to enable near-storage domain acceleration while remaining faithful to the disaggregated architecture. In particular, we devise a relatively small accelerator, under tight storage power constraints, for a range of computationally intensive serverless functions. Finally, to offset the network and I/O latency, _DSCS-Serverless_ leverages a peer-to-peer [139, 140] channel connecting the accelerator and storage units. Next, we expound on the _DSCS-Serverless_ system architecture.
## III DSCS-Serverless Overview
Figure 3(b) outlines the high-level system architecture of _DSCS-Serverless_. _DSCS-Serverless_ leverages a computational storage drive (CSD) to architect and integrate a relatively small near-storage domain-specific accelerator (DSA) on which a domain of functions is opportunistically executed. As shown in Figure 6, the _DSA_ directly communicates with the storage unit (e.g., flash NAND, SSD, etc.) using a dedicated peer-to-peer (P2P) PCIe link [140]. With such a system architecture, the functions that can be accelerated with the near-storage _DSA_ obviate the extravagant network and I/O data transfer overheads (the Remote Read/Write parts in Figure 4). This also improves the resource utilization of the storage node by enhancing it with computational capability, but does not replace the conventional disaggregated storage nodes. In essence, the _DSCS-Serverless_ system architecture closely follows the design philosophies commonly used in disaggregated storage datacenters [141, 142, 17]. Note that in case a particular serverless function is not supported by the DSA, _DSCS-Serverless_ falls back to the traditional serverless computing execution flow with its inherent data transfer overheads.
### _Life of Serverless Functions in DSCS-Serverless_
We contrast the life of a serverless function in _DSCS-Serverless_ and the traditional system architecture (refer to Figure 3(a)).
① In contrast to the traditional system, _DSCS-Serverless_ directly deploys the serverless functions for which the DSA offers acceleration to the _DSA_-integrated storage node that holds the data, thereby eliminating the costly invocation of a compute node. Note that each invocation of a compute node in the traditional system incurs a notable data read latency. Section V describes the details of how _DSCS-Serverless_ identifies the serverless functions amenable to acceleration, followed by the corresponding DSA invocations.
② Instead of fetching data from remote storage via costly RPC requests, _DSCS-Serverless_ employs its driver to initiate a peer-to-peer (P2P) data transfer from the storage to _DSA_ memory. Specifically, in the traditional system, for an AWS S3 Read API, the RPC, upon reaching the storage node, requires at least a ProtoBuf deserialization [92] and a _read_ system call to access the file over PCIe [143]. In contrast, _DSCS-Serverless_ circumvents the costly ProtoBuf deserialization by performing a single system call to initiate the P2P data communication [139].
The P2P data transfer between the SSD and the _DSA_ memory subsystem bypasses the host's entire stack*. Once the data is entirely transferred to _DSA_ memory, the execution of the serverless function commences.
③ Once the execution completes, the _DSA_ sends an interrupt over PCIe to the host to initiate a P2P transfer of the results to the storage. Once the transfer completes, the host may invoke subsequent serverless function calls. This is in contrast to the traditional system, in which data transfers over the network and I/O events occur to write the results to persistent storage.
Footnote *: In evaluations, we use PCIe Gen3 with \(\times 4\) lanes, as used in conventional CSDs such as the Samsung SmartSSD [144]. We conduct a sensitivity analysis on the number of PCIe lanes in Section VI-C.
Once the serverless function concludes, _DSCS-Serverless_ interrupts the host to notify it of the completion of the job. Using our proposed system architecture, _DSCS-Serverless_ offers the opportunity to execute a serverless function on a single (storage) node, thereby improving the overall end-to-end latency and energy consumption. In addition, the specialized _DSA_
units economize precious CPU cycles on compute nodes, leading to additional cost efficiency in datacenters.
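To make the sequence of events in this subsection concrete, the following Python sketch traces the offload path. All driver calls and the device path `/dev/dsa0` are hypothetical stubs standing in for the OpenCL-based runtime described in Section V; they only trace the order of events and touch no hardware.

```python
def open_device(path):
    print(f"map {path}: DSA configuration registers + memory")
    return path

def p2p_read(dev, blocks):
    print(f"{dev}: SSD -> DSA memory over peer PCIe (host stack bypassed)")

def launch(dev, binary):
    print(f"{dev}: execute {binary}")

def wait_interrupt(dev):
    print(f"{dev}: completion interrupt received over PCIe")

def p2p_write(dev, key):
    print(f"{dev}: DSA memory -> SSD over peer PCIe ({key})")

def offload(binary, input_blocks, output_key):
    dsa = open_device("/dev/dsa0")  # step 1: function deployed on the storage node itself
    p2p_read(dsa, input_blocks)     # step 2: P2P transfer replaces the remote RPC read
    launch(dsa, binary)             # run once the data is resident in DSA memory
    wait_interrupt(dsa)             # step 3: DSA notifies the host of completion
    p2p_write(dsa, output_key)      # results persisted back to flash
    # the host may now invoke the next serverless function in the chain

offload("model.bin", ["blk0", "blk1"], "results/out.bin")
```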
## IV DSCS-Serverless Architecture Design
This section first discusses the architecture of the programmable domain specific accelerator (_DSA_). Then, we showcase a methodology to derive the optimal _DSA_ configuration under tight computational storage power constraints. We also demonstrate a technology scaling analysis to project the performance numbers for more recent technology nodes. Note that, while we present the design space exploration framework for Machine learning and Data analytics applications, it can be readily employed to find optimal _DSA_s for alternative application domains.
### _Architecture Design of Domain Specific Accelerator_
Machine learning and data analytics applications are among the fastest growing domains in serverless, accounting for more than 40% of services [145, 7, 11]. As such, we set the primary target of our design to accelerate such applications. The important design decisions that we considered for the architecture were programmability, so that it can cater to a wide range of commonly deployed machine learning workloads, and low power, so that it can abide by the power constraint imposed by the storage. Specifically, the architecture can support a wide range of machine learning inference tasks such as image classification, object detection, semantic segmentation, linear/logistic regression, neural machine translation, conversational AI, data pre-processing, etc. Figure 6 demonstrates our systolic-array-based _DSA_ architecture, inspired by recent industry designs such as Google TPU [146, 24], Amazon [26], Meta [93], and academic architectures [38, 39, 40, 52, 147, 48, 36]. Additionally, we design and integrate a vector engine in _DSA_.
**Systolic array accelerator.** The systolic unit consists of a 2D array of Processing Elements (PEs) and dedicated multibank scratchpads for input activations, weights, and outputs, as shown in Figure 6. Each bank of scratchpad units is dedicated to, and shared across, the PEs within a row. The execution flow of such an architecture is similar to conventional systolic-array accelerators [149, 43, 150]. At each cycle, input activation tensors are fetched from the input scratchpads and shared across the PE units within a row. The partial sums from the PEs are forwarded down each column in a waterfall fashion. Once the computations across the array of PEs conclude, the results are either fed to the _Vector Engine_ for ensuing operations or written back to DRAM.
**Vector engine unit.** The _Vector Engine_ is a Single Instruction Multiple Data (SIMD) architecture designed primarily to execute activation functions (e.g., ReLU, LeakyReLU, Tanh, Sigmoid), pooling, quantization, vector arithmetic computations, and datatype casting, which are prevalent in emerging machine learning models [151, 152, 153].
Furthermore, machine learning models generally require data pre-processing/post-processing such as image scaling, normalization, and datatype casting [123, 109, 110, 117]. These transformations are commonly packaged as separate serverless functions. For instance, the _Asset Damage Detection_ benchmark [117] uses an image classification pipeline that contains a function solely for data pre-processing. We utilize the _Vector Engine_ to execute these functions as well. In summary, the tightly-coupled design of a systolic array accelerator and a vector engine unit enables the execution of a broad range of serverless function services, from heavy GEMM operations to simple data processing functions.
### _Design Space Exploration for Optimal Near-Storage DSA_
To effectively support the design space exploration of _DSA_, we designate the number of PEs, the systolic array X-Y dimensions, the on-chip scratchpad sizes, and the memory bandwidth as configurable parameters. Such design space exploration is crucial to ensure that the _DSA_ architecture satisfies the tight power and thermal constraints of computational storage drives2. These tight design constraints are primarily due to the limited PCIe power budget (\(\leq\) 25 watts [104]), which is the exclusive source of power for CSD units. Note that the power source of each CSD unit is shared between the storage (flash) and compute devices [122, 121, 119]. To provide an estimate of the CSD's limited power budget, Samsung's SmartSSD [122, 121] has a TDP (ideal) of merely 18 watts. In addition, prior work has shown that computational storage, when appropriately designed, can significantly outperform conventional servers
Fig. 6: Architecture diagram of _DSA_ placed near storage for _DSCS-Serverless_ as highlighted in Figure 3
with high-end CPUs and GPUs [154, 155]. As such, performing proper design space exploration for _DSA_ is crucial.
**DSE objective.** There are various conflicting factors to take into account to identify optimal design points for CSDs. Commensurate with prior work [156, 157], we use throughput (frames per second) as the performance metric. The capital expense incurred by ASIC fabrication is another determining factor for designing datacenters [158, 159]. However, measuring the precise capital expense is not pragmatic, as such figures are kept confidential by major cloud providers [24, 160]. Therefore, we use chip area as a proxy for the ASIC fabrication cost. We use the _DSA_ power consumption to assess the feasibility of a design point [156], which is capped by the CSD's limited power budget. The objective of the DSE is to find the design points that are on the Pareto frontiers of the power\(\leftrightarrow\)performance and area\(\leftrightarrow\)performance trade-offs. Particularly, we only select _DSA_ architecture configurations that are on the Pareto frontiers while abiding by the tight power and area constraints of CSDs. We choose the open source 45 nm FreePDK [161] technology node for our baseline analysis in the design space exploration.
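As an illustration of the selection step, the frontier can be extracted with a simple dominance filter; the design-point numbers below are made-up placeholders, not our measured results.

```python
# Keep a (power, throughput) point iff no other point has both lower power
# and higher throughput (weak Pareto dominance).
def pareto_frontier(points):
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in points)]

designs = [(2.1, 180.0), (4.2, 410.0), (9.8, 460.0), (9.5, 300.0), (15.0, 455.0)]
print(pareto_frontier(designs))  # dominated points (9.5, 300.0), (15.0, 455.0) drop out
```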
**FPGA implementation and simulation methodology.** To build the design space exploration infrastructure, we first implement and synthesize _DSA_ (refer to Section VI-A). Since hardware simulations for the entire set of design points (\(>650\)) are not practical, we develop a cycle-accurate simulator to closely model the latency and power of our designed _DSA_ (refer to Section VI-A). For the power and area numbers, we use the synthesized values in the 45 nm technology node. We followed the methodology in [162] to scale the results to 14 nm, which is relatively similar to the technology node of the Samsung SmartSSD [144]. We provide the details of the methodology in Section VI-A.
**Design space exploration search space.** We use Google TPUv1 [24] with 256\(\times\)256 PEs, 28 MB of on-chip scratchpads, and 34 GB/s memory bandwidth as the standard design point. We then scale this design by varying the number of PEs from 4\(\times\)4 to 1024\(\times\)1024 with a power-of-2 stride. We proportionally scale the scratchpads to provide sufficient on-chip resources for the PEs. However, we set the maximum scratchpad size to 32 MB because large scratchpad sizes significantly increase the power consumption, exceeding the tight power constraints for _DSA_. We use three realistic memory bandwidths in the search space, namely DDR4 (19.2 GB/s), DDR5 (38 GB/s), and HBM2 (460 GB/s) [163, 164].
**Pareto-optimal design points.** Figure 7 demonstrates the area\(\leftrightarrow\)performance and power\(\leftrightarrow\)performance results across a range of design points. We exclude the design points that are either infeasible because of the design constraints or significantly inefficient in terms of throughput. The power and throughput of each design point are averages across the set of target benchmarks (see Table I). The design point at the bottom left portion of Figure 7(a) represents a 4x4 systolic array with 128 KB on-chip scratchpad and DDR4 memory. The top right design configuration shows a 128\(\times\)128 systolic array with 4 MB on-chip scratchpad and DDR5 memory. The results of our design space exploration indicate that a 1024\(\times\)1024 systolic array delivers significantly lower throughput compared with a 128\(\times\)128 array. This is because the _DSA_ employs a tiling-based execution mechanism. For a batch size of one3, a 1024\(\times\)1024 systolic array does _not_ show the best performance because the compiler (Section V) aims to obtain the optimal tiling such that the _DSA_ overlaps the memory transfers for a tile with the computation of the preceding tile. If the tile sizes are large, the cycles spent on memory transfer outweigh the compute cycles. This is similar to CPU pipelines, where stalls reduce the IPC. Through our design space exploration, we find the optimal _DSA_ configuration to be a 128\(\times\)128 systolic array with 4 MB on-chip scratchpad and DDR5 memory.
Footnote 3: We use batch size 1 for our baseline design because serverless functions generally target real-time user facing applications with tight service level objectives [103]. In these scenarios, additional latency for batching is not desirable. Nonetheless, Section VI-C provides sensitivity results to batch sizes.
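The following first-order model (our own illustrative sketch, with an assumed bandwidth and tile shapes, not the cycle-accurate simulator of Section VI) captures why larger arrays become transfer-dominated at batch size one:

```python
# In a weight-stationary tile, one input row is consumed per cycle, while
# fetching the next dim x dim weight tile (2 bytes/weight) costs 2*dim^2/BW
# cycles; the tile is memory-bound whenever rows_per_tile < 2*dim^2/BW.
BW = 32  # off-chip bytes per cycle (roughly DDR5-class bandwidth at 1 GHz)

def rows_to_hide_weight_load(dim, bw=BW):
    return 2 * dim * dim / bw  # input rows needed to overlap one weight-tile load

for dim in (32, 128, 1024):
    print(f"{dim}x{dim} array: needs >= {rows_to_hide_weight_load(dim):.0f} "
          "input rows per tile to stay compute-bound")
```

At batch size one, layers rarely supply the tens of thousands of input rows that a 1024\(\times\)1024 array would need, so its tiles become transfer-dominated, consistent with the trend discussed above.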
## V DSCS-Serverless: System Considerations
_DSCS-Serverless_ requires minimal modifications to (1) the programming model to express near-data acceleration using _DSA_, and (2) the system software stack to identify and leverage nodes capable of near-storage acceleration using _DSA_.
**System software stack.**_DSCS-Serverless_ is deployed atop containerized serverless frameworks. We implement and deploy the _DSCS-Serverless_ software stack using OpenFaaS [105], an open source serverless framework. However, _DSCS-Serverless_ can be readily deployed on other serverless platforms such as Apache OpenWhisk [124]. OpenFaaS is deployed on Kubernetes [106] to orchestrate containers, and uses Prometheus [165] for cluster monitoring and telemetry.
**Programming model.** To utilize serverless functions using OpenFaaS, users define their target applications as a Directed Acyclic Graph (DAG) of decoupled serverless functions. During deployment of these functions, the user provides a YAML file to describe the properties and constraints of the function (e.g., dependencies, timeout, access mechanism, storage, etc.). _DSCS-Serverless_ extends this YAML file format to enable users to mark functions as acceleratable by the near-data _DSA_. In addition, users provide a container that packages the accelerated serverless function with the appropriate device drivers and libraries.
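As an illustration, a deployment descriptor extended with such a hint might look as follows; the `dscs.*` annotation keys are hypothetical, not an existing OpenFaaS field, and the snippet only shows how a scheduler could read them (assuming PyYAML is installed):

```python
import yaml

spec = yaml.safe_load("""
functions:
  ml-inference:
    image: registry.local/ml-inference:latest   # packages function + DSA driver/libs
    annotations:
      dscs.acceleratable: "true"                # hint: eligible for a DSA storage node
      dscs.domain: "ml-inference"
""")

fn = spec["functions"]["ml-inference"]
acceleratable = fn.get("annotations", {}).get("dscs.acceleratable") == "true"
print(acceleratable)  # True
```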
**Data placement.** To enable effective utilization of near-storage accelerators, it is essential that the data to be processed is situated on the same storage drive that houses the accelerator. Cloud service providers today offer various storage classes for
Fig. 7: Performance frontiers, 45 nm Tech Node.
different types of data (hot and frequently accessed data, cold data, and archived data, etc.) [141, 142]. We extend the storage class [166] to add a new class named _Acceleratable_Storage_. When a function is designated as acceleratable during deployment, the storage provisioned for the data comprises _DSCS-Serverless_ storage drives. During invocation, all data requests for this application are directed to _DSCS-Serverless_ capable storage drives. Further, we exploit the fact that serverless functions have a strict maximum payload size per request (e.g., 256 KB on AWS Lambda [167]) to ensure that the entire payload is routed to the same drive. As the number of requests (data) increases, it is possible to store different requests on separate drives that support _DSCS-Serverless_. This is because requests (data) are independent of each other, and therefore, the scheduling of requests (data) can be distributed across different drives. The scheduler relies on Prometheus [165] telemetry to decide whether to employ near-storage acceleration or execute the function in a conventional manner.
**Function scheduling and fallback options.** We extend the centralized Kubernetes scheduler to expose storage nodes that can utilize the near-data _DSA_ and opportunistically map acceleratable serverless function executions to such nodes if the data resides on the node. The scheduler uses a simple FCFS scheduling policy for the incoming function execution requests, leveraging Prometheus [165] to monitor availability and prevent overloading a single Kubernetes pod. Note that, similar to traditional serverless [9, 10, 11], a function instance on the _DSA_ also does _not_ support multi-tenancy and follows a run-to-completion execution policy. Once a function is offloaded for computation on _DSCS-Serverless_, the storage node marks its compute status as _busy_. The scheduler does not offload more functions until the node becomes _available_. When no _DSA_-capable storage node is available or a _DSA_ is already processing some other function, the scheduler falls back to traditional execution and utilizes remote compute nodes (CPU) for function execution instead. This is possible because _DSCS-Serverless_ can be used as a traditional storage drive as well. Finally, we leverage the existing Kubernetes mechanisms for fail-over support and container migration.
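A minimal sketch of this policy follows; the node names and availability flags are illustrative, and in the real system this state would come from Prometheus telemetry rather than an in-memory dictionary.

```python
from collections import deque

dsa_nodes = {"storage-0": "available", "storage-1": "busy"}

def schedule(request_queue: deque):
    while request_queue:
        fn = request_queue.popleft()  # FCFS order
        node = fn.get("data_node")    # offload only if the data resides on the node
        if fn.get("acceleratable") and dsa_nodes.get(node) == "available":
            dsa_nodes[node] = "busy"  # run-to-completion, no multi-tenancy on the DSA
            print(f"{fn['name']}: offload to DSA on {node}")
        else:
            print(f"{fn['name']}: fall back to a remote compute (CPU) node")

schedule(deque([
    {"name": "f1", "acceleratable": True,  "data_node": "storage-0"},
    {"name": "f2", "acceleratable": True,  "data_node": "storage-1"},  # DSA busy
    {"name": "f3", "acceleratable": False, "data_node": "storage-0"},
]))
```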
**Storage scalability.** Remote storage can be scaled independently from compute, thereby enabling virtually unlimited storage capacity. This independent scalability is not limited by _DSCS-Serverless_ because it can still operate as a traditional storage node for applications that do not require the compute acceleration capabilities of _DSCS-Serverless_. Moreover, serverless functions do not require complex scaling of compute and storage since they have strict storage and compute quotas [167], and serverless platforms deploy horizontal scaling to launch more function instances to handle new requests. Therefore, _DSCS-Serverless_ does not restrict independent storage scalability.
**Cold starts.** Functions in _DSCS-Serverless_ incur the same cold starts as functions on traditional platforms. Cold starts occur when the function is deployed for the first time to a node, where the function's container image is pulled from a remote registry, unpacked, and has to pass a health check. During horizontal scaling, each time a function scales from N to N+1 replicas, the same process described above takes place. However, we configure the serverless framework to preemptively launch some containers (until the horizontal scaling threshold is reached) if all other containers are busy. This hides the cold start latency because we have at least some replicas to cater to the current load of the system. Further, similar to the traditional mechanism where the function is kept warm in the compute node's memory for a certain amount of time, _DSCS-Serverless_ also keeps the function in the _DSA_'s memory for some duration, preemptively waiting for new requests.
**Device driver and libraries.** To support near-storage acceleration, _DSCS-Serverless_ includes an OpenCL device driver [168]. The driver implements standard interfaces for mapping storage space to the physical address space of both the storage node and the _DSA_'s configuration registers and memory. Additionally, the driver orchestrates direct peer-to-peer (P2P) data transfers between the storage and the _DSA_ that bypass the storage node's system stack and utilize dedicated PCIe links. The OpenCL-based driver also abides by the host OS security checks for access control to both the storage and the _DSA_. The acceleratable function's container includes all dependent libraries, such as the OpenCL framework, runtime, and associated tools used to program the _DSA_.
**Compiler support.** We develop a compilation stack capable of code generation for different _DSA_ configurations and for a range of evaluated machine learning models. The serverless functions that utilize the _DSA_ are implemented using PyTorch [169] and stored as ONNX (Open Neural Network Exchange) files [170]. The front-end part of the compiler performs a range of optimizations, including operator fusion to minimize off-chip data movement. Then the compiler performs _DSA_ configuration-specific (e.g., number of PEs, memory bandwidth) optimizations such as padding and tiling to maximize the _DSA_'s utilization. Once these optimization passes complete, the compiler generates optimized executable code specific to the hardware configuration. This code is packaged along with the serverless function in the container. Note that we rely on the user to partition their application into serverless functions that can or cannot be accelerated using a near-storage _DSA_.
**Host and storage communication.** Prior work [171, 172, 173] suggested the use of multi-channel DMA over PCIe for high throughput. This work follows the same methodology and configures our device driver to use multi-channel DMA. The computational storage contains a PCIe switch, as shown in Figure 6, similar to prior work [174, 175]. The CSD employs this switch to route incoming requests to either the storage or the _DSA_. The _DSA_ communicates with the storage device over peer-to-peer PCIe links.
## VI Evaluation
### _Methodology_
**Benchmarks.** To evaluate the efficacy of _DSCS-Serverless_, we use eight real world latency critical applications representing serverless pipelines deployed on AWS Lambda [9]. Table I
shows the suite of benchmarks, their descriptions, serverless functions, and the corresponding input/output sizes. Each application is executed in a pipeline similar to the one shown in Figure 2, where we offload both data pre-processing (f1) and machine learning inference (f2) to the compute platform (GPU, FPGA, ARM, _DSA_). Since the exact machine learning models used in AWS Lambda functions are not publicly available for some benchmarks, we use representative and state-of-the-art inference models from MLCommons [186] or Hugging Face [187] that provide similar functionality as the AWS Lambda functions (e.g., AWS Rekognition [113] offers an image classification service, and we use ResNet-50 [188]). We containerize all the serverless functions by using OpenFaaS [105]. The model weights of ML functions are stored in the container, while the inputs/outputs use remote storage through AWS S3 API calls.
**Baseline system setup.** For the _baseline_, we use an _Amazon EC2 c5.4xlarge_ cloud instance with an _Intel(R) Xeon(R) Platinum 8275CL_ CPU and use an _IAM_ account to connect the EC2 virtual machine to an S3 object storage in the same region. The EC2 instance runs _Ubuntu 20.04.4 LTS_ with kernel version _5.13.0-1029-aws_. We spawn a cluster using Kubernetes [106] on the EC2 instance and deploy OpenFaaS [105] as a pod on the Kubernetes cluster. All the benchmarks are deployed and registered with the OpenFaaS function registry at deployment time. All evaluations are based on warm containers unless specified otherwise. We use _hey_ [189], an HTTP load generator tool, to invoke the serverless applications. Each request follows a run-to-completion model.
**Evaluation of compute platforms.** We consider two scenarios. First, the _Traditional Platforms_, where the compute device (refer to Table II) is a separate node and accesses the remote storage via the network as shown in Fig. 3. We consider three commodity high-end compute platforms commonly used in datacenters for this scenario: an Intel Xeon CPU (baseline), an NVIDIA GPU, and a Xilinx datacenter FPGA [190] (programmed with a relatively smaller _DSA_ due to the resource constraints of the FPGA). Second, the _Conventional Near-Storage platforms (NS)_, where the compute is placed near the storage and connected via a peer PCIe link to the storage. Since these are not available in datacenters, we set up the infrastructure locally, similar to the baseline setup. We consider three low-power near-storage platforms: a quad-core ARM CPU, a mobile GPU (Nvidia Jetson TX2), and the near-storage FPGA (Samsung SmartSSD [121]). Since we did not have access to the ARM Cortex-A53 that is used in commercially available CSDs [119, 120], we use an even more powerful ARM core (Cortex-A57) for our evaluation. Table II reports the specifications of all the evaluated compute platforms.
**System performance measurements.** For the baseline measurement, we use the aforementioned baseline setup and invoke the application by generating 10000 sequential requests using _hey_, an HTTP load generator, to measure the latency. We use the 95th percentile latency for all our analysis, similar to prior work [102, 103]. To measure the latency for all other _traditional compute platforms_, we create containers with the required environments (e.g., onnxruntime for the GPU or Xilinx XRT for the FPGA, in addition to their corresponding drivers). For the case of _conventional near-storage platforms_, we develop an analytical model in which we modify the baseline system by omitting the latency to access the remote storage and adding the peer-to-peer latency for transferring the data between the storage and the compute platform, along with the other required latencies (e.g., instructions, model weights, etc.). To obtain the P2P latency of data transfer, we emulate this communication on a Samsung Xilinx SmartSSD, measure the time it takes to transfer data between the FPGA and storage for all the benchmarks 10000 times, and sample the 95th percentile latency.
**Hardware implementation and synthesis.** We implement the _DSA_ in 15k lines of Verilog and synthesize it using Synopsys Design Compiler R-2020.09-SP4 with the FreePDK 45nm standard cell library. The design was able to achieve a 1GHz frequency. Further, to synthesize the _DSA_ and program it on the Samsung Xilinx SmartSSD FPGA, we use Xilinx Vitis/Vivado [191] and the Xilinx XRT runtime [192]. We also use Xilinx Vivado to obtain the resource utilization, timing, power, and thermal statistics for the FPGA analysis.
**Simulation infrastructure.** We compile each machine
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline **Application** & **Description** & **Serverless Functions** & **DNN Model (\#Params)** & **Input/Output Size** \\ \hline \end{tabular}
\end{table} Table I: Benchmark suite of eight real-world serverless applications, their descriptions, constituent serverless functions, DNN models, and input/output sizes. (The table body was garbled during extraction and is omitted; only the column headers are recoverable.)
learning model to our domain-specific accelerator ISA and generate executable binaries. We develop a cycle-accurate simulator for the _DSA_ ASIC implementation, which uses compiler-generated instructions and provides cycle counts and energy statistics. We compare the simulator-provided results with the FPGA implementation of the _DSA_ on the SmartSSD card for the same design configuration and frequency to verify that the cycle counts agree within an error margin of \(\leq\) 10%. We use this simulator to obtain the performance/energy numbers for the _DSA_ ASIC implementation and the design projections mentioned in Section IV. Further, we will be open-sourcing both the hardware implementation and the software simulator.
**Power measurements.** We measure the compute, PCIe, and system stack power dissipation and combine them to report the energy efficiency of the system. Although serverless systems also use the network (Ethernet/Internet), measuring its power is not realistic, and therefore we omit the network power for all the traditional systems. We use the Intel RAPL [193, 194] and MSR registers to get the server-CPU power. To obtain the power for the ARM CPU and GPU on the Jetson TX2, we use the _NVPModel_ tool from the NVIDIA Jetson TX2 Development Kit [195]. We use Xilinx Vivado [191] to measure the power for the FPGA implementation of the _DSA_ on the Samsung Xilinx CSD and Alveo u280. To obtain power for the ASIC _DSA_, we use synthesis results to measure the logic cell power and CACTI-P [196] to model the on-chip memory energy. For PCIe, we use the per-bit PCIe power reported in prior work [197].
**Cost efficiency model.** To assess if a new design offers cost savings over other systems, we evaluate cost efficiency [159], which is the average peak throughput over total cost and time of ownership as shown in the equation below.
\[\text{Cost Efficiency}=\frac{\text{Throughput}\times T}{\text{CAPEX}+\text{OPEX}}\,,\qquad\text{OPEX}=\sum\left(\text{Power}\times T\times\text{Electricity}\right)\]
_CAPEX_ is the one-time capital expenditure ($) to set up the hardware (including the networking, host server, etc.). We identify the cost to procure a compute platform from the respective company websites [119, 122, 163, 182, 183]. To compute the _CAPEX_ for the proposed _DSA_, we use the analytical model from ASIC Clouds [158]. Further, _OPEX_ is the day-to-day operating cost ($) of the hardware. It is the product of the power (watts) of the various components in the cluster, the time for which the cluster is active (\(T\)), and the average price of electricity in the U.S. ($0.0733/kWh) [159]. We compute the _Cost Efficiency_ for a period of 3 years [160].
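For concreteness, the model can be evaluated with a small calculation. In the sketch below, all inputs (CAPEX, power, throughput) are illustrative placeholder values, not measured numbers from the evaluation.

```python
# Illustrative sketch of the cost-efficiency model above.
# All inputs are placeholder assumptions, not measured values.

HOURS_3_YEARS = 3 * 365 * 24          # ownership period T, in hours
ELECTRICITY_USD_PER_KWH = 0.0733      # average U.S. electricity price

def cost_efficiency(throughput_rps, capex_usd, power_watts,
                    hours=HOURS_3_YEARS,
                    price_kwh=ELECTRICITY_USD_PER_KWH):
    """Throughput sustained over the ownership period, divided by
    the total cost of ownership (CAPEX + OPEX)."""
    opex_usd = (power_watts / 1000.0) * hours * price_kwh  # energy cost
    return throughput_rps * hours / (capex_usd + opex_usd)

# Hypothetical comparison: a CPU server vs. a low-power near-storage DSA.
print(cost_efficiency(throughput_rps=100, capex_usd=8000, power_watts=250))
print(cost_efficiency(throughput_rps=310, capex_usd=6000, power_watts=50))
```

The lower power term makes the OPEX denominator grow more slowly over time, which is why a low-power design pulls ahead as the ownership period lengthens.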
**Throughput measurement.** Throughput is the maximum number of requests per second achieved by the system while meeting the 99% SLA. To measure throughput, we create an application trace by randomly sampling functions from the benchmarks (Table I) with Poisson-distributed arrivals, and induce load on the system for 30 minutes. We emulate 100 _DSCS-Serverless_ instances on the baseline system by launching 100 serverless functions that busy-wait for the duration it would take to execute the function on the _DSA_. The choice of 100 is for feasibility of evaluation.
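A minimal sketch of the trace construction is shown below; the benchmark names and the arrival rate are placeholder assumptions.

```python
# Minimal sketch: build a 30-minute load trace by sampling functions
# with Poisson arrivals (exponential inter-arrival times).
import random

BENCHMARKS = ["credit_risk", "ppe_detection", "chatbot", "translation"]

def make_trace(rate_rps=50.0, duration_s=30 * 60, seed=0):
    rng = random.Random(seed)
    t, trace = 0.0, []
    while True:
        t += rng.expovariate(rate_rps)   # exponential inter-arrival time
        if t >= duration_s:
            return trace
        trace.append((t, rng.choice(BENCHMARKS)))

trace = make_trace()
print(len(trace), trace[:3])
```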
### _Experimental Results_
**Performance comparison.** Figure 8 compares the performance of the platforms listed in Table II for both the traditional (remote storage) and near-storage (denoted NS) scenarios across all studied benchmarks. The speedups are normalized to the baseline (see Table II). On average, _DSCS-Serverless_ provides 3.6\(\times\) speedup over the baseline across all benchmarks. _Moreover, the results suggest that leveraging a lightweight accelerator (4.2 watts) for DSCS-Serverless outperforms a high-end GPU (250 watts) with remote storage by 2.7\(\times\)._ This is because: First, the inherent data-movement latency in the remote-storage setting limits the performance benefits from the high-end GPU. Second, the use of batch size one in serverless scenarios causes underutilization in GPUs. Using an FPGA slightly underperforms the baseline due to the driver overhead and limited FPGA resources.
To tackle the communication overheads, we analyze near-storage computing. As shown in Figure 8, using near-storage with a quad-core ARM CPU slightly underperforms the baseline. Compared to the baseline, the mobile GPU provides 1.35\(\times\) speedup, while leveraging a NS-FPGA unlocks 2.2\(\times\) speedup. The speedup for the low-power NS-FPGA seems counter-intuitive compared to the high-power FPGA (with remote storage), but the latter was bottlenecked by the communication overhead. _This analysis shows that the overhead of moving input and output data from remote storage limits the benefits from acceleration._ Nevertheless, the FPGA's performance is still bounded by its limited resources and low frequency. As shown in Figure 8, _leveraging a domain-specific architecture near the storage unlocks additional benefits and provides 3.7\(\times\) and 1.7\(\times\) speedups over the conventional approaches of using a microprocessor and a SmartSSD near the storage, respectively._ _Credit Risk Assessment_ exhibits the least speedup because logistic regression is not computationally intensive, while _PPE Detection_ achieves the maximum speedup because moving compute near the storage eliminates the significant data movement that the benchmark otherwise incurs.
**Runtime breakdown analysis.** Figure 9 shows the runtime breakdown across the individual system components for all the benchmarks and platforms. We see that for _traditional platforms_ with GPU/FPGA (with remote storage), the compute portion is significantly reduced due to hardware acceleration. However, the data transfer over network limits the effective speedup achieved by the hardware acceleration. This significant data transfer is addressed by the near-storage platforms where moving compute closer to storage reduces the data movement, shifting the bottleneck back to the compute. The _DSA_ further accelerates this compute portion unlocking additional performance gains. Overall, we observe that leveraging _DSCS-Serverless_ shifts the bottleneck from compute and communication to other components such as the system stack.
For the benchmark _Credit Risk Assessment_, Figure 4 shows that data movement accounts for approximately 75% of the runtime. Intuitively, moving compute to near-storage should provide a 4x speedup. However, we observe a 1.8x speedup for two reasons. First, near-storage computing has a driver overhead that contributes significantly to the end-to-end runtime (on the order of milliseconds). Second, as mentioned in the methodology (Section VI-A), the _f3_ function is launched on the CPU and experiences network and IO latency similar to traditional systems. As depicted in Figure 9, for _DSCS-Serverless_ the bottleneck is now the latency incurred by the function _f3_ to read the data from persistent storage and the system-stack overheads.
**Energy reduction comparison.** Figure 10 analyzes the end-to-end system energy reduction achieved by _DSCS-Serverless_. On average, _DSCS-Serverless_ provides 3.5\(\times\) energy reduction over the baseline system and 1.9\(\times\) reduction over the NS-FPGA (SmartSSD), the most competitive baseline. FPGAs have significantly higher static energy dissipation and thus cannot match the energy efficiency of an ASIC. Although leveraging the _DSA_ provides significant energy reduction (29\(\times\) over the CPU baseline), the total system energy reduction is bounded by the system stack and the _f3_ function being executed on the CPU. The trends in energy reduction are similar to the speedup, with _PPE Detection_ showing the maximum gains (8x) and _Credit Risk Assessment_ showing the minimum (1x).
**Cost efficiency.** Figure 11 shows the cost efficiency for various platforms normalized to the baseline. The results show that _DSCS-Serverless_ offers the highest cost efficiency (3.4\(\times\)) compared to the CPU baseline, while the NS-FPGA (SmartSSD) ranks second (1.6\(\times\)). This result is intuitive: over the initial period of usage, the _CAPEX_ cost of building the hardware dominates, but over time the _OPEX_, i.e., the operating (electricity) cost, becomes more dominant. Since _DSCS-Serverless_ consumes less energy than the other platforms, its cost efficiency increases over time.
**Throughput.** Figure 12 compares the throughput of _DSCS-Serverless_ relative to the baseline CPU for requests sampled from the application trace (refer Section VI-A). _DSCS-Serverless_ improves throughput by 3.1x on average. For _Credit Risk Assessment_ the throughput improvement is not significant since it is not compute-intensive and therefore acceleration does not provide significant benefits. Overall, _DSCS-Serverless_ improves the throughput of the system since each _DSCS-Serverless_ instance can process more requests per second as compared to the base
Fig. 11: Normalized cost efficiency.
Fig. 8: Normalized speedup for application designed as serverless function shown in Figure 2.
Fig. 10: Normalized system energy reduction.
Fig. 9: Normalized runtime breakdown.
line. On the contrary, the baseline queues more requests, resulting in a greater number of requests that violate the 99% SLA.
### _Sensitivity Analysis_
**Batch size.** Figure 13 shows the sensitivity of the _DSCS-Serverless_ end-to-end performance to batch size (refer Table I). We sweep the batch size from one to 64 across all benchmarks and report the performance of _DSCS-Serverless_ normalized to the baseline (CPU) with remote storage, running the same batch size. The rationale behind limiting the batch size to 64 is that AWS Lambda has a strict cap on the network payload size for serverless functions [167]. Relative to the baseline, the performance improvement of _DSCS-Serverless_ increases from 3.6x for batch size 1 to 15.9x for batch size 64. This improvement stems from (1) reducing the communication overheads of transferring batched data to the compute node compared to the baseline and (2) the capability of the _DSA_ to reuse the weights across the batch, thereby improving the computation significantly. Among all the benchmarks, the improvements are most pronounced for _Conversational Chatbot_ and _Document Translation_, since these applications deploy language transformer models with a large number of weights, where _DSCS-Serverless_ leverages batching to amortize the cost of loading weights by reusing them across the input batch.
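The amortization effect behind these numbers can be captured by a simple first-order latency model (an illustrative assumption, not the paper's formal model): if \(t_{w}\) denotes the one-time cost of loading the model weights and \(t_{c}\) the per-input compute time, then for batch size \(n\)

\[t(n)\approx t_{w}+n\,t_{c},\qquad\text{so}\qquad\frac{t(n)}{n}\to t_{c}\quad\text{as }n\to\infty,\]

which is why benchmarks with a large \(t_{w}\) (many weights) benefit the most from batching.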
**Number of accelerated functions.** To analyze the sensitivity of the _DSCS-Serverless_ performance to the number of accelerated functions, we create synthetic benchmarks by adding either one, two, or three additional accelerated functions to the application. These added functions are all replicated from the _f2_ function of the original benchmarks. The label in Figure 14 refers to the number of replicated functions. The performance is normalized to the baseline (CPU) running the same function configuration. The results show that as the number of functions offloaded to _DSCS-Serverless_ increases, the improvements escalate (from 3.6x to 8.1x). Increasing the number of functions in fact emulates scenarios in which the serverless applications are composed of more complex pipelines with multiple functions [198, 199]. Such complex pipelines would incur more pronounced computation and communication overheads in the end-to-end execution, both of which are addressed significantly by the domain specialization and near-storage computation of _DSCS-Serverless_.
**Sensitivity to PCIe bandwidth.** Figure 15 illustrates the sensitivity of the _DSCS-Serverless_ performance with respect to PCIe bandwidth used for P2P communication (normalized to _DSCS-Serverless_ with PCIe x1). We measure the communication time for all benchmarks with x4, x8, and x16 PCIe lanes using the NS-FPGA, FPGA, and GPU (refer Table II), and use these numbers to extrapolate the communication time for x1, x2, and x32 lanes. As the results show, scaling the number of lanes has a negligible impact on performance. This is due to the fact that PCIe provides enough bandwidth for P2P communication as the size of the data is relatively small for serverless functions. As such, in these cases the performance of _DSCS-Serverless_ is bounded by the latency of the communication as opposed to the bandwidth.
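The latency-bound behavior can be illustrated with a toy transfer-time model; the constants below are rough assumptions for illustration, not measured values.

```python
# Toy model of P2P transfer time: fixed latency + payload / bandwidth.
# The constants are rough assumptions for illustration only.
def transfer_ms(payload_mb, lanes, latency_us=5.0, lane_mb_per_s=985.0):
    bandwidth = lanes * lane_mb_per_s            # PCIe Gen3, ~985 MB/s/lane
    return latency_us / 1000.0 + payload_mb / bandwidth * 1000.0

for lanes in (1, 4, 16):
    print(lanes, round(transfer_ms(payload_mb=0.5, lanes=lanes), 3))
# The spread across lane counts is a fraction of a millisecond, small
# next to the O(ms) driver and system-stack overheads reported above.
```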
Fig. 14: Sensitivity to the number of accelerated functions.
Fig. 12: Throughput improvement over CPU baseline.
Fig. 13: Sensitivity to batch size.
Fig. 15: Sensitivity to PCIe bandwidth.
**Tail latency effect.** Accessing remote storage can incur long tail latency [18, 101, 135]. To understand this variability and its implications for _DSCS-Serverless_ performance, we perform a sweep across various latency distributions for the PCIe, P2P, and network links. Figure 16 shows the implications of tail latency normalized to the baseline (CPU) with the same latency distribution. The results suggest _DSCS-Serverless_ is robust to network and I/O tail latency, since it in fact removes the data movement over them. On average, _DSCS-Serverless_ provides 5.0\(\times\) speedup at the 99\({}^{th}\) percentile and 3.1\(\times\) speedup at the 50\({}^{th}\) percentile.
**Cold start.** Figure 17 shows the speedup of _DSCS-Serverless_ over baseline. Both _DSCS-Serverless_ and the baseline use cold containers where they pull the container image (including the weights for the model) and load it to the memory of the _DSA_. Since the models are large, the time to load a model accounts for a significant portion of the end-to-end latency, thereby reducing the speedup from 3.6x to 2.6x. However, as mentioned in Section V, cold latency is incurred by both _DSCS-Serverless_ and the baseline systems. Further, only the first invocation incurs a cold latency while all subsequent invocations can potentially hide the cold latency using preemptive horizontal scaling (Refer Section V).
## VII Related Work
Individually, the emergence of serverless computing, the shift towards storage disaggregation, and the adoption of domain-specific accelerators have provided significant benefits. Collectively, however, they pose interesting challenges. The paper explores the confluence of the three trends and provides a pathway towards utilizing accelerators for serverless computing on storage-disaggregated datacenters. Below, we discuss work related to these different trends.
**Serverless and storage.** Serverless functions are stateless and ephemeral [9, 11, 200]. They use persistent storage to transfer intermediate data between functions [103]. Pocket [18] proposed a storage system that allocates different storage resources depending on the workload to reduce cost. Locus [133] focused on deriving an optimal combination of slow remote and fast in-memory storage, while SONIC [132] used local and remote storage to pass data between functions. NumPyWren [201] identified appropriate block sizes for remote storage for serverless linear algebra. Jiffy [202] used in-memory caching on remote servers to accommodate large, variable-sized intermediate data, but it still incurs the network latency to remote storage. These papers consider multi-tier storage and the possibility of efficient data passing between functions. They are orthogonal to this work, since _DSCS-Serverless_ proposes a model of serverless computing that leverages a near-storage _DSA_ to reduce data movement and unlock additional benefits from acceleration.
**Acceleration of serverless functions.** SmartNICs have been used to accelerate serverless functions with low compute intensity [203]. Speedo [204] placed the serverless function dispatcher on a SmartNIC to avoid latency overhead. Dagger [130] accelerated RPCs using an FPGA-based NIC, thereby reducing the communication latency. BlastFunction [128] exposes FPGAs to the serverless framework to accelerate functions. Molecule [97] and Hardless [100] propose runtimes to enable hardware accelerators for serverless computing. HiveMind [205] proposes a hardware-software solution for serverless edge swarms. These solutions either enable data-movement-aware acceleration on programmable NICs or compute-focused acceleration on GPUs/FPGAs. _DSCS-Serverless_ enables both by placing a properly sized _DSA_ near the storage.
**Near-storage acceleration.** Near-storage acceleration reduces data movement and lowers communication latency. A plethora of work has proposed near-data ASICs for workloads (graph analytics, databases, sorting, etc.) demanding large amounts of data transfer [62, 86, 174, 175, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217]. INSIDER [218] proposes a runtime system that abstracts the storage-side FPGA computation and provides transparent data movement between CPU and FPGA. There are commercially available products such as Eideticom's NoLoad [219] for transparent compression, the Samsung SmartSSD for utilities (encryption, compression, etc.) [121, 122], and NGD Systems' Newport for encryption on ARM cores [119, 120]. Some existing computational storage [220, 221, 222] uses ASIC hardware to perform inline compression and encryption. In contrast, _DSCS-Serverless_ exclusively proposes domain-specific programmable accelerators for serverless functions.
## VIII Conclusion
The emergence of serverless computing coupled with disaggregation and hardware specialization introduces unique challenges and opportunities. The paper proposes a serverless computing model that integrates a domain-specific accelerator next to the storage unit, and evaluates the model for the domain of machine learning. Evaluation with a diverse set of serverless applications against a variety of compute platforms shows significant gains in performance, energy, and cost efficiency.
## Acknowledgement
This work was in part supported by generous gifts from Google, Samsung, Qualcomm, Microsoft, Xilinx as well as the National Science Foundation (NSF) awards CCF#2107598, CNS#1822273, National Institute of Health (NIH) award #R01EB028350, Defense Advanced Research Project Agency (DARPA) under agreement number #HR0011-18-C-0020, and Semiconductor Research Corporation (SRC) award #2021-AH-3039. The U.S. Government is authorized to
Fig. 17: Cold start effect on functions.
reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of Google, Qualcomm, Microsoft, Xilinx, Samsung, NSF, SRC, NIH, DARPA or the U.S. Government. We would like to extend our gratitude towards Berkin Akin, Andrey Ayupov, Daniel Kaufman, and Stella Aslibekyan for reviewing the paper and providing feedback. We also thank the extended team at Google Research, Brain Team, who enabled this collaboration and helped us with the paper.
|
2306.11659 | Subalgebra Independence | Subobject independence as morphism co-possibility has recently been defined
in [2] and studied in the context of algebraic quantum field theory. This
notion of independence is handy when it comes to systems coming from physics,
but when directly applied to classical algebras, subobject independence is not
entirely satisfactory. The sole purpose of this note is to introduce the notion
of subalgebra independence, which is a slight variation of subobject
independence, yet this modification enables us to connect subalgebra
independence to more traditional notions of independence. Apart from drawing
connections between subalgebra independence and coproducts and congruences, we
mainly illustrate the notion by discussing examples. | Zalán Gyenis, Alexa Gopaulsingh, Övge Öztürk | 2023-05-26T19:10:14Z | http://arxiv.org/abs/2306.11659v1 | # Subalgebra independence
###### Abstract
Subobject independence as morphism co-possibility has recently been defined in [2] and studied in the context of algebraic quantum field theory. This notion of independence is handy when it comes to systems coming from physics, but when directly applied to classical algebras, subobject independence is not entirely satisfactory. The sole purpose of this note is to introduce the notion of subalgebra independence, which is a slight variation of subobject independence, yet this modification enables us to connect subalgebra independence to more traditional notions of independence. Apart from drawing connections between subalgebra independence and coproducts and congruences, we mainly illustrate the notion by discussing examples.
**AMS Subject Classification**: 08A05, 08A30, 08A35
**Keywords:** Independence, Subobject independence, Subalgebra independence.
## 1 Introduction
Specifying notions of independence of subsystems of a larger system is crucial in the axiomatic approach to algebraic quantum field theory. It turns out that such notions of independence can be specified in a number of nonequivalent ways, Summers [8] gives a review of the rich hierarchy of independence notions; for a non-technical review of subsystem independence concepts that include more recent developments as well, see [9]. Generalizing earlier attempts, a purely categorial formulation of independence of subobjects as morphism co-possibility has been introduced and studied in the recent papers [5, 6] and [2]. Two subobjects of an object are defined to be independent if any two morphisms on the two subobjects are jointly implementable by a single morphism on the larger object. More precisely, let us recall the definition from [2]. Suppose \(M\) is a class of monomorphisms and \(H\) is another class of morphisms of a category.
**Definition 1.1**.: \(M\)_-morphisms \(f_{A}:A\to X\) and \(f_{B}:B\to X\) are called \(H\)-independent if for any two \(H\)-morphisms \(\alpha:A\to A\) and \(\beta:B\to B\) there is an \(H\)-morphism \(\gamma:X\to X\) such that the diagram below commutes._
The objects \(A\) and \(B\) can be regarded as \(M\)-subobjects of \(X\), and it is intuitively clear why \(H\)-independence of \(M\)-subobjects \(A\) and \(B\) is an independence condition: fixing the morphism \(\alpha\) on object \(A\) does not interfere with fixing any morphism \(\beta\) on object \(B\), and vice versa. That is to say, morphisms can be independently chosen on these objects seen as subobjects of object \(X\).
In algebraic quantum field theory, independence given by the definition above is specified in the context of the category of special \(C^{*}\)-algebras taken with the class of operations (completely positive, unit preserving, linear maps) between \(C^{*}\)-algebras. Considerations from physics ensure injectivity of the "large system" \(X\) and therefore extending morphisms from the subobjects to the larger object in which independence is defined is always possible.
Although the definitions employed in [2] are rather general, they become too restrictive when injectivity is not guaranteed. To reiterate: the main concern is that independence of \(A\) and \(B\) should not depend on whether morphisms can be extended to the entire \(X\); rather, one should care only about extensions to the subobject "generated by" \(A\) and \(B\). In other words, in a concrete category of structures, independence of \(A\) and \(B\) should depend only on how elements that can be term-defined from \(A\) and \(B\) relate to each other and not on elements that have "nothing to do" with \(A\) and \(B\). Algebraically, term-definable elements are exactly the elements of the substructure generated by \(A\) and \(B\). Defining the notion of a generated subobject in category theoretic terms is not unproblematic and we do not take the trouble here to deal with such issues. Instead, we focus almost exclusively on concrete algebras or categories of algebras. We introduce a slight modification to Definition 1.1 which makes it more useful among algebras. We illustrate this 'usefulness' by examples where subalgebra independence coincides with well-known traditional notions of independence:
* Subset independence is disjointness.
* Subspace independence is linear independence.
* Boolean subalgebra independence is logical independence.
* Abelian subgroup independence is the traditional notion of group independence.1
Footnote 1: However, the case of non-Abelian groups is very different.
Finally, we mention a related concept that we call congruence independence.
## 2 Subalgebra independence
Let us fix an algebraic (or, more generally, a first-order) similarity type. When we speak about algebras or structures, we understand these algebras (structures)
to have the same similarity type. We use the convention that algebras are denoted by Fraktur letters \(\mathfrak{A}\) and the universe of the algebra \(\mathfrak{A}\) is denoted by the same but capital letter \(A\). For subalgebras \(\mathfrak{A},\mathfrak{B}\) of \(\mathfrak{X}\) we write \(\mathfrak{A}\vee\mathfrak{B}\) for the subalgebra of \(\mathfrak{X}\) generated by \(A\cup B\).
**Definition 2.1** (Subalgebra-independence).: _Let \(\mathfrak{X}\) be an algebra and \(\mathfrak{A},\mathfrak{B}\) be subalgebras of \(\mathfrak{X}\). We say that \(\mathfrak{A}\) and \(\mathfrak{B}\) are subalgebra-independent in \(\mathfrak{X}\) if for any homomorphisms \(\alpha:\mathfrak{A}\to\mathfrak{A}\) and \(\beta:\mathfrak{B}\to\mathfrak{B}\) there is a homomorphism \(\gamma:\mathfrak{A}\vee\mathfrak{B}\to\mathfrak{A}\vee\mathfrak{B}\) such that the diagram below commutes._
_The homomorphism \(\gamma\) is called the joint extension of \(\alpha\) and \(\beta\) (to \(\mathfrak{A}\vee\mathfrak{B}\)). We write \(\mathfrak{A}\searrow_{\mathfrak{X}}\mathfrak{B}\) when \(\mathfrak{A}\) and \(\mathfrak{B}\) are subalgebra-independent in \(\mathfrak{X}\), and we might omit the subscript \(\mathfrak{X}\) when it is clear from the context._
When the algebras in question have particular names, e.g. groups, fields, etc., then we specify the independence as "subgroup-independence", "subfield-independence" etc.
Comparing subalgebra independence with Definition 1.1 it is clear that the inclusion mappings take the role of \(M\)-morphisms and \(H\) is the class of all homomorphisms between algebras. The main difference, however, is that in subalgebra independence we extend the mappings \(\alpha\) and \(\beta\) to the substructure generated by \(A\cup B\) only. We also note that \(H\) could be chosen differently, e.g. it could be the class of automorphisms, leading to variations of the notion of independence. We do not discuss such variations in this paper.
Before discussing the examples, let us state some useful propositions. First, it is an immediate consequence of the definition of subalgebra independence that the joint extension of \(\alpha\) and \(\beta\) is always unique (if exists):
**Proposition 2.2**.: _If the joint extension \(\gamma:\mathfrak{A}\vee\mathfrak{B}\to\mathfrak{A}\vee\mathfrak{B}\) of \(\alpha:\mathfrak{A}\to\mathfrak{A}\) and \(\beta:\mathfrak{B}\to\mathfrak{B}\) exists, then it is unique and is given by_
\[\gamma\big{(}t^{\mathfrak{A}\vee\mathfrak{B}}\left(\vec{a},\vec{b}\right) \big{)}\ =\ t^{\mathfrak{A}\vee\mathfrak{B}}\big{(}\alpha(\vec{a}),\beta(\vec{b}) \big{)}\]
_for each term \(t(\vec{x},\vec{y})\) and elements \(\vec{a}\in A\), \(\vec{b}\in B\)._
Proof.: Elements of \(\mathfrak{A}\vee\mathfrak{B}\) are of the form \(t^{\mathfrak{A}\vee\mathfrak{B}}(\vec{a},\vec{b})\) for \(\vec{a}\in A\) and \(\vec{b}\in B\). As \(\gamma\) is a homomorphism that extends both \(\alpha\) and \(\beta\), we must have
\[\gamma\big{(}t^{\mathfrak{A}\vee\mathfrak{B}}(\vec{a},\vec{b})\big{)}\ =\ t^{ \mathfrak{A}\vee\mathfrak{B}}\big{(}\gamma(\vec{a}),\gamma(\vec{b})\big{)}\ =\ t^{\mathfrak{A}\vee\mathfrak{B}}\big{(}\alpha(\vec{a}),\beta(\vec{b}) \big{)}.\]
Let \(\mathbf{K}\) be a class of similar algebras regarded as a category with homomorphisms as morphisms. Let \(\mathfrak{A}_{1},\mathfrak{A}_{2}\in\mathbf{K}\) and consider embeddings \(e_{i}:\mathfrak{A}_{i}\to\mathfrak{C}\). Then \(\mathfrak{C}\) is a coproduct of \(\mathfrak{A}_{1}\) and \(\mathfrak{A}_{2}\) in \(\mathbf{K}\) iff \(\mathfrak{C}\) has the following universal property with respect to \(\mathbf{K}\): for any \(\mathfrak{D}\in\mathbf{K}\) and homomorphisms \(f_{i}:\mathfrak{A}_{i}\to\mathfrak{D}\) there is a homomorphism \(g:\mathfrak{C}\to\mathfrak{D}\) such that \(f_{i}=g\circ e_{i}\) (\(i=1,2\)). The coproduct, if exists, is unique up to isomorphism. If \(\mathbf{K}\) is clear from the context we denote a coproduct of \(\mathfrak{A}_{1}\) and \(\mathfrak{A}_{2}\) by \(\mathfrak{A}_{1}\oplus\mathfrak{A}_{2}\). In what follows, we assume that \(\mathfrak{A}\) and \(\mathfrak{B}\) are (identified with) subalgebras of the coproduct \(\mathfrak{A}\oplus\mathfrak{B}\).
**Proposition 2.3**.: _Consider \(\mathfrak{A}\) and \(\mathfrak{B}\) as subalgebras of the coproduct \(\mathfrak{A}\oplus\mathfrak{B}\). Then any pair of homomorphisms \(\alpha:\mathfrak{A}\to\mathfrak{A}\) and \(\beta:\mathfrak{B}\to\mathfrak{B}\) has a joint extension to a homomorphism \(\alpha\oplus\beta:\mathfrak{A}\oplus\mathfrak{B}\to\mathfrak{A}\oplus \mathfrak{B}\)._
Proof.: From the diagram below on the left-hand side, by composing arrows, one gets the diagram on the right-hand side which is a coproduct diagram. Therefore a suitable \(\gamma\) with the dotted arrow exists and completes the proof.
\(\blacksquare\)
**Proposition 2.4**.: _Subalgebras \(\mathfrak{A}\) and \(\mathfrak{B}\) of the coproduct \(\mathfrak{A}\oplus\mathfrak{B}\) are subalgebra-independent provided \(\mathfrak{A}\vee\mathfrak{B}=\mathfrak{A}\oplus\mathfrak{B}\)._
Proof.: Immediate from Proposition 2.3. \(\blacksquare\)
It is clear that there is a canonical surjective homomorphism \(q:\mathfrak{A}\oplus\mathfrak{B}\to\mathfrak{A}\vee\mathfrak{B}\). Take homomorphisms \(\alpha:\mathfrak{A}\to\mathfrak{A}\) and \(\beta:\mathfrak{B}\to\mathfrak{B}\) and consider the diagram below.
Then the joint extension \(\gamma:\mathfrak{A}\vee\mathfrak{B}\to\mathfrak{A}\vee\mathfrak{B}\) of \(\alpha\) and \(\beta\) exists if and only if the mapping
\[\gamma(q(x))=q((\alpha\oplus\beta)(x))\]
is well-defined, that is, \(\alpha\oplus\beta\) is "compatible" with the kernel \(\ker(q)\). We make use of this observation later on when we discuss the case of groups.
Let us see the examples without further ado.
### Sets
Sets can be regarded as structures having the empty set as similarity type. If \(A\) and \(B\) are subsets of \(C\), then the subset of \(C\) generated by \(A\) and \(B\) is simply their union \(A\cup B\). It is straightforward to check that subset independence coincides with disjointness.
**Proposition 2.5**.: _For \(A,B\subseteq C\) we have \(A\searrow B\) if and only if \(A\cap B=\emptyset\)._
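The proposition can also be checked by brute force on small sets: two self-maps \(\alpha\) and \(\beta\) admit a common extension to \(A\cup B\) exactly when they agree on \(A\cap B\), and this holds for all choices of \(\alpha\) and \(\beta\) precisely when \(A\cap B=\emptyset\). A minimal sketch:

```python
# Brute-force check of Proposition 2.5 on small sets: A and B are
# subset-independent iff every pair of self-maps has a joint extension,
# which holds iff A and B are disjoint.
from itertools import product

def self_maps(S):
    S = sorted(S)
    return [dict(zip(S, img)) for img in product(S, repeat=len(S))]

def independent(A, B):
    return all(all(alpha[x] == beta[x] for x in A & B)
               for alpha in self_maps(A) for beta in self_maps(B))

print(independent({1, 2}, {3, 4}))   # True  (disjoint)
print(independent({1, 2}, {2, 3}))   # False (overlap at 2)
```

### Vector spaces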
**Corollary 2.8**.: _For subspaces \(\mathfrak{A},\mathfrak{B}\) of a vector space \(\mathfrak{C}\) we have \(\mathfrak{A}\searrow\mathfrak{B}\) if and only if \(\mathfrak{A}\vee\mathfrak{B}\cong\mathfrak{A}\oplus\mathfrak{B}\)._
### Boolean algebras
Let \(\mathbf{Bool}\) be the category of Boolean algebras as objects with Boolean homomorphisms as morphisms. The Boolean algebra \(\mathfrak{C}\) is the internal sum of the subalgebras \(\mathfrak{A},\mathfrak{B}\leq\mathfrak{C}\) just in case the union \(A\cup B\) generates \(\mathfrak{C}\) and whenever \(a\in A\), \(b\in B\) are non-zero elements, then \(a\wedge b\) is non-zero (cf. Lemma 1 on p. 428 in [1]). This latter condition is called _Boolean-independence_: two subalgebras \(\mathfrak{A},\mathfrak{B}\leq\mathfrak{C}\) are Boolean-independent (\(\mathfrak{A}\parallel\mathfrak{B}\) in symbols) if for all \(a\in A\), \(b\in B\) we have \(a\wedge b\neq 0\) provided \(a\neq 0\neq b\).
The internal sum construction coincides with the coproduct \(\mathfrak{A}\oplus\mathfrak{B}\) in the category \(\mathbf{Bool}\). As before \(\mathfrak{A}\vee\mathfrak{B}\) is the subalgebra (of \(\mathfrak{C}\)) generated by \(A\cup B\). Then we have \(\mathfrak{A}\vee\mathfrak{B}\cong\mathfrak{A}\oplus\mathfrak{B}\) precisely when \(\mathfrak{A}\parallel\mathfrak{B}\).
We claim that Boolean subalgebra independence coincides with Boole-independence of subalgebras.
**Proposition 2.9**.: _For Boolean subalgebras \(\mathfrak{A},\mathfrak{B}\) of a Boolean algebra \(\mathfrak{C}\) we have_
\[\mathfrak{A}\searrow\mathfrak{B}\quad\Longleftrightarrow\quad\mathfrak{A}\parallel\mathfrak{B}\quad\Longleftrightarrow\quad\mathfrak{A}\vee\mathfrak{B}\cong\mathfrak{A}\oplus\mathfrak{B}.\]
Proof.: The second equivalence \(\mathfrak{A}\parallel\mathfrak{B}\quad\Longleftrightarrow\quad\mathfrak{A} \vee\mathfrak{B}=\mathfrak{A}\oplus\mathfrak{B}\) is clear. By Proposition 2.4 coproduct injections are always independent, therefore we have
\[\mathfrak{A}\parallel\mathfrak{B}\quad\Longrightarrow\quad\mathfrak{A}\searrow\mathfrak{B}.\]
As for the converse implication assume \(\mathfrak{A}\searrow\mathfrak{B}\). By way of contradiction suppose there are non-zero elements \(a\in A\), \(b\in B\) so that \(a\wedge b=0\). For an element \(x\) let \(x^{\prime}\) stand for the Boolean negation (complement) of \(x\). Take a homomorphism \(\alpha:\mathfrak{A}\to\mathfrak{A}\) such that \(\alpha(a)=1\in A\) and \(\alpha(a^{\prime})=0\in A\) (e.g. take an ultrafilter in \(\mathfrak{A}\) that contains \(a\), and send elements belonging to the ultrafilter to \(1\in A\)). Take \(\beta=\mathrm{id}_{B}\). These two homomorphisms cannot be jointly extended to a homomorphism \(\gamma:\mathfrak{A}\vee\mathfrak{B}\to\mathfrak{A}\vee\mathfrak{B}\) because such a joint extension \(\gamma\) would satisfy \(\gamma(a^{\prime})=\alpha(a^{\prime})=0\) and \(\gamma(b)=\beta(b)=b\neq 0\). As \(b\leq a^{\prime}\) it must follow that \(\gamma(b)\leq\gamma(a^{\prime})=0\); contradiction.
We remark that \(\mathfrak{A}\parallel\mathfrak{B}\) implies \(A\cap B=\{0,1\}\) (for if \(0\neq a\neq 1\) was an element of \(A\cap B\), then taking \(a\in A\) and \(a^{\prime}\in B\) would witness non-Boole-independence). Thus, similarly to the previous cases, subalgebra-independence requires that the two subalgebras in question intersect in the minimal subalgebra.
Notice that Boolean independence coincides with logical independence if the Boolean algebras are viewed as the Lindenbaum-Tarski algebras of a classical propositional logic: \(a\wedge b\neq 0\) entails that there is an interpretation on \(C\) that makes \(a\wedge b\) hence both \(a\) and \(b\) true; i.e. any two propositions that are not contradictions can be jointly true in some interpretation. Therefore, Boolean-subalgebra independence captures logical independence in the category \(\mathbf{Bool}\).
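As a concrete illustration of Proposition 2.9, the following sketch verifies Boolean-independence by brute force in the powerset algebra of a four-element set; the particular subalgebras are chosen only for illustration.

```python
# Boolean-independence check in the powerset algebra of {0,1,2,3}.
# A is generated by {0,1}, B by {0,2}; both are 4-element subalgebras.
U = frozenset({0, 1, 2, 3})

def subalgebra(gen):                   # {0, gen, gen', 1}
    return {frozenset(), gen, U - gen, U}

A = subalgebra(frozenset({0, 1}))
B = subalgebra(frozenset({0, 2}))

# Every pair of nonzero elements must have a nonzero meet.
boole_independent = all(a & b for a in A for b in B if a and b)
print(boole_independent)               # True
```

Read logically, the two generators behave like two propositions neither of which decides the other: every Boolean combination of one with the other is satisfiable.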
### Abelian groups
The category \(\mathbf{AbGrp}\) contains commutative groups as objects and group homomorphisms as arrows. The commutative group \(\mathfrak{G}\) is the internal direct sum of its two subgroups \(\mathfrak{H}\) and \(\mathfrak{F}\) if and only if \(\mathfrak{G}\) is generated by \(H\cup F\) and \(H\cap F=\{e\}\) (here and later on, \(e\) is the unit element of the group). (Internal) direct sums are precisely the coproducts, denoted by \(\mathfrak{A}\oplus\mathfrak{B}\), in the category \(\mathbf{AbGrp}\).
We claim that abelian-subgroup independence coincides with having the trivial group as the intersection.
**Proposition 2.10**.: _For subgroups \(\mathfrak{A},\mathfrak{B}\) of the commutative group \(\mathfrak{C}\) we have_
\[\mathfrak{A}\searrow\mathfrak{B}\quad\Longleftrightarrow\quad A\cap B=\{e\}\quad\Longleftrightarrow\quad\mathfrak{A}\vee\mathfrak{B}\cong\mathfrak{A}\oplus\mathfrak{B}.\]
Proof.: As \(\mathfrak{A}\vee\mathfrak{B}\) is the subgroup of \(\mathfrak{C}\) generated by \(A\cup B\), the equivalence
\[A\cap B=\{e\}\quad\Longleftrightarrow\quad\mathfrak{A}\vee\mathfrak{B} \cong\mathfrak{A}\oplus\mathfrak{B}\]
is clear. Since summands of a coproduct are always independent (Proposition 2.4), we also have

\[A\cap B=\{e\}\quad\Longrightarrow\quad\mathfrak{A}\searrow\mathfrak{B}.\]

As for the other direction suppose, by way of contradiction, that there is \(e\neq g\in A\cap B\). Take \(\alpha:\mathfrak{A}\to\mathfrak{A}\), \(\alpha(x)=e\) and \(\beta=\mathrm{id}_{B}\). These two homomorphisms cannot have a joint extension to \(\mathfrak{A}\vee\mathfrak{B}\) as \(\alpha(g)\neq\beta(g)\); contradicting \(\mathfrak{A}\searrow\mathfrak{B}\).
Independence of subgroups \(\mathfrak{A},\mathfrak{B}\) of \(\mathfrak{C}\) was defined in [7] by the condition \(A\cap B=\{e\}\). In the case of Abelian groups, subgroup independence gives back this exact notion, however, the case of general groups is much more complicated.
### Groups
Consider the category \(\mathbf{Grp}\) of groups with homomorphisms. Coproducts in this category exist and are isomorphic to free products. Recall that the free product of two groups is infinite and non-commutative even if both groups are finite or commutative (unless one of them is trivial as in this case the free product is isomorphic to one of the two groups). Suppose \(\mathfrak{A},\mathfrak{B}\leq\mathfrak{C}\). The proof of Proposition 2.10 shows that \(\mathfrak{A}\;\mbox{\Large$\searrow$}\;\mathfrak{B}\) implies \(A\cap B=\{e\}\).
**Proposition 2.11**.: _If \(\mathfrak{A}\searrow\mathfrak{B}\), then \(A\cap B=\{e\}\)._
On the other hand, consider the subgroups \(\mathbb{Z}_{2},\mathbb{Z}_{3}\) of \(\mathbb{Z}_{6}\) (here \(\mathbb{Z}_{n}\) is the modulo \(n\) group with addition). These subgroups are independent as Abelian groups, and since any homomorphic image of a commutative group is commutative, they are independent as groups, too. But the free product (coproduct) \(\mathbb{Z}_{2}\oplus\mathbb{Z}_{3}\) is infinite, thus it is not isomorphic to \(\mathbb{Z}_{2}\vee\mathbb{Z}_{3}=\mathbb{Z}_{6}\). This is an example of an algebraic category where subalgebra independence and being an internal coproduct are not equivalent.
Using the next proposition we can draw some useful sufficient conditions for subgroup independence.
**Proposition 2.12**.: \(\mathfrak{A}\searrow\mathfrak{B}\) _if and only if for all homomorphisms \(\alpha:\mathfrak{A}\to\mathfrak{A}\) and \(\beta:\mathfrak{B}\to\mathfrak{B}\) and elements \(a_{i}\in A\), \(b_{i}\in B\) we have_
\[\prod a_{i}b_{i}=e\ \ \text{implies}\ \ \prod\alpha(a_{i})\beta(b_{i})=e\]
Proof.: Consider the diagram below and let \(\mathfrak{N}\) be the normal subgroup of \(\mathfrak{A}\oplus\mathfrak{B}\) corresponding to the kernel \(\ker(q)\).
The joint extension \(\gamma:\mathfrak{A}\vee\mathfrak{B}\to\mathfrak{A}\vee\mathfrak{B}\) of \(\alpha\) and \(\beta\) exists if and only if \((\alpha\oplus\beta)(\mathfrak{N})\subseteq\mathfrak{N}\) as this is equivalent to that the mapping
\[\gamma(q(x))=q((\alpha\oplus\beta)(x))\]
is well-defined.
Observe that Proposition 2.2 implies that whenever \(\alpha\) and \(\beta\) has a joint extension \(\gamma\), then \(\gamma\) is given by the equation
\[\gamma\big{(}\prod a_{i}b_{i}\big{)}\ =\ \prod\alpha(a_{i})\beta(b_{i})\]
for every element \(\prod a_{i}b_{i}\) of \(\mathfrak{A}\vee\mathfrak{B}\) (where \(a_{i}\in A\), \(b_{i}\in B\)).
**Proposition 2.13**.: _If \(\mathfrak{A}\) and \(\mathfrak{B}\) are normal subgroups such that \(A\cap B=\{e\}\), then \(\mathfrak{A}\searrow\mathfrak{B}\)._
Proof.: If \(\mathfrak{A}\) and \(\mathfrak{B}\) are normal subgroups with \(A\cap B=\{e\}\), then \(ab=ba\) holds for all \(a\in A\) and \(b\in B\). For, \(a(ba^{-1}b^{-1})\in A\) and \((aba^{-1})b^{-1}\in B\), and thus \(aba^{-1}b^{-1}\in A\cap B=\{e\}\). Let us apply Proposition 2.12. Take homomorphisms \(\alpha\) and \(\beta\) and elements \(a_{i}\in A\) and \(b_{i}\in B\). Write \(a=\prod a_{i}\) and \(b=\prod b_{i}\). By the first observation \(\prod a_{i}b_{i}=ab\) follows. Thus, if \(\prod a_{i}b_{i}=e\), then \(ab=e\). As \(a\in A\), \(b\in B\) and \(A\cap B=\{e\}\), we have \(a=b=e\). Therefore \(\alpha(a)\beta(b)=e\). Using the homomorphism property and reordering the product we get \(\prod\alpha(a_{i})\beta(b_{i})=e\) as desired.
However, if one of the subgroups is normal but the other is not, then they cannot be subgroup independent.
**Proposition 2.14**.: _If \(\mathfrak{A}\) and \(\mathfrak{B}\) are subgroups such that \(\mathfrak{A}\) is normal but \(\mathfrak{B}\) is not normal in their join, then \(\mathfrak{A}\not\searrow\mathfrak{B}\)._
Proof.: We can assume \(A\cap B=\{e\}\) as this condition is necessary for subgroup independence.
Note first that, given the assumptions, there must exist \(a\in A\) and \(b\in B\) such that \(ab\neq ba\). Otherwise, we would have \(aBa^{-1}=B\) for all \(a\in A\) (and trivially \(bBb^{-1}=B\) for all \(b\in B\)), and thus for every \(g=a_{1}b_{1}\cdots a_{n}b_{n}\) in the join,

\[gBg^{-1}=a_{1}b_{1}\cdots a_{n}b_{n}Bb_{n}^{-1}a_{n}^{-1}\cdots b_{1}^{-1}a_{1}^{-1}=B,\]

contradicting \(B\) being not normal.
Pick \(a\in A\) and \(b\in B\) with \(ab\neq ba\). Then \(bab^{-1}\neq a\), but \(bab^{-1}\in A\) since \(\mathfrak{A}\) is a normal subgroup. Therefore \(bab^{-1}=a^{\prime}\neq a\) and \(a^{\prime}\in A\). Let \(\alpha:\mathfrak{A}\rightarrow\mathfrak{A}\) be the identity function and \(\beta:\mathfrak{B}\rightarrow\mathfrak{B}\) be such that \(\beta(x)=e\). If \(\sigma\) were a joint extension of \(\alpha\) and \(\beta\), then we would get
\[\sigma(bab^{-1}) = \sigma(b)\sigma(a)\sigma(b^{-1})=eae=a, \tag{1}\] \[\sigma(a^{\prime}) = a^{\prime}. \tag{2}\]
Hence, \(\sigma(bab^{-1})\neq\sigma(a^{\prime})\) which contradicts \(bab^{-1}=a^{\prime}\).
One might be tempted to think that because normal subgroups are independent, and if exactly one of the subgroups is normal, then they are not independent, it could also be the case that two non-normal subgroups cannot be independent. Unfortunately, this is not so, as indicated by the example below.
**Example 2.15**.: _Consider the group \(D_{\infty}\) given by the presentation \(D_{\infty}=\langle x,y\ |\ x^{2}=y^{2}=e\rangle\). Let \(A\) and \(B\) be its subgroups generated respectively by \(x\) and \(y\). Clearly \(A\cong B\cong\mathbb{Z}_{2}\). Neither \(A\) nor \(B\) is a normal subgroup of \(D_{\infty}\), yet \(A\searrow_{D_{\infty}}B\), since the only homomorphisms \(A\to A\) and \(B\to B\) are either the identity or the trivial mapping, and each pair can be extended to a joint homomorphism \(D_{\infty}\to D_{\infty}\)._
In the previous example \(D_{\infty}\) is the free product of its subgroups \(A\) and \(B\). The next example shows that two non-normal subgroups can be subgroup independent in finite groups too.
**Example 2.16**.: _Let \(A=\{e,(12)\}\) and \(B=\{e,(13)(24)\}\) be subgroups of the symmetric group on four elements. The subgroup generated by \(A\cup B\) is isomorphic to the dihedral group \(D_{4}\). Neither \(A\) nor \(B\) is a normal subgroup, yet \(A\searrow B\) for the same reason as in the previous example._
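The claim about the generated subgroup can be checked mechanically with a small closure computation; the sketch below uses plain Python tuples for permutations (0-indexed one-line notation) and is illustrative only.

```python
# Verify Example 2.16: in S_4, the subgroup generated by (12) and
# (13)(24) has order 8 and is non-abelian, i.e. it is dihedral D_4.
a = (1, 0, 2, 3)        # the transposition (12), 0-indexed one-line form
b = (2, 3, 0, 1)        # the product (13)(24), 0-indexed one-line form
e = (0, 1, 2, 3)

def mul(p, q):          # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(4))

group, frontier = {e}, [e]
while frontier:         # closure under right multiplication by generators
    g = frontier.pop()
    for s in (a, b):
        h = mul(g, s)
        if h not in group:
            group.add(h)
            frontier.append(h)

print(len(group))                                                  # 8
print(any(mul(g, h) != mul(h, g) for g in group for h in group))   # True
```

Since both generators are involutions, the closure under right multiplication is exactly the generated subgroup, and an order-8 non-abelian group generated by two involutions is dihedral.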
We do not yet have any nice group theoretical characterization of subgroup independence and we leave it as an open problem.
### Graphs
Let us see a non-algebraic example. A graph is a structure of the form \(\mathfrak{G}=(V,E)\), where \(V\) is a set and \(E\) is a binary relation \(E\subseteq V\times V\). There are at least two different types of homomorphisms between graphs: weak and strong homomorphisms. Let us recall the definitions.
**Definition 2.17**.: _Given two graphs \((V,E)\) and \((W,F)\) the mapping \(f:V\to W\) is a (weak) homomorphism if_
\[(u,v)\in E\quad\Longrightarrow\quad(f(u),f(v))\in F, \tag{3}\]
_and a strong homomorphism, if_
\[(u,v)\in E\quad\Longleftrightarrow\quad(f(u),f(v))\in F. \tag{4}\]
Subgraphs can be understood in the graph theoretic way (that is, embeddings are weak homomorphisms) or as substructures (i.e. we take inclusions as strong embeddings; this corresponds to spanned subgraphs in the graph theoretic terminology).
Let \(\mathbf{Gra}_{w}\) and \(\mathbf{Gra}_{s}\) respectively be the category of graphs with weak or strong homomorphisms as arrows. In both cases the coproduct of two graphs \(\mathfrak{G}_{1}\) and \(\mathfrak{G}_{2}\) exists and is (isomorphic to) their disjoint union, denoted by \(\mathfrak{G}_{1}\oplus\mathfrak{G}_{2}\). By Proposition 2.4 it is clear that \(\mathfrak{G}_{1}\searrow_{\mathfrak{G}_{1}\oplus\mathfrak{G}_{2}}\mathfrak{G }_{2}\). But not the other way around:
**Example 2.18**.: _Call a graph \(\mathfrak{G}\) rigid if the identity is its only (weak) homomorphism. There are arbitrarily large rigid graphs [4, 3]. Take two rigid graphs \(\mathfrak{G}_{1}\) and \(\mathfrak{G}_{2}\) such that their underlying sets are not disjoint. Then \(\mathfrak{G}_{1}\searrow_{\mathfrak{G}_{1}\cup\mathfrak{G}_{2}}\mathfrak{G}_{2}\); nevertheless, \(\mathfrak{G}_{1}\cup\mathfrak{G}_{2}\) is not the coproduct of \(\mathfrak{G}_{1}\) and \(\mathfrak{G}_{2}\)._
## 3 Joint extension of congruences
A property that is strongly related to subalgebra independence is the joint extension property of congruences. Suppose \(\alpha:\mathfrak{A}\to\mathfrak{A}\) and \(\beta:\mathfrak{B}\to\mathfrak{B}\) are homomorphisms and there is a joint extension \(\gamma:\mathfrak{A}\vee\mathfrak{B}\to\mathfrak{A}\vee\mathfrak{B}\) such that the diagram in Definition 2.1 commutes. This implies a relation between the kernels of the homomorphisms:
\[\ker(\gamma)\cap(A\times A)=\ker(\alpha),\quad\text{ and }\quad\ker(\gamma)\cap(B\times B)=\ker(\beta) \tag{5}\]
If \(\mathfrak{A}\searrow\mathfrak{B}\), then (5) is the case for all congruences that are kernels of the appropriate endomorphisms. This motivates the following definition.
**Definition 3.1**.: _Let \(\mathfrak{X}\) be an algebra and \(\mathfrak{A},\mathfrak{B}\) be subalgebras of \(\mathfrak{X}\). We say that \(\mathfrak{A}\) and \(\mathfrak{B}\) are congruence-independent in \(\mathfrak{X}\) if for any congruences \(\vartheta_{A}\in\operatorname{Con}(\mathfrak{A})\) and \(\vartheta_{B}\in\operatorname{Con}(\mathfrak{B})\) there is a congruence \(\vartheta\in\operatorname{Con}(\mathfrak{A}\vee\mathfrak{B})\) such that_
\[\vartheta\cap(A\times A)=\vartheta_{A},\quad\text{ and }\quad\vartheta\cap(B \times B)=\vartheta_{B}\]
_We write \(\mathfrak{A}\searrow_{\mathfrak{X}}^{c}\mathfrak{B}\) when \(\mathfrak{A}\) and \(\mathfrak{B}\) are congruence-independent in \(\mathfrak{X}\), and we might omit the subscript \(\mathfrak{X}\) when it is clear from the context._
Notice that \(\mathfrak{A}\searrow^{c}\mathfrak{B}\) implies \(|A\cap B|\leq 1\). For if \(|A\cap B|\geq 2\), take the two congruences \(\vartheta_{A}=\operatorname{id}_{A}\) and \(\vartheta_{B}=B\times B\) (or \(\vartheta_{A}=A\times A\) and \(\vartheta_{B}=\operatorname{id}_{B}\)). Then no \(\vartheta\) can have the property
\[\vartheta\cap(A\times A)=\vartheta_{A},\quad\text{ and }\quad\vartheta\cap(B \times B)=\vartheta_{B}\]
as in that case we would have
\[\vartheta\cap(A\cap B)^{2}=\vartheta_{A}\cap(A\cap B)^{2}=\operatorname{id}_{A \cap B}\neq(A\cap B)^{2}=\vartheta_{B}\cap(A\cap B)^{2}=\vartheta\cap(A\cap B) ^{2}.\]
The connection between subalgebra independence and congruence independence is subtle, and already sets show that neither implies the other. Take for example \(A=\{a\}\) and \(B=\{a,b\}\) as subsets of a set. Then \(A\searrow^{c}B\) but not \(A\searrow B\), as witnessed by \(\alpha=\operatorname{id}_{A}\) and \(\beta:B\to B\), \(\beta(x)=b\). However, a proposition similar to Proposition 2.3 can be formulated.
**Proposition 3.2**.: _Consider \(\mathfrak{A}\) and \(\mathfrak{B}\) as subalgebras of the coproduct \(\mathfrak{A}\oplus\mathfrak{B}\). Then for any congruences \(\vartheta_{A}\in\operatorname{Con}(\mathfrak{A})\) and \(\vartheta_{B}\in\operatorname{Con}(\mathfrak{B})\) there is a congruence \(\vartheta\in\operatorname{Con}(\mathfrak{A}\oplus\mathfrak{B})\) such that_
\[\vartheta\cap(A\times A)=\vartheta_{A},\quad\text{ and }\quad\vartheta\cap(B \times B)=\vartheta_{B}\]
Proof.: Let \(\alpha:\mathfrak{A}\to\mathfrak{A}/\vartheta_{A}\) and \(\beta:\mathfrak{B}\to\mathfrak{B}/\vartheta_{B}\) be the quotient mappings. Using the universal property of the coproduct, there is a homomorphism \(\gamma\) making the diagram below commute.
Then \(\vartheta=\ker(\gamma)\) is suitable.
**Proposition 3.3**.: _Subalgebras \(\mathfrak{A}\) and \(\mathfrak{B}\) of the coproduct \(\mathfrak{A}\oplus\mathfrak{B}\) are congruence-independent provided \(\mathfrak{A}\vee\mathfrak{B}=\mathfrak{A}\oplus\mathfrak{B}\)._
Proof.: Immediate from Proposition 3.2.
## Acknowledgement
We are grateful to the anonymous referee whose careful reading of the manuscript and helpful comments have improved the paper. Research supported in part by the Hungarian National Research, Development and Innovation Office, contract number: K-134275 and by the project no. 2019/34/E/HS1/00044 financed by the National Science Centre, Poland.
|
2306.09016 | Connectivity of graphs that do not have the edge-Erdős-Pósa
property | We show that we can assume graphs that do not have the
edge-Erd\H{o}s-P\'{o}sa property to be connected. Then we strengthen this
result to $2$-connectivity under the additional assumptions of a minor-closed
property and a generic counterexample. | Raphael Steck | 2023-06-15T10:19:01Z | http://arxiv.org/abs/2306.09016v2 | # Connectivity of graphs that do not have the edge-Erdos-Posa property
###### Abstract
We show that we can assume graphs that do not have the edge-Erdos-Posa property to be connected. Then we strengthen this result to \(2\)-connectivity under the additional assumptions of a minor-closed property and a generic counterexample.
A class \(\mathcal{F}\) has the _edge-Erdos-Posa property_ if there exists a function \(f:\mathbb{N}\to\mathbb{R}\) such that for every graph \(G\) and every integer \(k\), there are \(k\) edge-disjoint graphs in \(G\) each isomorphic to some graph in \(\mathcal{F}\) or there is an edge set \(X\subseteq E(G)\) of size at most \(f(k)\) meeting all subgraphs in \(G\) isomorphic to some graph in \(\mathcal{F}\). The edge set \(X\) is called the _hitting set_. If we replace edges with vertices in the above definition, that is, we look for a vertex hitting set or vertex-disjoint graphs, then we obtain the _vertex-Erdos-Posa property_. The class \(\mathcal{F}\) that is studied in this paper arises from taking minors: For a fixed graph \(H\), we define the set
\[\mathcal{F}_{H}=\{G\,|\,H\text{ is a minor of }G\}.\]
In other words, \(\mathcal{F}_{H}\) is the set of \(H\)_-expansions_. The vertex-Erdos-Posa property for \(\mathcal{F}_{H}\) is well understood: Robertson and Seymour [1] proved that the class \(\mathcal{F}_{H}\) has the vertex-Erdos-Posa property if and only if \(H\) is planar. This implies that the vertex-Erdos-Posa property is closed under taking minors, which in turn implies that
_If for a graph \(H\) the class \(\mathcal{F}_{H}\) has the vertex-Erdos-Posa property, then so does the class \(\mathcal{F}_{C}\) for every component \(C\) of \(H\)._ (1)
For the edge-Erdos-Posa property, it is not known whether it is minor-closed or not, and it is not at all clear whether it should be. Thus, we tackle (1) for the edge-Erdos-Posa property. We show that
**Theorem 1**.: _If for a graph \(H\) the class \(\mathcal{F}_{H}\) has the edge-Erdos-Posa property, then so does \(\mathcal{F}_{C}\) for every component \(C\) of \(H\)._
Interestingly, Robertson and Seymour proved their result about the vertex-Erdos-Posa property of planar graphs in two steps: First, they proved it for every connected planar graph. In a second step, they lifted the connectivity requirement. Thus Theorem 1 might also provide some help in verifying for which graphs \(H\) the class \(\mathcal{F}_{H}\) has the edge-Erdos-Posa property. For example,
it might help to prove that \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property if \(H\) has large treewidth (for larger or arbitrary maximum degree of \(H\)).
To this end, we also attempt a strengthening of Theorem 1 which allows us to not only focus on connected, but \(2\)-connected graphs \(H\). This, however, is not achieved in full generality, and we prove it only by imposing some additional assumptions (see Theorem 3).
For a graph \(H\), a connected graph \(G\subseteq H\) with at least one vertex \(v\in V(G)\) with \(d_{H}(v)\geq 3\) and an integer \(r\in\mathbb{N}\), we define \(G^{\times}\) to be the following graph: Starting with the empty graph, we add a copy of every vertex \(v\in V(G)\) with \(d_{H}(v)\geq 3\) to \(G^{\times}\). For every non-trivial path \(P\) of length \(l\) in \(G\) between two such vertices, we add \(r\) internally disjoint paths of length \(\max\{l,2\}\) between their corresponding copies in \(G^{\times}\). Finally, for every path \(P\) of length \(l\) in \(G\) between a vertex \(u\) with \(d_{G}(u)=1\), \(d_{H}(u)\leq 2\) and its closest vertex \(v\in V(G)\) with \(d_{H}(v)\geq 3\), we add \(r\) paths of length \(l\) which are disjoint except for the copy of \(v\). See Figure 1 for an example.
Let us check that for every edge set \(X\) of size at most \(r-1\), \(G^{\times}-X\) contains a \(G\)-expansion. Every \(v\in V(G)\) with \(d_{H}(v)\geq 3\) can be mapped to its copy \(v^{\prime}\in V(G^{\times})\). Every \(u\)-\(v\) path between two such vertices can be mapped to one of its copies in \(G^{\times}\) that is disjoint from \(X\). For every vertex \(u\in V(G)\) with \(d_{G}(u)=1\) and \(d_{H}(u)\leq 2\), there is a vertex \(v\in V(G)\) with \(d_{H}(v)\geq 3\) that is closest to \(u\). Among all copies of the \(u\)-\(v\) path, we pick a copy \(P^{\prime}\) that is disjoint from \(X\) and map \(P\) to \(P^{\prime}\) such that \(v\) is mapped to \(v^{\prime}\). The prerequisite of \(G\) being connected and containing a vertex \(v\) with \(d_{H}(v)\geq 3\) implies that every \(v\in V(G)\) with \(d_{G}(v)=d_{H}(v)=2\) lies on a path in \(G\) between vertices of degree other than \(2\) in \(H\), with at least one endvertex of the path having degree at least \(3\) in \(H\). We conclude that the above mapping yields a \(G\)-expansion in \(G^{\times}-X\).
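A minimal sketch of the \(G^{\times}\) construction is given below, assuming \(G\) is handed over already decomposed into its branch vertices and the paths between them (computing this decomposition is elided); the function and variable names are ours, not notation from the text.

```python
# Sketch of the G^x construction, assuming G is already given by its
# branch vertices (degree >= 3 in H) and the paths joining them.
from itertools import count
import networkx as nx

def g_cross(branch, paths, pendants, r):
    """`paths`: vertex sequences between two branch vertices; `pendants`:
    sequences from a degree-1 vertex of G to its closest branch vertex
    (branch vertex last). Only the lengths of the inner segments matter.
    Fresh internal labels are integers, so branch names should not be
    small integers."""
    Gx = nx.Graph()
    Gx.add_nodes_from(branch)
    fresh = count()                     # fresh internal vertex names
    for u, *inner, v in paths:          # r internally disjoint copies
        length = max(len(inner) + 1, 2)
        for _ in range(r):
            nx.add_path(Gx, [u, *(next(fresh) for _ in range(length - 1)), v])
    for *inner, v in pendants:          # r copies sharing only v
        for _ in range(r):
            nx.add_path(Gx, [*(next(fresh) for _ in range(len(inner))), v])
    return Gx

# Example: a "theta" graph with branch vertices u, v joined by an edge
# and a path of length 2, replicated with r = 3.
Gx = g_cross(branch=["u", "v"], paths=[("u", "x", "v"), ("u", "v")],
             pendants=[], r=3)
print(Gx.number_of_nodes(), Gx.number_of_edges())   # 8 12
```

Note how the edge of \(G\) becomes \(r\) paths of length \(2\) in \(G^{\times}\), so no new vertices of degree \(\geq 3\) are created beyond the copies of the branch vertices.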
For \(r\geq 3\), the number of vertices of degree at least \(3\) in \(G^{\times}\) is the same as the number of vertices \(v\) with \(d_{H}(v)\geq 3\) in \(G\).
**Remark 2**.: _Let \(H\) be a connected graph for which \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property. Then \(H\) contains vertices of degree at least \(3\)._
Proof.: Suppose \(H\) contained only vertices of degree at most \(2\). Since \(H\) is connected, \(H\) must be a cycle, a path or an isolated vertex. However, for all of those graphs, \(\mathcal{F}_{H}\) is already known to have the edge-Erdos-Posa property.
For two graphs \(A\) and \(B\), we define \(\preceq\) by
\[A\preceq B\Leftrightarrow A\text{ is a minor of }B.\]
Similarly, we define \(\not\preceq\) by
\[A\not\preceq B\Leftrightarrow A\text{ is not a minor of }B.\]
## 1 1-Connectivity
Proof of Theorem 1.: Let \(A\) be some component of \(H\) such that \(\mathcal{F}_{A}\) does not have the edge-Erdos-Posa property. Thus, there exists an integer \(k\in\mathbb{N}\) such that for every \(r\in\mathbb{N}\), there exists a graph \(A^{*}_{r}\) such that \(A^{*}_{r}\) neither contains \(k\) edge-disjoint expansions of \(A\) nor an edge set \(X\) of size at most \(r-1\) such that \(A^{*}_{r}-X\) contains no expansion of \(A\). We separate the other components of \(H\) into two disjoint sets \(\mathcal{B}\) and \(\mathcal{C}\), which we define by
\[\mathcal{B} =\{B\text{ component of }H\,|\,A\not\preceq B\}\text{ and}\] \[\mathcal{C} =\{C\text{ component of }H\,|\,A\preceq C\}\setminus\{A\}.\]
To prove that \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property, let \(r\) be given. We prove that there exists a graph \(H^{*}\) that contains neither \(k\) edge-disjoint \(H\)-expansions nor an edge set \(X\) of size at most \(r-1\) such that \(H^{*}-X\) contains no \(H\)-expansion. We define \(H^{*}\) to be the disjoint union of
* \(A^{*}=A^{*}_{r}\),
* for every \(B\in\mathcal{B}\): \(r\) distinct copies of \(B\) and
* for every \(C\in\mathcal{C}\): one \(C^{\times}\).
First we show that \(H^{*}\) contains no edge set of size at most \(r-1\) meeting all \(H\)-expansions. Let \(X\subseteq E(H^{*})\) be an edge set of size at most \(r-1\). We claim that \(H^{*}-X\) still contains an \(H\)-expansion: Indeed, there is an \(A\)-expansion in \(A^{*}-X\) by choice of \(A^{*}\). For every \(B\in\mathcal{B}\), at least one of the \(r\) copies of \(B\) in \(H^{*}\) is disjoint from \(X\). Thus, there is a \(B\)-expansion in that copy. Finally, for every \(C\in\mathcal{C}\), there is a \(C\)-expansion in \(C^{\times}-X\) by construction of \(C^{\times}\). Together, this yields an \(H\)-expansion in \(H^{*}-X\).
We claim
\[\text{\it Every $H$-expansion in $H^{*}$ contains an $A$-expansion in $A^{*}$}. \tag{2}\]
Note that (2) finishes the proof of the theorem: Indeed, by choice of \(A^{*}\), there can be no \(k\) edge-disjoint \(A\)-expansions in \(A^{*}\). To prove the claim, consider an \(H\)-expansion \(H^{\prime}\) in \(H^{*}\), and suppose that (2) is false for \(H^{\prime}\).
Since every component of \(H^{\prime}\) is connected, it must be contained in a single component of \(H^{*}\). Further note that every expansion of a \(C\in\mathcal{C}\) in some component of \(H^{*}\) contains an expansion of \(A\) by definition of \(\mathcal{C}\). Thus, by definition of \(\mathcal{B}\), no \(A\)-expansion (and thus no \(C\)-expansion for any \(C\in\mathcal{C}\)) can be contained in some copy of some \(B\in\mathcal{B}\). On top of that, if an \(A\)-expansion (or a \(C\)-expansion for any \(C\in\mathcal{C}\)) is embedded in \(A^{*}\), this proves the above claim. Thus, suppose all \(A\)-expansions in \(H^{\prime}\) (and thus all \(C\)-expansions for every \(C\in\mathcal{C}\)) are contained in \(\bigcup\limits_{C\in\mathcal{C}}C^{\times}\). Every \(H\)-expansion in \(H^{*}\) contains at least one vertex of degree \(\geq 3\) in \(H^{*}\) for every vertex of degree \(\geq 3\) in \(H\). However, by construction of \(C^{\times}\), \(\bigcup\limits_{C\in\mathcal{C}}C^{\times}\) contains no more vertices of degree \(\geq 3\) than \(\bigcup\limits_{C\in\mathcal{C}}C\).
Since \(A\) contains vertices of degree \(\geq 3\) by Remark 2 and the branch sets of all vertices in \(A\cup\left(\bigcup\limits_{C\in\mathcal{C}}C\right)\) must be contained in \(\bigcup\limits_{C\in\mathcal{C}}C^{\times}\), this is a contradiction. Thus (2) holds, proving the theorem.
## 2 2-Connectivity
If we want to prove that for some graph \(H\), \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property, the above theorem implies that it suffices to check the components of \(H\) individually. Thus, without loss of generality, we can assume that \(H\) is \(1\)-connected. To improve this to \(2\)-connectivity of \(H\), we need two additional assumptions.
First, we define a graph property \(\mathcal{P}\) to be _hereditary_ if \(\bar{\mathcal{P}}\) is closed under taking minors, that is, for every graph \(H\) without property \(\mathcal{P}\), no minor of \(H\) has property \(\mathcal{P}\). An example for a hereditary property would be large treewidth: If a graph \(H\) does not have treewidth at least \(t\), then no minor of \(H\) has treewidth at least \(t\).
Second, for a block \(A\) of a graph \(H\), let \(S\) be the set of cutvertices of \(H\) that lie in \(A\). We say that there is a _generic counterexample_ for \(A\) if there exists some \(k\in\mathbb{N}\) such that for every \(r\in\mathbb{N}\), there is a graph \(A^{*}\) with the following properties: There are no \(k\) edge-disjoint \(A\)-expansions in \(A^{*}\). Furthermore, for every \(s\in S\), there is an \(s^{\prime}\in V(A^{*})\) such that for every edge set \(X\) of size at most \(r-1\), there is an embedding of \(A\) in \(A^{*}-X\) in which, for every vertex \(s\in S\), the branch set \(B_{s}\) contains the corresponding \(s^{\prime}\). (Together, this implies that \(\mathcal{F}_{A}\) does not have the edge-Erdos-Posa property.) Every known construction that shows that some class \(\mathcal{F}_{A}\) does not have the edge-Erdos-Posa property is a generic counterexample.
**Theorem 3**.: _Let \(\mathcal{P}\) be a hereditary graph property and let \(H\) be a graph that contains a block with property \(\mathcal{P}\). Furthermore, for every block \(A\) of \(H\) with property \(\mathcal{P}\), let there be a generic counterexample._
_Then \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property._
An example for an application of Theorem 3 could be the following. Suppose we want to show that:
_For every graph \(H\) of treewidth at least \(10^{100}\), \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property._ (3)
Having large treewidth is a hereditary graph property. Assume we were able to show that for every \(2\)-connected graph \(H\) of treewidth at least \(10^{100}\), \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property. We would most likely do so by giving a generic counterexample. Thus we can apply Theorem 3 to drop the connectivity requirement and we obtain (3). Now let us prove Theorem 3.
Proof.: We can assume \(H\) to be \(1\)-connected: Indeed, if \(H\) contains a block \(A\) with property \(\mathcal{P}\), then there is a component \(Q\) of \(H\) that contains \(A\). Furthermore, if all blocks of \(H\) with property \(\mathcal{P}\) allow for a generic counterexample, then this includes the blocks of \(Q\). If we are able to prove that \(\mathcal{F}_{Q}\) does not have the edge-Erdos-Posa property, then using Theorem 1, we conclude that \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property.
We consider the block tree \(T\) of \(H\). Let \(T_{\mathcal{P}}\) be the minimal subtree of \(T\) that contains all blocks with property \(\mathcal{P}\). We pick \(A\) to be a leaf of \(T_{\mathcal{P}}\). Note that \(A\) is a block of \(H\) that has the property \(\mathcal{P}\). Observe that \(A\) is not a trivial block, i.e., \(A\) is not a \(K_{2}\), by Remark 2.
We define
\[\mathcal{C}=\{C\text{ block of }H\,|\,A\preceq C\}\setminus\{A\}.\]
Since \(\mathcal{P}\) is hereditary, it holds for every \(C\in\mathcal{C}\) that \(C\) has the property \(\mathcal{P}\), too. Let \(T_{\mathcal{C}}\) be the minimal subtree of \(T\) that contains all blocks of \(\mathcal{C}\cup\{A\}\). We observe that \(T_{\mathcal{C}}\) is a subgraph of \(T_{\mathcal{P}}\). Thus,
\[A\text{ is a leaf of }T_{\mathcal{C}}. \tag{4}\]
We define
\[\mathcal{B}= \{B\text{ block of }H\,|\,A\not\preceq B\}\cap V(T_{\mathcal{C}})\text{ and}\] \[\mathcal{D}= \{D\text{ component of }\bigcup_{\begin{subarray}{c}B\text{ block of }H\\ B\not\in V(T_{\mathcal{C}})\end{subarray}}B\}.\]
Note that since \(T\) is a tree, \(T-T_{\mathcal{C}}\) is a forest. For each component \(T^{\prime}\) of \(T-T_{\mathcal{C}}\), \(V(T^{\prime})\) consists of the blocks and cutvertices of one element of \(\mathcal{D}\).
To show that \(\mathcal{F}_{H}\) does not have the edge-Erdos-Posa property, let \(r\geq 3\) be some integer. We define our counterexample graph \(H^{*}\) to be the union of
* \(A^{*}\),
* for every \(C\in\mathcal{C}\): one \(C^{\times}\),
* for every \(D\in\mathcal{D}\): \(r\) distinct copies of \(D\),
* for every non-trivial \(B\in\mathcal{B}\): one \(B^{\times}\) and
* for every component \(P\) of \(\bigcup_{\begin{subarray}{c}B\in\mathcal{B}\\ B=K_{2}\end{subarray}}B\): one \(P^{\times}\).
We pick the above graphs to be disjoint except for those vertices which are copies of the same vertex \(v\in H\), which we identify with each other in \(H^{*}\), too. In \(A^{*}\), we pick the vertex \(s^{\prime}\) for every \(s\in S\) and identify it with all copies of \(s\). Note that for all blocks \(B\in\mathcal{B}\), there is a path \(P\) in \(H\) whose endvertices are in a block in \(\{A\}\cup\mathcal{C}\) and \(P\) contains an edge of \(B\). With \(A\) and \(C\) being non-trivial blocks for all \(C\in\mathcal{C}\), their cutvertices have degree at least \(3\) in \(H\). Thus the union of all trivial blocks in \(\mathcal{B}\) is a collection of paths \(P\) whose endvertices are in non-trivial blocks and have degree at least \(3\) in \(H\). We denote the union of all \(P^{\times}\) and all \(B^{\times}\) for every non-trivial \(B\in\mathcal{B}\) by \(\mathcal{B}^{\times}\). Note that every vertex \(v^{\prime}\) in \(\mathcal{B}^{\times}\cup\bigcup_{C\in\mathcal{C}}C^{\times}\) with \(d_{H^{*}}(v^{\prime})\geq 3\) is the copy of some vertex \(v\) in \(\bigcup_{B\in\mathcal{B}}B\cup\bigcup_{C\in\mathcal{C}}C\) with \(d_{H}(v)\geq 3\).
Now we show that no edge set of size at most \(r-1\) meets all \(H\)-expansions in \(H^{*}\). Let \(X\subseteq E(H^{*})\) be an edge set of size at most \(r-1\). We claim that \(H^{*}-X\) still contains an embedding of \(H\): Indeed, we can embed \(A\) in \(A^{*}-X\) such that for every \(s\in S\), its branch set \(B_{s}\) contains \(s^{\prime}\) by choice of \(A^{*}\). For every \(D\in\mathcal{D}\), at least one of the \(r\) copies of \(D\) is disjoint from \(X\). Thus, we can embed \(D\) in that copy. For every non-trivial \(B\in\mathcal{B}\cup\mathcal{C}\), we can embed \(B\) in \(B^{\times}-X\) by construction of \(B^{\times}\). We observed above that the trivial blocks of \(\mathcal{B}\) are contained in paths between vertices of degree at least \(3\), which can be embedded in their copies in \(H^{*}-X\). Together, this yields an embedding of \(H\) in \(H^{*}-X\) by construction of \(H^{*}\).
It remains to show that there are no \(k\) edge-disjoint embeddings of \(H\) in \(H^{*}\). For this, we claim:
_Every \(H\)-expansion in \(H^{*}\) contains an \(A\)-expansion in \(A^{*}\)._ (5)
Note that (5) proves the theorem: Indeed, by choice of \(A^{*}\), there can be no \(k\) edge-disjoint embeddings of \(A\) in \(A^{*}\). Let us prove (5). We say that a block \(B\) of \(H\) is _embedded in a block \(B^{*}\) of \(H^{*}\)_, when for every \(v\in V(B)\) with \(d_{H}(v)\geq 3\), the branch set \(B_{v}\) contains a vertex \(v^{*}\in V(B^{*})\) with \(d_{B^{*}}(v^{*})\geq 3\). In this sense, every block \(B\) of \(H\) is embedded in a single block \(B^{*}\) of \(H^{*}\). Note that every embedding of a \(C\in\mathcal{C}\) in some block of \(H^{*}\) contains an embedding of \(A\) by definition of \(\mathcal{C}\). Thus neither \(A\) nor any \(C\in\mathcal{C}\) can be embedded in any copy of some \(D\in\mathcal{D}\). Additionally, if some \(C\in\mathcal{C}\) is embedded in \(A^{*}\), this includes an embedding of \(A\) in \(A^{*}\), which was what we wanted. Thus, we may assume that neither \(A\) nor any \(C\in\mathcal{C}\) is embedded in \(A^{*}\).
We conclude that \(A\cup\bigcup\limits_{C\in\mathcal{C}}C\) is embedded in \(\mathcal{B}^{\times}\cup\bigcup\limits_{C\in\mathcal{C}}C^{\times}\). Let \(B\in\mathcal{B}\). By definition of \(\mathcal{B}\), \(B\) is on the unique path in \(T\) that connects two blocks \(C_{1},C_{2}\in\mathcal{C}\cup\{A\}\). Since \(A\) is a leaf of \(T_{\mathcal{C}}\) by (4) and we assumed that neither \(C_{1}\) nor \(C_{2}\) is embedded in \(A^{*}\), \(B\) cannot be embedded in \(A^{*}\). For every \(D\in\mathcal{D}\), there is a cutvertex separating \(D\) from all \(C\in\{A\}\cup\mathcal{C}\). Therefore, \(B\) cannot be embedded in \(D\). We conclude that \(B\) is embedded in \(\mathcal{B}^{\times}\cup\bigcup\limits_{C\in\mathcal{C}}C^{\times}\).
To sum up, \(A\cup\bigcup\limits_{B\in\mathcal{B}}B\cup\bigcup\limits_{C\in\mathcal{C}}C\) is embedded in \(\mathcal{B}^{\times}\cup\bigcup\limits_{C\in\mathcal{C}}C^{\times}\). However, the number of vertices \(v\) in \(\bigcup\limits_{B\in\mathcal{B}}B\cup\bigcup\limits_{C\in\mathcal{C}}C\) with \(d_{H}(v)\geq 3\) is the same as the number of vertices \(v^{\prime}\) in \(\mathcal{B}^{\times}\cup\bigcup\limits_{C\in\mathcal{C}}C^{\times}\) with \(d_{H^{*}}(v^{\prime})\geq 3\).
By Remark 2, \(A\) contains at least one vertex \(v\) with \(d_{A}(v)\geq 3\). Since \(A\) is \(2\)-connected, we conclude that it must contain at least two vertices \(v\) with \(d_{A}(v)\geq 3\). Since \(A\) is a leaf of \(T_{\mathcal{C}}\), it shares exactly one vertex with \(\bigcup\limits_{B\in\mathcal{B}}B\cup\bigcup\limits_{C\in\mathcal{C}}C\). Thus, \(A\setminus\left(\bigcup\limits_{B\in\mathcal{B}}B\cup\bigcup\limits_{C\in \mathcal{C}}C\right)\) contains at least one vertex \(v\) with \(d_{H}(v)\geq 3\). But then it is impossible to embed \(A\cup\bigcup\limits_{B\in\mathcal{B}}B\cup\bigcup\limits_{C\in\mathcal{C}}C\) in \(\mathcal{B}^{\times}\cup\bigcup\limits_{C\in\mathcal{C}}C^{\times}\). Thus, Claim (5) holds, proving the theorem.
2302.13552 | Dispatching Point Selection for a Drone-Based Delivery System Operating in a Mixed Euclidean-Manhattan Grid | In this paper, we present a drone-based delivery system that deals with two different mixed areas, i.e., rural and urban. In these mixed areas, called EM-grids, the distances are measured with two different metrics, and the shortest path between two destinations concatenates the Euclidean and Manhattan metrics. Due to payload constraints, the drone serves a single customer at a time, returning to the dispatching point (DP) after each delivery to load a new parcel for the next customer. In this paper, we present the 1-Median Euclidean-Manhattan grid Problem (MEMP) for EM-grids, whose goal is to determine the drone's DP position that minimizes the sum of the distances between all the locations to be served and the point itself. We study the MEMP in two different scenarios, i.e., one in which all the customers in the area need to be served (full-grid) and another one where only a subset of these must be served (partial-grid). For the full-grid scenario we devise optimal, approximation, and heuristic algorithms, while for the partial-grid scenario we devise optimal and heuristic algorithms. Eventually, we comprehensively evaluate our algorithms on generated synthetic and quasi-real data. | Francesco Betti Sorbelli, Federico Corò, Sajal K. Das, Cristina M. Pinotti, Anil Shende | 2023-02-27T07:11:49Z | http://arxiv.org/abs/2302.13552v1 | Dispatching Point Selection for a Drone-Based Delivery System Operating in a Mixed Euclidean-Manhattan Grid
###### Abstract
In this paper, we present a drone-based delivery system that deals with two different mixed areas, i.e., rural and urban. In these mixed areas, called EM-grids, the distances are measured with two different metrics, and the shortest path between two destinations concatenates the Euclidean and Manhattan metrics. Due to payload constraints, the drone serves a single customer at a time, returning to the dispatching point (DP) after each delivery to load a new parcel for the next customer. In this paper, we present the 1-Median Euclidean-Manhattan grid Problem (MEMP) for EM-grids, whose goal is to determine the drone's DP position that minimizes the sum of the distances between all the locations to be served and the point itself. We study the MEMP in two different scenarios, i.e., one in which all the customers in the area need to be served (full-grid) and another one where only a subset of these must be served (partial-grid). For the full-grid scenario we devise optimal, approximation, and heuristic algorithms, while for the partial-grid scenario we devise optimal and heuristic algorithms. Eventually, we comprehensively evaluate our algorithms on generated synthetic and quasi-real data.
## 1 Introduction
Drones or Unmanned Aerial Vehicles (UAVs) have recently become widely used in civil applications such as environmental protection [1, 2, 3, 4], public safety [5, 6, 7], localization [8], and smart agriculture [9, 10]. Currently, there is a growing interest in the use of drones in smart cities [11]. This interest has particularly increased after the global presence of the COVID-19 disease [12, 13]. Recently, [14] discuss
the use of drones as carriers for distributing and transporting drugs and medicines, and revealed that 86.7% of people agree that drones are more effective, faster, and less polluting than any ground-based distribution system. Quarantine, closure of borders, and social distancing forced people to stay indoors for long periods, allowing them to go out only for essential activities. Therefore, considering the need to avoid all unnecessary direct human contact, people started to rely heavily on online stores for their regular shopping. In parallel, large companies like Amazon are testing drone-based delivery systems, particularly for what is known as "last-mile" small item logistics. For example, Amazon introduced "Amazon Prime Air", a service that uses drones able to deliver goods up to 25 kg to customers within a radius of 16 km [15, 16, 17], and Domino's developed a pizza-delivery service using drones [18].
There are countless advantages to using drones for deliveries, including economic benefits, savings on greenhouse gas emissions, and the ability to deliver in time-critical situations or hard-to-reach places. With the growth of commercial interest, researchers have begun to study variants of the Traveling Salesman Problem (TSP) for drones [19]. However, due to the payload constraint that forces the drone to return after each delivery, TSP is not suitable for drones [20, 21]. Of particular interest is the work proposed by [22], where a drone combined with a truck is used to make multiple deliveries in a given area, going back and forth from the truck. In that case, it is crucial to find the best location for the truck to minimize the distance traveled by the drone.
In this article, we imagine offering a drone delivery service to the customers of a delivery area that covers two mixed areas, i.e., a rural and/or an urban one. In urban areas (see Figure 1(b)), for safety and privacy reasons (see regulations for inhabited centers, e.g., [23]), it is assumed that the drone flies over the streets since it cannot fly beyond a certain maximum allowed altitude. Namely, in urban areas, it might be forbidden for the drone to travel along the straight line connecting the dispatching point (DP) and the delivery if such a straight line, due to tall obstacles, exists only at an altitude higher than the maximum altitude allowed by the regulations. However, the drone can at least save time - if not distance traveled - in the urban area. In fact, a drone is usually faster than a conventional wheeled vehicle in a crowded area: it will certainly not get stuck in traffic. In rural areas (see Figure 1(a)), instead, due to the fact that tall buildings are not present, the drone can move freely and follow the shortest path to reach its destination, making clear its advantage on the traveled distance in this case. In the case of mixed areas (see Figure 1(c)), i.e., rural and urban, the drone must follow both the Euclidean and Manhattan metrics. In particular, in the real example provided in Figure 1(c), the Euclidean part resides on the left, while the Manhattan part resides on the right; there, the two areas are split by a river.
We model the proposed drone delivery area as a two-dimensional mixed-grid. The vertices of the grid represent both the possible locations of the drone's DP and
the possible delivery destinations, and they are placed in rows and columns as in a regular 2-D grid. To model the two different types of areas through which the drone can move, we have divided the grid into two parts: the rural area and the urban area. In the former, the distance between two destinations respects the Euclidean metric, while in the latter, the metric is Manhattan (taxicab geometry). We are then interested in finding where to set the DP in order to minimize the sum of the distances between a subset of delivery destinations and the DP itself. Due to strict payload constraints, the drone must return to the DP after each delivery. Therefore, the drone needs to travel back and forth from the DP as many times as the number of delivery destinations on the grid. In this paper, we consider two delivery scenarios. In the full-grid scenario, each point of the grid has to be served by the drone. In light of the COVID-19 pandemic, this scenario happens, for example, if the drone is used to deliver meals in a lockdown area, or to deliver self-tests to sick people. In this particular example, the full-grid scenario is also justified by the fact that after every delivery the drone should be properly sanitized and disinfected before performing the next delivery, as proposed by [24]. The partial-grid scenario, instead, assumes that only a subset of points of the grid is a delivery site. This is the usual scenario in a delivery system.
The DP selection problem in logistics, while sharing similarities with the classical facility location problem, requires a more complex modeling approach that considers factors such as varying demand and transportation costs, and it involves optimizing multiple metrics simultaneously (Euclidean and Manhattan), which is not the case for the original facility location problem.
In this paper, we present extensions and improvements to our earlier work discussed in two conference papers, i.e., [25] and [26]. In [25] we focused on the full-grid scenario. In this paper we extend our work to include a partial-grid scenario. We suitably adapt results and ideas from [26] to this scenario, and provide new efficient algorithms for both the scenarios. The contributions of this paper are summarized as follows.
* We introduce the EM-grid model, which characterizes the delivery area for the drone, formed by two contiguous areas, i.e., one rural and one urban, that follow the Euclidean and Manhattan metrics, respectively.
* We define the 1-Median Euclidean-Manhattan grid Problem (MEMP) and devise time-efficient algorithms for the full-grid and partial-grid scenarios. For the full-grid scenario we devise optimal, approximation, and heuristic algorithms, while for the partial-grid scenario we devise optimal and heuristic algorithms. We also give all the proofs for the full-grid scenario that were omitted in the previous conference paper [25].
* In addition to comparing the performance of our presented algorithms on randomly generated synthetic data, we also evaluate their effectiveness on quasi-real data obtained by adapting real city maps with our proposed grid model,
providing a more realistic assessment of their practical utility. Furthermore, to ensure the accuracy of our comparison, we incorporated data from a real drone to measure the distances traveled by the dispatching point in our evaluation on quasi-real data.
The rest of the paper is organized as follows. Section 2 reviews the related work. Section 3 formally defines MEMP. Section 4 and Section 5 describe properties and algorithms for efficiently solving MEMP with full-grid and partial-grid scenarios, respectively. Section 6 evaluates our algorithms, and Section 7 offers conclusions and future research directions.
## 2 Related Work
In the literature, many works attempt to solve the drone-based last-mile delivery problem. To the best of our knowledge, drones have been considered in a delivery system for the first time by [27]. Specifically, they study the cooperation between a truck and a drone to deliver packages to customers. The problem to solve is the Flying Sidekick Traveling Salesman Problem (FSTSP), which is a variant of the classic TSP. In the FSTSP, a drone can autonomously perform deliveries to the customers directly flying from the main depot or can be helped by a truck. In the latter case, the drone flies from the truck, delivers the package, and then rendezvouses with the truck again in a third location. However, when the drone flies, the truck can do other deliveries independently, but still, it has to wait for the drone at the rendezvous location. For solving FSTSP, the authors propose an optimal mixed-integer linear programming (MILP) formulation and two heuristics for solving instances of practical sizes. Then, [28] investigate the same scenario with multiple drones by introducing the Multiple Flying Sidekicks Traveling Salesman Problem (mFSTSP). Even for the mFSTSP, they provide an optimal MILP formulation along with a heuristic solution approach that consists of solving a sequence of three sub-problems.
Recently, [29] present an exact formulation for FSTSP while also simplifying the model by reducing the number of constraints, thus being able to solve several benchmark instances from the literature. However, in these works, the drones fly according to the Euclidean metric. [30] introduce the multiple Traveling Salesman Problem with Drone Stations (mTSP-DS), which is an extension to the classical multiple Traveling Salesman Problem (mTSP). In this problem, multiple trucks starting from a single depot are in charge of supplying some packet stations that host autonomous vehicles (drones or robots). On these stations, each truck can launch and operate drones/robots to serve customers. The objective of mTSP-DS is to serve all customers minimizing the makespan. The problem is formulated as an MILP that is only suitable for small instances. For larger instances, many matheuristic algorithms are presented. [31] introduce the Vehicle Routing Problem with Drones and En Route Operations (VRPDERO), which is an extension to the
Vehicle Routing Problem with Drones (VRPD). In this problem, drones may not only be launched and retrieved at vertices but also on some discrete points that are located on each arc. The problem is formulated as an MILP, and matheuristic approaches are presented to deal with large instances. The goal of both [31, 30] is to minimize the makespan, while ours is to find the best DPs from where to launch the drones.
[25] introduce the drone-based delivery area modeled as EM-grids where a drone is used for delivering small packages to customers. Given the delivery area divided into two contiguous areas, i.e., the rural and the urban areas, the goal is to find the optimal DP (depot or warehouse) for the drone in order to minimize the sum of all the distances between all the potential customers and the DP itself. However, due to strict payload constraints, the drone cannot serve more than a customer at a time, and after each delivery, the drone must go back to the depot. A similar approach, but in a different context, has been studied by [26]. In such a scenario, the objective is to determine the optimal cart point for the drone that minimizes the distances between a set of items (on shelves) and the cart itself, assuming that shelves follow the two aforementioned metrics. Differently from [25], in [26] only a subset of points needs to be considered. Moreover, [26] compare the current human-based system with respect to the newly proposed one based on drones.
[32] compare and contrast the performance of many algorithms in a delivery scenario. They model the delivery area as a circular region with a central depot, while customers are randomly distributed throughout the region. Different temporal and spatial metrics are compared when evaluating these algorithms. In particular, they evaluate the impact of having distances measured according to both the Euclidean and Manhattan distance metrics. The number of customers stochastically varies under both Manhattan and Euclidean distance metrics. The paper states that the number of customer deliveries and the metric used to measure travel distance impacts a decision maker's choice of the best algorithm and that employing multiple algorithms is recommended.
[33] explore the implications and advantages of strategic planning on urban delivery services. More specifically, the preferred method and local impacts of vehicle trips may vary by neighborhood characteristics (e.g., traffic or customer demands). Instead of searching for an optimal route, the paper focuses on the estimation of the vehicles' miles traveled (VMT) per meal order, considering different types of neighborhoods, delivery scenarios, and strategies. The proposed system is tested and evaluated in Chicago, showing that alternative delivery strategies can greatly reduce the VMT per order based on the type of neighborhood. In the evaluation, both the Euclidean and Manhattan metrics are combined. However, although different metrics have been evaluated, drones have not been used in either [32] or [33].
Recently, there has been more effort in solving the last-mile delivery problem using drones instead of a standard vehicle, due to the flexibility of drones and their capability to fly over obstacles and avoid traffic. [34] investigate the problem of
solving the TSP with a Drone (TSP-D) where the drone rides on the truck. [35] investigate the cooperation between a truck and multiple drones. Each delivery is characterized by a drone's energy cost, a reward based on its priority, and a time interval (launch and rendezvous with the truck). This work aims at finding an optimal scheduling for the drones that maximizes the overall reward, subject to the drone's battery capacity while ensuring that the same drone performs deliveries that do not overlap. Results show that the presented problem is \(NP\)-hard, therefore, different heuristics for solving the problem in a time-efficient way are proposed. More recently, [36] investigate the feasibility of performing deliveries with a drone in the presence of external factors such as wind.
[37] compare the traditional truck-based delivery system against the drone-based one to reduce the general energy consumption and hence reduce the gas emissions. They also take into account traffic congestion. They propose a mixed-integer green routing model with traffic restrictions and a genetic algorithm to efficiently solve the complex routing problem, showing that drones can accomplish more deliveries and at a lower cost (in terms of CO\({}_{2}\) emissions and energy consumption) compared to standard methods and that traffic types impact the results.
[38] study the last-mile scenario formed by multiple drones assisted by a single truck that carries them. The customers to be served by the drones are grouped into clusters, and each drone is assigned to a specific cluster. The initial position of a drone is called a "cluster focal point". Once all the focal points are computed, the truck needs to visit these points by minimizing its traveled route. Moreover, due to payload constraints, the drones serve their customers one at a time. The truck cannot follow the Euclidean metric, while the drones do. The authors propose an optimal mixed integer nonlinear programming (MINLP) solution as well as an unsupervised machine learning-based heuristic algorithm.
[39] present a variation on the theme. The proposed model assumes that drones are assisted by trucks, that carry them through the city. The trucks start from the main depot and park the drones in specific locations where drones have to serve the customers by a sequence of back-and-forth flights. Both the trucks and the drones move according to the Euclidean metric. The introduced problem is called Energy Minimizing and Range Constrained Drone Delivery Problem (ERDDP) whose objective is to minimize the total operational cost including an explicit calculation of the energy consumption of the drone as a function of the drone's speed. The ERDDP is formulated as a second order cone program instance.
[40] propose a last-mile drone delivery scenario where multiple drones can exploit charging stations to replenish their batteries. In this setting, the drones can fly from the main hub to the terminal station and serve, one at a time, a subset of customers in the neighborhood, and then go back to the hub. Alternatively, they can move from a terminal station to a charging station to refill the battery and perform subsequent deliveries to other neighborhoods belonging to other terminal stations. The objective function aims to either minimize the number of charging
stations or to minimize the overall traveled distance. The authors solve this problem by proposing an optimal and a heuristic solution.
[41] combine the pickup and the delivery requests in a system with stations (nodes in a given graph) and introduce the Hybrid Vehicle-Drone Routing Problem. Vehicles visit stations to transport delivery items and drones, while drones are launched and collected only at stations. The problem is formulated as a mixed-integer program, which minimizes the vehicle and drone routing cost to serve all customers. To solve the problem, the authors use an extension of the classic Clarke and Wright algorithm (see [42]), a well-known heuristic for solving the Vehicle Routing Problem. We remark that while their goal is to find the best route for drones and trucks, our goal is to find the best stations, i.e., the DPs, from where to launch the drones.
Finally, [43] investigate the problem of placing drone charging facilities in an area to help increase the coverage range of drones for commercial deliveries. The authors present an MILP formulation, and then a heuristic is proposed to effectively solve the problem.
## 3 Problem Definition
In this section, we first introduce the delivery area model and how the drone moves inside it, and then formally describe the problem to solve.
### Delivery Area Model
To model the delivery area, we define the _Euclidean-Manhattan-Grid_ (EM-grid) as \(G=(R,C,K)\), that is, a 2-D grid with \(R\) rows and \(C\) columns, where the _Border_ \(B\) is the column \(K\in[1,C]\) that separates the _Euclidean_ grid \(E\) (rural area) from the _Manhattan_ grid \(M\) (urban area) (see Figure 2). Specifically, \(E=\{1,\ldots,R\}\times\{1,\ldots,K\}\), \(B=\{1,\ldots,R\}\times\{K\}\subseteq E\), and \(M=\{1,\ldots,R\}\times\{K+1,\ldots,C\}\).
We assume that the drone delivery system covers a rectangular area: \(E\) and \(M\) have the same number of rows \(R\). The border consists of a single column, i.e., \(K\). Conventionally, the area consists only of a rural region if \(K=C\) (i.e., \(M=\emptyset\)), whereas it is effectively limited to an urban region if \(K=1\). In an EM-grid, there are vertices and edges connecting adjacent vertices. Any internal vertex \(u=(r_{u},c_{u})\) of \(G\), i.e., with \(1<r_{u}<R\) and \(1<c_{u}<C\), is connected to the four adjacent vertices \((r_{u},c_{u}\pm 1)\) and \((r_{u}\pm 1,c_{u})\); whereas, in general, any vertex of the grid, i.e., with \(1\leq r_{u}\leq R\) and \(1\leq c_{u}\leq C\), is connected only to the existing adjacent vertices (i.e., an external vertex has only three or two neighbors). For simplicity, we assume that the distance between any pair of consecutive vertices on the same row or column is constant and unitary, and so the _weight_ of any edge is unitary. In this work, the "distance" is a measure of the _time required_ for performing the delivery, or of the _needed energy_ for shipping the package. We also assume that every customer can be reached by the drone to/from the DP. At the DP, the drone can recharge
its battery, or just replace it with fresh ones. Let \(\overline{R}=\lceil\frac{R}{2}\rceil\) be the middle row, let \(\overline{C}=\lceil\frac{C}{2}\rceil\) be the middle column, and let \(\overline{K}=\lceil\frac{K}{2}\rceil\) be the column that halves the Euclidean sub-grid.
For any two vertices \(u\), \(v\) in \(G\), the distance \(d(u,v)\) is the length of the shortest path traversed by the drone in the EM-grid to go from vertex \(u\) to the destination \(v\). The Euclidean and Manhattan distances are defined, respectively, as \(d_{E}(u,v)=\sqrt{(r_{u}-r_{v})^{2}+(c_{u}-c_{v})^{2}}\) and \(d_{M}(u,v)=|r_{u}-r_{v}|+|c_{u}-c_{v}|\).
We note that the shortest path between a vertex \(u\in E\) and a vertex \(v\in M\) is given by \(\min_{w\in B}\{d_{E}(u,w)+d_{M}(w,v)\}\). In Lemma 1 we prove that such path is unique and passes through the vertex on the border \(B\) that has the same row as \(v\).
**Lemma 1**.: _Consider an EM-Grid \(G=(R,C,K)\). Given \(u=(r_{u},c_{u})\in E\) and \(v=(r_{v},c_{v})\in M\), then \(d(u,v)=d_{E}(u,h)+d_{M}(h,v)\) with \(h=(r_{v},K)\)._
Proof.: Consider the vertex \(h=(r_{v},K)\) which shares the same row as \(v\), and another vertex \(w=(i,K)\) in \(B\) with \(h\neq w\). We want to prove that: \(d_{E}(u,h)+d_{M}(h,v)\leq d_{E}(u,w)+d_{M}(w,v)\). This follows by the triangle inequality applied to the vertices \(u,w,h\), i.e., \(d_{E}(u,h)\leq d_{E}(u,w)+|r_{v}-i|=d_{E}(u,w)+d_{M}(w,h)\).
Thus, from now on, \(d(u,v)\) is given by:
\[d(u,v)=\left\{\begin{array}{ll}d_{E}(u,v)&\mbox{if $u,v\in E$}\\ d_{M}(u,v)&\mbox{if $u,v\in(M\cup B)$}\\ d_{E}(u,h)+d_{M}(h,v)&\mbox{if $u\in E,v\in M$ where $h=(r_{v},K)\in B$}\\ d_{M}(u,h)+d_{E}(h,v)&\mbox{if $u\in M,v\in E$ where $h=(r_{u},K)\in B$} \end{array}\right. \tag{1}\]
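For concreteness, the piecewise metric of Eq. (1) translates directly into code. The following is a minimal Python sketch (the function name `dist` and the 1-indexed `(row, column)` tuples are our conventions, not part of the paper):

```python
import math

def dist(u, v, K):
    """Shortest EM-grid travel distance d(u, v) following Eq. (1).
    Vertices are 1-indexed (row, column) pairs; columns <= K form the
    Euclidean side E (column K is the border B), columns > K the Manhattan side M."""
    d_E = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    d_M = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    (ru, cu), (rv, cv) = u, v
    if cu <= K and cv <= K:      # u, v in E
        return d_E(u, v)
    if cu >= K and cv >= K:      # u, v in M (or on the border B)
        return d_M(u, v)
    if cu <= K:                  # u in E, v in M: cross the border at h = (r_v, K)
        h = (rv, K)
        return d_E(u, h) + d_M(h, v)
    h = (ru, K)                  # u in M, v in E: cross the border at h = (r_u, K)
    return d_M(u, h) + d_E(h, v)
```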
### The Column-Cost
Let the _column-cost_ be the distance traversed by the drone starting from a vertex \(u\) in row \(\overline{R}\) to serve all the vertices of a given column, where the column is on the same side (Euclidean or Manhattan) of the grid as the vertex \(u\). Such column-cost depends on which side of the grid the column and the vertex \(u\) reside and on the number of rows \(R\). Let \(\Delta_{E}(j)\) be the column-cost to serve a Euclidean column at distance \(j\) from the candidate DP \(u=(\overline{R},c_{u})\) with \(u\in E\). Similarly, let \(\Delta_{M}(j)\) be the column-cost to serve a Manhattan column at distance \(j\) from the drone's DP \(u=\left(\overline{R},c_{u}\right)\) with \(u\in M\cup B\). One can easily find that:
\[\Delta_{E}(j)=j+2\sum_{i=1}^{\overline{R}-1}\sqrt{i^{2}+j^{2}}+((R-1)\bmod 2)\sqrt{\overline{R}^{2}+j^{2}} \tag{2}\] \[\Delta_{M}(j)=j+2\sum_{i=1}^{\overline{R}-1}(i+j)+((R-1)\bmod 2)\left(\overline{R}+j\right)=\overline{R}(\overline{R}-1)+(2\overline{R}-1)j+((R-1)\bmod 2)\left(\overline{R}+j\right) \tag{3}\]
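The column-costs of Eqs. (2) and (3) can likewise be transcribed directly; in this sketch the helper names `delta_E` and `delta_M` are ours, and we use the fact that \((R-1)\bmod 2=1\) exactly when \(R\) is even (i.e., when one extra row lies below the middle row):

```python
import math

def delta_E(j, R):
    """Column-cost Delta_E(j) of Eq. (2): total Euclidean distance from the
    DP on the middle row to all R vertices of a column at horizontal distance j."""
    R_bar = (R + 1) // 2                     # ceil(R / 2), the middle row
    cost = j + 2 * sum(math.hypot(i, j) for i in range(1, R_bar))
    if (R - 1) % 2 == 1:                     # even R: one extra row offset R_bar
        cost += math.hypot(R_bar, j)
    return cost

def delta_M(j, R):
    """Column-cost Delta_M(j) of Eq. (3), Manhattan side."""
    R_bar = (R + 1) // 2
    cost = j + 2 * sum(i + j for i in range(1, R_bar))
    if (R - 1) % 2 == 1:
        cost += R_bar + j
    return cost
```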
### The Problem Formulation
In our scenario, the fundamental task is to serve, with the aid of a drone, the customers of an area, e.g., to distribute viral tests to potentially infected patients. Due to payload constraints (and, e.g., to avoid the spread of the disease), the drone cannot serve all the customers on the same flight, and it necessarily has to go back and forth from a specific position inside the delivery area (called DP, where the drone, e.g., can be also sanitized each time) to all the customers. Specifically, this DP is the point in which all the products for customers are initially stored. Hence, the objective is to minimize the distance traveled by the drone when it moves inside the delivery area. We denote this problem as the 1-Median Euclidean-Manhattan grid Problem (MEMP) since the goal is to find a _single_ DP inside EM-grids.
Given an EM-grid \(G=(R,C,K)\) and a subset of vertices \(H\subseteq G\), for an arbitrary vertex \(u=(r_{u},c_{u})\in G\), we define the cost of delivery from \(u\) to each point in \(H\), denoted by \(\mathcal{C}(H,u)\), as:
\[\mathcal{C}(H,u)=2\sum_{v\in H}d(v,u) \tag{4}\]
As noted above, the distance between points \(u\) and \(v\) is a measure of the cost of delivery from \(u\) to \(v\), and the multiplicative constant 2 is in consideration of the round trip for each delivery. Given \(H\subseteq G\), let \(H_{E}\) and \(H_{M}\) be the subsets of points that lie in the Euclidean and Manhattan grid, respectively, such that \(H_{E}\cup H_{M}=H\) and \(H_{E}\cap H_{M}=\varnothing\). The set \(H\) is formed by \(n=|H|\) vertices, where \(n_{E}=|H_{E}|\) and \(n_{M}=|H_{M}|\), such that \(n=n_{E}+n_{M}\).
When \(H=G\), Eq. (4) can be rewritten as:
\[\mathcal{C}(G,u)=2\sum_{v\in G}d(v,u)=2\sum_{r_{v}=1}^{R}\sum_{c_{v}=1}^{C}d(( r_{v},c_{v}),u) \tag{5}\]
with \(v=(r_{v},c_{v})\in G\), and \(r_{v}\in[1,R]\) and \(c_{v}\in[1,C]\). We refer to the scenario as a _partial-grid scenario_ (respectively, _full-grid scenario_) when \(H\subset G\) (respectively, \(H=G\)). Finally, given \(H\subseteq G\), we define the DP (median) \(u^{*}\) as:
\[u^{*}=\operatorname*{arg\,min}_{u=(r_{u},c_{u})\in G}\mathcal{C}(H,u) \tag{6}\]
## 4 Solving MEMP with Full-Grid Scenario
In this section we give properties and then devise algorithms for solving MEMP for the full-grid scenario. This full-grid scenario is justified by the fact that a delivery company has to consider all the grid's locations as potential customers. For example, in the case of a quarantined area, as with COVID-19, the drone can be used to deliver goods of primary necessity to all the residents in the area. Therefore, the objective would be to find the optimal location to set the DP in order to minimize the travel distances between any customer's locations and the DP.
In the following, we first discuss how to optimally solve MEMP with a full-grid (see Section 4.2), and then we propose an approximation algorithm, CMALL-F, that operates as if the grid were a full Manhattan grid (i.e., \(K=1\)) and provides a guaranteed approximation bound of \(\sqrt{2}\) (see Section 4.3.1). Then we propose two heuristics: (1) CEMB-F, which assumes that all the Manhattan destinations move to the border \(B\), i.e., the grid is a fully Euclidean grid (see Section 4.3.2), and (2) CMEB-F, which does the opposite, i.e., the grid is a fully Manhattan grid (see Section 4.3.3). In Table 1 we compare the presented algorithms that solve MEMP in the full-grid scenario, evaluating their time complexities and guaranteed approximation bounds.
### Properties
In the following we prove some properties that we will exploit to devise our optimal algorithm. We first note that with a full Manhattan grid (i.e., \(K=1\)) or a full Euclidean grid (i.e., \(K=C\)), MEMP can be trivially solved. In the former case, MEMP has the Manhattan-median in \(u^{*}=(\overline{R},\overline{C})\) (see [44]). Note that, the median is not unique when the values \(C\) and \(R\) are even. In the latter case, by using symmetry arguments, it can be proven that MEMP has the Euclidean-median in \(u^{*}=(\overline{R},\overline{C})\). For the general case when \(1<K<C\), we can derive properties to narrow down the set of possible median point candidates.
First, we observe that the median always belongs to the middle row \(\overline{R}\) of \(G\).
**Theorem 1**.: _Given an EM-grid \(G=(R,C,K)\), the median \(u^{*}=(r^{*},c^{*})\) satisfies \(r^{*}=\overline{R}\)._
Proof.: Let the _row-cost_\(\Gamma(u,h)\) be the distance traversed by a drone with DP \(u=(x,y)\) to serve all the vertices on a row at distance \(h\) from \(u\). Note that the function \(\Gamma\) depends only on the relative distance between the \(x\)-coordinate of \(u\) and the row considered. There are potentially two rows at distance \(h\) from \(u\): one above \(u\) at row \(x+h\), and one below \(u\) at row \(x-h\). The crucial observation is that the two rows have exactly the same row cost when served by \(u\). For a fixed \(u\), \(\Gamma(u,h)\) increases with \(h\). In other words, for \(h_{2}>h_{1}\), it holds that \(\Gamma(u,h_{2})-\Gamma(u,h_{1})>0\).
Now we can show that \(u^{*}\) belongs to row \(\overline{R}\), proving that for a given \(u=(\overline{R},j)\) and \(v=(\ell,j)\), with \(\ell\neq\overline{R}\), it holds \(\mathcal{C}(G,u)\leq\mathcal{C}(G,v)\). First, we consider \(\ell>\overline{R}\). In
\begin{table}
\begin{tabular}{c|c c c c} Point & Algorithm & Section & Time complexity & Approximation ratio \\ \hline \(u^{*}\) & OPT-F & 4.2 & \(\mathcal{O}(\log K)\) & \(1\) \\ \(u_{M}\) & CMALL-F & 4.3.1 & \(\mathcal{O}(1)\) & \(\sqrt{2}\) \\ \(u_{\hat{C}}\) & CEMB-F & 4.3.2 & \(\mathcal{O}(1)\) & \(-\) \\ \(u_{\tilde{M}}\) & CMEB-F & 4.3.3 & \(\mathcal{O}(1)\) & \(-\) \\ \end{tabular}
\end{table}
Table 1: Comparison between the algorithms that solve MEMP in the full-grid scenario.
this case:
\[\mathcal{C}(G,u) =2\sum_{x=1}^{\overline{R}-1}\Gamma(u,x)+\Gamma(u,0) \tag{7}\] \[\mathcal{C}(G,v) =\sum_{x=1}^{\ell-1}\Gamma(v,x)+\sum_{x=1}^{R-\ell}\Gamma(v,x)+ \Gamma(v,0) \tag{8}\]
Subtracting \(\mathcal{C}(G,u)\) from \(\mathcal{C}(G,v)\), one has:
\[\sum_{x=\overline{R}}^{\ell-1}\Gamma(u,x)-\sum_{z=R-\ell+1}^{\overline{R}-1}\Gamma(u,z)\geq 0 \tag{9}\]
In Eq. (9) \(x\geq\overline{R}\), while \(z<\overline{R}\), and accordingly it holds \(\Gamma(u,x)\geq\Gamma(u,z)\). Hence \(\mathcal{C}(G,u)\leq\mathcal{C}(G,v)\) for any \(v\). Similarly, the result can be proven when \(\ell<\overline{R}\).
Recall the notion of _column-cost_ defined in Section 3.2. Algebraically, the following properties can be proven about the column-cost:
**Lemma 2**.:
1. _Both_ \(\Delta_{E}(j)\) _and_ \(\Delta_{M}(j)\) _increase with_ \(j\)_;_
2. \(\Delta_{M}(j)-\Delta_{M}(j-t)=t\cdot R\) _for_ \(j\geq t\)_;_
3. \(\Delta_{E}(j)-\Delta_{E}(j-t)\) _is positive and strictly monotone increasing with_ \(j\) _for all_ \(j>t\)_;_
4. \(\Delta_{E}(j)-\Delta_{E}(j-t)\geq t(\Delta_{E}(j-t+1)-\Delta_{E}(j-t))\)_,_ \(\Delta_{E}(j)-\Delta_{E}(j-t)\leq t(\Delta_{E}(j)-\Delta_{E}(j-1))\)_;_
5. \(\Delta_{E}(j)<\Delta_{M}(j)\leq\sqrt{2}\Delta_{E}(j)\)_._
Proof.: We prove each point separately:
1. Let \(j_{1}>j_{2}\). From Eq. (2), it holds \(\sqrt{i^{2}+j_{1}^{2}}>\sqrt{i^{2}+j_{2}^{2}}\). From Eq. (3), it holds \(i+j_{1}>i+j_{2}\).
2. From Eq. (3), it holds \(\Delta_{M}(j)-\Delta_{M}(j-t)=t+2(\overline{R}-1)t+((R-1)\bmod 2)\,t=t\cdot R\).
3. From Eq. (2), it holds \(\sqrt{i^{2}+j^{2}}>\sqrt{i^{2}+(j-t)^{2}}\) for \(j>t\), so the difference is positive. Differentiating each summand of the difference with respect to \(j\) one obtains: \[\frac{1}{\sqrt{\left(i/j\right)^{2}+1}}-\frac{1}{\sqrt{\left(i/(j-t)\right)^{2}+1}}.\] Since \(t>0\) and \(j>j-t\), it holds \(\frac{i}{j-t}>\frac{i}{j}\), so this derivative is positive, confirming the strictly monotone increase for \(j>t\).
4. Observe that \(\Delta_{E}(j)-\Delta_{E}(j-t)\) can be rewritten as a telescoping sum of consecutive column differences, \(\sum_{z=0}^{t-1}\left(\Delta_{E}(j-z)-\Delta_{E}(j-z-1)\right)\). By applying Property 3 with \(t=1\), it holds \(\Delta_{E}(j)-\Delta_{E}(j-t)\geq t(\Delta_{E}(j-t+1)-\Delta_{E}(j-t))\) because \(\Delta_{E}(j-z)-\Delta_{E}(j-z-1)\) is increasing with \(j-z\). Similarly, it follows: \(\Delta_{E}(j)-\Delta_{E}(j-t)\leq t(\Delta_{E}(j)-\Delta_{E}(j-1))\).
5. Recalling the well-known inequality \(\sqrt{a^{2}+b^{2}}<a+b\leq\sqrt{2}\sqrt{a^{2}+b^{2}}\), valid for \(a,b>0\) (the right-hand side follows from the Cauchy-Schwarz inequality), it holds: \(\Delta_{E}(j)<\Delta_{M}(j)\leq\sqrt{2}\Delta_{E}(j)\).
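As a quick numerical sanity check of these properties (reusing the hypothetical `delta_E`/`delta_M` helpers sketched after Eqs. (2) and (3)):

```python
# Spot-check Properties 2 and 5 of Lemma 2 on a small instance.
R, j, t = 5, 4, 2
assert delta_M(j, R) - delta_M(j - t, R) == t * R               # Property 2
d_e, d_m = delta_E(j, R), delta_M(j, R)
assert d_e < d_m <= 2**0.5 * d_e                                # Property 5
```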
Having established in Theorem 1 that \(u^{*}\) is on the row \(\overline{R}\), the potential candidates for the median are vertices \((\overline{R},c)\). For a vertex that belongs to the middle row, say \((\overline{R},c)\), let \(\overline{\mathcal{C}}(c)=\mathcal{C}(G,(\overline{R},c))\). Selecting an arbitrary vertex \(u=(\overline{R},c_{u})\) as the candidate median, we can exploit the column-cost definition and decompose the cost \(\overline{\mathcal{C}}(c_{u})\) into \(C_{1},\ldots,C_{4}\) as defined in Eqs. (10a) and (10b). Both equations coincide when \(c_{u}=K\) because \(\Delta_{M}(0)=\Delta_{E}(0)\) and \(\Delta_{M}(0)+iR=\Delta_{M}(i)\).
\[\overline{\mathcal{C}}(c_{u})=\underbrace{\sum_{j=1}^{c_{u}-1}\Delta_{E}(j)+\sum_{j=0}^{K-c_{u}}\Delta_{E}(j)}_{C_{1}}+\underbrace{(C-K)\Delta_{E}(K-c_{u})+R\sum_{i=1}^{C-K}i}_{C_{2}}\qquad\text{if }c_{u}\leq K \tag{10a}\] \[\overline{\mathcal{C}}(c_{u})=\underbrace{KR(c_{u}-K)+\sum_{j=0}^{K-1}\Delta_{E}(j)}_{C_{3}}+\underbrace{\sum_{j=0}^{c_{u}-K-1}\Delta_{M}(j)+\sum_{j=1}^{C-c_{u}}\Delta_{M}(j)}_{C_{4}}\qquad\text{if }c_{u}\geq K \tag{10b}\]
We now prove some technical results to help to further reduce the set of median candidates. We first show, in Lemma 3, that the median cannot be "too close" to the left border of \(G\).
**Lemma 3**.: _The column \(c^{*}\) of the DP \(u^{*}=(\overline{R},c^{*})\) of \(G=(R,C,K)\) cannot be in the interval \([1,\ldots,\overline{K}-1]\), where \(\overline{K}=\lceil\frac{K}{2}\rceil\) is the column that halves the Euclidean sub-grid._
Proof.: This is equivalent to saying that, if \(c_{u}\in[1,\overline{K}-1]\), then \(\overline{\mathcal{C}}(c_{u})>\overline{\mathcal{C}}(\overline{K})\).
From Eq. (10a), we notice that the cost \(C_{1}\) increases if \(c_{u}<\overline{K}\) because \(c_{u}\) is a sub-optimal solution for the median of the Euclidean sub-grid in \(G\). Namely, \(\overline{K}\) is the median of an EM-grid \(G^{\prime}=(R,K,K)\). Moreover, by Lemma 2 (Property 1), the cost \((C-K)\Delta_{E}(K-c_{u})>(C-K)\Delta_{E}(K-\overline{K})\) because \(K-c_{u}>K-\overline{K}\).
Next, we show, in Lemma 4 that \(\overline{\mathcal{C}}(c_{u})\) is convex when \(c_{u}\) varies from \(\overline{K}\) to \(K\).
**Lemma 4**.: _Varying \(c_{u}\) from \(\overline{K}\) to \(K\), the delivery cost function \(\overline{\mathcal{C}}(c_{u})\) has a single minimum._
Proof.: Let \(t\), with \(\overline{K}\leq t\leq K-1\), be the column where the delivery cost attains its first minimum. Therefore from Eq. (10a) we have that:
\[\overline{\mathcal{C}}(t+1)-\overline{\mathcal{C}}(t) =\Delta_{E}(t+1)-\Delta_{E}(K-t)\] \[+(C-K)(\Delta_{E}(K-(t+1))-\Delta_{E}(K-t))\geq 0 \tag{11}\]
To prove that the cost function has exactly one minimum in the interval \([\overline{K},\ldots,K]\), it is sufficient to show that \(\overline{\mathcal{C}}(t+2)-\overline{\mathcal{C}}(t+1)\geq 0\) given that \(\overline{\mathcal{C}}(t+1)-\overline{\mathcal{C}}(t)\geq 0\).
First, observe by Lemma 2 (Property 3) that:
\[\Delta_{E}(K-(t+2))-\Delta_{E}(K-(t+1))\geq\Delta_{E}(K-(t+1))-\Delta_{E}(K-t) \tag{12}\]
Then,
\[\overline{\mathcal{C}}(t+2)-\overline{\mathcal{C}}(t+1) =\Delta_{E}(t+2)-\Delta_{E}(K-(t+1))\] \[+(C-K)(\Delta_{E}(K-(t+2))-\Delta_{E}(K-(t+1)))\] \[\geq\Delta_{E}(t+2)-\Delta_{E}(K-(t+1))\] \[+(C-K)(\Delta_{E}(K-(t+1))-\Delta_{E}(K-t))\] \[=\underbrace{\Delta_{E}(t+2)-\Delta_{E}(t+1)}_{\geq 0}+ \underbrace{\Delta_{E}(K-t)-\Delta_{E}(K-(t+1))}_{\geq 0} \tag{13}\] \[+\overline{\mathcal{C}}(t+1)-\overline{\mathcal{C}}(t)\geq \overline{\mathcal{C}}(t+1)-\overline{\mathcal{C}}(t)\geq 0 \tag{14}\]
So we now know that the optimal column cannot be in the interval \([1,\ldots,\overline{K}-1]\), and that \(\overline{\mathcal{C}}(c_{u})\) has a single minimum in the interval \([\overline{K},K]\). We can have two cases here: \(K\leq\overline{C}\) and \(K>\overline{C}\). Lemma 5 shows that when \(K\leq\overline{C}\), there is only one vertex candidate as the median on the Manhattan side, while Lemma 6 establishes that when \(K>\overline{C}\), the median column cannot be greater than \(\overline{C}\).
**Lemma 5**.: _Let \(K\leq\overline{C}\). Varying \(c_{u}\) in the Manhattan side, \(K\leq c_{u}\leq C\), the delivery cost function \(\overline{\mathcal{C}}(c_{u})\) has a single minimum at \(\overline{C}\)._

Proof.: Since the candidate median is on the Manhattan side, the delivery cost is expressed by Eq. (10b). Moving the candidate from \((\overline{R},c_{u})\) to \((\overline{R},c_{u}+1)\), the vertices in columns up to \(c_{u}\) (including column \(c_{u}\) itself) increase their distance from the median by exactly one, whereas the vertices in the columns to the right of \(c_{u}\) decrease their distance from the median by one. Hence, the delivery cost function decreases as long as \(K\leq c_{u}<\overline{C}\) and, by the same counting argument, increases afterwards.
**Lemma 6**.: _Let \(K>\overline{C}\). The value of the median candidate column cannot be greater than \(\overline{C}\)._
Proof.: We can first exclude any candidate with \(c_{u}\geq K\) because, as in the proof of Lemma 5, the delivery cost function \(\overline{\mathcal{C}}(c_{u})\) is increasing for \(c_{u}\geq K\) when \(K>\overline{C}\). Moreover, in order to exclude the candidates in the range \((\overline{C},K]\), we first prove that \(\overline{\mathcal{C}}(\overline{C}+1)>\overline{\mathcal{C}}(\overline{C})\). Namely,
\[\overline{\mathcal{C}}(\overline{C}+1)-\overline{\mathcal{C}}( \overline{C}) =\Delta_{E}(\overline{C}+1)-\Delta_{E}(K-\overline{C})\] \[-(C-1-K)\left(\Delta_{E}(K-\overline{C})-\Delta_{E}(K-(\overline {C}+1))\right)\geq 0 \tag{15}\]
because, by Properties 4 and 3 of Lemma 2,
\[\Delta_{E}(\overline{C}+1)-\Delta_{E}(K-\overline{C}) >\left(2\overline{C}+1-K\right)\left(\Delta_{E}(K-\overline{C}+1) -\Delta_{E}(K-\overline{C})\right)\] \[>(2\overline{C}+1-K)\left(\Delta_{E}(K-\overline{C})-\Delta_{E}( K-\overline{C}-1)\right)\] \[\geq(C-1-K)\left(\Delta_{E}(K-\overline{C})-\Delta_{E}(K-( \overline{C}+1))\right) \tag{16}\]
Since we have proven in Lemma 4 that if the cost function is increasing in one vertex of row \(\overline{R}\), then it is increasing in all the vertices on its right, there are no candidates for the median greater than \(\overline{C}\).
Then, Theorem 2 follows immediately from Theorem 1, and Lemmas 3, 5, and 6.
**Theorem 2**.: _Suppose \(u^{*}=(r^{*},c^{*})\) be the median for \(G=(R,C,K)\). Let \(c,\overline{K}\leq c\leq K\) be such that_
\[\overline{\mathcal{C}}(c)=\min_{i\in[\overline{K},K]}\{\overline{\mathcal{C}}( i)\}\]
_and, let \(c^{\prime},\overline{K}\leq c^{\prime}\leq\overline{C}\) be such that_
\[\overline{\mathcal{C}}(c^{\prime})=\min_{i\in[\overline{K},\overline{C}]}\{ \overline{\mathcal{C}}(i)\}\]
_Then, \(r^{*}=\overline{R}\), and_
\[c^{*}=\left\{\begin{aligned} c&\text{if }K\leq \overline{C}&\text{ and }&\overline{\mathcal{C}}(c)<\overline{\mathcal{C}}(\overline{C})\\ \overline{C}&\text{if }K\leq\overline{C}&\text{ and }&\overline{\mathcal{C}}(c)\geq\overline{\mathcal{C}}(\overline{C})\\ c^{\prime}&\text{otherwise}\end{aligned}\right.\]
### The Optimal Algorithm OPT-F
Theorem 2 directly translates to algorithm OPT-F (see the pseudo-code in Algorithm 1) that optimally solves MEMP in the full-grid scenario.
Given a closed interval of column numbers, the procedure find-minimum returns the column number, \(c\), in that interval such that the total delivery cost from the DP \((\overline{R},c)\) is the least over all the vertices \((\overline{R},i)\) for \(i\) in the given closed interval.
#### 4.2.1 Time Complexity of Algorithm OPT-F
Note that the minimum in \([\overline{K},K]\), returned by invoking the procedure find-minimum, can be found by applying a binary search due to the unimodality proven in Lemma 4. Similarly, when \(K>\overline{C}\) (Line 10), the minimum is in the interval \([\overline{K},\overline{C}]\), i.e., a sub-interval of \([\overline{K},K]\). Thus, as above, it can be found through the find-minimum procedure in Line 11. The time complexity of the find-minimum procedure is logarithmic in the width of the sub-interval where the minimum resides. The interval has a width of \(\frac{K}{2}\) in Line 2, and a width of \(\frac{C}{2}-\frac{K}{2}\leq K-\frac{K}{2}=\frac{K}{2}\) in Line 11 since \(K\geq\frac{C}{2}\). Thus, we can conclude that, in each case, the time complexity of find-minimum is \(\mathcal{O}(\log K)\).
With regards to the time complexity of the OPT-F algorithm, we observe that for a fixed \(c_{u}\), the delivery cost \(\overline{\mathcal{C}}(c_{u})\) can be computed in \(\mathcal{O}(1)\) time by applying Eq. (10a) if the prefix sums of the column-cost are computed in a pre-processing phase. The column-costs \(\Delta_{E}(j)\) and their prefix sums \(\sum_{t=1}^{j}\Delta_{E}(t)\), for \(1\leq j\leq K\), can be computed and memorized in a vector in \(\mathcal{O}(RK+K)\) time. For \(1\leq j\leq K\), the \(K\) column-costs \(\Delta_{M}(j)\) and their prefix sums \(\sum_{t=1}^{j}\Delta_{M}(t)\) can be computed and memorized in a vector in \(\mathcal{O}(K)\) time by using the closed-form in Eq. (3). Then, assuming that the prefix-sums of the column-costs are given as input to the algorithm, i.e., they are computed in a pre-processing phase, each \(\overline{\mathcal{C}}(c_{u})\) can be computed in constant time. Thus, the optimal point \(u^{*}\) can be computed by the OPT-F algorithm in \(\mathcal{O}(\log K)\) time.
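To make the procedure concrete, the following is a minimal Python sketch of OPT-F (this is not the authors' Algorithm 1; the helpers `cost` and `find_minimum` and the direct per-candidate cost evaluation are our own simplifications, so this sketch runs in \(\mathcal{O}(C\log K)\) rather than \(\mathcal{O}(\log K)\)). It assumes the `delta_E`/`delta_M` helpers sketched in Section 3.2:

```python
def opt_f(R, C, K):
    """Sketch of OPT-F (Theorem 2), returning the optimal DP (R_bar, c*).
    For clarity, cost() re-evaluates the delivery cost in O(C) per candidate;
    the paper's O(log K) bound relies on precomputed prefix sums instead."""
    R_bar, C_bar, K_bar = (R + 1) // 2, (C + 1) // 2, (K + 1) // 2

    def cost(c):
        # Total delivery cost from the candidate DP (R_bar, c).
        if c <= K:  # candidate on the Euclidean side
            eucl = sum(delta_E(abs(c - col), R) for col in range(1, K + 1))
            # Manhattan vertices are reached through the border (Lemma 1).
            manh = (C - K) * delta_E(K - c, R) + R * sum(range(1, C - K + 1))
        else:       # candidate on the Manhattan side
            eucl = K * R * (c - K) + sum(delta_E(K - col, R) for col in range(1, K + 1))
            manh = sum(delta_M(abs(c - col), R) for col in range(K + 1, C + 1))
        return 2 * (eucl + manh)

    def find_minimum(lo, hi):
        # Binary search for the unique minimum on [lo, hi] (unimodality: Lemma 4).
        while lo < hi:
            mid = (lo + hi) // 2
            if cost(mid) <= cost(mid + 1):
                hi = mid
            else:
                lo = mid + 1
        return lo

    if K <= C_bar:
        c = find_minimum(K_bar, K)
        # Compare with the single Manhattan-side candidate C_bar (Lemma 5).
        return (R_bar, c if cost(c) < cost(C_bar) else C_bar)
    return (R_bar, find_minimum(K_bar, C_bar))  # median is at most C_bar (Lemma 6)
```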
### Approximation and Heuristic Algorithms
In the previous section, we presented an algorithm that optimally solves MEMP in the full-grid scenario, taking logarithmic time in the number of columns of the grid.
In this section, we present three constant-time algorithms, CMALL-F, CEMB-F, and CMEB-F, for solving MEMP in the full-grid scenario. We also establish an upper bound on the approximation ratio for algorithm CMALL-F. In Section 6.2.1, we present comparative empirical performance evaluation of these three approximation algorithms using synthetic data.
#### 4.3.1 Algorithm CMALL-F
Algorithm CMALL-F returns the Manhattan-median \(u_{M}=(\overline{R},\overline{C})\) as the DP. That is, CMALL-F operates as if the grid were a full Manhattan grid (i.e., \(K=1\)). This algorithm is sub-optimal for \(1<K<C\), providing a guaranteed approximation bound of \(\sqrt{2}\), while it is optimal when \(K=1\) or \(K=C\).
**Lemma 7**.: _The CMALL-F algorithm provides a \(\sqrt{2}\) approximation ratio when \(1<K<C\)._
Proof.: Let \(u^{*}\) be the median for \(G=(R,C,K)\). Since the Euclidean column-cost \(\Delta_{E}(j)\) is smaller than the Manhattan column-cost \(\Delta_{M}(j)\), \(\mathcal{C}(G,u_{M})\leq\mathcal{C}_{M}(G,u_{M})\), where \(\mathcal{C}_{M}(G,u_{M})\) is the cost when \(K=1\). Moreover, \(\mathcal{C}(G,u^{*})>\mathcal{C}_{E}(G,u^{*})\) because the Manhattan distance is at least as much as the Euclidean distance. Then, \(\mathcal{C}_{E}(G,u^{*})>\mathcal{C}_{E}(G,u_{M})\) because \(u_{M}\) is the Euclidean-median of \(G=(R,C,C)\). Thus, by the Cauchy-Schwarz inequality in Lemma 2:
\[\frac{\mathcal{C}(G,u_{M})}{\mathcal{C}(G,u^{*})}<\frac{\mathcal{C}_{M}(G,u_{ M})}{\mathcal{C}_{E}(G,u_{M})}\leq\sqrt{2}.\]
The CMALL-F algorithm finds the point \(u_{M}\) in constant time, so its time complexity is \(\mathcal{O}(1)\).
#### 4.3.2 Algorithm CEMB-F
Algorithm CEMB-F solves MEMP with full-grid, and selects the DP \(u_{\hat{C}}=(\overline{R},\mu)\) with
\[\mu=\frac{(\sum_{i=1}^{K}i)+(C-K)K}{C}=\frac{K(2C-K+1)}{2C} \tag{17}\]
CEMB-F imagines that all the Manhattan destinations move on the border \(B\). Thus, the grid becomes a full Euclidean grid \(E^{\prime}\) with \(K\) columns, whose rightmost column has multiplicity \(w_{v}=C-K\). So, minimizing Eq. (10a) is the same as finding the Euclidean-median of \(E^{\prime}\). Unfortunately, there is no closed form to compute the exact Euclidean-median for a set of points1.
Footnote 1: We only know the Euclidean-median of a Euclidean grid \(G=(R,C,C)\), whose columns have all multiplicity \(1\).
Algorithm CEMB-F finds the point \(u_{\hat{C}}\) using a constant-time computation, so its time complexity is \(\mathcal{O}(1)\).
#### 4.3.3 Algorithm CMEB-F
Algorithm CMEB-F imagines that all the Euclidean destinations are moved on the border \(B\). Thus, the grid becomes a full Manhattan grid \(M^{\prime}\) with \(C-K\) columns, whose leftmost column has multiplicity \(w_{v}=K\). Therefore, minimizing Eq. (10b) is the same as finding the Manhattan-median of the grid \(M^{\prime}\). Algorithm CMEB-F solves MEMP with full-grid, and selects the DP \(u_{\tilde{M}}=(\overline{R},\mu)\) with \(\mu=K\) if \(K\geq\overline{C}\) or with \(\mu=\overline{C}\) if \(K<\overline{C}\). In either case, \(u_{\tilde{M}}\) is computed in constant time.
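For comparison, all three constant-time DPs can be sketched in a few lines (again with our own naming conventions; rounding the fractional \(\mu\) of Eq. (17) to the nearest grid column is our assumption, as the discretization is left implicit here):

```python
def cmall_f(R, C, K):
    """CMALL-F: the Manhattan-median of the whole grid (sqrt(2)-approx., Lemma 7)."""
    return ((R + 1) // 2, (C + 1) // 2)

def cemb_f(R, C, K):
    """CEMB-F: Manhattan destinations collapsed onto the border; mu from Eq. (17)."""
    mu = K * (2 * C - K + 1) / (2 * C)
    return ((R + 1) // 2, round(mu))  # rounding to a grid column is our assumption

def cmeb_f(R, C, K):
    """CMEB-F: Euclidean destinations collapsed onto the border."""
    C_bar = (C + 1) // 2
    return ((R + 1) // 2, K if K >= C_bar else C_bar)
```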
## 5 Solving MEMP with Partial-Grid Scenario
In this section, we focus on MEMP in the partial-grid scenario. In this case, we are given \(H\subset G\) as the \(n\) delivery points, i.e., \(|H|=n\).
We first discuss the trivial cases where \(K=1\), i.e., a full Manhattan grid, and \(K=C\), i.e., a full Euclidean grid (see Section 5.1). The remaining subsections deal with the general case.
In Sections 5.2 and 5.3 we propose two heuristic algorithms, CEMB-P and CMEB-P, assuming the optimal DP is in the Euclidean side of the grid, and in the Manhattan side of the grid, respectively. Note that neither of these two algorithms returns the optimal DP. Algorithm CEMB-P returns the best DP candidate in \(E\), whereas CMEB-P returns the best DP candidate in \(M\).
Then, in Section 5.4 we combine the above two heuristic algorithms and present algorithm OPT-P to find the optimal median, the DP, in \(G\).
In Sections 5.5 and 5.6 we present two algorithms, S-OPT-P and CMALL-P, respectively, that each compute a sub-optimal DP, but are more efficient than Algorithm OPT-P.
To summarize the above, we tabulate in Table 2 the presented algorithms that solve MEMP in the partial-grid scenario along with their time complexities and guaranteed approximation bounds. In Section 6.2.2, we present comparative empirical performance evaluation of these algorithms using synthetic data, while in Section 6.3, we compare these algorithms using quasi-real data.
\begin{table}
\begin{tabular}{c|c c c c}
Point & Algorithm & Section & Time complexity & Approximation ratio \\ \hline
\(u_{\hat{C}}\) & CEMB-P & 5.2 & \(\mathcal{O}(nR\log K)\) & \(-\) \\
\(u_{\hat{M}}\) & CMEB-P & 5.3 & \(\mathcal{O}(nR)\) & \(-\) \\
\(u^{*}\) & OPT-P & 5.4 & \(\mathcal{O}(nR\log K)\) & \(1\) \\
\(\overline{u}\) & S-OPT-P & 5.5 & \(\mathcal{O}(n\log R\log K)\) & \(-\) \\
\(u_{M}\) & CMALL-P & 5.6 & \(\mathcal{O}(n)\) & \(-\) \\
\end{tabular}
\end{table}
Table 2: Comparison between the algorithms that solve MEMP in the partial-grid scenario.
### \(K=1\) or \(K=C\)
We first discuss how to optimally solve MEMP to serve a subset of customers on a grid that is not mixed, i.e., either Manhattan or Euclidean. In a Manhattan grid (i.e., an EM-grid with \(K=1\)), given a subset \(H\subset G\) of customers, MEMP with partial-grid scenario can be trivially solved in \(\mathcal{O}(|H|)\) time by selecting \(u^{*}=(r_{u^{*}},c_{u^{*}})\) where \(r_{u^{*}}\) is the median of the row coordinate of the customers and \(c_{u^{*}}\) is the median of the column coordinate of the customers (see [44]).
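As an illustration, a short Python sketch of this coordinate-wise median follows (the naming is ours; sorting is used for brevity, so this simple version is \(\mathcal{O}(|H|\log|H|)\), while a linear-time selection algorithm achieves the stated \(\mathcal{O}(|H|)\)):

```
def manhattan_median(H):
    """Optimal DP for a subset H on a full Manhattan grid (K = 1):
    the row and column medians are taken independently (see [44])."""
    rows = sorted(r for r, _ in H)
    cols = sorted(c for _, c in H)
    # Lower median as one convention when |H| is even.
    return (rows[(len(rows) - 1) // 2], cols[(len(cols) - 1) // 2])

H = [(1, 2), (3, 8), (4, 5), (7, 5), (2, 9)]
print(manhattan_median(H))  # -> (3, 5)
```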
In the literature, there are many results concerning the problem, called _geometric median_, of determining the DP that minimizes the sum of distances between the points of a given set \(H\subset\mathbb{R}^{d}\) and the DP itself. In two-dimensional space, the geometric median is referred to as the Euclidean-median. Although the Euclidean-median is unique and the sum \(\mathcal{C}(H,u)\) of the distances from each customer to the DP \(u\) is positive and strictly convex in \(\mathbb{R}^{d}\), as proven by [45], there is no exact closed-form expression for the Euclidean-median of an arbitrary set \(H\) of real points. In a Euclidean grid (i.e., an EM-grid with \(K=C\)), MEMP with partial-grid scenario can be solved more easily because the candidate positions in the plane are just the vertices of \(G\), and the number of attempts to determine the single Euclidean-median is limited by the fact that the EM-grid is formed by \(R\) rows and \(C\) columns. So, a trivial solution for MEMP with partial-grid takes \(\mathcal{O}(RC|H|)\) time because for each position \(u\) of the grid the cost \(\mathcal{C}(H,u)\) can be computed in \(\mathcal{O}(|H|)\) time. Exploiting the fact that, in \(\mathbb{R}^{2}\), \(\mathcal{C}(H,u)\) is positive and strictly convex when \(u\) moves on a single row of the grid, the position in a row that provides the minimum cost \(\mathcal{C}(H,u)\) can be computed by a slightly modified binary search. So, MEMP with a subset of customers on a Euclidean grid can be solved in \(\mathcal{O}(R|H|\log C)\) time because for each row of the grid only \(\log C\) candidates are tested.
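A sketch of the slightly modified binary search just mentioned, assuming that the per-row cost is unimodal in the column index (a consequence of the strict convexity proven in [45]); the function and variable names are ours:

```
import math

def cost(H, u):
    # Sum of Euclidean distances from the customers in H to the DP u.
    return sum(math.dist(u, v) for v in H)

def find_minimum_on_row(i, H, lo, hi):
    """Minimum-cost column on row i within [lo, hi], using
    O(log(hi - lo)) cost evaluations; assumes cost(H, (i, j)) is
    unimodal in j."""
    while lo < hi:
        mid = (lo + hi) // 2
        if cost(H, (i, mid)) < cost(H, (i, mid + 1)):
            hi = mid        # the minimum lies at mid or to its left
        else:
            lo = mid + 1    # the minimum lies strictly to the right
    return (i, lo)
```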
In the next section, for the general case, i.e., the mixed-grid with \(1<K<C\), we leverage the observation that the median \(u^{*}\) resides either in \(M\) or in \(E\).
### The CEMB-P Algorithm
As noted above, CEMB-P solves MEMP assuming that \(u^{*}\in E\) and returns \(u_{\hat{C}}\in E\). Algorithm CEMB-P selects as the DP the point in the Euclidean side \(E\) that returns the minimum cost.
For any DP \(u\in E\), by Lemma 1, for each point \(v\) in \(H_{M}\) (the subset of \(H\) that contains the vertices in the Manhattan side), the drone must travel horizontally from the border \(B\) to \(v\). So, the drone covers the same fixed cost in the Manhattan grid regardless of the position of \(u\in E\). This motivates our algorithm whose pseudo-code is presented in Algorithm 2.
We first construct the multi-set \(H^{\prime}=H_{E}\cup H^{\prime}_{M}\), where \(H^{\prime}_{M}\) contains the points of \(H_{M}\) projected onto the border \(B\) (Algorithm 2, Line 1), and then search for the minimum-cost DP, accounting for the possible multiplicity of points in \(H^{\prime}\). Towards this, for each row (Algorithm 2, Line 3) we evaluate the minimum from the first to the \(K^{th}\) column, and update the overall minimum, if necessary. By exploiting the fact that in the Euclidean space the function \(\mathcal{C}(H,u)\) to minimize is positive and strictly convex (see [45]), we know that \(\mathcal{C}(H,u)\) is unimodal when fixing a row and varying the columns. Hence, we can calculate the minimum on each row by performing a time-efficient binary search that requires logarithmic time (Algorithm 2, Line 4). Since the cost of movement in \(M\) does not change the minimum, the returned value, \(u_{\hat{C}}\), is the best DP in \(E\).
```
1:\(H^{\prime}_{M}\leftarrow\{(r_{u},K)\in B\mid u\in H_{M}\},H^{\prime}\gets H _{E}\cup H^{\prime}_{M}\);
2:\(u_{\hat{C}}\leftarrow\varnothing,cost\leftarrow+\infty\);
3:for\(i\in 1,\ldots,R\)do
4:\(u^{*}_{i}\leftarrow\texttt{find-minimum-on-row}(i,H^{\prime},1,K)\);
5:if\(\mathcal{C}(H^{\prime},u^{*}_{i})<cost\)then
6:\(u_{\hat{C}}\gets u^{*}_{i},cost\leftarrow\mathcal{C}(H^{\prime},u^{*}_{i})\);
7:endif
8:endfor
9:
10:return\(u_{\hat{C}}\)
```
**Algorithm 2**The CEMB-P Algorithm
About the time complexity, due to the fact that there are \(n\) points to serve, and since we perform \(R\) binary searches (one for each row of a Euclidean grid with \(K\) columns), the total cost of CEMB-P is \(\mathcal{O}(nR\log K)\).
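Putting the pieces together, a compact Python sketch of CEMB-P follows, reusing the `cost` and `find_minimum_on_row` helpers sketched in Section 5.1. The names are ours, and the cost computed on \(H^{\prime}\) omits the fixed Manhattan leg, which does not affect the argmin:

```
def cemb_p(R, K, H_E, H_M):
    """CEMB-P sketch: best DP in the Euclidean side E (columns 1..K).
    Customers in H_M are projected onto the border column K (Lemma 1)."""
    H_prime = list(H_E) + [(r, K) for r, _ in H_M]
    best_u, best_cost = None, float("inf")
    for i in range(1, R + 1):                      # R rows ...
        u = find_minimum_on_row(i, H_prime, 1, K)  # ... O(log K) each
        c = cost(H_prime, u)
        if c < best_cost:
            best_u, best_cost = u, c
    return best_u   # overall O(nR log K), as stated above
```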
### The CMEB-P Algorithm
Algorithm CMEB-P solves MEMP assuming that \(u^{*}\in M\) and returns \(u_{\hat{M}}\in M\).
For any DP \(u\) in \(M\), by Lemma 1, the drone has to fly through the projection of \(u\) on \(B\) to serve any point of \(H_{E}\). Thus, for any DP \(u\), there is one intermediate point in \(B\) for all drone paths from \(u\) to points in \(H_{E}\). Note that the intermediate point does not depend on the column \(c_{u}\) of the DP \(u\). This motivates our algorithm, whose pseudo-code is presented in Algorithm 3.
We first compute the column median, \(\chi\), of the points in \(H^{\prime}\), consisting of the points in \(H_{M}\) and the points in \(H_{E}\) moved to the border. The function column-median takes into account the possible multiplicity of points in \(H^{\prime}\). Since we are only concerned with the column numbers, we can move each point in \(H_{E}\) to any point on the border; we move all of them to \((1,K)\) (Algorithm 3, Lines 2 and 3).
Since the median in \(M\) can reside in \(R\) different rows, we have \(R\) candidate intermediate points, which are the points of the border \(B\). For each row \(i\), we construct the multi-set \(H^{\prime}_{E}(i)\) consisting of all the points in \(H_{E}\) moved to the intermediate point \((i,K)\) on the border \(B\) (Algorithm 3, Line 5). We then consider the point \(u^{*}_{i}=(i,\chi)\in M\) as the median of the points in the multi-set \(H^{\prime}=H_{M}\cup H^{\prime}_{E}(i)\).
With \(u_{i}^{*}\) as the DP, the cost of delivery is the sum of two costs: (1) the cost of flying the drone between \(u_{i}^{*}\) and each point in \(H^{\prime}\), with all distances calculated according to the Manhattan metric, and (2) the cost of flying the drone between each point in \(H_{E}\) and the intermediate point corresponding to \(u_{i}^{*}\), with all distances calculated according to the Euclidean metric (Algorithm 3, Line 8).
The algorithm returns as \(u_{\hat{M}}\) the intermediate point that witnesses the least cost, over all the possible intermediate points, and is thus the best DP in \(M\).
```
1:\(u_{\hat{M}}\leftarrow\varnothing,c\leftarrow+\infty\);
2:\(H^{\prime}_{E}(1)\leftarrow\{(1,K)\mid u\in H_{E}\}\), \(H^{\prime}\gets H^{\prime}_{E}(1)\cup H_{M}\);
3:\(\chi\leftarrow\texttt{column-median}(H^{\prime})\);
4:for\(i\in 1,\ldots,R\)do
5:\(H^{\prime}_{E}(i)\leftarrow\{(i,K)\mid u\in H_{E}\}\), \(H^{\prime}\gets H^{\prime}_{E}(i)\cup H_{M}\);
6:\(u_{i}^{*}\leftarrow(i,\chi)\);
7:\(C_{i}\leftarrow\mathcal{C}(H^{\prime},u_{i}^{*})+cost(H_{E}\rightarrow(i,K))\);
8:if\(C_{i}<c\)then
9:\(u_{\hat{M}}\gets u_{i}^{*}\);
10:\(c\gets C_{i}\);
11:endif
12:endfor
13:
14:return\(u_{\hat{M}}\)
```
**Algorithm 3**The CMEB-P Algorithm
As for the complexity of the algorithm, we compute the column median of \(n\) points once. Then, for every row \(i\), the cost \(cost(H_{E}\rightarrow(i,K))\) has to be computed, and this requires \(\mathcal{O}(|H_{E}|)\) time. Thus, the algorithm's complexity is \(\mathcal{O}(n+R|H_{E}|)\).
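A direct (unoptimized) Python sketch of CMEB-P follows. For simplicity it recomputes the Manhattan sum on every row, giving \(\mathcal{O}(nR)\) instead of the tighter \(\mathcal{O}(n+R|H_{E}|)\) bound above, which would require updating the Manhattan sums incrementally across rows; names are ours:

```
import math

def cmeb_p(R, K, H_E, H_M):
    """CMEB-P sketch: best DP in the Manhattan side M, trying the
    intermediate border point (i, K) for every row i."""
    cols = sorted([c for _, c in H_M] + [K] * len(H_E))
    chi = cols[(len(cols) - 1) // 2]          # column-median of H'
    best_u, best_cost = None, float("inf")
    for i in range(1, R + 1):
        # Manhattan cost from (i, chi) to H_M, plus H_E collapsed on (i, K) ...
        c = sum(abs(i - r) + abs(chi - cc) for r, cc in H_M)
        c += len(H_E) * abs(chi - K)
        # ... plus the Euclidean legs from the border point to H_E.
        c += sum(math.dist((i, K), v) for v in H_E)
        if c < best_cost:
            best_u, best_cost = (i, chi), c
    return best_u
```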
### The Optimal OPT-P Algorithm
Having computed the best DPs on both sides of EM-grid we can now optimally solve MEMP with partial-grid scenario.
The OPT-P algorithm finds the optimal point \(u^{*}\) by selecting the better of \(u_{\hat{C}}\) and \(u_{\hat{M}}\). Given that Algorithm CEMB-P returns the best DP in \(E\), and Algorithm CMEB-P returns the best DP in \(M\), the simple idea of OPT-P is to compare these two points and return the better one, as follows:
\[u^{*}=\arg\min\{\mathcal{C}(H,u_{\hat{C}}),\mathcal{C}(H,u_{\hat{M}})\}. \tag{18}\]
About the time complexity, recalling that CEMB-P takes \(\mathcal{O}(nR\log K)\) and CMEB-P takes \(\mathcal{O}(nR)\), the overall time complexity of OPT-P is \(\mathcal{O}(nR\log K)\).
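In code, OPT-P reduces to a comparison of the two candidates. In the sketch below, `mixed_cost` is a hypothetical helper standing in for the EM-grid cost \(\mathcal{C}(H,u)\) of the paper's model; it is not defined here:

```
def opt_p(R, K, H_E, H_M):
    """OPT-P sketch (Eq. 18): the optimum is the better of the two
    side-restricted best DPs."""
    H = list(H_E) + list(H_M)
    candidates = [cemb_p(R, K, H_E, H_M), cmeb_p(R, K, H_E, H_M)]
    # mixed_cost: hypothetical helper computing the mixed EM cost C(H, u).
    return min(candidates, key=lambda u: mixed_cost(H, u, K))
```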
### The S-OPT-P Algorithm
Algorithm S-OPT-P applies the same strategy as OPT-P, i.e., of comparing the best among two points: the best one in the Euclidean grid, and the best one in the Manhattan grid. Nonetheless, it uses slightly different versions of both CEMB-P and CMEB-P.
In the modified version of Algorithm CEMB-P, we perform two binary searches: one on the rows and one on the columns up to the \(K^{th}\) column. In the modified version of Algorithm CMEB-P, we perform a single binary search on the rows. Thus, the time complexities of the two modified algorithms are \(\mathcal{O}(n\log R\log K)\) and \(\mathcal{O}(n\log R)\), respectively.
Thus, the overall time complexity of S-OPT-P is \(\mathcal{O}(n\log R\log K)\). However, this strategy does not guarantee that the returned point \(\overline{u}\) is optimal.
### The CMALL-P Algorithm
Essentially, Algorithm CMALL-P ignores the border that separates the Euclidean and the Manhattan sides, and computes the DP \(u_{M}\) as if \(G=(R,C,1)\). The algorithm returns
\[u_{M}=(\tilde{r}_{H},\tilde{c}_{H}), \tag{19}\]
where \(\tilde{r}_{H}\) and \(\tilde{c}_{H}\) are the individual medians of the rows and the columns, respectively, of the points in \(H\) (see [44]). Note that, also in this scenario, the median is not unique if \(|H|\) is even.
Although CMALL-P optimally solves MEMP in the case of \(G=(R,C,1)\), it is sub-optimal in the general case. Nonetheless, it works in linear time (see [46]) with respect to the number of points.
## 6 Performance Evaluation
In this section, we empirically compare the performance of our algorithms in terms of the quality of the solution (i.e., delivery cost), and their running times, for solving MEMP in both scenarios.
### Settings and Parameters
We implemented our algorithms in Python version 3.9, and ran all the instances on an Intel i7-860 computer with 12 GB of RAM. In order to evaluate our proposed algorithms for solving MEMP in both scenarios, we rely on synthetic and quasi-real delivery areas.
For the _synthetic case_ (Section 6.2), we set different layouts by varying \(R,C\in\{50,\ldots,400\}\) and \(1\leq K\leq C\). Then, we compare the algorithms with respect to the optimal one, and we plot the experimental ratio \(\rho=\frac{\mathcal{C}(H,\bar{u})}{\mathcal{C}(H,u^{*})}\geq 1\). In other words, for \(H\subseteq G\), \(\rho\) is the ratio between the total cost for serving all the required customers
from the DP \(\tilde{u}\) as returned by the tested algorithm, and the total cost of the optimal solution where the DP used is \(u^{*}\). When testing the full-grid scenario, we compare CMALL-F, CEMB-F, and CMEB-F with respect to the optimal algorithm OPT-F, while when testing the partial-grid scenario, we compare CMALL-P, CEMB-P, CMEB-P, and S-OPT-P with respect to the optimal algorithm OPT-P.
Moreover, in the partial-grid scenario, we uniformly generate \(n=|H|\) random positions inside the grid with \(n=\{5,\ldots,100\}\), and then return the _average_ ratio (along with the _standard deviation_) on 33 random instances. Also in the partial-grid scenario, given a setting with \(n\) random customers, we evaluate the algorithms when balancing the quantities \(n_{E}\) and \(n_{M}\) with respect to a certain fraction \(p=\{0,\frac{1}{3},\frac{1}{2},\frac{2}{3},1\}\) on \(n\), such that \(n_{E}=p\cdot n\) and \(n_{M}=(1-p)\cdot n\), with \(n=n_{E}+n_{M}\).
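A sketch of the evaluation harness for the partial-grid experiments, under the same protocol (uniform random customers, 33 instances), reusing the `opt_p` sketch from Section 5.4; `mixed_cost` is again the hypothetical EM-grid cost helper, and the assignment of border-column customers to \(E\) is our convention:

```
import random

def average_ratio(R, C, K, n, algorithm, trials=33):
    """Average experimental ratio rho over random instances."""
    ratios = []
    for _ in range(trials):
        H = [(random.randint(1, R), random.randint(1, C)) for _ in range(n)]
        H_E = [v for v in H if v[1] <= K]   # Euclidean-side customers
        H_M = [v for v in H if v[1] > K]    # Manhattan-side customers
        u_alg = algorithm(R, K, H_E, H_M)
        u_opt = opt_p(R, K, H_E, H_M)
        ratios.append(mixed_cost(H, u_alg, K) / mixed_cost(H, u_opt, K))
    return sum(ratios) / trials
```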
For the _quasi-real case_ (Section 6.3), we only test the more general partial-grid scenario on random instances extracted from real cities, like the ones shown in Figure 1. For these examples, we approximately extract the actual EM-grid from the map, and then we run our proposed algorithms. Obviously, real cities cannot be exactly modeled as EM-grids, since roads and buildings can be laid out arbitrarily. However, we have found interesting examples and run our algorithms on these layouts. Clearly, in the aforementioned grids of customers, some houses or skyscrapers can be missing.
### Results with Synthetic Data
#### 6.2.1 Full-Grid Scenario
We first analyze our empirical results of the performance of the algorithms with respect to the delivery costs, and then with respect to the running times.
_Delivery Costs_
Figure 3 compares the algorithms when solving MEMP with the full-grid scenario reporting, for each plot, the ratio \(\rho\) between the total cost of the tested algorithm and the optimal total cost.
CMALL-F performs very well since its DP \(u_{M}\) is always very close to \(u^{*}\). On the other hand, as expected, CEMB-F and CMEB-F are heavily affected by the value of \(K\). In fact, for small values of \(K\), CEMB-F performs poorly, while as \(K\) increases the ratio \(\rho\) tends to 1.
We note that the performance of CEMB-F is almost a reflection, in the vertical line \(\frac{K}{C}=0.5\), of the performance of CMEB-F. In particular, in Figure 3, we can observe that CMALL-F and CMEB-F perform similarly when \(0\leq\frac{K}{C}\leq 0.5\) (the two lines, i.e., the orange and the blue ones, almost coincide), and CMALL-F and CEMB-F perform similarly when \(0.5\leq\frac{K}{C}\leq 1\) (orange and green lines).
We also note that, when \(R=C\) (Figure 3, first row), the worst cases of CEMB-F and CMEB-F, i.e., the worst \(\rho\) values exhibited, almost coincide; when \(R>C\) (Figure 3, second row), the worst case of CMEB-F is slightly better than that of
CEMB-F; and when \(R<C\) (Figure 3, third row) the worst case of CMEB-F is worse than that of CEMB-F.
It is interesting to note that the performance of each algorithm is better when \(R>C\), than when \(R<C\).
The almost symmetric performance, across \(K/C=0.5\), exhibited by CEMB-F and CMEB-F motivates the design of a hybrid algorithm, BEST-F, that simply returns the best result among the two. In Figure 4 we compare CMALL-F against the best among CEMB-F and CMEB-F (highlighted as BEST-F). Since the experimental results are very close to the optimum ratio, which is 1, here we reduce the scale along the \(y\)-axis. In general, BEST-F almost always outperforms CMALL-F, and it is very close to OPT-F. For instance, when \(R=150\) and \(C=50\), BEST-F is constantly 1, and so we can state that when \(R\gg C\), BEST-F is comparable to OPT-F. With the scaled \(y\)-axis in Figure 4, it becomes clear that CMALL-F has the worst performance when \(\frac{K}{C}\approx 0.5\). Nevertheless, under these circumstances, the ratio \(\rho\) is very low. It is worth noting that in the case of \(C\gg R\), BEST-F is not better than CMALL-F (e.g., \(R=50,C=400\)). In fact, when \(C=2R\) BEST-F is preferable, but when \(C=8R\) (or more) it seems that CMALL-F is more consistent. However, in hypothetical real cases with a reasonable mix of rows and columns, BEST-F basically performs like the optimal OPT-F, albeit more efficiently, since BEST-F is a constant-time algorithm. Finally, it can be seen that CMALL-F is always below the guaranteed threshold \(\sqrt{2}\) on each plot.
_Running Times_
In Figure 5 we report the experimental running time (in milliseconds) of all the algorithms with the full-grid scenario. In particular, the constant-time algorithms like CMALL-F take, on average, 2-6 ms, and therefore the graphs of their running times almost coincide. On the other hand, the optimal logarithmic-time algorithm OPT-F takes much more time - in the order of tenths of a second (in the worst case, 200 ms). The experimental running time of the OPT-F algorithm is low when the grid is either almost Manhattan or almost Euclidean, i.e., when \(\frac{K}{C}\to 0\) or \(\frac{K}{C}\to 1\), respectively.
In the following we will show that the behavior of OPT-F is consistent with the claimed time complexity \(\mathcal{O}(\log K)\). To better understand this behavior we discuss the following example (for almost Manhattan grids): When \(K\) is small with respect to \(C\), say \(\frac{K}{C}=0.25\) (see Figure 5), Algorithm OPT-F evaluates the column number, in the interval \([\overline{K},K]\), with the least cost (Line 2 of Algorithm 1). Specifically, it performs an efficient binary search in the interval \([\overline{K},K]\). If we assume that \(\frac{K}{C}=0.25\) or \(\frac{K}{C}=0.5\) (Line 2 of Algorithm 1), then the interval \([\overline{K},K]\) has width \(\frac{K}{2}\). Since, when \(C\) is fixed, varying \(\frac{K}{C}\) from 0.25 to 0.5 increases the interval width \(\frac{K}{2}\) from \(\frac{C}{8}\) to \(\frac{C}{4}\), the number of steps required by the OPT-F algorithm increases accordingly, logarithmically in \(\overline{K}\).
When \(K\) is large with respect to \(C\), i.e., the grid is almost all Euclidean, say \(\frac{K}{C}=0.75\) (see Line 11 of Algorithm 1), the OPT-F algorithm performs an efficient
binary search in the interval \([\overline{K},\overline{C}]\). Recall that \(\overline{C}=\frac{C}{2}\). If we assume that \(\frac{K}{C}=0.75\), then \(K=\frac{3}{4}C\) and \(\overline{K}=\frac{K}{2}=\frac{3}{8}C\), and therefore the interval \([\overline{K},\overline{C}]\) has width \(\frac{C}{8}\). The time complexity is therefore logarithmic in \(\frac{C}{8}\), and hence comparable with that for \(\frac{K}{C}=0.25\), as reported in Figure 5.
It is very important to note that the plots in Figure 5 also include the pre-processing time required by Algorithm OPT-F; this pre-processing has time complexity \(\mathcal{O}(RK+K)\) and hence increases with \(R\) and \(K\leq C\). In Figure 5, the pre-processing phase has a larger impact when \(C=100\) because its time complexity depends on \(K\), which can be as large as \(C\) in the worst case.
#### 6.2.2 Partial-Grid Scenario
As before, we will first analyze our empirical results of the performance of the algorithms with respect to the delivery costs, and then with respect to the running times.
_Delivery Costs_
Figure 6 compares the algorithms when solving MEMP with the partial-grid scenario. In particular, given the bad behavior of Algorithm CEMB-F for large values of \(K\) in the full-grid scenario (see Figure 3), for its partial-grid version, i.e., CEMB-P, we only report the results using small values of \(K\). For a similar reason, we report the results for Algorithm CMEB-P only for large values of \(K\). Moreover, recall that the best DP returned by CEMB-P and CMEB-P is the optimum.
As before, in Figure 6 we present our empirical results in three groups: \(R=C\), \(R>C\), and \(R<C\). As expected, the CEMB-P and CMEB-P algorithms have different behavior, and one is more suitable than the other depending on the particular value of \(K\). When \(n\) is small, all the algorithms exhibit high variability and the standard deviation is large. Much more stable results can be observed when \(n\) increases. This is due to the fact that for larger values of \(n\) the subset \(H\subset G\) is "denser", and all the algorithms are less affected by the randomness. In general, the algorithms seem to perform better when \(R>C\). Of particular note is the behavior exhibited by Algorithm S-OPT-P: it performs almost as well as Algorithm OPT-P! We discuss this further in Section 6.2.3 below.
In Figure 7 we present our empirical results for all the algorithms with the partial-grid scenario when varying the parameter \(0\leq p\leq 1\) such that \(n_{E}=n\cdot p\) and \(n_{M}=n\cdot(1-p)\). For simplicity, in Figure 7 we only depict the squared layout, in which on each row we change the border position. Not surprisingly, when \(n\) is low the variability (standard deviation) is high, and algorithms like CMALL-P return a good ratio \(\rho\) (average). The other two algorithms, i.e., CEMB-P and CMEB-P, have the usual symmetric behavior. However, in this setting, such behavior is much more evident since we vary the number of deliveries distributed between the two sub-areas. So, for small values of \(p\), CMEB-P is _almost_ optimum, and for large values of \(p\) CEMB-P is almost optimum. Instead, when \(p\approx 0.5\), the two algorithms lose their efficacy. This can be seen for the three values of \(K\) shown in Figure 7. Once again surprisingly, but consistent with the results presented in Figure 6, S-OPT-P performs almost as well as the optimum algorithm.
_Running Times_
Finally, in Figure 8 we report the experimental running time (in milliseconds) of all the algorithms with the partial-grid scenario. Notice that the plots have different scales on the \(y\)-axis: the ones in the first row go up to \(30\,\mathrm{ms}\), while the ones in the second row go up to \(60\,\mathrm{ms}\). This has been done to emphasize the linear dependency on \(n\): when \(n\) doubles, the time performance on the \(y\)-axis doubles too, but since the scale of \(y\) is halved, the behavior appears to be the same. As expected, the fastest algorithm is CMALL-P, whose time complexity is \(\mathcal{O}(n)\), followed by CMEB-P with a time complexity of \(\mathcal{O}(nR)\) (see Table 2). In Figure 8, it is difficult to appreciate the actual difference between CMALL-P and CMEB-P, but, for instance, when \(\frac{K}{C}=0.5\), on average, CMALL-P takes \(0.061\,\mathrm{ms}\), while CMEB-P takes \(0.786\,\mathrm{ms}\) (i.e., more than \(10\times\) CMALL-P).
The running time of the CMEB-P is "linear" since its cost does not depend on the value of the border \(K\). The sub-optimal algorithm S-OPT-P experimentally performs very well (theoretically it takes \(\mathcal{O}(n\log R\log K)\) in time), in the order of \(5\)-\(15\,\mathrm{ms}\), with respect to the number of deliveries \(n\). The CEMB-P algorithm, which has a time complexity of \(\mathcal{O}(nR\log K)\), depends on the binary search function invoked \(R\) times. The binary search itself depends on the size of the border \(K\), and therefore its running time increases (by a factor of \(\log K\)) when the EM-grid tends to become more "Euclidean".
In the full-grid scenario (Figure 5), the algorithm that exploits the binary search (i.e., OPT-F) has a parabolic trend, justified by the fact that when \(K\) is either small or large with respect to \(C\), Algorithm 1 executes differently. In this case (partial-grid scenario), though, we do not have a similar behavior, and the time complexity of Algorithm CEMB-P is simply affected by the factor \(\log K\). Nevertheless, it is still possible to observe a light drop for CEMB-P when \(\frac{K}{C}\approx 0.75\). Obviously, the optimal algorithm OPT-P, whose time complexity is \(\mathcal{O}(nR\log K)\), takes the most time (as seen in Figure 8) since it runs both CEMB-P and CMEB-P, and returns the best point among the ones outputted by both.
#### 6.2.3 Discussion

Among the presented algorithms, CMALL-P, whose running time is linear with respect to the number of deliveries, is a very good compromise. As noted above, both the CEMB-P and CMEB-P algorithms have different behavior, and one is more suitable than the other depending on the particular value of \(K\). However, as proven in Section 4.1, their combination, by returning the best among the two, ensures finding the optimal DP. So, despite their swinging behavior and relatively high complexity, they can be used to find the optimal DP.
We noted above the surprising, almost-optimal behavior of the sub-optimal algorithm S-OPT-P: On the 33 random instances, for each combination of rows, columns, border, and number of deliveries, the returned DP was also the optimal one. For this reason, on each plot of Figure 6, S-OPT-P exhibits a completely flat line. However, on an extensive "brute-force" campaign with \(10,000+\) random instances, we found 17 instances where the returned point \(\overline{u}\) was different from the optimal \(u^{*}\). Nevertheless, in these 17 instances the worst ratio \(\rho\) was 1.005. Of course, we have found counterexamples that show that S-OPT-P is not optimum, but it performs extremely well with a time complexity much lower than that of OPT-P.
### Results with Quasi-real Data
In this section, we test our algorithms in the quasi-real case. Specifically, we evaluate the performance of our partial-grid scenario algorithms applied on a real-world map. Clearly, in this case, we still generate random instances of customers.
#### 6.3.1 Settings
In this section, we first discuss how the delivery maps are extracted from real cities, and then which commercial drones are capable of delivering products in these areas.
_Map Extraction_
In order to evaluate our algorithms in a real-world case, we initially needed to extract the EM-grids from real cities. Figure 9 illustrates how to roughly perform this extraction task by modeling Chicago, New York, and Miami, in the top, middle, and bottom, respectively.
In Figure 9(a), we show the top view of a portion of the city of Chicago, while Figure 9(b) shows the corresponding EM-grid, i.e., \(G_{1}=(8,14,6)\). The average length of a "block" is \(\approx 120\,\mathrm{m}\), which sets the distance between two adjacent vertices. So, this map \(G_{1}\) has a size of \(840\,\mathrm{m}\times 1560\,\mathrm{m}\). Roughly, the river running from the top to the bottom of Figure 9(a) splits this portion of Chicago into two contiguous areas, i.e., the left one with relatively low buildings (although some tall buildings are present), and the right one with lots of skyscrapers (although some low buildings are present). The dashed line in Figure 9(b) delimits the border between the two areas.
In Figure 9(c), we show the top view of a portion of the city of New York, while Figure 9(d) shows the corresponding EM-grid, i.e., \(G_{2}=(7,24,1)\). This means that the extracted EM-grid is a full Manhattan grid. In this case, differently from Chicago, we can observe that the length of a block on "streets" (horizontal) is \(\approx 80\,\mathrm{m}\), while
the length of a block on "avenues" (vertical) is \(\approx 150\,\mathrm{m}\). So, this map \(G_{2}\) has a size of \(660\,\mathrm{m}\times 2530\,\mathrm{m}\). Since in our model both the lengths must be equal, we average the two and set a value of \(110\,\mathrm{m}\).
Finally, in Figure 9(e), we show the top view of a portion of the city of Miami, while Figure 9(f) shows the corresponding EM-grid, i.e., \(G_{3}=(9,20,20)\). This means that the extracted EM-grid is a full Euclidean grid. Even pure Euclidean maps are hard to find, since roads and houses can have different sizes, lengths, and so on. The portion of Miami reported in Figure 9(e) contains some irregularities, especially in the right portion. Here, the average distance between two adjacent houses is on the order of \(\approx 25\,\mathrm{m}\). So, this map \(G_{3}\) has a size of \(200\,\mathrm{m}\times 475\,\mathrm{m}\).
_Drone Selection_
Previously, we have seen how to extract an EM-grid from a real city. Therefore, the next step is the drone selection. There is a plethora of off-the-shelf drones available at the moment, but only a few have the characteristics required for performing deliveries. One of the most common drones able to make deliveries with some autonomy is the DJI Matrice 300 RTK (briefly, Matrice). According to [47], this drone can carry up to \(2.7\,\mathrm{kg}\), can fly up to \(8\,\mathrm{km}\) away from the remote controller, can fly for \(30\,\mathrm{min}\) (with maximum payload), and can fly at up to \(17\,\mathrm{m}/\mathrm{s}\approx 60\,\mathrm{km}/\mathrm{h}\) in P-mode (i.e., the Positioning mode recommended in autonomous flights). Clearly, the temporal bound of \(30\,\mathrm{min}\) also depends on the drone's speed and payload, as well as the current weather conditions. Nevertheless, we still use these numbers as a reference. So, the Matrice can fly a distance of approximately \(30\,\mathrm{km}\). Recalling that the previous three grids \(G_{1}\), \(G_{2}\), and \(G_{3}\) have an area of approximately \(1.2\,\mathrm{km}^{2}\), \(1.5\,\mathrm{km}^{2}\), and \(0.1\,\mathrm{km}^{2}\), respectively, we believe that the Matrice can operate in these neighborhoods of the cities.
#### 6.3.2 Results
Finally, Figure 10 compares the algorithms in the partial-grid scenario in a quasi-real case. In particular, on the \(x\)-axis we report the number of deliveries \(n\), while on the \(y\)-axis we report the drone's traveled distance in \(\,\mathrm{km}\). As aforementioned, the selected Matrice drone can fly for approximately \(30\,\mathrm{km}\), and for this reason we put a solid red line at that threshold in each plot of Figure 10. Moreover, recall that \(G_{1}\), \(G_{2}\), and \(G_{3}\) correspond to the maps of Chicago (mixed EM-grid), New York (full Manhattan EM-grid), and Miami (full Euclidean EM-grid), respectively.
Our first observation is that the differences among the algorithms, in terms of distance (\(\,\mathrm{km}\)) traveled, are small, with a couple of exceptions. In fact, as we discussed in the synthetic case, it is not profitable at all to perform the CEMB-P algorithm when the EM-grid is a full Manhattan grid (see \(G_{2}\) in Figure 10), or similarly, to perform the CMEB-P algorithm when the EM-grid is a full Euclidean grid (see \(G_{3}\)). In the general mixed case (see \(G_{1}\)), though, all the algorithms perform decently with respect to the optimal one.
Taking into account the Chicago map, i.e., \(G_{1}\), we can immediately observe that the adopted Matrice drone can potentially accomplish the whole mission of up to \(n=50\) deliveries with a single battery charge, regardless of the algorithm. Specifically, on average, both OPT-P and S-OPT-P require \(29.181\,\mathrm{km}\) of travel, followed by \(29.297\,\mathrm{km}\) for CMALL-P, \(29.324\,\mathrm{km}\) for CMEB-P, and finally \(30.395\,\mathrm{km}\) for CEMB-P. We note that the drone, if employing Algorithm CEMB-P, may be forced to fly beyond the aforementioned and approximated threshold of \(30\,\mathrm{km}\). However, here one can appreciate that the difference, on average, among the best and the worst performing algorithm is \(\approx 1\,\mathrm{km}\), which can be detrimental in some borderline cases. Moreover, even neglecting the CEMB-P algorithm, the above difference is still about \(0.143\,\mathrm{km}\). Recalling that the average block length in \(G_{1}\) is \(0.12\,\mathrm{km}\), this small difference could potentially result in a delivery not being completed.
In the \(G_{2}\) map in New York City, which is a full Manhattan grid, we can safely do up to \(n=35\) deliveries. In this case, we cannot rely on the CEMB-P algorithm, since it is not suitable for full Manhattan grids. It is important to recall that, in \(G_{2}\), CMALL-P returns, by definition of the Manhattan median, the optimal solution.
Finally, in the full Euclidean grid in Miami (\(G_{3}\)), due to the smaller distances in the example, the Matrice drone can safely perform even more than \(n=100\) deliveries. Even relying on the CMEB-P algorithm, we are still able to serve \(100\) customers in the area. It is important to recall that, in \(G_{3}\), CEMB-P returns the optimal solution.
## 7 Conclusion
We considered a drone-based delivery system for the "last-mile" logistics of small parcels, medicines, or viral tests, in EM-grids. The shortest path in an EM-grid concatenates Euclidean and Manhattan distances. We solved MEMP on EM-grids, whose goal is to minimize the sum of the distances between the locations to be served and the drone's DP. Finding the most suitable DP has many implications that can impact the expected delivery time for customers, the energy consumption of drones, and, in general, the broader environmental impacts, e.g., the quantity of CO\({}_{2}\) emissions when relying on trucks. We proposed efficient algorithms to exactly solve the problem in EM-grids in both the full-grid and the partial-grid scenarios, under the assumption that the vertex distance is unitary. We also ran our algorithms on quasi-real cases by extracting the EM-grids from real cities in the United States.
Although we attempted to run our algorithms on real cities, we are aware that our EM-grid model is too simplified to characterize any real-world scenario. In future work, we intend to extend the introduced mixed-grid model to more general layouts (e.g., a rural area inside an urban area, such as Central Park in New York). Nevertheless, we believe that our work could be the starting point for devising much more complex scenarios that better model the real world. For example, we could use our technique of map extraction (see Section 6.3) to construct a grid, \(G\), where each individual grid square is labeled as "Buildings" or "Park". A contiguous group of squares labeled "Buildings" is effectively a Manhattan grid, and a contiguous group of squares labeled "Park" is a Euclidean grid. Then, we could sub-divide \(G\) into multiple EM-grids, and apply our algorithms suitably in each EM-grid.
Another interesting variant to study is the use of multiple drones that are responsible for delivering packages to different partitions of the customers. In this case, we will need efficient clustering algorithms for minimizing suitable metrics.
|
2308.06662 | Dynamics of the dissociative electron attachment to Ethanol | We report the detailed dynamics of the site selectivity observed in the
dissociative electron attachment (DEA) process in ethanol based on the momentum
images obtained using the velocity slice imaging technique. The H- dissociation
channel shows the site selectivity where the anion signal from the O-H site
peaks at 6.5 eV and 8 eV, and that from the C-H site peaks at 9.5 eV. The
momentum images also show the two-body dissociation dynamics for the O-H site
break-up. This dissociation channel shows a substantial effect of the torsion
mode of vibrations on the electron attachment process. In contrast, the C-H
site dissociation results from the many-body break-up consistent with the
earlier reports of DEA dynamics from organic molecules. We have also found that
the OH- channel has a resonance at 9.3eV and is produced with very little
kinetic energy. Using the isotope substitution, we show the role of H atom
scrambling in the C-O bond dissociation leading to the OH- channel. This
channel shows a substantial deviation from the corresponding photodissociation
dynamics. | Sukanta Das, Suvasis Swain, Vaibhav S. Prabhudesai | 2023-08-13T01:59:33Z | http://arxiv.org/abs/2308.06662v1 | # Dynamics of the dissociative electron attachment to Ethanol
###### Abstract
We report the detailed dynamics of the site selectivity observed in the dissociative electron attachment (_DEA_) process in ethanol based on the momentum images obtained using the velocity slice imaging technique. The H- dissociation channel shows the site selectivity where the anion signal from the O-H site peaks at 6.5 eV and 8 eV, and that from the C-H site peaks at 9.5 eV. The momentum images also show the two-body dissociation dynamics for the O-H site break-up. This dissociation channel shows a substantial effect of the torsion mode of vibrations on the electron attachment process. In contrast, the C-H site dissociation results from the many-body break-up consistent with the earlier reports of _DEA_ dynamics from organic molecules. We have also found that the OH- channel has a resonance at 9.3eV and is produced with very little kinetic energy. Using the isotope substitution, we show the role of H atom scrambling in the C-O bond dissociation leading to the OH- channel. This channel shows a substantial deviation from the corresponding photodissociation dynamics.
## I Introduction:
The study of the interaction of low-energy electrons with molecules has great importance because when high-energy radiation like x-rays, gamma rays, or energetic charged particles interacts with matter, it produces secondary low-energy electrons. These low-energy electrons contribute significantly to DNA damage [1]. Dissociative electron attachment (_DEA_) plays a vital role in this process. In a bottom-up approach to understanding the details of the complex dynamics underlying DNA strand breaks, studies of _DEA_ to several simple organic molecules like carboxylic acids, alcohols, and simple aromatic compounds have been taken up as a starting point. On the other hand, _DEA_ shows site selectivity in organic molecules [2]. The site selectivity observed in the _DEA_ process persists at electron energies well beyond the dissociation thresholds of the respective sites. This points to the vast potential of low-energy electron-based chemical control. To realise this potential, understanding the underlying dynamics of the _DEA_ process that causes site selectivity is of utmost importance. Besides, the excited states of negative ions play a vital role in various applications like electron beam lithography and plasma processing [3]. However, details of these dynamics are difficult to obtain from theoretical calculations due to the resonant nature of the underlying excited negative ion states.
Simple alcohols are used to identify the site selectivity for the O-H bond in organic molecules [4]. It has been pointed out that the _DEA_ peaks in the H\({}^{-}\) ion yield curve for the dissociation of the O-H bond in alcohols directly resemble those observed in water, which is the precursor molecule for the hydroxyl group. Here, we have carried out detailed measurements of _DEA_ to ethanol (C\({}_{2}\)H\({}_{5}\)OH) to unravel the dynamics of this process in the case of simple alcohols.
Earlier works on _DEA_ to ethanol were carried out by Prabhudesai _et al._[2; 4], Ibanescu and Allan [5], Orzal _et al._[6] and Wang _et al._[7]. Three peaks in the H\({}^{-}\) channel at 6.4eV, 7.8eV, and 9.3 eV were first reported by Prabhudesai _et al._[2]. They also reported the absolute cross-sections of these resonances [4]. Ibanescu and Allan [5] introduced partially deuterated ethanol (C\({}_{2}\)H\({}_{5}\)OD) and reported that the 6.35eV and 7.85eV resonances are due to H\({}^{-}\) from the OH site, and the 9.18eV resonance is from the CH site. Besides, they also obtained C\({}_{2}\)H\({}_{5}\)\({}^{-}\) ions peaking at 2.75eV, 6.35eV, and 9.15eV. They reported the vibrational excitation cross-section and photoelectron spectra (PES) and compared the _DEA_ spectra with those measurements. Orzal _et al._[6] reported O\({}^{-}\) (5.8eV), OH\({}^{-}\) (8.2eV), and C\({}_{2}\)H\({}_{5}\)\({}^{-}\) (2eV, 5eV, and 8.2eV) and discussed the possible channels of these anions. Wang _et al._[7] performed momentum imaging of O\({}^{-}\)/OH\({}^{-}\) at 6eV and 9eV using Velocity Slice Imaging (_VSI_). Due to the limited energy and mass resolution of their _VSI_ spectrometer, they were unable to separate O\({}^{-}\) and OH\({}^{-}\).
Although the strong site selectivity in _DEA_ is seen in the H\({}^{-}\) channel, which is also the most abundant channel, there has been no report of the dynamics underlying this process. We have measured the angular and kinetic energy distributions of these ions at various electron energies to unravel the dissociation dynamics of the temporary negative ion formed by the electron attachment. To identify the site-selective signal of the H\({}^{-}\) ions, we have carried out measurements on partially deuterated ethanol (CH\({}_{3}\)CH\({}_{2}\)OD) and obtained the momentum images for H\({}^{-}\) and D\({}^{-}\) separately. We also looked for other negative ions and identified the OH\({}^{-}\) ions by improving the mass resolution of the spectrometer. This ion signal peaks around 9.3eV. In this paper, we report the details of these findings and interpret the underlying dynamics using the kinetic energy and angular distributions.
## II Experiment
The details of the experimental setup have been given elsewhere [8]; here we describe the experiment in brief. The experiment is carried out in a crossed electron beam-molecular beam geometry, where the effusive molecular beam produced by a capillary array is directed along the axis of the velocity slice imaging (_VSI_) spectrometer. The spectrometer (Figure 1) consists of a pusher, a puller, four electrostatic lens electrodes, a flight tube, and a 2-D position-sensitive detector comprising a Micro Channel Plate (_MCP_) detector, a phosphor screen, and a CCD camera.
The interaction volume, due to the overlap of the electron beam and the molecular beam, is at the centre of the region between the pusher and puller. The magnetically collimated pulsed electron beam is produced by a home-built electron gun. The collimating magnetic field of 50 Gauss strength is produced by an externally mounted pair of magnet coils in the Helmholtz geometry, coaxial with the electron gun. A Faraday cup, coaxial with the electron gun on the opposite side of the interaction zone, is used to measure the beam current. We apply a delayed (100 ns with respect to the electron pulse) negative voltage pulse of 80V height and 1\(\upmu\)s width on the pusher to extract the ions from the interaction zone. Outside the chamber, a Baratron connected to the back of the capillary measures the pressure behind it. The voltage on the detector assembly is pulsed using a variable-width high-voltage switch. The detector pulsing is synchronized with the central part of the ion time-of-flight peak to obtain the appropriate slice of the Newton sphere. The width of the detector pulse is kept at 80 ns (the minimum possible from such a switch) for slice imaging.
The whole chamber is kept below \(5\times 10^{-7}\) torr base pressure during the experiment. The gas line used to introduce the sample into the chamber and the needle valve used to maintain the gas flow are heated to a constant temperature of \(40^{\circ}\)C throughout the experiment. Measurements are carried out in two steps. In the first step, the ion yield curve is obtained, and its resonance positions are calibrated using O\({}^{-}\) from O\({}_{2}\)[9]. In the second step, the pixel image of the illuminated phosphor screen is recorded by the CCD camera. These pixel images are converted to momentum images by calibrating against H\({}^{-}\) from H\({}_{2}\). From these momentum images, kinetic energy (_KE_) distributions and angular distributions are obtained for the detected anions.
## III Results and Discussion
We have observed three resonances for H\({}^{\text{-}}\) from ethanol at 6.5eV, 8eV, and 9.5eV, as shown in Figure 2(a), consistent with the previously reported results [2; 5]. Using partially deuterated ethanol (Figure 2),
Figure 1: Schematic of the velocity slice imaging spectrometer used in the experimental setup
we have identified that H\({}^{-}\) from the O-H site is responsible for the 6.5eV and 8 eV peaks, while H\({}^{-}\) from the C\({}_{2}\)H\({}_{5}\) group peaks at 9.5 eV, consistent with the earlier report [5].
We obtained the momentum images of H\({}^{-}\) from C\({}_{2}\)H\({}_{5}\)OH at 6.5, 8 and 9.5eV, and of D\({}^{-}\) at 6.5 and 8eV and H\({}^{-}\) at 9.5 eV from C\({}_{2}\)H\({}_{5}\)OD (Figure 3). As discussed in the earlier work [8], the momentum images were obtained for both the crossed beam and the static gas that contributes to the background signal. The images shown in Figure 3 are obtained after subtracting the static gas images from the crossed beam images with appropriate normalization. Due to the presence of the transverse magnetic field, the H\({}^{-}\) ions, which are the lightest, follow deviated trajectories. This shifts the momentum image to one side of the spectrometer axis and introduces distortion, as is evident in Figure 3. For the image analysis, we have considered only the half of the image close to the centre of the detector, which has the least distortion, as discussed in ref. [8]. In the momentum images, we observe that at 6.5 eV, H\({}^{-}\) is mainly scattered in the backward direction (Figure 3(a)). As can be seen from the images obtained from the partially deuterated sample, the site selectivity of the hydride ion signal in the ion yield curve is also evident in the momentum images. The 6.5eV images of H\({}^{-}\) from C\({}_{2}\)H\({}_{5}\)OH and of D\({}^{-}\) from C\({}_{2}\)H\({}_{5}\)OD are similar (Figure 3(a) and (d)). As the 9.5eV peak has a long tail that contributes at 8eV, one can see a clear signature of the low-energy blob in the momentum image of H\({}^{-}\) at 8eV from C\({}_{2}\)H\({}_{5}\)OH (Figure 3(b)), which is absent in the image of D\({}^{-}\) from C\({}_{2}\)H\({}_{5}\)OD (Figure 3(e)), whereas the blob is present in the H\({}^{-}\) images at 9.5eV from both C\({}_{2}\)H\({}_{5}\)OH and C\({}_{2}\)H\({}_{5}\)OD (Figure 3(c) and (f)). The _KE_ distributions obtained from the momentum images of the H\({}^{-}\) ions are shown in Figure 4.
Figure 2: Comparison of the normalised ion yield curve for H\({}^{-}\) from ethanol (black square) and D\({}^{-}\) (red circles) and H\({}^{-}\) (blue triangles) from partially deuterated ethanol (C\({}_{2}\)H\({}_{5}\)OD).
O'Malley and Taylor [10] reported a detailed theoretical calculation of the angular distribution of the negative ions produced in _DEA_ to diatomic molecules under the assumptions that (i) only one resonance state contributes to the negative ion formation, (ii) the negative ion state does not rotate before it decays, and (iii) the coupling is spin-independent. Azria _et al._[11] adopted a similar treatment for polyatomic molecules. They obtained the expression for the angular distribution of the fragment anions as
\[I(\theta)\propto\frac{1}{2\pi}\int_{0}^{2\pi}\bigl{|}\sum_{l,m,\nu}i^{l}\,exp (i\delta_{l})\,a_{lm}^{\nu}X_{lm}^{\nu\ast}(\theta,\varphi)\bigr{|}^{2}d\varphi \tag{1}\]
where \(X_{lm}^{\nu}(\theta,\varphi)\) are the basis functions for the irreducible representations of the molecular point group, \(a_{lm}^{\nu}\) their amplitudes, and \(\delta_{l}\) their phases. Here the angles \((\theta,\varphi)\) determine the orientation of the dissociating bond with respect to the incoming electron beam. The above functions are in the dissociation frame of the molecule and are expressed as linear combinations of spherical harmonics with the appropriate frame transformation from the lab frame to the molecular frame.
Ethanol belongs to the \(C_{s}\) molecular point group, with only two irreducible representations, namely \(A^{\prime}\) and \(A^{\prime\prime}\). In the ground state, the 26 electrons of this molecule are arranged in 13 doubly occupied orbitals. The ground-state electron configuration is written as [12]
\[(1a^{\prime})^{2}(2a^{\prime})^{2}(3a^{\prime})^{2}(4a^{\prime})^{2}(5a^{ \prime})^{2}(6a^{\prime})^{2}(7a^{\prime})^{2}(1a^{\prime\prime})^{2}(8a^{ \prime})^{2}(9a^{\prime})^{2}(2a^{\prime\prime})^{2}(10a^{\prime})^{2}(3a^{ \prime\prime})^{2}\]
which corresponds to an \(A^{\prime}\) state. The closest unoccupied molecular orbitals (_MO_s) are \((11a^{\prime}),(12a^{\prime})\), and \((4a^{\prime\prime})\). The expected angular distribution under the axial recoil approximation for the \(A^{\prime}\) to \(A^{\prime}\) transition using the first two partial waves would be
Figure 3: Momentum images of H\({}^{-}\) from C\({}_{2}\)H\({}_{5}\)OH at (a) 6.5eV, (b) 8eV and (c) 9.5eV electron energies. Momentum images of D\({}^{-}\) at (d) 6.5eV, (e) 8eV, and H\({}^{-}\) at (f) 9.5eV from C\({}_{2}\)H\({}_{5}\)OD. The arrow indicates the direction of the electron beam.
\[I_{s+p}=\frac{a^{2}}{2}+\frac{b^{2}}{2}\cos^{2}\theta+ab\,\cos\theta\,\cos\delta_{1} \tag{2}\]
where \(a\) and \(b\) indicate the relative contributions of the two partial waves, and \(\delta_{1}\) is the relative phase between the \(s\) and \(p\) waves. For the \(A^{\prime}\) to \(A^{\prime\prime}\) transition, using the first two allowed partial waves (\(p\) and \(d\)), it would be
\[I_{p+d}=\frac{a^{2}}{2}\sin^{2}\theta+\frac{3b^{2}}{8}\sin^{2}2\theta+\frac{\sqrt{3}}{2}ab\,\cos\delta_{1}\,\sin\theta\,\sin 2\theta \tag{3}\]
We have obtained the angular distributions of the H\({}^{-}\) ions from the momentum images and analysed them using a combination of equations (2) and (3) under the axial recoil approximation.
Here we discuss the _DEA_ dynamics that result in H\({}^{-}\) ion formation at the three different resonances. The dynamics of OH\({}^{-}\) ion formation are addressed at the end.
## 1 H\({}^{-}\) formation at the first resonance
As concluded from the ion yield curves of the partially deuterated sample, at this 6.5eV resonance the H\({}^{-}\) is formed from the -OH site. The most obvious path for the H\({}^{-}\) formation is the direct cleavage of the O-H bond according to the reaction
\[\mathrm{e}+\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{OH}\rightarrow(\mathrm{C}_{2} \mathrm{H}_{5}\mathrm{OH}^{-})^{*}\rightarrow\mathrm{C}_{2}\mathrm{H}_{5} \mathrm{O}+\mathrm{H}^{-} \tag{4}\]
where C\({}_{2}\)H\({}_{5}\)OH\({}^{-}\)* represents the temporary negative ion (_TNI_) state at 6.5 eV. Using established heat of formation values of ethanol (-234 kJ/mol), C\({}_{2}\)H\({}_{5}\)O (17kJ/mol) and H (218kJ/mol) and subtracting the electron affinity of H (72.8kJ/mol), we get 396kJ/mol or 4.1eV as the minimum energy required for the formation of H\({}^{-}\)[13]. Since this channel is a two-body breakup, the excess energy of 2.4 eV would be distributed among the two fragments inversely proportional to their masses. Accordingly, up to 2.34 eV of energy would appear as the _KE_ of the H\({}^{-}\) ion. The observed _KE_ distribution of the H\({}^{-}\) ions (Figure 4(a)) extends up to 2.75eV, peaking around 2eV. The maximum _KE_ observed in this channel is consistent with the threshold of the above channel, considering the spread in the electron energy, which is about 0.8eV. The spread in the _KE_ distribution results from this energy uncertainty, momentum imaging resolution, which is about 0.3eV and the internal excitation of the neutral C\({}_{2}\)H\({}_{5}\)O fragment.
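The thermochemistry and the two-body kinematics quoted above can be checked with a few lines of Python (the conversion factor 1 eV = 96.485 kJ/mol and integer nucleon masses for the mass partition are our assumptions):

```
KJ_PER_MOL_PER_EV = 96.485

# Heats of formation (kJ/mol) and the electron affinity of H quoted above.
threshold_eV = (17.0 + 218.0 - (-234.0) - 72.8) / KJ_PER_MOL_PER_EV
print(f"threshold = {threshold_eV:.1f} eV")      # -> 4.1 eV

# Two-body breakup at 6.5 eV: the excess energy is shared inversely
# with the fragment masses, so H- (mass 1) takes the 45/46 share.
excess = 6.5 - threshold_eV                      # ~2.4 eV
ke_H = excess * 45.0 / 46.0
print(f"max H- KE = {ke_H:.2f} eV")              # -> 2.34 eV
```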
The angular distribution obtained for this channel is shown in Figure5(a). For the \(A^{\prime}\) to \(A^{\prime}\) transition, all partial waves can contribute starting from \(l\)=0, whereas, for the \(A^{\prime}\) to \(A^{\prime\prime}\) transition, partial waves with \(l\)\(\geq\)1 would contribute to the electron capture. The significant difference between these two transitions is that for the \(A^{\prime}\) to \(A^{\prime\prime}\) transition, the angular distribution of the ions formed from the cleavage of the bond lying in the symmetry plane of the molecule would always have nodes at 0\({}^{0}\) and 180\({}^{0}\) about the electron
beam. On the other hand, for the \(A^{\prime}\) to \(A^{\prime}\) transition due to the contribution from the \(s\)-wave, the ion signal would have finite strength in the 0\({}^{0}\) and 180\({}^{0}\) about the electron beam. This is evident from the angular distributions obtained for the 6.5eV and 8.5eV channels in water [14, 15]. The 6.5eV resonance is understood as the one specific to the O-H site; it is expected to show the angular distribution consistent with that obtained for water. In water, the resonance responsible for this peak corresponds to the \(B_{I}\) state, a valance excited resonance associated with the excitation of the lone pair of electrons from the O atom. Ibanescu and Allan [5] have reported that the 6.5eV resonance of H- is the Feshbach type formed by the excitation of the electron from 3a\({}^{\prime\prime}\) orbital (comprising mainly of the lone pair of electrons on the O atom) to the higher empty MOs while the electron capture. This transition is the basis of the O-H site selectivity observed in the simple alcohols [2].
To further understand the DEA dynamics, we compare the angular distribution of H- ion from ethanol obtained at 6.5eV with that obtained for water at 6.5eV. As shown earlier, for water, the angular distribution of the _DEA_ signal peaks around 100\({}^{0}\) with no intensity in the 0\({}^{0}\) and 180\({}^{0}\) directions (Figure 5(c)) [14]. The site selectivity of the O-H bond cleavage in DEA is understood as the result of a 2-particle-1-hole resoancne that is formed by excitation of the lone pair of electrons from the O atom. This is also seen in water molecule. With the consideration of similar underlying resonance, we expect similar angular
Figure 5: Angular distribution of H- ions obtained at (a) 6.5eV, and (b) 8eV electron energy from ethanol and (c) at 6.5eV from methanol (black circle) and water (red square). The solid curves show the fit obtained by combining equation (2) and (3)
distribution at this resonance from alcohols as the molecules retain the reflection symmetry about the plane containing the O-H bond. For example, for a molecule like simple alcohol that belongs to the \(C_{s}\) symmetry group, this resonance would be of A\({}^{\prime\prime}\) character and would not have any signal in the 0\({}^{0}\) and 180\({}^{0}\) angles under the axial recoil approximation. However, the angular distribution obtained from the 6.5eV resonance image for ethanol shows no nodes at 0\({}^{0}\) and 180\({}^{0}\).
Similar results were also obtained for methanol by Slaughter _et. al_[16]. In their work, they showed the entrance channel amplitude of for the corresponding resonance at 6.5eV having no intensity in the plane containing the O-H bond. This is consistent with our understanding of the structure of the underlying resonance. However, the angular distribution of H- ion from methanol shows no node around 0\({}^{0}\) and 180\({}^{0}\) direction about the electron beam (Figure 5(c)). They explained this observation as the loss of C\({}_{\mathrm{s}}\) symmetry due to torsional vibrations in the molecule about the C-O bond and opening of COH angle. As can be seen from the Figure5(a), in the case of ethanol, the relative intensity of the H- ion signal in the 180\({}^{o}\)direction w.r.t 90\({}^{o}\) is even higher than that observed in methanol. We also give the corresponding angular distribution data for methanol for the relevant resonance in Figure 5(c).
For ethanol, the vibrational energies of the torsional modes that involve the out-of-plane motion of the O-H bond are 200 and 251cm\({}^{-1}\)[17; 18], and for methanol the corresponding value is around 295cm\({}^{-1}\)[13]. These torsional motions break the reflection symmetry of the molecule. Under our experimental conditions, this leads to a population of around 35% in the torsionally active states for ethanol, and around 24% for methanol. If we assume that both torsionally ground-state and excited molecules participate in the dissociation under the axial recoil approximation, then for the torsionally ground-state molecules the \(C_{s}\) symmetry holds, and we can take the \(A^{\prime}\) to \(A^{\prime\prime}\) transition (equation 3). For the torsionally excited molecules, we can take \(C_{1}\) symmetry (equation 2). We have used the incoherent sum of the angular distribution functions for the torsionally ground-state molecules (equation 3) and the torsionally excited molecules (equation 2) to fit the observed angular distribution. The fitting function used is
\[I_{C_{1}+C_{s}}=C_{1}\left(\frac{a_{1}^{2}}{2}+\frac{b_{1}^{2}}{2}\cos^{2}\theta+a_{1}b_{1}\,\cos\theta\,\cos\delta_{1}\right)+C_{s}\left(\frac{a_{2}^{2}}{2}\sin^{2}\theta+\frac{3b_{2}^{2}}{8}\sin^{2}2\theta+\frac{\sqrt{3}}{2}a_{2}b_{2}\,\cos\delta_{2}\,\sin\theta\,\sin 2\theta\right) \tag{5}\]
where \(C_{1}\) and \(C_{s}\) are the corresponding coefficients for the two contributing sets of molecules.
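For completeness, a fit of equation (5) to an angular distribution can be set up with standard tools. The following SciPy sketch is illustrative only: it generates placeholder data from the Table 1 parameters rather than using the measured points, and it fixes \(C_{s}=1-C_{1}\) as one possible normalization of the two weights (our assumption):

```
import numpy as np
from scipy.optimize import curve_fit

def I_model(theta, C1, a1, b1, d1, a2, b2, d2):
    """Incoherent sum of Eq. (2) (C1 set, s + p waves) and
    Eq. (3) (Cs set, p + d waves); theta in radians."""
    s_p = a1**2 / 2 + b1**2 / 2 * np.cos(theta)**2 \
        + a1 * b1 * np.cos(theta) * np.cos(d1)
    p_d = a2**2 / 2 * np.sin(theta)**2 + 3 * b2**2 / 8 * np.sin(2 * theta)**2 \
        + np.sqrt(3) / 2 * a2 * b2 * np.cos(d2) * np.sin(theta) * np.sin(2 * theta)
    return C1 * s_p + (1 - C1) * p_d

theta = np.linspace(0.3, np.pi - 0.3, 40)   # placeholder angle grid
I_obs = I_model(theta, 0.57, 1.17, 1.34, 2.63, 0.82, 0.58, 6.21)
popt, _ = curve_fit(I_model, theta, I_obs, p0=[0.5, 1, 1, 1, 1, 1, 1])
```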
According to the fit (Table 1), almost 57% of the contribution is found to be from the C\({}_{1}\) symmetry molecules and the rest from the C\({}_{\mathrm{s}}\) symmetry molecules. The observed C\({}_{1}\) contribution is much larger than that expected from the vibrational excitation alone. Another reason for such a breaking of the reflection symmetry could
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Energy & \multicolumn{3}{c|}{C\({}_{1}\) (s + p partial waves)} & \multicolumn{3}{c|}{C\({}_{s}\) (p + d partial waves)} & C\({}_{1}\) : C\({}_{s}\) \\ \cline{2-7} & \(a_{1}\) & \(b_{1}\) & \(\delta_{1}\) & \(a_{2}\) & \(b_{2}\) & \(\delta_{2}\) & \\ \hline
6.5 eV & 1.17 & 1.34 & 2.63 & 0.82 & 0.58 & 6.21 & 1.0 : 0.75 \\ \hline \end{tabular}
\end{table}
Table 1: Fitting parameters in equation (5) obtained for the angular distribution of H\({}^{-}\) ions at 6.5 eV.
be due to the presence of two conformers in the ground state of ethanol, namely anti (C\({}_{\mathrm{s}}\) symmetric) and gauche (C\({}_{1}\) symmetric). Most microwave and IR studies have reported that anti is the most stable one, but there is only a 40 cm\({}^{-1}\) energy gap between the gauche and anti conformers [19; 20]; hence the gauche conformer is expected to predominate at room temperature due to its two-fold degeneracy. Barnes and Hallam [21] estimated the ratio of anti to gauche conformers at room temperature in the vapour phase to be around 2:1. Shaw _et al._ [22] reported it to be 42:58 from their IR studies.
Based on these observations, we conclude that the large signal obtained in the backward direction for the H\({}^{-}\) ions is due to the combination of the torsional excitation available in the molecule at room temperature and the asymmetric conformers present in the target beam. It is also possible that for such torsionally excited molecules the autodetachment cross-section varies with the change in geometry, affecting the DEA signal strength.
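The torsional populations quoted above (around 35% for ethanol and 24% for methanol) are consistent with a simple single-mode Boltzmann estimate, \(p_{exc}=e^{-E/k_{B}T}\). The sketch below is our own back-of-the-envelope check, assuming \(T\approx 298\) K and harmonic modes at the quoted wavenumbers; the exact mode statistics used by the authors may differ.

```python
# Back-of-the-envelope check (our own, not the authors' code) of the
# torsionally excited populations: p_excited = exp(-E / kT) for a single
# harmonic mode, with E given as a wavenumber in cm^-1.
import math

KB_CM = 0.6950348   # Boltzmann constant in cm^-1 per kelvin
T = 298.0           # assumed room temperature, in kelvin

def excited_fraction(wavenumber_cm):
    return math.exp(-wavenumber_cm / (KB_CM * T))

for label, nu in [("ethanol O-H torsion, 200 cm^-1", 200.0),
                  ("ethanol O-H torsion, 251 cm^-1", 251.0),
                  ("methanol torsion,    295 cm^-1", 295.0)]:
    print(f"{label}: {100 * excited_fraction(nu):.0f}% excited")
# prints roughly 38%, 30% and 24%, in line with the populations quoted above
```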
### **H\({}^{-}\) formation at the second resonance**
The second resonance also arises from the -OH site. If we consider the same channel as for the first resonance, i.e.,
\[\mathrm{e}+\mathrm{C_{2}H_{5}OH}\rightarrow(\mathrm{C_{2}H_{5}OH^{-}})^{*} \rightarrow\mathrm{C_{2}H_{5}O}+\mathrm{H^{-}} \tag{6}\]
then the excess energy will be 3.9 eV. The images at the 6.5 eV and 8 eV resonances show different angular distributions. The obtained distribution is shown in Figure 5(b). For the 6.5 eV resonance, the ion intensity peaks in the backward direction (close to 180\({}^{\circ}\)), whereas for the 8 eV resonance it peaks around 135\({}^{\circ}\). This observation and the observed _KE_ distributions may indicate that the _DEA_ dynamics of this resonance involve substantial internal excitation before dissociation. A comparison of the _KE_ distributions at these two energies (Figure 4(a) and (b)) shows that for the 8 eV resonance the neutral fragment is formed with a broader distribution of internal energy. This is possible if the dissociation dynamics begin with a slow dissociation before reaching the final slope. This can result in substantial distortion of the molecule, which in turn can excite various vibrational modes of the anion. Similar dynamics have also been reported for water at its 8.5 eV resonance [23].
Another possible channel is:
\[\mathrm{e}+\mathrm{C_{2}H_{5}OH}\rightarrow(\mathrm{C_{2}H_{5}OH^{-}})^{*} \rightarrow\mathrm{CH_{3}}+\mathrm{H_{2}CO}+\mathrm{H^{-}} \tag{7}\]
where the TNI undergoes a three-body dissociation. Using the heats of formation of CH\({}_{3}\) (145.69 kJ/mol) and H\({}_{2}\)CO (\(-115.9\) kJ/mol), we obtain the thermodynamic threshold for this channel as 4.23 eV [13]. In that case, the system has 3.77 eV of excess energy. We cannot rule out this channel based on the kinetic energy distribution.
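The threshold arithmetic is simple enough to check by hand. The sketch below redoes it with the heats of formation quoted in the text; the values for \(\Delta H_{f}(\mathrm{H})\), \(\Delta H_{f}(\mathrm{C_{2}H_{5}OH})\) and the electron affinity of H are standard gas-phase numbers assumed here, not taken from the paper.

```python
# Sanity check (our own) of the 4.23 eV threshold for
# e + C2H5OH -> CH3 + H2CO + H-:
# E_th = dHf(CH3) + dHf(H2CO) + dHf(H) - EA(H) - dHf(C2H5OH).
KJ_PER_MOL_TO_EV = 1.0 / 96.485

dHf_CH3 = 145.69      # kJ/mol, from the text
dHf_H2CO = -115.9     # kJ/mol, from the text
dHf_H = 218.0         # kJ/mol, assumed standard value
dHf_EtOH = -234.8     # kJ/mol, assumed standard value (gas phase)
EA_H = 72.8           # kJ/mol (~0.75 eV), assumed standard value

e_th = (dHf_CH3 + dHf_H2CO + dHf_H - EA_H - dHf_EtOH) * KJ_PER_MOL_TO_EV
print(f"threshold ~ {e_th:.2f} eV")  # ~4.25 eV, close to the quoted 4.23 eV
```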
For the third resonance, based on the site selectivity observed in the ion yield curve of the partially deuterated molecule, H\({}^{-}\) can be formed by direct bond cleavage from either the CH\({}_{3}\) site or the CH\({}_{2}\) site, and the possible channels are
\[\text{e}+\text{CH}_{3}\text{CH}_{2}\text{OH}\rightarrow(\text{CH}_{3}\text{CH}_{2}\text{OH}^{-})^{*}\rightarrow\begin{cases}\text{CH}_{2}\text{CH}_{2}\text{OH}+\text{H}^{-}&\text{(8a)}\\ \text{CH}_{3}\text{CHOH}+\text{H}^{-}&\text{(8b)}\end{cases}\]
Using the heats of formation of CH\({}_{2}\)CH\({}_{2}\)OH (\(-25.9\) kJ/mol) and CH\({}_{3}\)CHOH (\(-54\) kJ/mol) [24], the thermodynamic threshold is 3.66 eV for the first channel and 3.37 eV for the second. But the resonance occurs around 9 eV, so more than 5.5 eV of excess energy is available in the system. However, the _KE_ obtained in the H\({}^{-}\) channel is less than 1 eV. This implies that more than 4.5 eV of excess energy remains, which can appear as internal energy of the fragments.
This energy is sufficient to break the molecular fragments further into smaller pieces. The process can also proceed as a three- or multiple-body dissociation, with several possibilities such as
\[\text{e}+\text{C}_{2}\text{H}_{5}\text{OH}\rightarrow(\text{C}_{2}\text{H}_{5}\text{OH}^{-})^{*}\rightarrow\begin{cases}\text{C}^{*}\text{H}_{3}+\text{H}_{2}\text{CO}+\text{H}^{-}&4.25\text{ eV}\quad\text{(9a)}\\ \text{HCCH}_{2}+\text{H}_{2}\text{O}+\text{H}^{-}&4.52\text{ eV}\quad\text{(9b)}\\ \text{C}_{2}\text{H}_{4}+\text{OH}+\text{H}^{-}&\text{(9c)}\\ \text{C}_{2}\text{H}_{2}+\text{H}_{2}\text{O}+\text{H}+\text{H}^{-}&\text{(9d)}\\ \text{CH}_{3}\text{C}^{*}\text{H}_{2}+\text{O}+\text{H}^{-}&7.77\text{ eV}\quad\text{(9e)}\end{cases}\]
All these channels, except the last two in which H\({}^{-}\) appears to arise from the OH site, can contribute to this resonance. Hence we conclude that at 9.5 eV, the C-H bond cleavage is associated with a multiple fragmentation process.
### **OH\({}^{-}\) formation at 9.3 eV**
We also looked for heavier ions but detected only OH\({}^{-}\), which shows a peak in the ion yield curve around 9.3 eV (Figure 6(a)). In a photodissociation study of methanol, the presence of high-energy OH radicals at 157 nm (\(\sim\)7.89 eV) was reported by Yang _et al._ [25]. According to _ab initio_ calculations, there is an avoided crossing between the 3p and 3s surfaces: oxygen lone pairs are excited to the 3p state by the 157 nm photon, and by internal conversion they transfer to the 3s state, which leads to the breaking of the C-O bond. The presence of high-energy OH in this photodissociation channel shows it to be a two-body breakup. Since oxygen lone pairs are involved in the process, a similar dissociation producing OH can be expected in ethanol [26]. In our case, we detected OH\({}^{-}\), which peaks in the ion yield curve at around 9.3 eV. The lowest energy pathway for OH\({}^{-}\) formation is the cleavage of the C-O bond in a two-body breakup.
\[\text{e}+\text{C}_{2}\text{H}_{5}\text{OH}\rightarrow(\text{C}_{2}\text{H}_{5 }\text{OH}^{-})^{*}\rightarrow\text{C}_{2}\text{H}_{5}+\text{OH}^{-} \tag{10}\]
Using the heat of formation of C\({}_{2}\)H\({}_{5}\) (119 kJ/mol) and the electron affinity of OH (176.34 kJ/mol), we arrive at the value of 2.24 eV as the thermodynamic threshold for OH\({}^{-}\) formation from this channel [13]. But we have observed the ion yield peak at 9.3 eV, which is 7 eV above the thermodynamic threshold and 1.4 eV above the photodissociation limit. We have also found both OH\({}^{-}\) and OD\({}^{-}\) from C\({}_{2}\)H\({}_{5}\)OD, as can be seen from the mass spectrum (Figure 6(c)), where we have fitted the two peaks (OH\({}^{-}\) and OD\({}^{-}\)) with Gaussians of appropriate widths. These peaks indicate that hydrogen scrambling occurs before dissociation. The VSI image (Figure 6(b)) of OH\({}^{-}\) also shows very low kinetic energy (0.5 eV) for this ion. Hence, we can expect that more than one fragment may be formed during the DEA process. One possibility for such channels is
\[\mathrm{e}+\mathrm{C}_{2}\mathrm{H}_{5}\mathrm{OD}\rightarrow(\mathrm{C}_{2} \mathrm{H}_{5}\mathrm{OD}^{-})^{*}\rightarrow\mathrm{C}_{2}\mathrm{H}_{4}+ \mathrm{OD}^{-}+\mathrm{D}\] (11a) (11b) (11c)
However, as one of the fragments is an H/D atom, it would carry most of the excess energy, leaving very little kinetic energy in the anion channel. This also suggests that the parent state of the resonance responsible for OH\({}^{-}\) formation is different from the photodissociation channel. The obtained momentum image of OH\({}^{-}\) is consistent with this picture (Figure 6(b)).
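For reference, the 2.24 eV threshold for channel (10) quoted above can be reconstructed as follows; \(\Delta H_{f}(\mathrm{OH})\approx 39.0\) kJ/mol and \(\Delta H_{f}(\mathrm{C_{2}H_{5}OH})\approx-234.8\) kJ/mol are standard gas-phase values assumed here, not quoted in the text:

\[E_{th}=\Delta H_{f}(\mathrm{C_{2}H_{5}})+\Delta H_{f}(\mathrm{OH})-EA(\mathrm{OH})-\Delta H_{f}(\mathrm{C_{2}H_{5}OH})\approx(119+39.0-176.34+234.8)\ \mathrm{kJ/mol}\approx 216\ \mathrm{kJ/mol}\approx 2.24\ \mathrm{eV}.\]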
## IV Conclusion
We have shown that _DEA_ to ethanol exhibits strong site selectivity, i.e., the O-H and C-H sites follow their characteristic dissociation patterns. We have measured the kinetic energy and angular distributions of the H\({}^{-}\) channel and found that the O-H site breakage results from a two-body dissociation without any scrambling. The 6.5 eV resonance shows a substantial effect of the torsional modes of vibration on the electron attachment process, which manifests in the observed angular distribution of the H\({}^{-}\) ions. The gauche conformer of the molecule may also be responsible for the observed deviation of the angular distribution from that expected under the axial recoil approximation. The 8 eV resonance leaves considerable energy in the internal excitation of the molecular fragment. In contrast, the C-H site breaking corresponding to the 9.5 eV resonance is associated with a many-body breakup, consistent with earlier reports for C-H sites in other organic molecules. We have also measured the OH\({}^{-}\) channel in terms
Figure 6: (a) Ion yield curve for OH\({}^{-}\) from ethanol (b) Momentum image of OH\({}^{-}\) obtained at 9.3eV from ethanol. (c) A part of the mass spectrum obtained for ethanol and partially deuterated ethanol (C\({}_{2}\)H\({}_{5}\)OD) at 9.3 eV.
of its kinetic energy and angular distribution and conclude that this channel is associated with hydrogen scrambling and many-body breakup.
## Acknowledgement
The authors acknowledge the financial support from the Department of Atomic Energy, India, under Project Identification No. RTI4002.
|
2307.01788 | A Radon-Nikodým Theorem for Valuations | We enquire under which conditions, given two $\sigma$-finite,
$\omega$-continuous valuations $\nu$ and $\mu$, $\nu$ has density with respect
to $\mu$. The answer is that $\nu$ has to be absolutely continuous with respect
to $\mu$, plus a certain Hahn decomposition property, which happens to be
always true for measures. | Jean Goubault-Larrecq | 2023-07-04T15:54:04Z | http://arxiv.org/abs/2307.01788v2 | # A Radon-Nikodym Theorem for Valuations
###### Abstract.
We enquire under which conditions, given two \(\sigma\)-finite, \(\omega\)-continuous valuations \(\nu\) and \(\mu\), \(\nu\) has density with respect to \(\mu\). The answer is that \(\nu\) has to be absolutely continuous with respect to \(\mu\), plus a certain Hahn decomposition property, which happens to be always true for measures.
Key words and phrases: Measure, valuation, Radon-Nikodym derivative, density function. 2010 Mathematics Subject Classification: Primary 28C15; Secondary 60B05.
## 1. Introduction
In its simplest form, the Radon-Nikodym theorem [19, 16] states that a \(\sigma\)-finite measure \(\nu\) has a measurable density with respect to a \(\sigma\)-finite measure \(\mu\) if and only if \(\nu\) is absolutely continuous with respect to \(\mu\). The purpose of this paper is to investigate a similar question in the larger setting of \(\omega\)-continuous valuations, a setting which encompasses both measures and the continuous valuations used in the semantics of probabilistic programming languages [12, 11].
Probably the distinguishing feature of valuations compared to measures is that they give mass to sets forming a collection that is not necessarily closed under complements: a lattice of subsets for valuations, a topology for continuous valuations, and what we call an \(\omega\)-topology for \(\omega\)-continuous valuations.
Sets equipped with such collections of sets are Pervin spaces, topological spaces, and what we call \(\omega\)-topological spaces, respectively. They form the categories \(\mathbf{Perv}\), \(\mathbf{Top}\) and \(\omega\mathbf{Top}\), respectively.
As we will see, the question of the existence of density maps is more about the category in which the density maps should reside, not so much about the distinction between valuations and measures. Indeed, on sufficiently nice topological spaces, continuous valuations and measures are essentially the
same thing, and therefore _measurable_ density maps will exist under the familiar assumptions of the classical Radon-Nikodym theorem. This will also entail that they do not exist in general as morphisms in \(\mathbf{Top}\) or \(\omega\mathbf{Top}\), as we will see in Section 4.2.
Hence some additional assumptions are needed to ensure that density maps exist in the relevant categories, and it is the purpose of this paper to identify them.
_Outline._ We give brief preliminaries in Section 2, and we develop the theory of valuations, including \(\omega\)-continuous valuations, measures and continuous valuations, in Section 3. We develop necessary conditions for density maps to exist in Section 4, and we show that they are sufficient in Section 5. Our final result includes the classical Radon-Nikodym theorem as a special case.
## 2. Preliminaries
We assume some basic knowledge about topology [9] and about measure theory [3]. We will need the following from domain theory [9, 8].
A _directed_ family \(D\) in a poset \(P\) is a non-empty family such that any two elements of \(D\) have a common upper bound in \(D\). A _dcpo_ (short for directed-complete partial order) is a poset in which every directed family has a supremum. We write \(\sup^{\uparrow}D\), or \(\sup^{\uparrow}_{i\in I}x_{i}\) if \(D=\left(x_{i}\right)_{i\in I}\), for directed suprema. We also write \(\bigcup^{\uparrow}\) for directed union. An \(\omega\)_cpo_ is defined similarly, except that we only require the existence of suprema of _monotone sequences_\(\left(x_{n}\right)_{n\in\mathbb{N}}\) (namely, \(x_{0}\leq x_{1}\leq\cdots\leq x_{n}\leq\cdots\)) instead of directed families.
A function \(f\colon X\to Y\) between dcpos is _Scott-continuous_ if and only if it is monotonic (order-preserving) and preserves suprema of directed sets, namely \(\sup^{\uparrow}_{i\in I}f(x_{i})=f(\sup^{\uparrow}_{i\in I}x_{i})\) for every directed family \(\left(x_{i}\right)_{i\in I}\). It is \(\omega\)_-continuous_ if and only if it is monotonic and preserves suprema of monotone sequences.
The _Scott topology_ on a dcpo has as open sets those subsets \(U\) that are upwards-closed (if \(x\in U\) and \(x\leq y\) then \(y\in U\)) and such that every directed family \(D\) such that \(\sup^{\uparrow}D\in U\) intersects \(U\). The Scott-continuous maps are exactly the continuous maps with respect to the Scott topologies.
## 3. Valuations and measures
As our general setting, we will consider pairs \((X,\mathcal{L})\) where \(X\) is a set and \(\mathcal{L}\) is a lattice of subsets, namely a family of subsets of \(X\) that is closed
under finite intersections and finite unions. In particular, the empty set and \(X\) belong to \(\mathcal{L}\).
We retrieve topological spaces by requiring that \(\mathcal{L}\) be closed under arbitrary unions; or just under directed unions. Indeed, the union of any family \(\left(U_{i}\right)_{i\in I}\) of subsets of \(X\) is equal to the directed supremum \(\bigcup_{J\text{ finite }\subseteq I}^{\uparrow}\bigcup_{j\in J}U_{j}\).
We will call \(\omega\)_-topology_ on \(X\) any lattice of subsets \(\mathcal{L}\) that is at the same time an \(\omega\)cpo under inclusion. Then \((X,\mathcal{L})\) is an \(\omega\)_-topological space_. It is equivalent to require that \(\mathcal{L}\) be closed under countable unions, since the union of any countable family \(\left(U_{n}\right)_{n\in\mathbb{N}}\) of elements of \(\mathcal{L}\) is the union \(\bigcup_{n\in\mathbb{N}}^{\uparrow}\bigcup_{i=0}^{n}U_{i}\) of a chain of elements of \(\mathcal{L}\).
A lattice of subsets \(\mathcal{L}\) that is closed under complements is an _algebra of subsets_, and an \(\omega\)-topology \(\mathcal{L}\) that is closed under complements is the same thing as a \(\sigma\)-algebra. Then \((X,\mathcal{L})\) is called a _measurable space_.
There are categories \(\mathbf{Perv}\), \(\mathbf{BPerv}\), \(\mathbf{Top}\), \(\omega\mathbf{Top}\) and \(\mathbf{Mes}\) whose objects are pairs \((X,\mathcal{L})\) where \(\mathcal{L}\) is a lattice of subsets, resp. an algebra of subsets, resp. a topology, resp. an \(\omega\)-topology, resp. a \(\sigma\)-algebra. In each case, the morphisms \(f\colon(X,\mathcal{L})\to(Y,\mathcal{L}^{\prime})\) are the maps \(f\colon X\to Y\) such that \(f^{-1}(V)\in\mathcal{L}\) for every \(V\in\mathcal{L}^{\prime}\). They are called _continuous_ maps on \(\mathbf{Top}\), and _measurable_ maps on \(\mathbf{Mes}\). The categories \(\mathbf{Perv}\) and \(\mathbf{BPerv}\) are the categories of _Pervin spaces_ and _Boolean Pervin spaces_ respectively [18, Section 3.1]. Those categories are all full subcategories of \(\mathbf{Perv}\).
Let \(\overline{\mathbb{R}}_{+}\) be the dcpo of extended non-negative real numbers \(\mathbb{R}_{+}\cup\{\infty\}\), with the usual ordering \(\leq\) extended by the stipulation that \(r\leq\infty\) for every \(r\in\overline{\mathbb{R}}_{+}\). We will always equip \(\overline{\mathbb{R}}_{+}\) with the Scott topology of \(\leq\), making it an object of all the categories mentioned above. The open subsets of that Scott topology are the half-open intervals \(]t,\infty]\), \(t\in\mathbb{R}_{+}\), plus \(\overline{\mathbb{R}}_{+}\) and \(\emptyset\).
We write \(\mathfrak{L}(X,\mathcal{L})\) for the set of morphisms from \((X,\mathcal{L})\) to \(\overline{\mathbb{R}}_{+}\) (implicitly equipped with its Scott topology), in any full subcategory of \(\mathbf{Perv}\) containing \(\overline{\mathbb{R}}_{+}\). In other words, the elements \(h\) of \(\mathfrak{L}(X,\mathcal{L})\) are the functions \(h\colon X\to\overline{\mathbb{R}}_{+}\) such that \(h^{-1}(]t,\infty])\in\mathcal{L}\) for every \(t\in\mathbb{R}_{+}\).
When \((X,\mathcal{L})\) is a measurable space, \(\mathfrak{L}(X,\mathcal{L})\) is the set of all measurable maps from \((X,\mathcal{L})\) to \(\overline{\mathbb{R}}_{+}\) with its usual Borel \(\sigma\)-algebra, generated by the intervals. This is because one can write any interval as a Boolean combination of intervals of the form \(]t,\infty]\). When \((X,\mathcal{L})\) is a topological space, \(\mathfrak{L}(X,\mathcal{L})\) is known as the set of _lower semicontinuous maps_ from \((X,\mathcal{L})\) to \(\overline{\mathbb{R}}_{+}\).
If \(\mathcal{L}\) is an \(\omega\)-topology (resp., a topology), then \(\mathfrak{L}(X,\mathcal{L})\) is an \(\omega\)cpo (resp., a dcpo) under the pointwise ordering defined by \(h\leq h^{\prime}\) if and only if
\(h(x)\leq h^{\prime}(x)\) for every \(x\in X\); additionally, suprema of monotone sequences (resp., directed suprema) are computed pointwise: \((\sup_{i\in I}^{\uparrow}h_{i})(x)=\sup_{i\in I}^{\uparrow}(h_{i}(x))\). In order to see this, it suffices to show that the pointwise supremum \(x\mapsto\sup_{i\in I}^{\uparrow}(h_{i}(x))\) is in \(\mathfrak{L}(X,\mathcal{L})\); and the inverse image of \(]t,\infty]\) under that map is \(\bigcup_{i\in I}^{\uparrow}h_{i}^{-1}(]t,\infty])\), since \(]t,\infty]\) is Scott-open.
Given any Pervin space \((X,\mathcal{L})\), a _valuation_\(\nu\) on \((X,\mathcal{L})\) is a map \(\nu\colon\mathcal{L}\to\overline{\mathbb{R}}_{+}\) that is:
* _strict_: \(\nu(\emptyset)=0\);
* _monotonic_: \(U\subseteq V\) implies \(\nu(U)\leq\nu(V)\);
* _modular_: for all \(U,V\in\mathcal{L}\), \(\nu(U)+\nu(V)=\nu(U\cup V)+\nu(U\cap V)\).
A _continuous valuation_ is a valuation that is Scott-continuous, and an \(\omega\)_-continuous valuation_ is a valuation that is \(\omega\)-continuous.
Continuous valuations have been the cornerstone of the domain-theoretic semantics of probabilistic languages since Claire Jones' PhD thesis [12, 11], and had first been studied by Nait Saheb-Djahromi [20]. The concept of valuation is older, and dates back to Smiley [21], Horn and Tarski [10], and Pettis [17], at least; see [14].
An \(\omega\)-continuous valuation on a measurable space \((X,\mathcal{L})\) is a _measure_. Measures are usually defined as \(\sigma\)-additive maps \(\nu\colon\mathcal{L}\to\overline{\mathbb{R}}_{+}\), but the two definitions are equivalent. Let us recall that \(\nu\colon\mathcal{L}\to\overline{\mathbb{R}}_{+}\) is _additive_ (where \(\mathcal{L}\) is any lattice of subsets) if and only if \(\nu(\emptyset)=0\) and \(\nu(U\cup V)=\nu(U)+\nu(V)\) for every pair of disjoint sets \(U,V\in\mathcal{L}\), and \(\sigma\)_-additive_ (where \(\mathcal{L}\) is any \(\omega\)-topology) if and only if \(\nu(\bigcup_{i\in I}U_{i})=\sum_{i\in I}\nu(U_{i})\) for every countable family \(\left(U_{i}\right)_{i\in I}\) of pairwise disjoint elements \(U_{i}\) of \(\mathcal{L}\). The equivalence of \(\omega\)-continuous valuations and \(\sigma\)-additive maps on \(\sigma\)-algebras follows from the following facts.
* If \(\mathcal{L}\) is an algebra of subsets, then the additive maps \(\nu\colon\mathcal{L}\to\overline{\mathbb{R}}_{+}\) are exactly the valuations on \((X,\mathcal{L})\). Indeed, if \(\nu\) is additive, then strictness is clear, monotonicity follows from the fact that if \(U\subseteq V\), then \(\nu(V)=\nu(V\smallsetminus U)+\nu(U)\geq\nu(U)\), and modularity from \(\nu(U)+\nu(V)=\nu(U\smallsetminus V)+\nu(U\cap V)+\nu(V)=\nu(U\cap V)+\nu(U \cup V)\). Conversely, any valuation \(\nu\) is additive, since if \(U\) and \(V\) are disjoint, then \(\nu(U\cup V)=\nu(U\cup V)+\nu(U\cap V)=\nu(U)+\nu(V)\).
* If \(\mathcal{L}\) is an \(\omega\)-topology, then the \(\sigma\)-additive maps are exactly the \(\omega\)-continuous, additive maps. This follows from the fact that every countably infinite union \(\bigcup_{n\in\mathbb{N}}U_{n}\) can be written as \(\bigcup_{n\in\mathbb{N}}^{\uparrow}\bigcup_{i=0}^{n}U_{i}\), plus additivity.
Addition is Scott-continuous on \(\overline{\mathbb{R}}_{+}\), and it follows that valuations on \((X,\mathcal{L})\) form a dcpo under the _stochastic ordering_, defined by \(\mu\leq\nu\) if and only if \(\mu(U)\leq\nu(U)\) for every \(U\in\mathcal{L}\); directed suprema are computed pointwise: \((\sup_{i\in I}^{\uparrow}\nu_{i})(U)=\sup_{i\in I}^{\uparrow}(\nu_{i}(U))\). The same can be said for continuous valuations on a topological space, or for \(\omega\)-continuous valuations on an \(\omega\)-topological space, hence also for measures on a measurable space, since suprema commute.
The simplest way to define a notion of integration is by the following _Choquet formula_[5, Chapter VII, Section 48.1, p. 265]:
\[\int_{x\in X}h(x)\;d\nu\stackrel{{\mathrm{def}}}{{=}}\int_{0}^{\infty}\nu(h^{-1}(]t,\infty]))\;dt, \tag{3.1}\]
for every function \(h\in\mathfrak{L}(X,\mathcal{L})\), and for every valuation \(\nu\) on \((X,\mathcal{L})\). The integral on the right is an ordinary improper Riemann integral, which is well-defined because the map \(t\mapsto\nu(h^{-1}(]t,\infty]))\) is antitonic (order-reversing). Indeed, it is easy to see that, for any antitonic map \(f\colon\mathbb{R}_{+}\to\overline{\mathbb{R}}_{+}\), \(\int_{0}^{\infty}f(t)\;dt\) is the supremum of the monotone sequence of lower Darboux sums \(\frac{1}{2^{N}}\sum_{k=1}^{N2^{N}}f(\frac{k}{2^{N}})\), \(N\in\mathbb{N}\). This was already observed in the proof of Lemma 4.2 of Regina Tix's master's thesis [22], which also contains the following statement; the proof boils down to a familiar commutation of suprema.
**Fact 1** (Lemma 4.2, 3rd item, of [22]).: _Riemann integration is Scott-continuous in the integrated antitonic map. In particular, for any directed family \(\left(f_{i}\right)_{i\in I}\) (countable or not) of antitonic maps from \(\mathbb{R}_{+}\) to \(\overline{\mathbb{R}}_{+}\), in the pointwise ordering, \(\int_{0}^{\infty}\sup_{i\in I}^{\uparrow}f_{i}(t)\;dt=\sup_{i\in I}^{\uparrow} \int_{0}^{\infty}f_{i}(t)\;dt\)._
Equation (3.1) makes sense for more general set functions \(\nu\) than just valuations, but we will not make use of this. We also write \(\int h\;d\nu\) for \(\int_{x\in X}h(x)\;d\nu\).
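Although nothing in the sequel depends on it, the Choquet formula (3.1) is easy to evaluate numerically through the lower Darboux sums just mentioned. The following sketch is our own illustration, in which a valuation is encoded through the map \(t\mapsto\nu(h^{-1}(]t,\infty]))\); it checks (3.1) on a Dirac valuation and on Lebesgue measure restricted to \([0,1]\).

```python
# A numeric sketch (our own illustration) of the Choquet formula (3.1):
# int h dnu = int_0^infty nu(h^{-1}(]t, oo])) dt, approximated by the lower
# Darboux sums (1/2^N) * sum_{k=1}^{N*2^N} f(k/2^N), truncated at t = N.
def choquet(superlevel_mass, N=12):
    step = 1.0 / 2**N
    return step * sum(superlevel_mass(k * step) for k in range(1, N * 2**N + 1))

# nu = Dirac valuation delta_x with x = 0.3, h(u) = u^2:
h = lambda u: u**2
x = 0.3
dirac = lambda t: 1.0 if h(x) > t else 0.0        # delta_x(h^{-1}(]t, oo]))
print(choquet(dirac))                              # ~ h(x) = 0.09

# nu = Lebesgue measure on [0, 1], h(u) = u:
lebesgue = lambda t: max(0.0, 1.0 - t)             # length of {u in [0,1] : u > t}
print(choquet(lebesgue))                           # ~ 1/2
```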
We sum up the main properties of the Choquet integral in the following proposition; \(h\), \(h^{\prime}\) and \(h_{i}\) stand for arbitrary elements of \(\mathfrak{L}(X,\mathcal{L})\), \(\mu\), \(\nu\) and \(\nu_{i}\) for valuations on \((X,\mathcal{L})\), and \(a\) and \(b\) are arbitrary elements of \(\mathbb{R}_{+}\). Addition and multiplication on \(\overline{\mathbb{R}}_{+}\) are defined in the obvious way, with the caveat that \(0.\infty=\infty.0=0\), so as to ensure that multiplication, not just addition, is Scott-continuous. On spaces of \(\overline{\mathbb{R}}_{+}\)-valued maps and of valuations, addition and scalar multiplication are defined pointwise. The _characteristic map_\(\chi_{U}\colon X\to\overline{\mathbb{R}}_{+}\) maps every \(x\in U\) to \(1\) and all other points to \(0\); \(\chi_{U}\) is in \(\mathfrak{L}(X,\mathcal{L})\) if and only if \(U\in\mathcal{L}\). The _Dirac valuation_\(\delta_{x}\) maps every \(U\in\mathcal{L}\) to \(1\) if \(x\in U\), to \(0\) otherwise; namely, \(\delta_{x}(U)=\chi_{U}(x)\). Given a morphism \(f\colon(X,\mathcal{L})\to(Y,\mathcal{L}^{\prime})\), the _image valuation_\(f[\nu]\) of any valuation \(\nu\) on \((X,\mathcal{L})\) is
defined by \(f[\nu](V)\stackrel{{\mathrm{def}}}{{=}}\nu(f^{-1}(V))\); this is a valuation, resp. an \(\omega\)-continuous valuation, resp. a measure, resp. a continuous valuation if \(\nu\) is.
**Proposition 2**.: _Choquet integration is:_
1. _linear in the valuation:_ \(\int h\;d(a\mu+b\nu)=a\int h\;d\mu+b\int h\;d\nu\)_;_
2. _Scott-continuous in the valuation:_ \(\int h\;d\sup_{i\in I}^{\uparrow}\nu_{i}=\sup_{i\in I}^{\uparrow}\int h\;d\nu_{i}\) _if_ \(\left(\nu_{i}\right)_{i\in I}\) _is directed;_
3. _linear in the integrated function if_ \((X,\mathcal{L})\) _is an_ \(\omega\)_-topological space and_ \(\nu\) _is an_ \(\omega\)_-continuous valuation:_ \(\int(ah+bh^{\prime})\;d\nu=a\int h\;d\nu+b\int h^{\prime}\;d\nu\)_;_
4. \(\omega\)_-continuous in the integrated function if_ \((X,\mathcal{L})\) _is an_ \(\omega\)_-topological space and_ \(\nu\) _is_ \(\omega\)_-continuous (in particular,_ \(\int\sup_{i\in\mathbb{N}}^{\uparrow}h_{i}\;d\nu=\sup_{i\in\mathbb{N}}^{\uparrow}\int h_{i}\;d\nu\)_), and Scott-continuous if_ \((X,\mathcal{L})\) _is a topological space and_ \(\nu\) _is a continuous valuation (notably,_ \(\int\sup_{i\in I}^{\uparrow}h_{i}\;d\nu=\sup_{i\in I}^{\uparrow}\int h_{i}\;d\nu\) _if_ \(\left(h_{i}\right)_{i\in I}\) _is directed)._
_Additionally,_
5. \(\int\chi_{U}\;d\nu=\nu(U)\) _for every_ \(U\in\mathcal{L}\)_;_
6. \(\int h\;d\delta_{x}=h(x)\) _for every_ \(x\in X\)_._
Proof.: The argument follows classical lines, most notably those of [22, Section 4].
Item \((i)\) follows from the fact that Riemann integration is itself linear, and \((ii)\) follows from Fact 1; monotonicity is clear. Item \((iv)\), in the Scott-continuous case, follows from the fact that \((\sup_{i\in I}^{\uparrow}h_{i})^{-1}(]t,\infty])=\bigcup_{i\in I}^{\uparrow}h_{i}^{-1}(]t,\infty])\), the fact that \(\nu\) is Scott-continuous, and Fact 1; the \(\omega\)-continuous case is proved similarly. As far as \((v)\) is concerned, we have \(\int\chi_{U}\;d\nu=\int_{0}^{\infty}\nu(\chi_{U}^{-1}(]t,\infty]))\;dt=\int_{0}^{\infty}f(t)\;dt\) where \(f\) maps every \(t\in[0,1[\) to \(\nu(U)\) and every \(t\geq 1\) to \(0\). For \((vi)\), \(\int h\;d\delta_{x}=\int_{0}^{\infty}\delta_{x}(h^{-1}(]t,\infty]))\;dt=\int_{0}^{\infty}g(t)\;dt\) where \(g\) maps every \(t<h(x)\) to \(1\), and every \(t\geq h(x)\) to \(0\). The only tricky point is to show item \((iii)\).
First, we have \(\int ah\;d\nu=\int_{0}^{\infty}\nu((ah)^{-1}(]t,\infty]))\;dt\). If \(a=0\), this is equal to \(0=a.\int h\;d\nu\). If \(a\neq 0,\infty\), this is equal to \(\int_{0}^{\infty}\nu(h^{-1}(]t/a,\infty]))\;dt=\int_{0}^{\infty}\nu(h^{-1}(]u,\infty])).a\;du=a\int_{0}^{\infty}\nu(h^{-1}(]u,\infty]))\;du=a\int h\;d\nu\). Hence \(\int ah\;d\nu=a\int h\;d\nu\) for every \(a\in\mathbb{R}_{+}\); this also holds when \(a=\infty\) by \((iv)\), since \(\infty=\sup^{\uparrow}\mathbb{N}\). Hence it suffices to show that \(\int(h+h^{\prime})\;d\nu=\int h\;d\nu+\int h^{\prime}\;d\nu\).
We proceed in steps. We fix \(h\). For every \(\epsilon\in\mathbb{R}_{+}\), and for every \(U\in\mathcal{L}\), we claim that:
\[\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty])\cup(h^{-1}(]t-\epsilon,\infty])\cap U))\;dt \tag{3.2}\] \[=\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty]))\;dt+\int_{0}^{\epsilon}\nu(h^{-1}(]t,\infty])\cap U)\;dt.\]
If \(\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty])\cap U)\ dt<\infty\), then we reason as follows. By the modularity law, the fact that the intersection of \(h^{-1}(]t,\infty])\) with \(h^{-1}(]t-\epsilon,\infty])\cap U\) simplifies to \(h^{-1}(]t,\infty])\cap U\), and the usual properties of Riemann integrals,
\[\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty])\cup(h^{-1}(]t- \epsilon,\infty])\cap U))\ dt\] \[=\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty]))\ dt+\int_{ \epsilon}^{\infty}\nu(h^{-1}(]t-\epsilon,\infty])\cap U)\ dt\] \[\qquad-\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty])\cap U)\ dt\] \[=\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty]))\ dt+\int_{0}^{ \infty}\nu(h^{-1}(]t,\infty])\cap U)\ dt\] \[\qquad-\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty])\cap U)\ dt\] \[=\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty]))\ dt+\int_{0}^{ \epsilon}\nu(h^{-1}(]t,\infty])\cap U)\ dt.\]
If \(\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty])\cap U)\ dt=\infty\), then since \(h^{-1}(]t,\infty])\cap U\) is included in \(h^{-1}(]t,\infty])\cup(h^{-1}(]t-\epsilon,\infty])\cap U)\), both sides of (3.2) are equal to \(\infty\).
Now, \(\int(h+\epsilon\chi_{U})\ d\nu\) is equal to \(\int_{0}^{\infty}\nu((h+\epsilon\chi_{U})^{-1}(]t,\infty]))\ dt\), and \((h+\epsilon\chi_{U})^{-1}(]t,\infty])\) is equal to \(h^{-1}(]t,\infty])\cup U\) if \(t<\epsilon\) and to \(h^{-1}(]t,\infty])\cup(h^{-1}(]t-\epsilon,\infty])\cap U)\) otherwise. Therefore:
\[\int(h+\epsilon\chi_{U})\ d\nu\] \[=\int_{0}^{\epsilon}\nu(h^{-1}(]t,\infty])\cup U)\ dt+\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty])\cup(h^{-1}(]t-\epsilon,\infty])\cap U))\ dt\] \[=\int_{0}^{\epsilon}\nu(h^{-1}(]t,\infty])\cup U)\ dt+\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty]))\ dt\] \[\qquad+\int_{0}^{\epsilon}\nu(h^{-1}(]t,\infty])\cap U)\ dt\qquad\text{by (3.2)}\] \[=\int_{0}^{\epsilon}\nu(h^{-1}(]t,\infty]))\ dt+\int_{\epsilon}^{\infty}\nu(h^{-1}(]t,\infty]))\ dt\] \[\qquad+\int_{0}^{\epsilon}\nu(U)\ dt\qquad\qquad\qquad\text{by modularity of $\nu$ under the $\int_{0}^{\epsilon}$ terms}\] \[=\int h\ d\nu+\epsilon\nu(U).\]
This being done, let a _very simple function_ be any map \(h^{\prime}\) of the form \(\epsilon\sum_{i=1}^{n}\chi_{U_{i}}\) where \(\epsilon\in\mathbb{R}_{+}\) and \(U_{i}\in\mathcal{L}\). By induction on \(n\), and using what we have just proved, we obtain that \(\int(h+h^{\prime})\ d\nu=\int h\ d\nu+\int h^{\prime}\ d\nu\).
Finally, every \(h^{\prime}\in\mathfrak{L}(X,\mathcal{L})\) is the supremum of the monotone sequence of very simple functions \(h^{\prime}_{N}\stackrel{{\rm def}}{{=}}\frac{1}{2^{N}}\sum_{i=1}^{N2^{N}}\chi_{h^{\prime-1}(]i/2^{N},\infty])}\), \(N\in\mathbb{N}\). Then \(\int(h+h^{\prime})\ d\nu=\sup_{N\in\mathbb{N}}^{\uparrow}\int(h+h^{\prime}_{N})\ d\nu=\sup_{N\in\mathbb{N}}^{\uparrow}(\int h\ d\nu+\int h^{\prime}_{N}\ d\nu)=\int h\ d\nu+\int h^{\prime}\ d\nu\) by using \((iv)\).
Property \((iv)\) is usually called the monotone convergence theorem (or the Beppo Levi theorem) when applied to measurable spaces and measures.
We will also use the following baby version of the Riesz representation theorem. A _linear_ map \(F\colon\mathfrak{L}(X,\mathcal{L})\to\overline{\mathbb{R}}_{+}\) is one such that \(F(ah)=aF(h)\) for all \(a\in\mathbb{R}_{+}\) (_positive homogeneity_) and \(h\in\mathfrak{L}(X,\mathcal{L})\) and \(F(h+h^{\prime})=F(h)+F(h^{\prime})\) for all \(h,h^{\prime}\in\mathfrak{L}(X,\mathcal{L})\) (_additivity_). It is equivalent to require \(F(ah+bh^{\prime})=aF(h)+bF(h^{\prime})\) for all \(a,b\in\mathbb{R}_{+}\) and \(h,h^{\prime}\in\mathfrak{L}(X,\mathcal{L})\); if \(F\) is \(\omega\)-continuous, then this extends to the cases where \(a\) or \(b\) or both is equal to \(\infty\).
**Proposition 3**.: _Let \((X,\mathcal{L})\) be an \(\omega\)-topological space (resp., a topological space). There is a one-to-one correspondence between \(\omega\)-continuous (resp., continuous) valuations \(\nu\) on \((X,\mathcal{L})\) and linear \(\omega\)-continuous (resp., Scott-continuous) maps \(F\colon\mathfrak{L}(X,\mathcal{L})\to\overline{\mathbb{R}}_{+}\). In one direction, \(F(h)\stackrel{{\text{def}}}{{=}}\int h\ d\nu\), and in the other direction, \(\nu(U)\stackrel{{\text{def}}}{{=}}F(\chi_{U})\)._
Proof.: We deal with the \(\omega\)-continuous case only, since the continuous case is similar. The continuous case was also dealt with by Tix [22, Satz 4.16], using similar arguments. Given an \(\omega\)-continuous valuation \(\nu\), the map \(F_{\nu}\colon h\mapsto\int h\ d\nu\) is \(\omega\)-continuous and linear by items \((ii)\) and \((iv)\) of Proposition 2. Conversely, given an \(\omega\)-continuous linear map \(F\colon\mathfrak{L}(X,\mathcal{L})\to\overline{\mathbb{R}}_{+}\), we define \(\nu_{F}(U)\stackrel{{\text{def}}}{{=}}F(\chi_{U})\). Then \(\nu_{F}\) is strict since \(F\) maps the constant \(0\) map to \(0\) by positive homogeneity, \(\omega\)-continuous since \(F\) is, and since the map \(U\mapsto\chi_{U}\) is itself \(\omega\)-continuous, and modular because of the equality \(\chi_{U}+\chi_{V}=\chi_{U\cup V}+\chi_{U\cap V}\) and the additivity of \(F\). We have \(\nu_{F_{\nu}}=\nu\), because for every \(U\in\mathcal{L}\), \(\nu_{F_{\nu}}(U)=F_{\nu}(\chi_{U})=\int\chi_{U}\ d\nu=\nu(U)\) by item \((v)\) of Proposition 2. In order to show that \(F_{\nu_{F}}=F\), we realize that \(F_{\nu_{F}}(\chi_{U})=\int\chi_{U}\ d\nu_{F}=\nu_{F}(U)=F(\chi_{U})\) by item \((v)\) of Proposition 2. Then, by the linearity of the integral (item \((iii)\)), \(F_{\nu_{F}}(h)=F(h)\) for every very simple function (as introduced in the proof of Proposition 2), and since every element of \(\mathfrak{L}(X,\mathcal{L})\) is a supremum of a monotone sequence of very simple functions, we conclude by the \(\omega\)-continuity of \(F\) and of \(F_{\nu_{F}}\) (item \((iv)\)) that \(F_{\nu_{F}}=F\).
## 4. Density maps
**Lemma 4**.: _Let \((X,\mathcal{L})\) be an \(\omega\)-topological space, let \(g\in\mathfrak{L}(X,\mathcal{L})\), and \(\mu\) be an \(\omega\)-continuous valuation on \((X,\mathcal{L})\)._
_The map that sends every \(h\in\mathfrak{L}(X,\mathcal{L})\) to \(\int hg\;d\mu\) is well-defined, linear and \(\omega\)-continuous._
_It is Scott-continuous provided that \(\mathcal{L}\) is a topology and \(\mu\) is Scott-continuous._
Proof.: We must first show that the integral makes sense, namely that the product map \(hg\) is in \(\mathfrak{L}(X,\mathcal{L})\). The multiplication map \(a,b\mapsto ab\) is Scott-continuous from \(\overline{\mathbb{R}}_{+}\times\overline{\mathbb{R}}_{+}\) to \(\overline{\mathbb{R}}_{+}\); hence, for every \(t>0\), \(ab>t\) if and only if there are two rational numbers \(p,q>0\) such that \(a>p\), \(b>q\) and \(pq>t\). For every \(t>0\), \((hg)^{-1}(]t,\infty])\) is then equal to \(\bigcup_{p,q\in\mathbb{Q},pq>t}h^{-1}(]p,\infty])\cap g^{-1}(]q,\infty])\). That is a countable union, hence it is in \(\mathcal{L}\). Therefore \(hg\) is in \(\mathfrak{L}(X,\mathcal{L})\).
Since multiplication by \(g\) is linear and \(\omega\)-continuous (even Scott-continuous), the remaining claims follow from items \((iii)\) and \((iv)\) of Proposition 2.
Proposition 3 then turns this \(\omega\)-continuous linear function into an \(\omega\)-continuous valuation, defined as follows.
**Definition 5**.: _For every \(\omega\)-topological space \((X,\mathcal{L})\), for every \(g\in\mathfrak{L}(X,\mathcal{L})\), and for every \(\omega\)-continuous valuation \(\mu\) on \((X,\mathcal{L})\), we define:_
\[(g\cdot\mu)(U)\stackrel{{ def}}{{=}}\int\chi_{U}.g\;d\mu \tag{4.1}\]
_for every \(U\in\mathcal{L}\)._
Lemma 4 and Proposition 3 together yield the following.
**Proposition 6**.: _For every \(\omega\)-topological space \((X,\mathcal{L})\), for every \(g\in\mathfrak{L}(X,\mathcal{L})\), and for every \(\omega\)-continuous valuation \(\mu\) on \((X,\mathcal{L})\),_
1. \(g\cdot\mu\) _is an_ \(\omega\)_-continuous valuation;_
2. \(g\cdot\mu\) _is a continuous valuation if_ \((X,\mathcal{L})\) _is a topological space and_ \(\mu\) _is a continuous valuation;_
3. _For every_ \(h\in\mathfrak{L}(X,\mathcal{L})\)_,_ \[\int h\;d(g\cdot\mu)=\int hg\;d\mu.\] (4.2)
In particular, if \(\mathcal{L}\) is a \(\sigma\)-algebra, then \(g\cdot\mu\) is a measure for every measure \(\mu\) and every measurable map \(g\) from \(X\) to \(\overline{\mathbb{R}}_{+}.\) The measure \(g\cdot\mu\) is sometimes written as \(g\;d\mu.\)
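As a concrete sanity check of Definition 5, one can compute \((g\cdot\mu)(U)\) through the formula of Lemma 10 below, \((g\cdot\mu)(U)=\int_{0}^{\infty}\mu(U\cap g^{-1}(]t,\infty]))\;dt\), for a discrete valuation; the result must be \(\sum_{x_{i}\in U}w_{i}g(x_{i})\). The following is our own illustrative sketch, with arbitrarily chosen points, weights and density map.

```python
# Our own sketch of Definition 5 via the formula of Lemma 10 below:
# (g . mu)(U) = int_0^infty mu(U /\ g^{-1}(]t, oo])) dt. For the discrete
# valuation mu = sum_i w_i * delta_{x_i}, this must equal
# sum over the x_i in U of w_i * g(x_i).
points = [(0.0, 1.0), (0.5, 2.0), (1.0, 4.0)]    # pairs (x_i, w_i)
g = lambda x: x + 1.0                             # an arbitrary density map
U = lambda x: x >= 0.5                            # membership test for U

def g_dot_mu(U, N=12):
    step = 1.0 / 2**N
    total = 0.0
    for k in range(1, N * 2**N + 1):
        t = k * step
        # mu(U /\ g^{-1}(]t, oo])) for the discrete valuation mu
        total += step * sum(w for x, w in points if U(x) and g(x) > t)
    return total

print(g_dot_mu(U))   # ~ 2.0 * g(0.5) + 4.0 * g(1.0) = 2*1.5 + 4*2.0 = 11.0
```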
Given two valuations \(\mu\) and \(\nu\) on \((X,\mathcal{L})\), one may wonder when one can write \(\nu\) as \(g\cdot\mu\) for some suitable map \(g\)--this is the goal of this paper. If \(\nu=g\cdot\mu,\) then we will see that \(\nu\) and \(\mu\) must satisfy two conditions: absolute continuity, and what we call the Hahn decomposition property, after the Hahn decomposition theorem of measure theory.
### Absolute continuity
We take the following definition of absolute continuity. While different from the usual definition, it is not entirely unusual, see for example [4].
**Definition 7** (Absolute continuity).: _Given two valuations \(\mu\) and \(\nu\) on a Pervin space \((X,\mathcal{L})\), we say that \(\nu\) is absolutely continuous with respect to \(\mu\) if and only if for every \(U_{0}\in\mathcal{L}\) such that \(\nu(U_{0})<\infty\), for every \(\epsilon\in\mathbb{R}_{+}\smallsetminus\{0\}\), there is an \(\eta\in\mathbb{R}_{+}\smallsetminus\{0\}\) such that for every \(U\in\mathcal{L}\) such that \(U\subseteq U_{0}\) and \(\mu(U)<\eta\), \(\nu(U)<\epsilon\)._
**Remark 8**.: _When \(\nu\) is a bounded valuation, the definition of absolute continuity simplifies to: for every \(\epsilon\in\mathbb{R}_{+}\smallsetminus\{0\}\), there is an \(\eta\in\mathbb{R}_{+}\smallsetminus\{0\}\) such that for every \(U\in\mathcal{L}\) such that \(\mu(U)<\eta\), \(\nu(U)<\epsilon\)._
The usual definition of absolute continuity is given as item (2) in the following proposition, where we show that it is equivalent in the case of \(\sigma\)-finite measures. A valuation \(\nu\) on a Pervin space \((X,\mathcal{L})\) is _\(\sigma\)-finite_ if and only if there is a countable family of sets \(E_{n}\in\mathcal{L}\), \(n\in\mathbb{N}\), such that \(\bigcup_{n\in\mathbb{N}}E_{n}=X\) and \(\nu(E_{n})<\infty\) for each \(n\in\mathbb{N}\). Replacing \(E_{n}\) by \(\bigcup_{k=0}^{n}E_{k}\) if necessary, we may assume that \(\left(E_{n}\right)_{n\in\mathbb{N}}\) is a monotone sequence. This definition applies to measures as well, in which case we retrieve the usual notion of \(\sigma\)-finiteness. Considering Remark 8, the following is well-known for bounded measures [3, page 422], and the proof is entirely similar.
**Proposition 9** (Absolute continuity, simplified).: _Let \(\nu\), \(\mu\) be two measures on a measurable space \((X,\mathcal{L})\), and consider the following statements._
1. \(\nu\) _is absolutely continuous with respect to_ \(\mu\)_;_
2. _for every_ \(U\in\mathcal{L}\) _such that_ \(\mu(U)=0\)_,_ \(\nu(U)=0\)_._
_Then \((2)\) implies \((1)\), and \((1)\) and \((2)\) are equivalent if \(\nu\) is \(\sigma\)-finite._
Proof.: Let us assume that \((2)\) holds, but not \((1)\). There is an \(\epsilon>0\) and a set \(U_{0}\in\mathcal{L}\) such that \(\nu(U_{0})<\infty\) and, for every \(n\in\mathbb{N}\), letting \(\eta\stackrel{{\mathrm{def}}}{{=}}1/2^{n}\), there is an element \(V_{n}\in\mathcal{L}\) with \(V_{n}\subseteq U_{0}\) such that \(\mu(V_{n})<1/2^{n}\) but \(\nu(V_{n})\geq\epsilon\). In particular, \(\sum_{n=0}^{\infty}\mu(V_{n})<\infty\), so by the first Borel-Cantelli lemma [3, Theorem 4.3], \(\mu(\bigcap_{m\in\mathbb{N}}\bigcup_{n\geq m}V_{n})=0\). Using \((2)\), it follows that \(\nu(\bigcap_{m\in\mathbb{N}}\bigcup_{n\geq m}V_{n})=0\). The sets \(\bigcup_{n\geq m}V_{n}\) form a decreasing sequence of elements of \(\mathcal{L}\) included in \(U_{0}\), hence of finite \(\nu\)-measure. Therefore \(\inf_{m\in\mathbb{N}}\nu(\bigcup_{n\geq m}V_{n})=0\). This is impossible, since for every \(m\in\mathbb{N}\), \(\nu(\bigcup_{n\geq m}V_{n})\geq\nu(V_{m})\geq\epsilon\).
Conversely, we assume that \((1)\) holds and that \(\nu\) is \(\sigma\)-finite. Let \(\left(E_{n}\right)_{n\in\mathbb{N}}\) be a monotone sequence of elements of \(\mathcal{L}\) covering \(X\) and such that \(\nu(E_{n})<\infty\) for every \(n\in\mathbb{N}\). Let also \(U\in\mathcal{L}\) be such that \(\mu(U)=0\). For every \(n\in\mathbb{N}\), \(U\cap E_{n}\) is included in \(E_{n}\), and \(\mu(U\cap E_{n})=0\), so by absolute continuity, for every \(\epsilon>0\), \(\nu(U\cap E_{n})<\epsilon\). Since \(\epsilon\) is arbitrary, \(\nu(U\cap E_{n})=0\). Then \(\nu(U)=\nu(U\cap\bigcup_{n\in\mathbb{N}}^{\uparrow}E_{n})=\sup_{n\in\mathbb{N}}^{\uparrow}\nu(U\cap E_{n})=0\).
We will use the following often.
**Lemma 10**.: _Let \((X,\mathcal{L})\) be a Pervin space, \(\mu\) be a valuation on \((X,\mathcal{L})\) and \(g\in\mathfrak{L}(X,\mathcal{L})\). For every \(U\in\mathcal{L}\), \((g\cdot\mu)(U)=\int_{0}^{\infty}\mu(U\cap g^{-1}(]t,\infty]))\;dt\)._
Proof.: Let \(\nu\stackrel{{\rm def}}{{=}}g\cdot\mu\). For every \(U\in\mathcal{L}\), we write \(\nu(U)\stackrel{{\rm def}}{{=}}\int\chi_{U}g\;d\mu\) as \(\int_{0}^{\infty}\mu((\chi_{U}g)^{-1}(]t,\infty]))\;dt\). For every \(t\in\mathbb{R}_{+}\), \((\chi_{U}g)^{-1}(]t,\infty])=U\cap g^{-1}(]t,\infty])\), whence the result.
**Proposition 11**.: _Let \(\mu\) and \(\nu\) be two valuations on a Pervin space \((X,\mathcal{L})\). If \(\nu=g\cdot\mu\) for some function \(g\in\mathfrak{L}(X,\mathcal{L})\), then \(\nu\) is absolutely continuous with respect to \(\mu\)._
Proof.: Let us fix \(\epsilon\in\mathbb{R}_{+}\smallsetminus\{0\}\) and \(U_{0}\in\mathcal{L}\) such that \(\nu(U_{0})<\infty\).
Let \(h(t)\stackrel{{\rm def}}{{=}}\mu(U_{0}\cap g^{-1}(]t,\infty]))\), and \(h_{N}(t)\) be defined as \(h(t)\) if \(t\leq N\), \(0\) otherwise. The maps \(h_{N}\), \(N\in\mathbb{N}\), are antitonic, and their pointwise supremum is \(h\). Using Lemma 10, with \(U\stackrel{{\rm def}}{{=}}U_{0}\), and Fact 1, \(\nu(U_{0})=\int_{0}^{\infty}h(t)\;dt=\sup_{N\in\mathbb{N}}^{\uparrow}\int_{0} ^{\infty}h_{N}(t)\;dt\). Since \(\nu(U_{0})<\infty\), for some \(N\in\mathbb{N}\smallsetminus\{0\}\), \(\int_{0}^{\infty}h_{N}(t)\;dt>\nu(U_{0})-\epsilon/2\). Then \(\int_{N}^{\infty}h(t)\;dt<\epsilon/2\).
Let \(\eta\stackrel{{\rm def}}{{=}}\epsilon/(2N)\). For every \(U\in\mathcal{L}\) such that \(U\subseteq U_{0}\) and \(\mu(U)<\eta\), we show that \(\nu(U)<\epsilon\) as follows.
\[\nu(U) =(g\cdot\mu)(U)\] \[=\int_{0}^{\infty}\mu(U\cap g^{-1}(]t,\infty]))\;dt\quad\text{ by Lemma \ref{lem:Lond}}\] \[=\int_{0}^{N}\mu(U\cap g^{-1}(]t,\infty]))\;dt+\int_{N}^{\infty} \mu(U\cap g^{-1}(]t,\infty]))\;dt\] \[\leq N\mu(U)+\int_{N}^{\infty}h(t)\;dt<N\mu(U)+\epsilon/2<N\eta+ \epsilon/2=\epsilon.\qed\]
### Absolute continuity is not enough
Given a topological space \((X,\mathcal{L})\), let \(\mathcal{B}(\mathcal{L})\) be its Borel \(\sigma\)-algebra. A _Borel measure_, namely a measure on \((X,\mathcal{B}(\mathcal{L}))\), induces a valuation on \((X,\mathcal{L})\) by restriction to the open sets. The Borel measures for which the induced valuation is continuous are traditionally called \(\tau\)_-smooth_. By Adamski's theorem [1, Theorem 3.1], it is equivalent to require all Borel measures on \((X,\mathcal{B}(\mathcal{L}))\) to be \(\tau\)-smooth, or to require \((X,\mathcal{L})\) to be _hereditarily Lindelof_; a space is hereditarily Lindelof if
and only if every family \((U_{i})_{i\in I}\) of open subsets has a countable subfamily with the same union. All second-countable spaces are hereditarily Lindelof.
There has been quite some literature on the converse question, among which [15, 2, 13]: given a continuous valuation \(\nu\) on \((X,\mathcal{L})\), does \(\nu\) extend to a (necessarily \(\tau\)-smooth) Borel measure? One of the most general theorems of this kind is the following [7, Theorem 1]: every continuous valuation on an LCS-complete space extends to a Borel measure; an _LCS-complete_ space is a homeomorph of a \(G_{\delta}\) subset of a locally compact sober space. The class of LCS-complete spaces includes all locally compact sober spaces, Matthew de Brecht's quasi-Polish spaces [6], and therefore also all Polish spaces.
Additionally, a standard use of the \(\pi\lambda\)-theorem [3, Theorem 3.2] shows that any \(\sigma\)-finite continuous valuation \(\nu\) on \((X,\mathcal{L})\) extends to a _unique_ Borel measure. That Borel measure \(\mu\) is such that there exists a monotone sequence \(\left(U_{n}\right)_{n\in\mathbb{N}}\) of open sets covering \(X\) and such that \(\mu(U_{n})<\infty\) for every \(n\in\mathbb{N}\). This is a stricter condition than simply being \(\sigma\)-finite, since \(U_{n}\) is required to be open, and Borel measures having this property are sometimes called _moderated_.
Since quasi-Polish spaces are second-countable [6, Definition 16], hence hereditarily Lindelof, it follows that \(\sigma\)-finite continuous valuations are in one-to-one correspondence with moderated \(\tau\)-smooth measures on quasi-Polish spaces.
We use this to transport the classical Radon-Nikodym theorem over to the world of continuous valuations.
In one direction, given any \(\sigma\)-finite continuous valuation \(\mu\) on an LCS-complete space \((X,\mathcal{L})\), let \(\widetilde{\mu}\) be its unique extension to a Borel measure. For every measurable map \(g\in\mathfrak{L}(X,\mathcal{B}(\mathcal{L}))\) (not just for lower semicontinuous maps), we can form the measure \(g\cdot\widetilde{\mu}\) on \((X,\mathcal{B}(\mathcal{L}))\). This induces an \(\omega\)-continuous valuation by restriction to \(\mathcal{L}\), which we write as \(g\cdot\mu\), extending Definition 5 to a larger class of density functions. With this definition, we have the following.
**Theorem 12**.: _For any two \(\sigma\)-finite continuous valuations \(\nu\) and \(\mu\) on an LCS-complete space \((X,\mathcal{L})\), the following are equivalent:_
1. \(\widetilde{\nu}\) _is absolutely continuous with respect to_ \(\widetilde{\mu}\)_;_
2. _there is a measurable map_ \(g\in\mathfrak{L}(X,\mathcal{B}(\mathcal{L}))\) _such that_ \(\nu=g\cdot\mu\)_._
_Additionally, \(g\) is unique up to \(\widetilde{\mu}\)-null sets._
Proof.: The condition \(\nu=g\cdot\mu\) is equivalent to \(\widetilde{\nu}=g\cdot\widetilde{\mu}\), by our (re)definition of \(g\cdot\mu\). We conclude by invoking the classical Radon-Nikodym theorem.
Although this is a positive result, it also gives us a recipe to show that absolute continuity is _not_ enough for two \(\sigma\)-finite \(\omega\)-continuous valuations to have a density \(g\in\mathfrak{L}(X,\mathcal{L})\): find measurable maps that are equal to no lower semicontinuous map up to a \(\widetilde{\mu}\)-null set.
We provide two counter-examples. The first one relies on the existence of non-trivial specialization orderings in non-\(T_{1}\) spaces. The second one takes place in \(\mathbb{R}\) with its standard metric topology.
**Example 13**.: _Let \(\mu\stackrel{{\text{def}}}{{=}}a\delta_{x}+b\delta_{y}\), where \(a,b>0\) and \(x\) and \(y\) are two points of an LCS-complete space \((X,\mathcal{L})\) with \(x<y\). (We let \(x\leq y\) if and only if every \(U\in\mathcal{L}\) containing \(x\) contains \(y\); this is the specialization preordering of \((X,\mathcal{L})\). A space is \(T_{0}\) if and only if \(\leq\) is antisymmetric, and every LCS-complete space is \(T_{0}\). We write \(x<y\) if \(x\leq y\) and \(y\not\leq x\).) Next, consider any \(g\in\mathfrak{L}(X,\mathcal{B}(\mathcal{L}))\) such that \(g(x)>g(y)\). For example, taking \(g\stackrel{{\text{def}}}{{=}}1-h\) fits, where \(h\) is any lower semicontinuous map from \((X,\mathcal{L})\) to \([0,1]\subseteq\overline{\mathbb{R}}_{+}\) such that \(h(x)\neq h(y)\); indeed, every lower semicontinuous map is monotonic. We note that \(g\) is equal to no lower semicontinuous map up to any \(\widetilde{\mu}\)-null set, because \(g\) is antitonic, lower semicontinuous maps are monotonic, and the \(\widetilde{\mu}\)-null sets are the Borel sets that contain neither \(x\) nor \(y\). Therefore \(g\cdot\mu\) has no lower semicontinuous density with respect to \(\mu\). For a concrete instance of this construction, consider Sierpinski space for \((X,\mathcal{L})\), namely \((\{0,1\},\{\emptyset,\{1\},\{0,1\}\})\), \(x\stackrel{{\text{def}}}{{=}}0\), \(y\stackrel{{\text{def}}}{{=}}1\), \(g(0)\stackrel{{\text{def}}}{{=}}1\), \(g(1)\stackrel{{\text{def}}}{{=}}0\)._
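Example 13 can even be machine-checked in its concrete Sierpinski-space instance. The sketch below is our own illustration: it enumerates candidate lower semicontinuous densities \(h\) on a grid and verifies that none reproduces \(g\cdot\mu\) on the opens \(\{1\}\) and \(\{0,1\}\).

```python
# Brute-force illustration (ours) of Example 13 on Sierpinski space
# X = {0, 1}, with opens {}, {1}, {0, 1}. With mu = a*delta_0 + b*delta_1
# and g(0) = 1, g(1) = 0, the valuation nu = g.mu satisfies nu({1}) = 0 and
# nu({0,1}) = a. A lower semicontinuous h: X -> R+ must satisfy
# h(0) <= h(1) (its superlevel sets must be open), and no such h matches.
a, b = 1.0, 1.0

grid = [i / 100 for i in range(201)]
witnesses = [(h0, h1)
             for h0 in grid for h1 in grid
             if h0 <= h1                            # lower semicontinuity
             and abs(b * h1 - 0.0) < 1e-9           # (h.mu)({1})   = nu({1})
             and abs(a * h0 + b * h1 - a) < 1e-9]   # (h.mu)({0,1}) = nu({0,1})
print(witnesses)   # [] -- no lower semicontinuous density exists
```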
**Example 14**.: _Let \(\mu\) be the bounded discrete valuation \(\delta_{0}+\sum_{n\in\mathbb{N}}\frac{1}{2^{n}}\delta_{1/2^{n}}\) on \(\mathbb{R}\) with its standard topology. Let \(g\) map every non-zero real number to \(0\), and \(0\) to \(1\). This is a measurable map. The \(\widetilde{\mu}\)-null sets are the Borel sets that do not contain \(0\) or any point \(1/2^{n}\), \(n\in\mathbb{N}\). If \(g\) were equal to some \(h\in\mathfrak{L}(\mathbb{R})\) up to some \(\widetilde{\mu}\)-null set, then we would have \(h(0)=1\) and \(h(1/2^{n})=0\) for every \(n\in\mathbb{N}\). But then \(h^{-1}(]1/2,\infty])\) would contain \(0\), hence \(1/2^{n}\) for \(n\) large enough, and that is impossible since \(h(1/2^{n})=0\). It follows that \(g\cdot\mu\) has no lower semicontinuous density with respect to \(\mu\)._
We will therefore look for additional conditions imposed by the existence of \(g\in\mathfrak{L}(X,\mathcal{L})\) such that \(\nu=g\cdot\mu\).
### The Smiley-Horn-Tarski theorem
Let \(\mathcal{A}(\mathcal{L})\) be the smallest algebra of subsets of \(X\) containing \(\mathcal{L}\). Its elements are the unions of finite collections of pairwise disjoint crescents. A _crescent_ is a difference \(U\smallsetminus V\) of elements \(U,V\in\mathcal{L}\); we can assume \(V\subseteq U\) without loss of generality.
The _Smiley-Horn-Tarski theorem_[21, 10, 17] states that every bounded valuation \(\nu\) on \((X,\mathcal{L})\) extends to a unique (bounded) valuation on \((X,\mathcal{A}(\mathcal{L}))\).
In general, every valuation \(\nu\) on \((X,\mathcal{L})\) (not necessarily bounded) extends to a valuation on \((X,\mathcal{A}(\mathcal{L}))\), but that extension may fail to be unique [8, Proposition IV-9.4]. We will usually write an extension of \(\nu\) on \((X,\mathcal{A}(\mathcal{L}))\) with the same letter \(\nu\), although one should be careful that such extensions may fail to be unique when \(\nu\) is not bounded. Still, some uniqueness remains: if \(C\in\mathcal{A}(\mathcal{L})\) can be written as a disjoint union of crescents \(U_{i}\smallsetminus V_{i}\) with \(V_{i}\subseteq U_{i}\) (\(1\leq i\leq n\)), and if \(\nu(U_{i})<\infty\) for every \(i\), then necessarily \(\nu(C)=\sum_{i=1}^{n}(\nu(U_{i})-\nu(V_{i}))\).
**Lemma 15**.: _Let \((X,\mathcal{L})\) be a Pervin space, \(\mu\) be a bounded valuation on \((X,\mathcal{L})\) and \(g\in\mathfrak{L}(X,\mathcal{L})\). The function \(\nu\) that maps every \(C\in\mathcal{A}(\mathcal{L})\) to \(\int_{0}^{\infty}\mu(C\cap g^{-1}(]t,\infty]))\ dt\) is a valuation on \(\mathcal{A}(\mathcal{L})\) that extends \(g\cdot\mu\) to \((X,\mathcal{A}(\mathcal{L}))\)._
We call the valuation \(\nu\) above the _canonical extension_ of \(g\cdot\mu\) to \((X,\mathcal{A}(\mathcal{L}))\). There may be others: while \(\mu\) is bounded, \(g\cdot\mu\) may fail to be.
Proof.: The definition of \(\nu(C)\) makes sense, since the extension of \(\mu\) to \(\mathcal{A}(\mathcal{L})\), which is required to make sense of \(\mu(C\cap g^{-1}(]t,\infty]))\), is unique, owing to the fact that \(\mu\) is bounded. It is clear that \(\nu(\emptyset)=0\). The modularity and the monotonicity of \(\nu\) on \(\mathcal{A}(\mathcal{L})\) follow from the modularity and the monotonicity of \(\mu\). Hence \(\nu\) is a valuation, and it extends \(g\cdot\mu\) by Lemma 10.
Since extensions to \(\mathcal{A}(\mathcal{L})\) are unique for bounded valuations, we obtain the following.
**Corollary 16**.: _Let \((X,\mathcal{L})\) be a Pervin space, \(\mu\) be a bounded valuation on \((X,\mathcal{L})\) and \(g\in\mathfrak{L}(X,\mathcal{L})\). If \(\nu\stackrel{{\text{def}}}{{=}}g\cdot\mu\) is bounded, then its unique extension to \(\mathcal{A}(\mathcal{L})\) is such that \(\nu(C)=\int_{0}^{\infty}\mu(C\cap g^{-1}(]t,\infty]))\ dt\) for every \(C\in\mathcal{A}(\mathcal{L})\)._
### Signed valuations
In order to state the Hahn decomposition property, we need to introduce signed valuations.
A _signed valuation_ is a map \(\varsigma\colon\mathcal{L}\to\mathbb{R}\) (not \(\mathbb{R}_{+}\)) that is _strict_ (\(\varsigma(\emptyset)=0\)) and _modular_ (for all \(U,V\in\mathcal{L}\), \(\varsigma(U\cup V)+\varsigma(U\cap V)=\varsigma(U)+\varsigma(V)\)).
Typical examples of signed valuations are given by the maps \(\nu-r\cdot\mu\colon\mathcal{L}\to\mathbb{R}\), where \(\nu\) and \(\mu\) are bounded valuations on \((X,\mathcal{L})\) and \(r\in\mathbb{R}_{+}\).
We have the following analogue of the bounded form of the Smiley-Horn-Tarski theorem. The proof uses ingredients similar to Proposition 3, and can also be used to derive the classical Smiley-Horn-Tarski theorem.
**Proposition 17**.: _Let \(\mathcal{L}\) be a lattice of subsets of a set \(X\), and \(\varsigma\) be a signed valuation on \((X,\mathcal{L})\). Then \(\varsigma\) extends to a unique signed valuation
on \((X,\mathcal{A}(\mathcal{L}))\). The extension, still written \(\varsigma\), satisfies \(\varsigma(U\smallsetminus V)=\varsigma(U)-\varsigma(U\cap V)=\varsigma(U\cup V)-\varsigma(V)\) for all \(U,V\in\mathcal{L}\)._
Proof.: If \(\varsigma^{\%}\) is any signed valuation extending \(\varsigma\) on \((X,\mathcal{A}(\mathcal{L}))\), then it is defined uniquely on crescents by the fact that \(\varsigma^{\%}(U\smallsetminus V)\) must be equal to \(\varsigma(U)-\varsigma(U\cap V)\) and also to \(\varsigma(U\cup V)-\varsigma(V)\) for all \(U,V\in\mathcal{L}\), by modularity and the fact that \(\varsigma^{\%}((U\cap V)\cap(U\smallsetminus V))\) and \(\varsigma^{\%}(V\cap(U\smallsetminus V))\) must both be equal to \(\varsigma^{\%}(\emptyset)=0\); then \(\varsigma^{\%}\) is uniquely determined on finite disjoint unions of crescents by additivity.
We proceed as follows to prove that such an extension exists. Let \(M^{+}\) be the set of functions \(h\in\mathfrak{L}(X,\mathcal{L})\) taking their values in \(\mathbb{N}\). One can write any such \(h\) as \(\sum_{i=1}^{\infty}\chi_{U_{i}}\) in a unique way, where \(U_{1}\supseteq\cdots\supseteq U_{n}\supseteq\cdots\) form an antitone sequence of elements of \(\mathcal{L}\), with \(U_{n}=\emptyset\) for \(n\) large enough. Indeed, \(U_{i}\) is determined uniquely as \(h^{-1}([i,\infty])\) (which is equal to \(h^{-1}(]i-\epsilon,\infty])\) for any \(\epsilon\in\left]0,1\right[\), hence is in \(\mathcal{L}\)). For every such \(h\in M^{+}\), let \(F(h)\stackrel{{\mathrm{def}}}{{=}}\sum_{i=1}^{\infty}\varsigma(U _{i})\). This is a finite sum, because \(\varsigma\) is strict.
With \(h\) as above and \(U\in\mathcal{L}\), \(h+\chi_{U}\) is equal to \(\sum_{i=1}^{\infty}\chi_{V_{i}}\) where \(V_{i}=(h+\chi_{U})^{-1}([i,\infty])=h^{-1}([i,\infty])\cup(h^{-1}([i-1,\infty ])\cap U)=U_{i}\cup(U_{i-1}\cap U)\); when \(i=1\), we use the convention that \(U_{0}=X\). Hence:
\[F(h+\chi_{U}) =\sum_{i=1}^{\infty}\varsigma(U_{i}\cup(U_{i-1}\cap U))\] \[=\sum_{i=1}^{\infty}\left(\varsigma(U_{i})+\varsigma(U_{i-1}\cap U )-\varsigma(U_{i}\cap U)\right)\] \[\qquad\text{by modularity; note that }U_{i}\cap U_{i-1}\cap U\text{ simplifies to }U_{i}\cap U\] \[=F(h)+\varsigma(U),\]
by canceling the telescoping terms \(\varsigma(U_{i-1}\cap U)\) and \(\varsigma(U_{i}\cap U)\), so that only \(\varsigma(U_{0}\cap U)=\varsigma(U)\) remains.
For every \(h^{\prime}\in M^{+}\), written as \(\sum_{j=1}^{\infty}\chi_{V_{j}}\) where \(V_{1}\supseteq\cdots\supseteq V_{n}\supseteq\cdots\) form an antitone sequence of elements of \(\mathcal{L}\), with \(V_{n}=\emptyset\) for \(n\) large enough, we obtain that \(F(h+h^{\prime})=F(h)+F(h^{\prime})\) by induction on the number of non-empty sets \(V_{n}\).
We can therefore extend \(F\) to an additive map from \(M\) to \(\mathbb{R}\), where \(M\) is the collection of differences \(f-g\) of two elements of \(M^{+}\), by \(F(f-g)\stackrel{{\mathrm{def}}}{{=}}F(f)-F(g)\). This is unambiguous: if \(f-g=f^{\prime}-g^{\prime}\), then \(f+g^{\prime}=f^{\prime}+g\), so \(F(f)+F(g^{\prime})=F(f^{\prime})+F(g)\), or equivalently \(F(f)-F(g)=F(f^{\prime})-F(g^{\prime})\).
Let us define \(\varsigma^{+}(C)\stackrel{{\mathrm{def}}}{{=}}F(\chi_{C})\) for every subset \(C\) of \(X\) such that \(\chi_{C}\in M\). Amongst those, we find the crescents \(U\smallsetminus V\) (with \(U,V\in\mathcal{L}\) and \(V\subseteq U\)), since \(\chi_{U\smallsetminus V}=\chi_{U}-\chi_{V}\). We also find the finite disjoint unions of crescents
\(C_{1}\),..., \(C_{n}\), since their characteristic map is \(\sum_{i=1}^{n}\chi_{C_{i}}\). Now \(\varsigma^{+}\) is strict since \(F(0)=0\), and modular on \((X,\mathcal{A}(\mathcal{L}))\). The latter rests on the fact that for any sets \(C\) and \(C^{\prime}\), \(\chi_{C\cup C^{\prime}}+\chi_{C\cap C^{\prime}}=\chi_{C}+\chi_{C^{\prime}}\): then \(\varsigma^{+}(C\cup C^{\prime})+\varsigma^{+}(C\cap C^{\prime})=F(\chi_{C\cup C^{\prime}}+\chi_{C\cap C^{\prime}})\) (since \(F\) is additive) \(=F(\chi_{C}+\chi_{C^{\prime}})=\varsigma^{+}(C)+\varsigma^{+}(C^{\prime})\) (since \(F\) is additive, once again). Finally, \(\varsigma^{+}\) extends \(\varsigma\), since \(F(\chi_{U})=\varsigma(U)\) for every \(U\in\mathcal{L}\).
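A toy instance may help (our example, not from the text). Take \(X=\{1,2,3\}\) and the chain \(\mathcal{L}=\{\emptyset,\{1\},\{1,2\},X\}\); on a chain, any strict map is modular, so we may set \(\varsigma(\{1\})=5\), \(\varsigma(\{1,2\})=3\) and \(\varsigma(X)=4\). The unique extension then takes the values
\[\varsigma^{+}(\{2\})=\varsigma(\{1,2\})-\varsigma(\{1\})=-2,\qquad\varsigma^{+}(\{3\})=\varsigma(X)-\varsigma(\{1,2\})=1,\]
and additivity is visible on the disjoint union \(\{2,3\}=X\smallsetminus\{1\}\): \(\varsigma^{+}(\{2,3\})=\varsigma(X)-\varsigma(\{1\})=-1=\varsigma^{+}(\{2\})+\varsigma^{+}(\{3\})\).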
### The Hahn decomposition property
**Definition 18** (Hahn decomposition property).: _Let \((X,\mathcal{L})\) be a Pervin space. A signed valuation \(\varsigma\colon\mathcal{L}\to\mathbb{R}\) has the Hahn decomposition property if and only if there is an element \(U\) of \(\mathcal{L}\) such that:_
* _for every crescent_ \(C\) _included in_ \(U\)_,_ \(\varsigma(C)\geq 0\)_;_
* _for every crescent_ \(C\) _disjoint from_ \(U\)_,_ \(\varsigma(C)\leq 0\)_._
In this definition, we extend \(\varsigma\) implicitly to a signed valuation on \((X,\mathcal{A}(\mathcal{L}))\), using Proposition 17, in order to make sense of \(\varsigma(C)\). We will call the set \(U\) given above a _witness_ to the Hahn decomposition property.
**Proposition 19**.: _Let \(\mu\) and \(\nu\) be two bounded valuations on a Pervin space \((X,\mathcal{L})\). If \(\nu=g\cdot\mu\) for some \(g\in\mathfrak{L}(X,\mathcal{L})\), then for every \(r\in\mathbb{R}_{+}\), the signed valuation \(\nu-r\cdot\mu\) has the Hahn decomposition property--and one can take \(U\stackrel{{\text{def}}}{{=}}g^{-1}(]r,\infty])\) as a witness to the latter._
Proof.: We take \(U\stackrel{{\text{def}}}{{=}}g^{-1}(]r,\infty])\). For every crescent \(C\subseteq U\), \(C\cap g^{-1}(]t,\infty])=C\) for every \(t\in[0,r]\), since \(g^{-1}(]t,\infty])\) contains \(U\) in that case. Hence:
\[\nu(C) =\int_{0}^{\infty}\mu(C\cap g^{-1}(]t,\infty]))\;dt\qquad \qquad\text{by Corollary 16}\] \[=\int_{0}^{r}\mu(C\cap g^{-1}(]t,\infty]))\;dt+\int_{r}^{ \infty}\mu(C\cap g^{-1}(]t,\infty]))\;dt\] \[\geq\int_{0}^{r}\mu(C\cap g^{-1}(]t,\infty]))\;dt=\int_{0}^{r} \mu(C)\;dt=r\cdot\mu(C).\]
For every crescent \(C\) disjoint from \(U\), \(C\cap g^{-1}(]t,\infty])\) is empty for every \(t\geq r\), since \(g^{-1}(]t,\infty])\) is included in \(U\) in that case. Hence:
\[\nu(C) =\int_{0}^{\infty}\mu(C\cap g^{-1}(]t,\infty]))\;dt\] \[=\int_{0}^{r}\mu(C\cap g^{-1}(]t,\infty]))\;dt\leq\int_{0}^{r} \mu(C)\;dt=r\cdot\mu(C).\]
Given any valuation \(\nu\) on a Pervin space \((X,\mathcal{L})\), and any \(U_{0}\in\mathcal{L}\), we can define a valuation \(\nu_{|U_{0}}\) by letting \(\nu_{|U_{0}}(U)\stackrel{{\text{def}}}{{=}}\nu(U\cap U_{0})\) for every \(U\in\mathcal{L}\); then \(\nu_{|U_{0}}\) is an \(\omega\)-continuous (resp., continuous) valuation if \(\nu\) is. We also note
that \(\nu_{|U_{0}}\) is a bounded valuation if and only if \(\nu(U_{0})<\infty\). We use this in the proof of the following corollary, and we will use the notion again later.
**Corollary 20**.: _Let \(\mu\) and \(\nu\) be two valuations on a Pervin space \((X,\mathcal{L})\). If \(\nu=g\cdot\mu\) for some \(g\in\mathfrak{L}(X,\mathcal{L})\), then for every \(U_{0}\in\mathcal{L}\) such that \(\nu(U_{0})<\infty\) and \(\mu(U_{0})<\infty\), for every \(r\in\mathbb{R}_{+}\), the signed valuation \(\nu_{|U_{0}}-r\cdot\mu_{|U_{0}}\) has the Hahn decomposition property._
Proof.: If \(\nu(U_{0})<\infty\) and \(\mu(U_{0})<\infty\), then \(\nu_{|U_{0}}=g\cdot\mu_{|U_{0}}\), since for every \(U\in\mathcal{L}\), \(\nu_{|U_{0}}(U)=\nu(U\cap U_{0})=\int_{0}^{\infty}\mu(U\cap U_{0}\cap g^{-1}(]t,\infty]))\ dt\) (by Lemma 10) \(=\int_{0}^{\infty}\mu_{|U_{0}}(U\cap g^{-1}(]t,\infty]))\ dt=(g\cdot\mu_{|U_{0}})(U)\). We conclude by using Proposition 19.
## 5. The existence of density maps
We now show that absolute continuity and the Hahn decomposition property suffice to guarantee the existence of a density function. The following are the two key lemmata. We write \(\mathbb{Q}_{2}\) for the set of dyadic numbers, namely rational numbers of the form \(p/2^{n}\) with \(p\in\mathbb{Z}\) and \(n\in\mathbb{N}\). We also use the Smiley-Horn-Tarski theorem in order to make sense of \(\nu(C)\) below, and the canonical extension given in Lemma 15 in order to make sense of \((g\cdot\mu)(C)\).
**Lemma 21**.: _Let \((X,\mathcal{L})\) be a Pervin space, \(g\in\mathfrak{L}(X,\mathcal{L})\), and \(\nu\), \(\mu\) be two bounded valuations on \((X,\mathcal{L})\). Let us assume that for every non-negative dyadic number \(r\in\mathbb{Q}_{2}\cap\mathbb{R}_{+}\), for every crescent \(C\subseteq g^{-1}(]r,\infty])\), \(\nu(C)\geq r\cdot\mu(C)\). Then for every \(C\in\mathcal{A}(\mathcal{L})\), \(\nu(C)\geq(g\cdot\mu)(C)\). In particular, \(\nu\geq g\cdot\mu\) on \((X,\mathcal{L})\)._
Proof.: It suffices to show the claim for every crescent \(C\). Once this is done, the claim that \(\nu(C)\geq(g\cdot\mu)(C)\) for every \(C\in\mathcal{A}(\mathcal{L})\) follows from the fact that \(C\) is a disjoint union of crescents, and that \(\nu\) and \(g\cdot\mu\) are additive.
Figure 1. Bounding \((g\cdot\mu)(C)\) from above
We fix a crescent \(C\). By definition of canonical extensions (Lemma 15), \((g\cdot\mu)(C)=\int_{0}^{\infty}\mu(C\cap g^{-1}(]t,\infty]))\ dt\).
The main ingredient of the proof is summarized in Figure 1: the sum of the areas of the vertical bands on the left is equal to the sum of the areas of the horizontal bands on the right. We will rely on that figure in what follows.
Let \(f(t)\stackrel{{\text{def}}}{{=}}\mu(C\cap g^{-1}(]t,\infty]))\), and \(f_{N}(t)\stackrel{{\text{def}}}{{=}}f(t)\) if \(t\leq N\), \(0\) otherwise. In the figure, \(f\) is shown as the solid decreasing curve, both on the left-hand side and on the right-hand side. Since \(f\) is the pointwise supremum of \(\left(f_{N}\right)_{N\in\mathbb{N}}\), \((g\cdot\mu)(C)=\sup_{N\in\mathbb{N}}^{\uparrow}\int_{0}^{\infty}f_{N}(t)\ dt\) by Fact 1.
We fix an arbitrary \(r\in\mathbb{R}_{+}\) such that \(r<(g\cdot\mu)(C)\). For \(N\in\mathbb{N}\) large enough, \(r\leq\int_{0}^{\infty}f_{N}(t)\ dt=\int_{0}^{N}\mu(C\cap g^{-1}(]t,\infty]))\ dt\leq\sum_{k=1}^{N2^{N}}\frac{1}{2^{N}}\mu(C\cap g^{-1}(](k-1)/2^{N},\infty]))\). The latter is the sum of the areas of the vertical bands on the left of Figure 1.
Reorganizing the summation, that is also equal to the sum of the areas of the horizontal bands on the right, so:
\[r\leq\sum_{k=1}^{N2^{N}}\frac{k}{2^{N}}\mu(C\cap g^{-1}(](k-1)/2 ^{N},\infty])\smallsetminus g^{-1}(]k/2^{N},\infty]))\] \[\qquad\qquad+N\mu(C\cap g^{-1}(]N,\infty])).\]
The final term in the sum is the area of the bottommost band. The sum of the terms with \(1\leq k\leq N\) is bounded from above by \(\sum_{k=1}^{N}\frac{k}{2^{N}}\mu(C)\leq\frac{N(N+1)}{2^{N+1}}\mu(C)\). For every \(k\) between \(N+1\) and \(N2^{N}\), the crescent \(C^{\prime}\stackrel{{\text{def}}}{{=}}C\cap g^{-1}(](k-1)/2^{N}, \infty])\smallsetminus g^{-1}(]k/2^{N},\infty])\) is included in \(g^{-1}(](k-1)/2^{N},\infty])\), so \(\nu(C^{\prime})\geq(k-1)/2^{N}\ \mu(C^{\prime})\) by assumption. Similarly, \(\nu(C\cap g^{-1}(]N,\infty]))\geq N\ \mu(C\cap g^{-1}(]N,\infty]))\).
It follows that:
\[r\leq\frac{N(N+1)}{2^{N+1}}\mu(C)+\sum_{k=N+1}^{N2^{N}}\frac{k}{ k-1}\nu(C\cap g^{-1}(](k-1)/2^{N},\infty])\smallsetminus g^{-1}(]k/2^{N}, \infty]))\] \[\qquad\qquad+\nu(C\cap g^{-1}(]N,\infty])).\]
In the middle sum, \(k/(k-1)\) is smaller than or equal to \((N+1)/N\). We also have \(\nu(C\cap g^{-1}(]N,\infty]))\leq\frac{N+1}{N}\nu(C\cap g^{-1}(]N,\infty]))\), because \(\frac{N+1}{N}\geq 1\). Hence:
\[r\leq\frac{N(N+1)}{2^{N+1}}\mu(C)+\frac{N+1}{N}\sum_{k=N+1}^{N2^{N}}\nu(C\cap g^{-1}(](k-1)/2^{N},\infty])\smallsetminus g^{-1}(]k/2^{N},\infty]))+\frac{N+1}{N}\nu(C\cap g^{-1}(]N,\infty])).\]
By the additivity of \(\nu\), the right-hand side is equal to \(\frac{N(N+1)}{2^{N+1}}\mu(C)+\frac{N+1}{N}\nu(C\cap g^{-1}(]N/2^{N},\infty]))\). Since \(C\cap g^{-1}(]N/2^{N},\infty])\) is included in \(C\), and \(\nu\) is monotonic, \(r\leq\frac{N(N+1)}{2^{N+1}}\mu(C)+\frac{N+1}{N}\nu(C)\). We let \(N\) tend to \(\infty\), and we obtain that \(r\leq\nu(C)\). Taking suprema over all \(r<(g\cdot\mu)(C)\), we obtain \((g\cdot\mu)(C)\leq\nu(C)\).
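To see the band rearrangement at its smallest, take \(N=1\) (a check of ours, not in the original): writing \(B_{1}\stackrel{{\text{def}}}{{=}}C\cap g^{-1}(]0,\infty])\smallsetminus g^{-1}(]1/2,\infty])\), \(B_{2}\stackrel{{\text{def}}}{{=}}C\cap g^{-1}(]1/2,\infty])\smallsetminus g^{-1}(]1,\infty])\) and \(T\stackrel{{\text{def}}}{{=}}C\cap g^{-1}(]1,\infty])\), the two vertical bands split as
\[\tfrac{1}{2}\mu(C\cap g^{-1}(]0,\infty]))+\tfrac{1}{2}\mu(C\cap g^{-1}(]1/2,\infty]))=\tfrac{1}{2}\mu(B_{1})+\mu(B_{2})+\mu(T),\]
which is exactly \(\sum_{k=1}^{2}\frac{k}{2}\mu(B_{k})+N\mu(T)\) with \(N=1\): the coefficient of each horizontal band counts how many vertical bands cover it.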
We have a somewhat symmetric situation in the following lemma, except that we cannot conclude that \((g\cdot\mu)(C)\geq\nu(C)\) without further assumptions. Once again, we use the canonical extension of \(g\cdot\mu\) to make sense of \((g\cdot\mu)(C).\)
**Lemma 22**.: _Let \((X,\mathcal{L})\) be a Pervin space, \(g\in\mathfrak{L}(X,\mathcal{L})\), and \(\nu\), \(\mu\) be two bounded valuations on \((X,\mathcal{L})\). Let us assume that for every non-negative dyadic number \(r\in\mathbb{Q}_{2}\cap\mathbb{R}_{+}\), for every crescent \(C\) disjoint from \(g^{-1}(]r,\infty])\), \(\nu(C)\leq r\cdot\mu(C)\). Then there is a directed countable family \(\left(U_{N}\right)_{N\in\mathbb{N}}\) of elements of \(\mathcal{L}\) with the following properties:_
1. \(\nu(X\smallsetminus\bigcup_{N\in\mathbb{N}}^{\uparrow}U_{N})=0\)_;_
2. _for every_ \(C\in\mathcal{A}(\mathcal{L})\)_, for every_ \(N\in\mathbb{N}\)_,_ \((g\cdot\mu)(C)\geq\frac{N}{N+1}\nu(C\cap U_{N})+N(\mu(C\cap V_{N})-\frac{1}{N+1}\nu(C\cap V_{N}))\)_, where_ \(V_{N}\stackrel{{\mathrm{def}}}{{=}}g^{-1}(]N,\infty])\)_._
Proof.: By definition of canonical extensions (Lemma 15), \((g\cdot\mu)(C)=\int_{0}^{\infty}\mu(C\cap g^{-1}(]t,\infty]))\ dt\geq\sum_{k=1}^{N2^{N}}\frac{1}{2^{N}}\mu(C\cap g^{-1}(]k/2^{N},\infty]))\). The latter is the area of the vertical bands on the left of Figure 2, which rewrites as the area of the horizontal bands on the right, namely:
\[\sum_{k=1}^{N2^{N}-1}\frac{k}{2^{N}}\mu(C\cap g^{-1}(]k/2^{N},\infty])\smallsetminus g^{-1}(](k+1)/2^{N},\infty]))+N\mu(C\cap g^{-1}(]N,\infty])).\]
The last term is the area of the bottommost band.
For each \(k\), the crescent \(C^{\prime}\stackrel{{\rm def}}{{=}}C\cap g^{-1}(]k/2^{N},\infty ])\smallsetminus g^{-1}(](k+1)/2^{N},\infty])\) is disjoint from \(g^{-1}(](k+1)/2^{N},\infty]),\) so by assumption, \(\nu(C^{\prime})\leq\frac{k+1}{2^{N}}\cdot\mu(C^{\prime}).\)
Figure 2. Bounding \((g\cdot\mu)(C)\) from below
Therefore:
\[(g\cdot\mu)(C)\geq\sum_{k=1}^{N2^{N}-1}\frac{k}{k+1}\nu(C\cap g^{-1}(]k/2^{N},\infty])\smallsetminus g^{-1}(](k+1)/2^{N},\infty]))\\ +N\mu(C\cap g^{-1}(]N,\infty])).\]
Keeping only the terms from the summation with \(k\geq N\) and observing that \(\frac{k}{k+1}\geq\frac{N}{N+1}\) for all such \(k\),
\[(g\cdot\mu)(C)\geq\sum_{k=N}^{N2^{N}-1}\frac{N}{N+1}\nu(C\cap g^{-1}(]k/2^{N},\infty])\smallsetminus g^{-1}(](k+1)/2^{N},\infty]))\\ +N\mu(C\cap g^{-1}(]N,\infty]))\\ =\frac{N}{N+1}\nu(C\cap g^{-1}(]N/2^{N},\infty])\smallsetminus g^{-1}(]N,\infty]))\\ +N\mu(C\cap g^{-1}(]N,\infty]))\\ =\frac{N}{N+1}\nu(C\cap g^{-1}(]N/2^{N},\infty]))\\ +N\left(\mu(C\cap g^{-1}(]N,\infty]))-\frac{1}{N+1}\nu(C\cap g^{-1}(]N,\infty]))\right).\]
Let \(U_{N}\stackrel{{\mathrm{def}}}{{=}}g^{-1}(]N/2^{N},\infty])\) and \(V_{N}\stackrel{{\mathrm{def}}}{{=}}g^{-1}(]N,\infty])\) for every \(N\in\mathbb{N}\): we have just proved \((ii)\). The family \((U_{N})_{N\in\mathbb{N}}\) is directed: given any \(i,j\in\mathbb{N}\), there is an \(N\in\mathbb{N}\) such that \(N/2^{N}\leq i/2^{i},j/2^{j}\) because \(N/2^{N}\) tends to \(0\) as \(N\) tends to \(\infty\); and then \(U_{N}\) contains both \(U_{i}\) and \(U_{j}\).
Finally, \(\bigcup_{N\in\mathbb{N}}^{\uparrow}U_{N}=g^{-1}(]0,\infty])\). Let \(C\) be the crescent \(X\smallsetminus\bigcup_{N\in\mathbb{N}}^{\uparrow}U_{N}\). This is disjoint from \(g^{-1}(]r,\infty])\) for every \(r\in\mathbb{Q}_{2}\cap\mathbb{R}_{+}\), so \(\nu(C)\leq r\cdot\mu(C)\) for every \(r\in\mathbb{Q}_{2}\cap\mathbb{R}_{+}\) by assumption. As a consequence, \(\nu(C)=0\), and this is \((i)\).
The role of absolute continuity is as follows.
**Lemma 23**.: _Let \((X,\mathcal{L})\) be a Pervin space, and \(\nu\) and \(\mu\) be two bounded valuations on \((X,\mathcal{L})\). Let \(\left(U_{N}\right)_{N\in\mathbb{N}}\) be a countable family of elements of \(\mathcal{L}\). If \(\nu\) is absolutely continuous with respect to \(\mu\), then for every \(U\in\mathcal{L}\), for every \(\epsilon>0\), there is an \(N_{0}\in\mathbb{N}\) such that for every \(N\geq N_{0}\), \(N(\mu(U\cap U_{N})-\frac{1}{N+1}\nu(U\cap U_{N}))\geq-\epsilon\)._
Proof.: Let us fix an arbitrary \(\epsilon>0\). Using Remark 8, since \(\nu\) and \(\mu\) are bounded, we can find \(\eta>0\) such that for every \(V\in\mathcal{L}\) such that \(\mu(V)<\eta\), \(\nu(V)<\epsilon\). Using again that \(\nu\) is bounded, there is an \(N_{0}\in\mathbb{N}\) such that \(N_{0}\eta\geq\nu(X)\). For every \(N\geq N_{0}\), either \(\mu(U\cap U_{N})<\eta\), in which case \(\nu(U\cap U_{N})-\epsilon<0\leq N\mu(U\cap U_{N})\), or \(\mu(U\cap U_{N})\geq\eta\), in which case \(N\mu(U\cap U_{N})\geq N_{0}\eta\geq\nu(X)\geq\nu(U\cap U_{N})-\epsilon\). Whatever the alternative,
we have \(N\mu(U\cap U_{N})-\nu(U\cap U_{N})\geq-\epsilon\), and therefore \(N(\mu(U\cap U_{N})-\frac{1}{N+1}\nu(U\cap U_{N}))\geq-\epsilon\), for every \(N\geq N_{0}\).
The following is the only place in this section where we need our valuations to be \(\omega\)-continuous.
**Lemma 24**.: _Let \(\nu\) be an \(\omega\)-continuous bounded valuation on an \(\omega\)-topological space \((X,\mathcal{L})\), and let \(\left(U_{N}\right)_{N\in\mathbb{N}}\) be a countable directed family of elements of \(\mathcal{L}\) such that \(\nu(X\smallsetminus\bigcup_{N\in\mathbb{N}}^{\uparrow}U_{N})=0\). For every \(C\in\mathcal{A}(\mathcal{L})\), \(\sup_{N\in\mathbb{N}}^{\uparrow}\frac{N}{N+1}\nu(C\cap U_{N})\geq\nu(C)\)._
Proof.: Let \(U_{\infty}\stackrel{{\mathrm{def}}}{{=}}\bigcup_{N\in\mathbb{N}}^{\uparrow}U_{N}\). For every \(C\in\mathcal{A}(\mathcal{L})\), the family \(\left(\nu(C\cap U_{N})\right)_{N\in\mathbb{N}}\) is directed. This is because \(\left(U_{N}\right)_{N\in\mathbb{N}}\) is directed and \(U\in\mathcal{L}\mapsto\nu(C\cap U)\) is monotonic. Indeed, if \(U\subseteq V\), then \(\nu(C\cap V)=\nu(C\cap U)+\nu(C\cap(V\smallsetminus U))\geq\nu(C\cap U)\).
We claim that \(\sup_{N\in\mathbb{N}}^{\uparrow}\nu(C\cap U_{N})\geq\nu(C\cap U_{\infty})\) for every \(C\in\mathcal{A}(\mathcal{L})\). (The equality follows by monotonicity of \(U\mapsto\nu(C\cap U)\).) By additivity of \(\nu\), and since \(+\) is Scott-continuous, it is enough to show this when \(C\) is a crescent, say \(U^{\prime}\smallsetminus V^{\prime}\), where \(U^{\prime},V^{\prime}\in\mathcal{L}\) and \(V^{\prime}\subseteq U^{\prime}\). For every \(\epsilon>0\), there is an \(N\in\mathbb{N}\) such that \(\nu(U^{\prime}\cap U_{N})\geq\nu(U^{\prime}\cap U_{\infty})-\epsilon\), since \(\nu\) is \(\omega\)-continuous. Since \(\nu\) is monotonic, \(\nu(V^{\prime}\cap U_{N})\leq\nu(V^{\prime}\cap U_{\infty})\), and therefore \(\nu(C\cap U_{N})=\nu(U^{\prime}\cap U_{N})-\nu(V^{\prime}\cap U_{N})\geq\nu(U ^{\prime}\cap U_{\infty})-\nu(V^{\prime}\cap U_{\infty})-\epsilon=\nu(C\cap U _{\infty})-\epsilon\).
Now, since \(\nu(X\smallsetminus U_{\infty})=0\), we have \(\nu(C\cap U_{\infty})=\nu(C)\). (Formally, \(\nu(C\smallsetminus U_{\infty})\leq\nu(X\smallsetminus U_{\infty})=0\), and then \(\nu(C)=\nu(C\cap U_{\infty})+\nu(C\smallsetminus U_{\infty})=\nu(C\cap U_{ \infty})\).) Therefore \(\sup_{N\in\mathbb{N}}^{\uparrow}\nu(C\cap U_{N})\geq\nu(C)\). Since multiplication is Scott-continuous on \(\overline{\mathbb{R}}_{+}\) and \(\sup_{N\in\mathbb{N}}^{\uparrow}\frac{N}{N+1}=1\), we conclude.
**Corollary 25**.: _Let \(\mu\) and \(\nu\) be two bounded \(\omega\)-continuous valuations on an \(\omega\)-topological space \((X,\mathcal{L})\), and let \(g\in\mathfrak{L}(X,\mathcal{L})\). If \(\nu\) is absolutely continuous with respect to \(\mu\) and if for every non-negative dyadic number \(r\in\mathbb{Q}_{2}\cap\mathbb{R}_{+}\), for every crescent \(C\) disjoint from \(g^{-1}(]r,\infty])\), \(\nu(C)\leq r\cdot\mu(C)\), then \(g\cdot\mu\geq\nu\) on \((X,\mathcal{L})\)._
Proof.: Let \(U_{N}\) and \(V_{N}\) be as in Lemma 22. For every \(U\in\mathcal{L}\), for every \(N\in\mathbb{N}\), \((g\cdot\mu)(U)\) is larger than or equal to the sum of \(\frac{N}{N+1}\nu(U\cap U_{N})\) and of \(N(\mu(U\cap V_{N})-\frac{1}{N+1}\nu(U\cap V_{N}))\). For every \(\epsilon>0\), the latter is larger than or equal to \(-\epsilon\) for \(N\) large enough by Lemma 23, applied to the countable family \(\left(V_{N}\right)_{N\in\mathbb{N}}\), and the former is larger than or equal to \(\nu(U)-\epsilon\) for \(N\) large enough by Lemma 24. Hence \((g\cdot\mu)(U)\geq\nu(U)-\epsilon\). We conclude since \(\epsilon>0\) is arbitrary.
We now go beyond bounded valuations, and on to \(\sigma\)-finite valuations.
**Lemma 26**.: _Let \(\mu\) and \(\nu\) be two \(\omega\)-continuous valuations on an \(\omega\)-topological space \((X,\mathcal{L})\). If both \(\nu\) and \(\mu\) are \(\sigma\)-finite, there is a monotone sequence
\(\left(E_{n}\right)_{n\in\mathbb{N}}\) of elements of \(\mathcal{L}\) such that \(\bigcup_{n\in\mathbb{N}}^{\uparrow}E_{n}=X\) and \(\nu(E_{n}),\mu(E_{n})<\infty\) for each \(n\in\mathbb{N}\)._
Proof.: Let \(\left(F_{n}\right)_{n\in\mathbb{N}}\) be a monotone sequence of elements of \(\mathcal{L}\) such that \(\nu(F_{n})<\infty\) and \(\bigcup_{n\in\mathbb{N}}^{\uparrow}F_{n}=X\), and let \(\left(G_{n}\right)_{n\in\mathbb{N}}\) play the same role with \(\mu\). Then let \(E_{n}\stackrel{{\mathrm{def}}}{{=}}F_{n}\cap G_{n}\) for each \(n\in\mathbb{N}\).
We will call any monotone sequence \(\left(E_{n}\right)_{n\in\mathbb{N}}\) satisfying the conclusion of Lemma 26 a _witness_ of the joint \(\sigma\)-finiteness of \(\nu\) and \(\mu\).
**Theorem 27** (Existence of density maps).: _Let \(\left(X,\mathcal{L}\right)\) be an \(\omega\)-topological space, and \(\mu\) and \(\nu\) be two \(\sigma\)-finite \(\omega\)-continuous valuations on \(\left(X,\mathcal{L}\right)\). Let \(\left(E_{n}\right)\) be any witness of joint \(\sigma\)-finiteness of \(\nu\) and \(\mu\). Then the following properties are equivalent:_
1. _there is a density function_ \(g\in\mathfrak{L}(X,\mathcal{L})\) _such that_ \(\nu=g\cdot\mu\)_;_
2. _the following two conditions are met:_ (a) _\(\nu\) is absolutely continuous with respect to \(\mu\);_ (b) _for every \(n\in\mathbb{N}\), for every \(r\in\mathbb{R}_{+}\), \(\nu_{\left|E_{n}\right.}-r\cdot\mu_{\left|E_{n}\right.}\) has the Hahn decomposition property._
Proof.: The implication \(\left(1\right)\Rightarrow\left(2\right)\) is by Proposition 11 and Corollary 20.
In the converse direction, let \(\left(E_{n}\right)_{n\in\mathbb{N}}\) be as given in Lemma 26. For each \(n\in\mathbb{N}\) and for each non-negative rational number \(q\), \(\nu_{\left|E_{n}\right.}-q\cdot\mu_{\left|E_{n}\right.}\) has the Hahn decomposition property, so there is an element \(U_{nq}\in\mathcal{L}\) such that every crescent \(C\) included in \(U_{nq}\) satisfies \(\nu(C\cap E_{n})\geq q\cdot\mu(C\cap E_{n})\) and every crescent \(C\) disjoint from \(U_{nq}\) satisfies \(\nu(C\cap E_{n})\leq q\cdot\mu(C\cap E_{n})\).
Since \(\mathcal{L}\) is an \(\omega\)-topology, \(V_{q}\stackrel{{\mathrm{def}}}{{=}}\bigcup_{\begin{subarray}{c}q^ {\prime}\in\mathbb{Q},q^{\prime}\geq q\\ n\in\mathbb{N}\end{subarray}}(E_{n}\cap U_{nq^{\prime}})\) is in \(\mathcal{L}\) for every \(q\in\mathbb{Q}\), \(q\geq 0\). Moreover, \(\left(V_{q}\right)_{q\in\mathbb{Q},q\geq 0}\) forms an antitonic chain: if \(q\leq q^{\prime}\) then \(V_{q}\supseteq V_{q^{\prime}}\).
Given \(n\in\mathbb{N}\) and \(q\in\mathbb{Q}\), \(q\geq 0\), we claim that for every crescent \(C\subseteq V_{q}\), \(\nu(C\cap E_{n})\geq q\cdot\mu(C\cap E_{n})\), and that every crescent \(C\) disjoint from \(V_{q}\) satisfies \(\nu(C\cap E_{n})\leq q\cdot\mu(C\cap E_{n})\). The second property is clear: if \(C\) is disjoint from \(V_{q}\), then it is disjoint from \(E_{n}\cap U_{nq}\), so \(C\cap E_{n}\) is a crescent disjoint from \(U_{nq}\), whence \(\nu((C\cap E_{n})\cap E_{n})\leq q\cdot\mu((C\cap E_{n})\cap E_{n})\). For the first property, where \(C\subseteq V_{q}\), let us write \(C\) as \(U\smallsetminus V\) where \(U,V\in\mathcal{L}\). We enumerate the rational numbers larger than or equal to \(q\) as \(\left(q_{m}\right)_{m\in\mathbb{N}}\). Since \(C\subseteq V_{q}\), \(\nu(C\cap E_{n})=\nu(C\cap E_{n}\cap V_{q})\). Now \(E_{n}\cap V_{q}=\bigcup_{p,p^{\prime}\geq n}^{\uparrow}W_{pp^{\prime}}\), where \(W_{pp^{\prime}}\stackrel{{\mathrm{def}}}{{=}}\bigcup_{ \begin{subarray}{c}0\leq j\leq p\\ 0\leq k\leq p^{\prime}\end{subarray}}\left(E_{j}\cap E_{n}\cap U_{jq_{k}}\right)\). Therefore \(\nu(C\cap E_{n})=\nu(C\cap E_{n}\cap V_{q})=\nu(U\cap E_{n}\cap V_{q}\smallsetminus V )=\nu((U\cap E_{n}\cap V_{q})\cup V)-\nu(V)=\sup_{p,p^{\prime}\in\mathbb{N} }^{\uparrow}\nu((U\cap W_{pp^{\prime}})\smallsetminus V)-\nu(V)=\sup_{p,p^{ \prime}\in\mathbb{N}}^{\uparrow}\nu(C\cap W_{pp^{\prime}})\). Similarly, \(\mu(C\cap E_{n})=\sup_{p,p^{\prime}\in\mathbb{N}}^{\uparrow}\mu(C\cap W_{pp^{ \prime}})\). We can
write \(W_{pp^{\prime}}\) as the finite disjoint union of crescents \(C_{jk}\) with \(0\leq j\leq p\) and \(0\leq k\leq p^{\prime}\), where \(C_{jk}\stackrel{{\mathrm{def}}}{{=}}(E_{j}\cap E_{n}\cap U_{jq_{k}}) \smallsetminus\bigcup\limits_{\begin{subarray}{c}0\leq j^{\prime}\leq j,\,0\leq k^{\prime}\leq k\\ (j^{\prime},k^{\prime})\neq(j,k)\end{subarray}}(E_{j^{\prime}}\cap E_{n}\cap U_{j^{ \prime}q_{k^{\prime}}})\). Then \(C_{jk}\subseteq U_{jq_{k}}\), hence also \(C\cap C_{jk}\subseteq U_{jq_{k}}\), so \(\nu(C\cap C_{jk}\cap E_{j})\geq q_{k}\cdot\mu(C\cap C_{jk}\cap E_{j})\). Since \(C_{jk}\subseteq E_{j}\), this simplifies to \(\nu(C\cap C_{jk})\geq q_{k}\cdot\mu(C\cap C_{jk})\). Then \(\nu(C\cap W_{pp^{\prime}})=\sum_{\begin{subarray}{c}0\leq j\leq p\\ 0\leq k\leq p^{\prime}\end{subarray}}\nu(C\cap C_{jk})\geq\sum_{ \begin{subarray}{c}0\leq j\leq p\\ 0\leq k\leq p^{\prime}\end{subarray}}q_{k}\cdot\mu(C\cap C_{jk})\). Since \(q_{k}\geq q\) for every \(k\), this is larger than or equal to \(q\cdot\sum_{\begin{subarray}{c}0\leq j\leq p\\ 0\leq k\leq p^{\prime}\end{subarray}}\mu(C\cap C_{jk})=q\cdot\mu(C\cap W_{pp^{ \prime}})\). Taking suprema over \(p,p^{\prime}\in\mathbb{N}\), we obtain that \(\nu(C\cap E_{n})\geq q\cdot\mu(C\cap E_{n})\), as desired.
We define \(g(x)\) as \(\sup^{\uparrow}\{t\in\mathbb{R}_{+}\mid\exists q\in\mathbb{Q},q>t\text{ and }x\in V_{q}\}\); equivalently, \(g(x)=\sup\{q\in\mathbb{Q}\cap\mathbb{R}_{+}\mid x\in V_{q}\}\). Then \(g(x)>t\) if and only if \(x\in V_{q}\) for some \(q\in\mathbb{Q}\), \(q>t\). Hence \(g^{-1}(]t,\infty])=\bigcup_{q\in\mathbb{Q},q>t}V_{q}\), which is in \(\mathcal{L}\) since \(\mathcal{L}\) is an \(\omega\)-topology. Therefore \(g\) is in \(\mathfrak{L}(X,\mathcal{L})\).
Let us fix \(n\in\mathbb{N}\). For every non-negative dyadic number \(r\in\mathbb{Q}_{2}\cap\mathbb{R}_{+}\), \(g^{-1}(]r,\infty])=\bigcup_{q\in\mathbb{Q},q>r}V_{q}\subseteq V_{r}\), so for every crescent \(C\subseteq g^{-1}(]r,\infty])\), \(\nu(C\cap E_{n})\geq r\cdot\mu(C\cap E_{n})\). By Lemma 21, \(\nu_{|E_{n}}\geq g\cdot\mu_{|E_{n}}\). For every crescent \(C\) disjoint from \(g^{-1}(]r,\infty])\), \(C\) is disjoint from every \(V_{q}\) with \(q>r\), so \(\nu(C\cap E_{n})\leq q\cdot\mu(C\cap E_{n})\) for every rational \(q>r\); therefore \(\nu_{|E_{n}}(C)\leq r\cdot\mu_{|E_{n}}(C)\), and by Corollary 25, \(g\cdot\mu_{|E_{n}}\geq\nu_{|E_{n}}\).
It follows that \(\nu_{|E_{n}}=g\cdot\mu_{|E_{n}}\) for every \(n\in\mathbb{N}\). Then, using the fact that \(X=\bigcup_{n\in\mathbb{N}}^{\uparrow}E_{n}\) and the \(\omega\)-continuity of \(\nu\), for every \(U\in\mathcal{L}\), \(\nu(U)=\sup_{n\in\mathbb{N}}^{\uparrow}\nu_{|E_{n}}(U)=\sup_{n\in\mathbb{N}} ^{\uparrow}(g\cdot\mu_{|E_{n}})(U)=\sup_{n\in\mathbb{N}}^{\uparrow}\int_{0}^{ \infty}\mu(U\cap E_{n}\cap g^{-1}(]t,\infty]))\;dt\) (by Lemma 10), and this is equal to \(\int_{0}^{\infty}\mu(U\cap g^{-1}(]t,\infty]))\;dt\) by Fact 1 and the \(\omega\)-continuity of \(\mu\), namely to \((g\cdot\mu)(U)\). Therefore \(\nu=g\cdot\mu\).
**Remark 28**.: _In the special case where \(\mathcal{L}\) is not only an \(\omega\)-topology, but is also closed under complements, namely when \(\mathcal{L}\) is a \(\sigma\)-algebra, we have seen that \(\omega\)-continuous valuations and measures are the same thing. Then, for every \(n\in\mathbb{N}\) and for every \(r\in\mathbb{R}_{+}\), \(\nu_{|E_{n}}-r\cdot\mu_{|E_{n}}\) is a signed measure. The Hahn decomposition theorem [3, Theorem 32.1] states that every signed measure has the Hahn decomposition property, and therefore property \((2b)\) is simply true in the case of measures. Hence Theorem 27 implies the classical Radon-Nikodym theorem._
|
2310.13743 | Designing Moiré Patterns by Bending | Motivated by a recent experiment [Kapfer et al., Science {\bf 381}, 677
(2023)], we analyze the structural effects and low-energy physics of a bent
nanoribbon placed on top of graphene, which creates a gradually changing
moir\'e pattern. By means of a classical elastic model we derive the strains in
the ribbon and we obtain its spectrum with a scaled tight-binding model. The
size of the bent region is determined by the balance between elastic and van
der Waals energy, and different regimes are identified. Near the clamped edge,
strong strains and small angles lead to one-dimensional channels. Near the
bent edge, a long region behaves like magic angle twisted bilayer graphene
(TBG), showing a sharp peak in the density of states, mostly isolated from the
rest of the spectrum. We also calculate the band topology along the ribbon and
we find that it is stable for large intervals of strains and twist angles.
Together with the experimental observations, these results show that the bent
nanoribbon geometry is ideal for exploring superconductivity and correlated
phases in TBG in the very sought-after regime of ultra-low twist angle
disorder. | Pierre A. Pantaleón, Héctor Sainz-Cruz, Francisco Guinea | 2023-10-20T18:01:57Z | http://arxiv.org/abs/2310.13743v2 | # Designing Moire Patterns by Bending
###### Abstract
Motivated by a recent experiment [Kapfer _et al._, Science **381**, 677 (2023)], we analyze the structural effects and low-energy physics of a bent nanoribbon placed on top of graphene, which creates a gradually changing moire pattern. By means of a classical elastic model we derive the strains in the ribbon and we obtain its spectrum with a scaled tight-binding model. The size of the bent region is determined by the balance between elastic and van der Waals energy, and different regimes are identified. Near the clamped edge, strong strains and small angles lead to one-dimensional channels. Near the bent edge, a long region behaves like magic angle twisted bilayer graphene (TBG), showing a sharp peak in the density of states, mostly isolated from the rest of the spectrum. We also calculate the band topology along the ribbon and we find that it is stable for large intervals of strains and twist angles. Together with the experimental observations, these results show that the bent nanoribbon geometry is ideal for exploring superconductivity and correlated phases in TBG in the very sought-after regime of ultra-low twist angle disorder.
## I Introduction
Experiments on twisted graphene stacks have unveiled an array of exotic phenomena, including almost all strongly correlated phases known in condensed matter physics [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. However, progress in understanding is proving arduous, due to the subtle interplay between factors with comparable energy scales, such as kinetic energy and Coulomb interactions, angle disorder [23] and strains [24]. As of now, studying all of these in a single model is out of reach, and experiments struggle with low reproducibility [25]. Moreover, cascades of correlated phases and superconductivity have been discovered in Bernal bilayer graphene [26; 27; 28; 29; 30] and rhombohedral trilayer graphene [31; 32], crystalline systems with very low strains or disorder, suggesting it may be fruitful to understand these stacks before twisted ones. Still, several phases have only been observed in twisted stacks so far, and there is a feeling that the moire leads to emergent properties.
Crucial efforts are underway to overcome both angle disorder and strain in twisted graphene systems. Strains are commonly found in twisted stacks [33], and the combination of twists and strains can lead to many novel moire structures [34; 35]. In particular, a recent experiment [36] has shown that bending 2D materials can create exceptionally uniform moire patterns, and achieve independent control of twist angle and strain. This new technique is promising for next generation experiments on twisted graphene. Indeed, samples with ultra-low disorder are expected to enhance already known phenomena, but will also uncover new phases, too delicate to appear in the presence of even mild disorder. Reducing disorder from \(\sim 0.1^{\circ}\) to \(\sim 0.02^{\circ}\) has allowed for the discovery of zero-field Chern insulators [17] and signatures of additional superconducting domes [4], among others. Therefore, we are certain that samples with ultra-low disorder \(\lesssim 0.005^{\circ}\) will be full of surprises.
In this paper, we study the properties of a bent graphene nanoribbon on top of graphene, which creates a slowly changing moire pattern, as seen in the experiment. We first derive the strains in the system using the classical theory of elasticity and then we obtain its spectrum by exact diagonalization of a scaled tight-binding model. We find that one dimensional channels appear due to the combination of small twist angles and high strains near the clamped edge. Moreover, when the bending angle is close to \(1^{\circ}\), a very long region near the bent edge behaves like magic-angle graphene, showing a sharp peak in the density of states, nearly isolated from the rest of the spectrum, and a stable band topology. These results support the proposal of Ref. [36] that bent graphene ribbons are an ideal platform for studying TBG with ultra-low twist angle disorder. The rest of the paper is organized as follows: in section II we discuss the elastic properties of the system; spectral and topological properties are described in section III, and we conclude in Section IV.
## II Elasticity
### Deformations and stresses
We calculate the deformations \(u_{x},u_{y}\) and strains \(u_{xx},u_{yy},u_{xy}\) in a bent nanoribbon of length \(L\) and width \(W\), in the limit \(W\ll L.\) We assume that one of the narrow sides is clamped so that:
\[u_{x}(0,y)=u_{y}(0,y)=0,\ -\frac{W}{2}\leq y\leq\frac{W}{2}. \tag{1}\]
The ribbon is bent by a vertical force, along the \(y\) axis, applied at the other end, \(x=L.\) We adapt to our problem the theory of plates described in [37]; we assume that, to
leading order in \(W/L\), we can write:
\[u_{y}(x)\approx f(x)+\cdots \tag{2}\]
Then, in the absence of external forces, the elastic energy can be written as:
\[E_{el} = \frac{\mu(\lambda+\mu)W^{3}}{12(\lambda+2\mu)}\int_{0}^{L}dx\left( \frac{\partial^{2}u_{y}(x)}{\partial x^{2}}\right)^{2}= \tag{3}\] \[= \frac{EW^{3}}{48}\int_{0}^{L}dx\left(\frac{\partial^{2}u_{y}(x)} {\partial x^{2}}\right)^{2}\]
where \(\lambda,\mu\) are elastic Lamé coefficients, and \(E=[4\mu(\lambda+\mu)]/(\lambda+2\mu)\) is the two-dimensional Young modulus. In the following, to simplify notation we omit, if necessary, the \(x,y\) dependence of the deformation and stress functions. The general equilibrium solution of Eq. 3 satisfies:
\[\frac{\partial^{4}u_{y}}{\partial x^{4}}=0. \tag{4}\]
A vertical force, \(F\), applied at position \(x_{0}\) of the nanoribbon leads to a term:
\[\delta E=F\times\frac{1}{W}\int_{-\frac{W}{2}}^{\frac{W}{2}}u_{y}(x_{0},y)dy \tag{5}\]
this term induces a discontinuity in \(u_{y}^{\prime\prime\prime}(x)\) at \(x=x_{0}\). A force applied at the end of the nanoribbon, \(x=L\), implies that the function \(u_{y}(x)\) must have a finite third derivative at \(x=L\). The clamped condition at \(x=0\) also implies that \(u_{y}^{\prime}(0)=0\), so that the most general solution of Eq. 4 is:
\[u_{y}=\frac{ax^{2}}{2L}+\frac{bx^{3}}{3L^{2}}, \tag{6}\]
where \(a,b\) are dimensionless constants. The condition that the shear component of the stress tensor vanishes, \(\sigma_{xy}=0\), in order to set to zero the elastic forces at the top and bottom edges of the nanoribbon, implies that:
\[u_{x}=-y\frac{\partial u_{y}}{\partial x}=-y\left(\frac{ax}{L}+\frac{bx^{2}}{ L^{2}}\right) \tag{7}\]
so that:
\[u_{xx}=\partial_{x}u_{x}=-y\left(\frac{a}{L}+\frac{2bx}{L^{2}}\right). \tag{8}\]
The \(\sigma_{yy}\) component of the stress tensor should also vanish, and:
\[\sigma_{yy} = \lambda(u_{xx}+u_{yy})+2\mu u_{yy}=0,\] \[u_{yy} = -\frac{\lambda}{\lambda+2\mu}u_{xx}=-\nu u_{xx}, \tag{9}\]
where \(\nu=\lambda/(\lambda+2\mu)\) is the Poisson ratio. Equation 6 requires a correction of order \(W/L\):
\[\delta u_{y}=\frac{\nu y^{2}}{2}\left(\frac{a}{L}+\frac{2bx}{L^{2}}\right). \tag{10}\]
Figure 1: (a) Sketch of a bent nanoribbon on top of graphene, which results in a gradually changing moiré pattern. We highlight the maximum displacement along the \(y\)-axis at \(x=L\). (b) Effective twist angle as a function of position, normalized to the maximum twist angle \(\theta_{max}\). (c) Moiré length variation as a function of position, starting from a minimum \(L=L_{0}\) and normalized to its maximum value. (d) Strain profile \(u_{xx}\) in Eq. 15, normalized to the maximum strain value. The white line in the middle is the non-strained path. (e) Nanoribbon deflection curve \(d(x)\) and bending angle \(\theta_{b}(x)\) as a function of position; each function is normalized to its maximum value. The inset in (e) is an enlarged region where a small variation of the twist angle is obtained.
The only non-zero component of the stress tensor is:
\[\sigma_{xx} = \lambda(u_{xx}+u_{yy})+2\mu u_{xx}= \tag{11}\] \[= -\frac{4\mu(\lambda+\mu)}{\lambda+2\mu}y\left(\frac{a}{L}+\frac{2bx}{L^{2}}\right),\]
a quantity that must vanish at the non-clamped edge of the nanoribbon, \(x=L\), so that:
\[b=-\frac{a}{2}. \tag{12}\]
The angle between the bent nanoribbon and the horizontal axis is given by
\[\theta(x,y)=\frac{\partial u_{y}(x,y)}{\partial x}. \tag{13}\]
The maximum twist angle occurs for \(x=L\) and it is:
\[\theta_{max}\approx\left.\frac{\partial u_{y}(x,y)}{\partial x}\right|_{x=L} \approx a+b=\frac{a}{2}, \tag{14}\]
so that \(a=-2b\approx 2\theta_{max}\). The strains inside the nanoribbon are:
\[u_{xx}(x,y) = -2\theta_{max}\frac{y}{L}\left(1-\frac{x}{L}\right),\] \[u_{yy}(x,y) = -\nu u_{xx}(x,y). \tag{15}\]
Note that we have replaced the force applied at the free end by the value of the resulting twist angle. The deformations satisfying the boundary conditions are then given by
\[u_{x}(x,y) = -2\theta_{max}\frac{y}{L}\left(x-\frac{x^{2}}{2L}\right), \tag{16}\] \[u_{y}(x,y) = \frac{\theta_{max}}{L}\left[\left(x^{2}-\frac{x^{3}}{3L}\right)+\nu y^{2}\left(1-\frac{x}{L}\right)\right], \tag{17}\]
which are the local displacements of each point within the nanoribbon. From Eq. 13 and Eq. 17 we obtain an explicit form for the twist angle,
\[\theta(x,y)=\frac{\theta_{max}}{L}\left(2x-\frac{x^{2}}{L}-\frac{\nu y^{2}}{L}\right), \tag{18}\]
between the nanoribbon and the horizontal axis.
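As a minimal numerical sketch (ours; all parameter values below are illustrative, not taken from the text), the displacement and twist-angle fields of Eqs. 16-18 can be evaluated directly:

```python
import numpy as np

# Displacement and twist-angle fields of Eqs. 16-18 (illustrative values).
L, W = 800.0, 80.0              # ribbon length and width (nm)
theta_max = np.deg2rad(1.1)     # twist angle at the free end
nu = 0.165                      # Poisson ratio of graphene

def u_x(x, y):
    return -2.0 * theta_max * (y / L) * (x - x**2 / (2.0 * L))

def u_y(x, y):
    return (theta_max / L) * (x**2 - x**3 / (3.0 * L)
                              + nu * y**2 * (1.0 - x / L))

def theta(x, y):
    return (theta_max / L) * (2.0 * x - x**2 / L - nu * y**2 / L)  # Eq. 18

x = np.linspace(0.0, L, 201)
d = u_y(x, 0.0)          # deflection curve d(x), along the zero-strain line
theta_b = theta(x, 0.0)  # bending angle at y = 0
print(f"d(L) = {d[-1]:.1f} nm, theta_b(L) = {np.rad2deg(theta_b[-1]):.2f} deg")
```

At the free end this correctly returns \(\theta_{b}(L)=\theta_{max}\) and \(d(L)=\tfrac{2}{3}\theta_{max}L\).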
### Clamping and sliding
The previous subsection describes the deformations, strains, and stresses of a nanoribbon with one of the ends clamped and the other under a lateral force. We now estimate, as a function of the van der Waals forces between the nanoribbon and the substrate, which fraction of the nanoribbon remains clamped to the substrate, and which fraction slides because of the force applied to the end. The size of the bent region is determined by a balance between the interlayer van der Waals coupling, and the elastic energy required to bend the nanoribbon.
The elastic energy of the bent nanoribbon can be easily calculated from the estimates of the strains discussed previously. In the limit \(W\ll L\), the elastic energy scales as:
\[E_{elas}\propto\mu W^{3}\int_{0}^{L}\left(\frac{\partial^{2}u_{y}}{\partial x ^{2}}\right)^{2}dx\sim\frac{\mu W^{3}\theta_{max}^{2}}{L}. \tag{19}\]
The van der Waals interaction can be described, in a first approximation, by the energy per unit area, \(v_{vdW}\), between two perfectly aligned layers, and the assumption that, in the limit of large misalignment, the two layers are decoupled.
The van der Waals energy between the nanoribbon and the substrate can be approximated by the energy difference, per unit area, between perfect alignment between the two layers, and the energetically less favorable alignment, denoted as \(V_{vdW}\). These two configurations repeat themselves with a periodicity comparable to the intralayer distance between nearest neighbor atoms. An interpolation using a few harmonics gives a rough description of the van der Waals interaction. The value of \(V_{vdW}\) is of order 1 meV Å\({}^{-2}\).
In the following, we estimate the van der Waals energy for two possible regimes:
#### ii.2.1 Complete decoupling between the nanoribbon and the substrate.
We assume perfect alignment of the clamped region with the substrate, with the bent region considered decoupled from it. The van der Waals energy required to
Figure 2: (a) Moiré pattern of the bent nanoribbon in a region close to the clamped edge. (b) Real space unit cell for a given value of strain and twist angle. (c) Formation of the mini Brillouin zone (mBZ, black) due to the uniform substrate (red) and the twisted and strained lattice (blue). The orientation of both real unit cell and mBZ depends on the combination of twist angle and strain sign and magnitude.
detach the bent region is
\[E_{vdW}(L)=V_{vdW}\times(LW). \tag{20}\]
The total energy needed to bend the nanoribbon is determined by the sum of this term and the elastic energy, Eq. 19. We approximate \(\theta_{max}\approx d_{max}/L\), where \(d_{max}\) is the vertical deflection at the edge. Then, the optimal value of \(L\), the length of the detached region, is given by:
\[L\approx(d_{max}W)^{1/2}\left(\frac{\mu}{V_{vdW}}\right)^{1/4}. \tag{21}\]
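The optimum follows from elementary minimization; as a one-line check (ours, keeping only the scaling already used in Eq. 19), setting \(E(L)\approx\mu W^{3}d_{max}^{2}/L^{3}+V_{vdW}LW\) gives
\[\frac{dE}{dL}=-\frac{3\mu W^{3}d_{max}^{2}}{L^{4}}+V_{vdW}W=0\quad\Longrightarrow\quad L^{4}=\frac{3\mu W^{2}d_{max}^{2}}{V_{vdW}},\]
which reproduces Eq. 21 up to the \(O(1)\) factor \(3^{1/4}\).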
#### ii.2.2 Misaligned substrate and partial relaxation.
We consider that neither the clamped nor the bent regions are fully aligned or misaligned with the substrate. We assume that the misalignment is described by a twist angle \(\theta_{subs}\) in the clamped region, which changes smoothly in the bent region:
\[\theta(x)\approx\theta_{subs}+\theta_{max}\left(\frac{x}{L}\right)^{2}. \tag{22}\]
At each position, \(x\), in the nanoribbon a moire pattern can be defined, associated to the twist angle \(\theta(x)\). The van der Waals force between the nanoribbon and the substrate leads to a relaxation, and to a finite attractive van der Waals interaction. The force per unit area exerted by the substrate on the nanoribbon is of order \(V_{vdW}/\ell\), where \(\ell\) is comparable to the interatomic distance in each layer. The deformation induced by this force arises from the balance of this force and the cost in elastic energy associated to the relaxation of the atomic positions. The moiré period defined by the twist angle \(\theta(x)\) is of order \(\ell_{m}(x)\sim\ell/\theta(x)\). The wavelength of the induced strains is of order \(\ell_{m}(x)\), which leads to an effective spring constant, per unit area, of order \(\mu/\ell_{m}(x)^{2}\). Using second order perturbation theory, the attractive van der Waals energy, per unit area is
\[\tilde{E}_{vdW}(x)\propto-\frac{V_{vdW}^{2}}{\mu}\frac{\ell_{m}(x)^{2}}{\ell^{2}}\approx-\frac{V_{vdW}^{2}}{\mu\theta(x)^{2}} \tag{23}\]
The total van der Waals energy can be obtained by integrating this expression over \(x\), from \(x=0\) to \(x=L\).
We now assume that the initial misalignment in the clamped region is much larger than the change induced by bending, \(\theta_{max}\ll\theta_{subs}\). Then, writing \(\theta_{0}\equiv\theta_{subs}\), we can expand the van der Waals energy as:
\[E_{vdW} =\int_{0}^{L}\tilde{E}_{vdW}(x)dx\approx\] \[\approx-\frac{V_{vdW}^{2}W}{\mu}\left(c_{1}\frac{L}{\theta_{0}^{2 }}+c_{2}\frac{d_{max}}{\theta_{0}^{3}}+c_{3}\frac{d_{max}^{2}}{L\theta_{0}^{4 }}+\cdots\right). \tag{24}\]
where \(c_{1},c_{2},c_{3}\) are dimensionless constants of order unity, and we are using \(\theta_{max}\approx d_{max}/L\). The size of the bent region arises from the balance of this expression and the elastic energy, Eq. 19. The first term in Eq. 24 is independent of the bending, and the second term does not contain the length \(L\). From the third term in Eq. 24 and Eq. 19, we find:
\[L\approx W\theta_{0}^{2}\frac{\mu}{V_{vdW}}. \tag{25}\]
This regime seems to approximately describe the experiments reported in [36], as it was found that \(L\propto W\), and the values of \(L\) and \(d_{max}\) seem to be independent. The ratio \(L/W\approx 7\) reported in [36] is consistent with \(\mu/V_{vdW}\approx 10^{4}\) and \(\theta_{0}\approx 1.5^{\circ}\). Note, however, that Eq. 25 suggests a significant dependence on the misalignment of the clamped region, parametrized by \(\theta_{0}\), which could imply a sample dependence not observed in the experiments.
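A back-of-the-envelope check of the two regimes is given below (our numbers; only \(\mu/V_{vdW}\approx 10^{4}\) and \(\theta_{0}\approx 1.5^{\circ}\) are quoted above, the rest are assumptions):

```python
import numpy as np

# The two regimes, numerically. Only mu/V_vdW ~ 1e4 and theta_0 ~ 1.5 deg
# are quoted in the text; W and d_max below are assumed for illustration.
mu_over_V = 1.0e4
theta0 = np.deg2rad(1.5)

# Regime 2 (partial relaxation), Eq. 25: L/W ~ theta_0^2 * mu / V_vdW
print(f"L/W ~ {theta0**2 * mu_over_V:.1f}")      # ~7, as in Ref. [36]

# Regime 1 (complete decoupling), Eq. 21
W, d_max = 100.0, 50.0                            # nm (assumed)
L1 = np.sqrt(d_max * W) * mu_over_V**0.25
print(f"detached length ~ {L1:.0f} nm")
```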
### Bent graphene nanoribbons
The model presented in the previous sections is applicable for determining the elastic properties of nanoribbons with an arbitrary geometry. However, recent experiments have realized bent graphene nanoribbons placed on a graphene substrate [36]. A sketch of the system is shown in Fig. 1(a). The position of each lattice site is given by the set \(\{x_{i},y_{i}\}\) for graphene and \(\{x_{i}+u_{x}(x_{i},y_{i}),y_{i}+u_{y}(x_{i},y_{i})\}\) for the bent nanoribbon, where index \(i\) runs over all positions in the honeycomb lattice. In the following, without loss of generality, we are assuming an initial non-twisted \(AB\) stacking at the clamped region (see Fig. S1). Near this edge, the combination of high strains and low angles leads to a strong deformation of the moire lattice and the formation of quasi-one-dimensional patterns [34; 35]. A zoom near this zone is shown in Fig. 2(a), where the bright spots correspond to \(AA\) regions. The different sign of the strains inside the nanoribbon, Eq. 15, results in a different distortion of both real and reciprocal space, as shown in Fig. 2(b, c), respectively. The sign and strength of the strain can be inferred from the distortion of the \(AA\) regions [35].
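A minimal sketch of how such a gradually changing moiré pattern can be generated numerically is given below (ours): the sites of a honeycomb ribbon are displaced with Eqs. 16 and 17 and overlaid on a fixed substrate. Sizes are illustrative, and the two layers start here from identical aligned lattices; the initial \(AB\) registry of the text would add a constant shift that we omit.

```python
import numpy as np

# Overlay a bent honeycomb ribbon, displaced with Eqs. 16-17, on a fixed
# substrate lattice (illustrative sizes; aligned starting registry).
a0 = 0.246                                # graphene lattice constant (nm)
a1 = a0 * np.array([1.0, 0.0])
a2 = a0 * np.array([0.5, np.sqrt(3) / 2])
basis = (np.zeros(2), (a1 + a2) / 3.0)

L, W = 100.0, 10.0                        # nm
theta_max, nu = np.deg2rad(1.1), 0.165

def displacement(x, y):
    ux = -2 * theta_max * (y / L) * (x - x**2 / (2 * L))
    uy = (theta_max / L) * (x**2 - x**3 / (3 * L) + nu * y**2 * (1 - x / L))
    return np.array([ux, uy])

sub, rib = [], []
for n in range(int(L / a0) + 1):
    for m in range(-int(W / a0), int(W / a0) + 1):
        for b in basis:
            r = n * a1 + m * a2 + b
            if 0.0 <= r[0] <= L and abs(r[1]) <= W / 2:
                sub.append(r)                        # substrate site
                rib.append(r + displacement(*r))     # bent-ribbon site
sub, rib = np.array(sub), np.array(rib)
print(f"{len(sub)} sites per layer, "
      f"max shift = {np.abs(rib - sub).max():.2f} nm")
```

Plotting both point sets on top of each other reveals the position-dependent \(AA\)/\(AB\) contrast discussed in the text.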
For regions far from the clamped edge, the deformations in Eq. 16 and Eq. 17 give rise to a smooth sequence of moire patterns whose dimensions become almost uniform. Figure 1(b) shows the variation of the twist angle as a function of position. Close to the clamped edge, the angle grows linearly while it is nearly uniform close to the bent edge, cf. inset in Fig. 1(e). Notice that there is also a small transversal variation of the twist angle, cf. Eq. 18. For large nanoribbons the variation of the twist angle becomes minimal as we move away from the clamped edge. This angle uniformity can also be observed in the dependence of the moire length on position, which can be obtained with the local twist angle and the components of the strain tensor in Eq. 15. Figure 1(c) shows the variation of the moire length with position. The larger values are close to the clamped edge where the combination of
small twist and strains gives rise to a complicated lattice structure, shown in Fig. 2. As we move away from the clamped edge the moire length becomes almost perfectly uniform. Figure 1(d) displays the spatial profile of the \(u_{xx}(x,y)\) component of the strain tensor, cf. Eq. 15. This component changes linearly in both the longitudinal and transverse directions. The highest strain occurs near the edges of the clamped side. The magnitude of the strain components is symmetric around the zero strain line, and its sign is determined by the direction of the applied force. This leads to compression (expansion) on the upper (lower) side of the nanoribbon.
On the other hand, the zero strain line, shown in white in Fig. 1, is given by the positions \(\{x_{i},u_{y}(x_{i},0)\}\), where the second component is a displacement in the \(y\)-direction called the _deflection curve_, \(d(x)\), a function describing the nanoribbon bending along the vertical direction [37]. The normalized deflection curve \(d(x)/d(L)\) is universal and describes the shape of any bent nanoribbon. A plot of this function is shown in Fig. 1(e). The variation of the deflection curve at a given position determines the bending angle \(\theta_{b}(x)=\theta(x,0)\), which is the twist angle, with respect to the graphene substrate, along the zero strain line. In an experimental setup, for a nanoribbon of length \(L\), the deflection curve gives the bending angle, which determines the maximum twist angle \(\theta_{max}\). Once the length and the maximum twist angle (or the maximum of the deflection curve) are known, Eq. 16 and Eq. 17 determine the full elastic properties. In addition, the enlarged plot in Fig. 1(e) shows a region where the twist angle is almost uniform. Indeed, in this region, the bent nanoribbon has been shown to have very low disorder, with small variations of twist angle and strain [36].
### Numerical estimates
As previously described, the strains in the bent nanoribbon are highest at the edges, \(y=\pm W/2\) and they are antisymmetric around the center, \(y=0\). The maximum values occur at \(x=0\),
\[u_{xx}\left(0,\pm\frac{W}{2}\right) =\mp\theta_{max}\frac{W}{L},\] \[u_{yy}\left(0,\pm\frac{W}{2}\right) =\pm\nu\theta_{max}\frac{W}{L}. \tag{26}\]
It is interesting to note that the moire pattern defined by a combination of a twist and a uniaxial strain leads to quasi-one-dimensional behavior [34; 35] when \(-u_{xx}u_{yy}\approx\theta^{2}\). Using the expression presented above, this relation is satisfied for
\[\frac{y}{L}=\frac{\frac{x}{L}\left(1-\frac{x}{2L}\right)}{\sqrt{\nu}\left(1- \frac{x\sqrt{\nu}}{L}\right)}\approx\frac{x}{L\sqrt{\nu}}, \tag{27}\]
and the geometrical effects can be observed in the distortions in Fig. 2.
In realistic graphene nanoribbons on a graphene substrate [36], the quotient \(W/L\sim 0.10\) and the maximum twist angle \(\theta_{max}\sim 2.5^{\circ}\) result in a strain tensor, Eq. 26, with components of magnitude \(|u_{xx}|=0.43\%\) and \(|u_{yy}|=0.072\%\) for a Poisson ratio of \(\nu=0.165\). These values are in excellent agreement with the mapping of the strain profile in Ref. [36]. On the other hand, both the deflection curve and the bending angle tend to become uniform away from the clamped edge. The uniform twist angle region is that where \(x/L\gtrsim 0.8\), as shown in Fig. 1.
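These magnitudes follow directly from Eq. 26; a short check (ours):

```python
import numpy as np

# Strain magnitudes at the clamped corners, Eq. 26.
theta_max = np.deg2rad(2.5)
W_over_L, nu = 0.10, 0.165
u_xx = theta_max * W_over_L
u_yy = nu * u_xx
print(f"|u_xx| = {100 * u_xx:.3f} %, |u_yy| = {100 * u_yy:.3f} %")
```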
Before introducing the electronic structure, it is important to note that the analyzed strains induce an effective gauge field that interacts with the electrons. We assume that the lattice axes are oriented such that the gauge field can be written as (note that the sign depends on the valley):
\[\{A_{x},A_{y}\}=\pm\frac{3\beta}{2a}\times\left\{\begin{array}{ ll}\{-u_{xx}+u_{yy},2u_{xy}\}&\quad\text{zigzag}\\ \{2u_{xy},u_{xx}-u_{yy}\}&\quad\text{armchair}\end{array}\right. \tag{28}\]
where \(\beta=(a/t)(\partial t/\partial a)\) is a dimensionless constant that describes the change of the nearest neighbor hopping parameter, \(t\), with respect to the inter atomic distance, \(a\). In the present case, we obtain:
\[\{A_{x},A_{y}\}=\pm\frac{3\beta}{2a}(1+\nu)\times\left\{\begin{array}{ll}\{ -u_{xx},0\}&\quad\text{zigzag}\\ \{0,u_{xx}\}&\quad\text{armchair}\end{array}\right. \tag{29}\]
The effective magnetic field is:
\[B(x,y) =\frac{1}{\ell_{m}^{2}}=\frac{\partial A_{y}}{\partial x}-\frac{ \partial A_{x}}{\partial y}=\] \[=\pm\frac{3\beta}{2a}(1+\nu)(-2\theta_{max})\times\left\{ \begin{array}{ll}-\frac{1}{L}+\frac{x}{L^{2}}&\quad\text{zigzag}\\ \frac{y}{L^{2}}&\quad\text{armchair}\end{array}\right. \tag{30}\]
where \(\ell_{m}\) is the magnetic length. The highest possible magnetic field occurs for the zigzag orientation at the clamped edge. Then, the magnetic length is:
\[\ell_{m}\approx\sqrt{\frac{La}{3\theta_{max}\beta(1+\nu)}} \tag{31}\]
For \(\theta_{max}\sim 4^{\circ}\) and \(L\sim 10\mu\)m we obtain \(\ell_{m}\sim 500\)nm. The corresponding magnetic fields are below 1T, so that the effect on the electrons is small.
## III Spectrum and topology
### Spectrum
We now calculate the electronic spectrum of the system, using a tight-binding Hamiltonian [38] with a scaling approximation [39; 40; 41]; for details see Ref. [42]. We compute the spectrum of a bent nanoribbon like the one
shown in Fig. 1(a), but with a length of \(L\approx 800\) nm and a width of \(W\approx 80\) nm, after scaling. There are \(N\approx 2.6\cdot 10^{5}\) sites. The geometric twist angle changes from \(4.4^{\circ}\) to \(0^{\circ}\). However, thanks to the scaling approximation, we can use this lattice to simulate a change from \(1.1^{\circ}\) (near the magic angle) to \(0^{\circ}\) (cf. SM Sec. I). Near the bent edge, the system has twist angles near the magic angle and little strain, while at the clamped edge there is an interplay between small angles and large strains, which leads to one-dimensional channels, which are revealed by the lattice structure in Fig. 2(a) and the charge map in Fig. 5 below.
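The overall workflow (build a tight-binding Hamiltonian, diagonalize it exactly, broaden the levels into a DOS) can be illustrated at a toy scale. The sketch below (ours) does this for a small monolayer honeycomb flake with a single hopping \(t\); the calculation in the text instead uses the scaled bilayer model of Refs. [39; 40; 41; 42] with \(N\approx 2.6\cdot 10^{5}\) sites, far beyond this snippet.

```python
import numpy as np

# Toy version of the workflow: nearest-neighbour tight-binding Hamiltonian
# for a small honeycomb flake, exact diagonalization, Gaussian-broadened DOS.
t, eta = 2.7, 0.05           # hopping and broadening (eV), illustrative
Nx, Ny = 12, 12              # flake size in unit cells (assumed)

a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
sites = np.array([n * a1 + m * a2 + b
                  for n in range(Nx) for m in range(Ny)
                  for b in (np.zeros(2), (a1 + a2) / 3.0)])

# Nearest neighbours sit at distance |a1 + a2|/3 = 1/sqrt(3).
d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
H = np.where(np.abs(d - 1.0 / np.sqrt(3)) < 1e-6, -t, 0.0)

E = np.linalg.eigvalsh(H)
grid = np.linspace(-3.5 * t, 3.5 * t, 400)
dos = np.exp(-((grid[:, None] - E[None, :]) / eta) ** 2).sum(axis=1)
dos /= len(E) * eta * np.sqrt(np.pi)
print(f"{len(E)} states in [{E.min():.2f}, {E.max():.2f}] eV")
```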
Figure 3 shows the DOS of the system. We compare the DOS across various regions: the yellow line corresponds to states near the clamped edge on the left side (\(\theta\approx 0^{\circ}-0.57^{\circ}\)), the red line corresponds to states near the right end (\(\theta\approx 0.98^{\circ}-1.10^{\circ}\)), the black line represents the total DOS across the entire ribbon and in the blue line we consider the contribution of the monolayer fringes (cf. SM Sec. I). The low-energy DOS near the bent edge (red curve) has a prominent peak while the DOS near the left end (yellow curve) is negligible in comparison. The peak is due to states localized at the AA stacking, see e.g. the charge map in Fig. 5, and it is not at zero energy due to electron-hole asymmetry and to a rigid blue-shift induced by scaling. We observe that a long region starting at the bent edge of the nanoribbon behaves like low-disorder magic-angle TBG, as evidenced by the sharp peak in the density of states. Moreover, on both sides of the peak, the DOS is almost zero, hinting at the presence of gaps to the rest of the spectrum, as it happens when flat bands are separated from remote bands. Therefore, correlated phases and superconductivity, inasmuch as they depend on these features, should also be present. Moreover, the system studied here is much smaller than the experimental one, for which the idea works even better, thanks to the many unit cells and very slowly changing twist angle.
As a side note, the middle region, which has twist angles in the range \(\theta\approx 0.4^{\circ}-0.7^{\circ}\), also shows localization near the AA stacking regions [43], with _annular_ shapes instead of peaks for lower angles, see Fig. S10. We note that this annular behavior of the charge density closely resembles the charge localization for different magic angles as described in Refs. [44; 45]. Furthermore, the small peak near zero energy in Fig. 3 originates from the edge states that are a consequence of the open boundaries of the monolayer fringes [46]. These edge states can also manifest at twisted bilayer boundaries [47; 48].
The results in Fig. 3 are a clear indication that the bent nanoribbon geometry pioneered in Ref. [36] is useful for observing magic angle TBG physics and searching for new phenomena in the uncharted regime of ultra-low angle disorder. For optimal results, the bending angle should be close to the magic angle. This configuration will benefit from the slowly changing angle near the bent edge, combined with low strains.
### Topology
In a large nanoribbon, the slow variation of twist angle and strain allows us to locally calculate its electronic properties. As described in the previous section, the narrow bands dominate, giving rise to a large density of states. In experiments, the TBG samples are supported on [49] or encapsulated [50; 51] in hexagonal boron nitride (hBN). In the following, we consider the presence of a mass term in the bent graphene nanoribbon. This can be achieved, for example, by considering an hBN substrate acting only on the bottom graphene layer [52]. The hBN substrate induces a mass term that breaks inversion symmetry, allowing for a finite Berry curvature [53]. We focus on a large nanoribbon, enabling the local determination of its electronic characteristics. Band topology is evaluated over a range of bending (or twist) angles, varying from around \(\theta\approx 0.2^{\circ}\) to \(\theta\approx 1.5^{\circ}\). We obtain the valley Chern numbers of the two low-energy central bands as a function of the local twist angle, cf. Eq. 18, and local strain tensor, cf. Eq. 15, using a continuum model of strained twisted bilayer graphene [34; 54]. We also introduce a mass term of \(\Delta=15\) meV [55]. As shown in Fig. 4(a) and Fig. 4(b), near the clamped edge there are different topological phases with valley Chern numbers varying from \(\mathcal{C}=\pm 4\) to \(\mathcal{C}=\pm 1\). This result highlights the intricate nature of the bands when both low twist and strain are present. This complex behavior is also reflected in the charge map displayed in Fig. 5 and the geometric profile depicted in Fig. 2. It is important to underscore that the electronic structure near the clamped side exhibits high sensitivity to the chosen parameters. A comprehensive study may be necessary to thoroughly characterize the complete electronic properties in this region. Moreover, within the nanoribbon, certain regions exhibit clearly defined topological transitions that display
Figure 3: Low energy density of states (DOS) of a system like the one in Fig. 1, including the monolayer fringes (blue), versus the DOS of the ribbon (black) and the local DOS at the right end of the ribbon, spanning angles [\(0.98^{\circ}\), \(1.1^{\circ}\)] (red) and at the left end ([\(0^{\circ}\), \(0.57^{\circ}\)], yellow). The inset shows the total DOS, indicating the energy positions (dashed lines) of the charge density maps in Figs. 5 and 6.
On the left side of the ribbon, specifically concerning the two middle bands, the phase characterized by Chern numbers \(\{\mathcal{C}_{1},\mathcal{C}_{2}\}=\{-1,1\}\) is found within the range of \(\theta\sim(0.49^{\circ},0.63^{\circ})\) and \(x/L\sim(0.18,0.24)\). The phase with \(\{\mathcal{C}_{1},\mathcal{C}_{2}\}=\{0,0\}\) occurs within \(\theta\sim(0.63^{\circ},0.86^{\circ})\) and \(x/L\sim(0.24,0.35)\). The dominating phase from the central region to the right end is the one characterized by Chern numbers \(\{\mathcal{C}_{1},\mathcal{C}_{2}\}=\{1,-1\}\), existing for \(\theta\gtrsim 0.86^{\circ}\) and \(x/L\gtrsim 0.35\). This is the well-known topological phase of TBG on an hBN substrate [55; 56; 57; 58; 59]. Our findings are consistent with the smooth variation of twist angle and moire length described in the previous sections, cf. Fig. 1. Thus, in an experimental setup, our results indicate that the uniform strains and small variations of twist angles in large regions give a uniform electronic structure and band topology, further supporting the proposal of Ref. [36] that bent graphene nanoribbons on a graphene substrate are ideal platforms to study TBG with ultra-low angle disorder.
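The valley Chern numbers quoted above were obtained from a continuum model of strained TBG. As a rough illustration of how such invariants can be computed numerically, the sketch below implements the standard lattice gauge method of Fukui, Hatsugai, and Suzuki for a generic Bloch Hamiltonian `h_of_k`. The function name, the grid size, and the assumption of a square Brillouin zone of side \(2\pi\) are ours; for the moiré problem one would substitute the strained-TBG continuum Hamiltonian and its moiré Brillouin zone, and the sign depends on orientation conventions.

```python
import numpy as np

def chern_number(h_of_k, band, nk=40):
    """Chern number of one band via the Fukui-Hatsugai-Suzuki lattice method.

    h_of_k : function (kx, ky) -> Hermitian matrix (any Bloch Hamiltonian)
    band   : index of the band whose Chern number is wanted
    nk     : number of k-points per direction on the Brillouin-zone grid
    """
    ks = np.linspace(0.0, 2.0 * np.pi, nk, endpoint=False)
    # Store the chosen band's eigenvector at every grid point
    u = np.empty((nk, nk), dtype=object)
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(h_of_k(kx, ky))
            u[i, j] = vecs[:, band]
    c = 0.0
    for i in range(nk):
        for j in range(nk):
            u00, u10 = u[i, j], u[(i + 1) % nk, j]
            u11, u01 = u[(i + 1) % nk, (j + 1) % nk], u[i, (j + 1) % nk]
            # Plaquette field strength = phase of the Wilson loop around it
            w = (np.vdot(u00, u10) * np.vdot(u10, u11)
                 * np.vdot(u11, u01) * np.vdot(u01, u00))
            c += np.angle(w)
    return c / (2.0 * np.pi)
```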
## IV Conclusions
In this paper, we have examined the elastic and spectral characteristics of a graphene nanoribbon that has been bent on top of graphene. This arrangement has been recently achieved experimentally [36]. This geometry offers a crucial advantage by allowing independent control of twist angle and strain while reducing angle disorder. By means of the classical theory of elasticity, we have analyzed the deformations and stresses within the system, which result in a gradual change in the twist angle and the corresponding moire pattern.
The size of the bent region is determined by a balance between the elastic energy, which for a fixed shift of the free end decreases with length, and the reduction in van der Waals energy, which increases with the length. Different regimes have been identified.
In proximity to the clamped edge, the combination of strong strains and small angles induces quasi-one-dimensional channels, in line with earlier findings [34]. As a consequence, states in this region can show charge stripes.
In a long region on the right side of the bent edge, strains and twist angles are nearly uniform, as seen in the experiment. In this region, the low-energy spectrum includes many states with charge localization in the AA stacking regions, which leads to a sharp, nearly isolated peak in the density of states, as expected for magic-angle twisted bilayer graphene. We attribute this to the existence of flat bands in a wide range of twist angles and strains. We have also calculated the band topology and found that the Chern number depends only on the twist angle for large intervals of strains and angles. Our results suggest that the bent nanoribbon geometry pioneered in Ref. [36] is ideal for exploring superconductivity and correlated phases in TBG in the much sought-after regime of ultra-low angle disorder.
## Acknowledgements
We thank Rebecca Ribeiro-Palau, M. Kapfer, B. Jessen, Federico Escudero, and Zhen Zhan for discussions. IMDEA Nanociencia acknowledges support from the "Severo Ochoa" Programme for Centres of Excellence in R&D (CEX2020-001039-S/AEI/10.13039/501100011033). We acknowledge funding from the European Commission, within the Graphene Flagship, Core 3, grant number 881603 and from grants NMAT2D (Comunidad de Madrid, Spain), SprQuMat, (MAD2D-CM)-MRR MATERIALES AVANZADOS-IMDEA-NC, NOVMOMAT, Grant PID2022-142162NB-I00 funded by MCIN/AEI/ 10.13039/501100011033 and by "ERDF A way of making Europe".
Figure 4: Topological phase diagram of a bent graphene nanoribbon. The maximum twist angle is set to \(\theta_{max}=1.5^{\circ}\) with Poisson ratio \(\nu=0.165\). Topological phases for the a) lower and b) upper middle bands. The corresponding Chern numbers are shown in colors. |
2303.14114 | Object Motion Sensitivity: A Bio-inspired Solution to the Ego-motion
Problem for Event-based Cameras | Neuromorphic (event-based) image sensors draw inspiration from the
human-retina to create an electronic device that can process visual stimuli in
a way that closely resembles its biological counterpart. These sensors process
information significantly different than the traditional RGB sensors.
Specifically, the sensory information generated by event-based image sensors
are orders of magnitude sparser compared to that of RGB sensors. The first
generation of neuromorphic image sensors, Dynamic Vision Sensor (DVS), are
inspired by the computations confined to the photoreceptors and the first
retinal synapse. In this work, we highlight the capability of the second
generation of neuromorphic image sensors, Integrated Retinal Functionality in
CMOS Image Sensors (IRIS), which aims to mimic full retinal computations from
photoreceptors to output of the retina (retinal ganglion cells) for targeted
feature-extraction. The feature of choice in this work is Object Motion
Sensitivity (OMS) that is processed locally in the IRIS sensor. Our results
show that OMS can accomplish standard computer vision tasks with similar
efficiency to conventional RGB and DVS solutions but offers drastic bandwidth
reduction. This cuts the wireless and computing power budgets and opens up vast
opportunities in high-speed, robust, energy-efficient, and low-bandwidth
real-time decision making. | Shay Snyder, Hunter Thompson, Md Abdullah-Al Kaiser, Gregory Schwartz, Akhilesh Jaiswal, Maryam Parsa | 2023-03-24T16:22:06Z | http://arxiv.org/abs/2303.14114v3 | Object Motion Sensitivity: A Bio-inspired Solution to the Ego-motion Problem for Event-based Cameras
###### Abstract
Neuromorphic (event-based) image sensors draw inspiration from the human-retina to create an electronic device that can process visual stimuli in a way that closely resembles its biological counterpart. These sensors process information significantly different than traditional RGB sensors. Specifically, sensory information generated by event-based image sensors is orders of magnitude sparser compared to that of RGB sensors. The first generation of neuromorphic image sensors, Dynamic Vision Sensors (DVS), are inspired by the computations confined to the photoreceptors and the first retinal synapse. In this work, we highlight the capability of the second generation of neuromorphic image sensors, Integrated Retinal Functionality in CMOS Image Sensors (IRIS), which aims to mimic full retinal computations from photoreceptors to output of the retina (retinal ganglion cells) for targeted feature-extraction. The feature of choice in this work is Object Motion Sensitivity (OMS) that is processed locally in the IRIS sensor. Our results show that OMS can accomplish standard computer vision tasks with similar efficiency to conventional RGB and DVS solutions but offers drastic bandwidth reductions. This cuts the wireless and computing power budgets and opens up vast opportunities in high-speed, robust, energy-efficient, and low-bandwidth real-time decision making.
event-based vision sensors, retinal computation
## 1 Introduction
Digital cameras have become an essential tool for capturing visual information in our environment. Their applications range from smartphones [1] and autonomous driving [2] to robotics [3] and manufacturing [4]. Event-based cameras, also referred to as neuromorphic cameras, represent the next generation of imaging technology, drawing inspiration from biological retinas to extract events from visual stimuli in a faster and more efficient manner compared to traditional cameras [5]. The most common event-camera architecture is the Dynamic Vision Sensor (DVS), which is made up of a pixel array that responds asynchronously and independently to brightness changes in the scene [5].
This continuous stream of events is different from the sequential production of frames in traditional active pixel sensor (APS) cameras. Importantly, events are sparse in space and time and therefore enable a memory and energy efficient representation of spatiotemporal activity including motion. Therefore, DVS serves as a practical solution to the size, weight, and power (SWaP) constraints of embedded image processing systems, such as self-driving cars [6] and autonomous robotics [6].
In real-world applications of event-based computer vision systems, distinguishing between events caused by moving objects and those caused by the camera's ego-motion has been a persistent problem [7]. Unlike RGB frames, event data provides limited contextual information about the observed scene. Consequently, it is challenging to distinguish between the active foreground object's spikes and the static background events caused by the camera's ego-motion. Numerous methods have been proposed to address the ego-motion problem, ranging from incorporating inertial data to fitting a linear motion model [8, 9]. While some recent approaches employ neural networks to estimate ego-motion, these methods tend to be computationally intensive.
A biological solution emerges from a computation performed in the neural circuitry of the animal retina-_Object Motion Sensitivity (OMS)_[10, 11]. OMS is a fundamental computation performed within the animal visual system by the feature-spike activity of Retinal Ganglion Cells (RGCs). The algorithm instantiated in the biological circuit involves subtracting a global temporal contrast signal in the receptive-field surround from a local contrast signal in the receptive-field center.
Figure 1 visualizes the biological retina's architecture that is responsible for extracting the OMS features and highlights that OMS aims to build upon the biological underpinnings of DVS to develop a more biologically plausible sensor. This activity is used by the brain to discriminate the motion of objects (object motion) from motion caused by motion of the observer (ego-motion) [10].
This work takes inspiration from the biological retina to evaluate in-sensor computation methods in real-world environments. Integrated Retinal Functionality in CMOS Image Sensors (IRIS) is a novel retina-inspired approach to vision sensing that uses spike-based processing to separate
the self-motion of the camera from the physical movements of objects in the scene [12]. Furthermore, IRIS reduces the computational requirements of machine learning models by incorporating data preprocessing and feature extraction within the physical sensor, leveraging 3D semiconductor integration technology. This approach stands at the center of the edge-computing paradigm, where budgets for wireless and computing power can be drastically reduced while opening up groundbreaking opportunities in ultra-fast decision making. Moreover, novel application-driven chip solutions like IRIS are at the epicenter of the United States semiconductor roadmap (CHIPS Act) [13].
In summary, the major impacts of this paper are as follows:
1. We present an algorithmic model inspired by biological retinal ganglion cell (RGC) computations for extracting OMS features from visual stimuli. Our evaluation focuses on the effectiveness of the model in capturing OMS features compared with RGB and DVS.
2. We assess the performance characteristics of OMS in a standard computer vision task; object detection using a high-resolution autonomous driving dataset along with a state-of-the-art convolutional neural network.
3. We perform a thorough evaluation of the numerous benefits that come from OMS where the resulting representation contains 3.26x more information per bit of transmitted data.
## 2 Related Work
To the best of our knowledge, this work serves as a foundational study at the intersection of end-to-end retinal computations applied to existing computer vision tasks. As such, we review two research areas that are directly related to this novel application: ego-motion compensation and optimization for size, weight, and power (SWaP)-constrained environments.
**Ego-Motion Compensation** The ego-motion problem is a persistent issue which has plagued efforts to use DVS cameras mounted on moving platforms. Whenever the platform shifts, the DVS pixels pick up on the reflectance changes and produce output even when all entities in the scene remain static. These events tend to occur around edges in the scene but frequently extend to the object surfaces. This results in the occlusion of salient entities in the scene which complicates the task of moving object detection. Many works have addressed this problem through the parametric modeling of camera motion.
Stoffregen, et al. [14] leverages an iterative Contrast-Maximization [15] approach to jointly model the motion parameters by defining a set of clusters and predicting their event-cluster assignments. While this approach is effective, it requires a slow and computationally intensive iterative approach. A later work by Liu, et. al. [16] attempts to solve this problem through the use of bounding functions to place constraints on Contrast-Maximization though they still required the use of gradient descent and only evaluated their work on rotational ego-motion.
Similarly, Mitrokhin et al. [17] introduce an approach to object tracking compatible with event-based cameras using parametric models. These models are used to estimate the three-dimensional geometry of the event data and correct for ego-motion noise. The central experiment within this paper is the ability of the model to segment object motion and compensate for ego-motion. Compared to OMS and IRIS, this work has two major limitations: (1) there are no references to biological inspirations, and (2) it cannot be embedded into low-overhead, spatial-temporal computing due to the underlying motion compensation system's need for constant updates of a time-dependent point cloud model.
Several other works have explored deep learning methods for compensating for ego-motion during object detection and segmentation tasks. These approaches typically involve discretizing the event stream into a series of frames, which can then be used to train convolutional neural networks (CNNs) for computing the visual odometry of the scene. For example, Nitin et al. [18] create a 3-channel event frame from an event stream where the first and second channels are the positive and negative event counts, respectively, and the third is the average time between events. This frame is then passed to a series of shallow neural networks which jointly compute the movements of the camera and observed objects. Zhu et al. [19] take a similar approach, though with a novel method to discretize the time domain based on bilinear sampling. The resultant frames are given to an encoder-decoder CNN which has another network tied to its residual block for the purpose of pose estimation when performing ego-motion prediction.
Chen et al. [20] introduce a non-biologically inspired method for performing standard computer vision tasks under ego-motion. Their major contribution, as opposed to our work, is their evaluation of the effectiveness of their approach in sub-optimal viewing conditions such as those with motion blur and poor lighting. However, there is no mention of biological inspiration for this method. In contrast, our approach is inspired by biology and targets a broader range of object classes, including cars, pedestrians, buses, trucks, bicycles, riders, and motorcycles.
Fig. 1: Retinal circuit for Object Motion Sensitivity embedded inside hierarchical retinal layers.
While the aforementioned studies were the closest to our research, there have been multiple other methods proposed and investigated in radar applications [21] and frame-based vision [22]. Our approach is radically different from previous works because the underpinnings of object motion sensitivity come directly from biology rather than being defined by an arbitrary network architecture or parametric model.
**Size, Weight, and Power (SWaP)** State-of-the-art applications of deep neural networks require quick and accurate processing of event-based sensory information that is compatible with real-time intelligent systems. Prior works in this area can be broken down into two major areas: algorithm optimization and in-sensor computation. Algorithm optimization aims to process sensory information from existing technologies such as RGB or DVS and optimize the machine learning pipeline. These models are responsible for extracting fundamental features from visual perceptions to make better decisions on lower-dimensional data.
The majority of literature focuses on the algorithmic optimization of computer vision systems to more standard RGB and DVS sensors. A variety of sub-fields emerge with solutions that aim to mitigate the performance penalties that come with complicated neural network optimization systems such as quantization [23], pruning [24], neuromorphic computing [25, 26], and novel architecture design [27, 28]. While these works result in intelligent systems that are more capable in edge applications, a significant portion of the learning is dedicated to dealing with noisy and low entropy visual representations.
Numerous works strive to mitigate the challenges that hinder the wider adoption of intelligent computer vision systems in SWaP (size, weight, and power)-constrained environments by enhancing the sensor's processing capabilities via chip integration methodologies. A variety of methods are explored with varying success, such as 3D monolithic integration [29], 3D heterogeneous integration [30], planar system on chip (SoC) integration [30], and 2.5D chiplet integration [30]. These works emphasize the hardware itself and propose it as a possible solution to a variety of problems. Our work differs in using biologically inspired OMS functionality for algorithmic analysis that can be embedded inside sensor arrays featuring spatio-temporal computations while leveraging 3D integration schemes.
In conclusion, there have been extensive research efforts dedicated to ego-motion detection and compensation, as well as dimensionality reduction for SWaP-constrained environments. While these areas have been investigated separately, there has been limited research at their intersection where the trade-off between ego-motion classification performance, information density, and applicability to existing computer vision applications is explored. This paper seeks to establish a research foundation at the intersection of ego-motion detection and compensation, while taking into account information density and applicability in SWaP-constrained environments through the use of a biologically inspired functionality - Object Motion Sensitivity.
## 3 Research Methods
In this section, we discuss the fundamental building blocks of this project. As shown in Figure 2, we start off with a discussion about object motion sensitivity and Integrated Retinal Functionality in CMOS Image Sensors (IRIS). Next, we take a detailed look at the model architecture, machine learning task, and the various metrics used to evaluate the broader impacts and applications of Object Motion Sensitivity (OMS) on existing computer vision tasks.
### _Dataset_
Event-based cameras transmit a continuous stream of events corresponding to reflectance changes in the scene [5]. These events are transmitted in an address event representation (AER) unlike traditional active pixel sensors (APS) which output a 3-dimensional image. Due to this difference in output and the limited availability of event-based cameras, the creation of datasets has been limited to static cameras observing single moving objects, such as CIFAR10-DVS [31] and DHP19 [32]. Recently, more complex datasets like DDD20 [33] have been created using event cameras on a moving platform. However, they lack the accurate bounding box and segmentation labels required to compare the detection accuracy of a model trained on OMS data versus the detection accuracy of that same model trained on DVS data.
To overcome this limitation, we used the ViBES retinal computation simulator [34] to construct frames that approximate DVS data from the Berkeley Deep Drive 100K Multi-Object Tracking and Segmentation (BDD100K MOTS) dataset [35]. The BDD100K MOTS dataset contains 90 videos from the original BDD100k dataset. The original videos were recorded at 30 fps and resampled to 5 fps for the MOTS dataset so that the salient objects in each frame could be labeled, resulting in approximately 200 frames per video. The observed objects fall into one of seven classes: cars, pedestrians, buses, trucks, bicycles, riders, and motorcycles. We randomly chose 60 of these videos to convert into DVS data and then fed that DVS data into the algorithmic implementation of OMS provided by ViBES. Our final dataset contains a close approximation of what one could obtain using an OMS-inspired sensor such as IRIS [12].
Fig. 2: A flow chart presenting our processing for preprocessing BDD100K, converting it to DVS and OMS representations, fine-tuning YOLOv5, and our performance evaluation.
### _Object Motion Sensitivity_
The visual pathways of many organisms have evolved the OMS circuit to suppress stimuli created by arbitrary eye movements and global motion. As shown in Figure 4, this circuit is comprised of bipolar, amacrine, and retinal ganglion cell layers, which work together to distinguish between stimuli created from global and local motion. Figure 3 visually represents the similarities and differences between RGB, DVS, and OMS. As shown in Algorithm 1, the ViBES simulator contains an algorithmic approximation of the retinal OMS computation. In the case of the BDD100k dataset, the data is initially in the form of a collection of RGB frames. When executing on this data type, ViBES first computes frames approximating DVS by differencing consecutive frames and determining whether the result exceeds the chosen contrast threshold of \(0.1\).
This method is an approximation of that performed by DVS cameras, whose events, defined as \(e_{n}=\{x_{n},y_{n},t_{n},p_{n}\}\), are elicited when a change in the log photocurrent \(L=log(I)\) exceeds the temporal contrast threshold \(\pm C\) at a given pixel \(\varphi_{n}=\{x_{n},y_{n}\}\) based upon \(E(\varphi_{n},t_{n})\) such that [5]:
\[\Delta L(\varphi_{n},t_{n})=L(\varphi_{n},t_{n})-L(\varphi_{n-1},t_{n-1}) \tag{1}\]
\[E(\varphi_{n},t_{n})=\begin{cases}e_{n},&\Delta L(\varphi_{n},t_{n})>+C\\ e_{n},&\Delta L(\varphi_{n},t_{n})<-C\\ \varnothing,&otherwise\end{cases} \tag{2}\]
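As a concrete illustration of Eqs. (1)-(2), a minimal frame-difference approximation of the DVS output can be written as follows. This mirrors the ViBES preprocessing described above; the array shapes and the small constant `eps` (used to regularize the logarithm) are our own choices.

```python
import numpy as np

def dvs_events(frames, C=0.1, eps=1e-6):
    """Approximate DVS output from a stack of grayscale frames (Eqs. 1-2).

    frames : (T, H, W) array of intensities in [0, 1]
    C      : temporal contrast threshold (0.1, matching the text)
    Returns a (T-1, H, W) int8 tensor with +1/-1 polarity spikes.
    """
    L = np.log(frames + eps)          # log photocurrent L = log(I)
    dL = np.diff(L, axis=0)           # Delta L between consecutive frames
    events = np.zeros_like(dL, dtype=np.int8)
    events[dL > +C] = 1               # ON events
    events[dL < -C] = -1              # OFF events
    return events
```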
The resultant DVS frames are then sent to the OMS function (Algorithm 1). From a biological perspective, the DVS frames represent the response to photoreceptor activation by bipolar cells, and the OMS function takes the role of the amacrine and retinal ganglion cell layers. The OMS algorithm is comprised of two circular averaging filters, also known as disk filters [36]. These filters are matrices containing a discrete feathered circle of a chosen radius whose values sum to one. Entries near the center of the matrix are larger and thus carry more weight. The matrix convolves over the frame by centering itself over each pixel and storing the resultant value at that pixel's position. This value is the mean contrast of the region covered by the disk filter.
The smaller of these disk filters is the center filter, which represents a retinal ganglion cell (RGC) and the excitatory bipolar cell cluster with which it is connected. We chose a radius of 1 to lower the chance that a single cell cluster covers an entire entity. The larger disk filter serves the
Fig. 4: Retinal object motion sensitivity circuitry.
Fig. 3: A visual comparison of the differences between RGB, DVS, and OMS. It is important to notice that OMS retains the majority of the spatial features of DVS while drastically reducing the noise and number of spikes.
role of the amacrine cells, which are designed to inhibit the RGCs' response if global motion is observed. For this filter, a radius of 5 was chosen to cover a sufficiently wide region of each frame without significantly diminishing the weights. If the weights are too small, the surround filter will have no impact on the center filter values. In order to simulate the inhibition, the mean contrast values from the amacrine (surround) filter are subtracted from those of the RGC (center) filter. If the resultant values are larger than the threshold (we chose 0.1 to match DVS), a Boolean spike is stored in the OMS frame tensor.
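A compact sketch of this center-surround computation is given below. It uses a hard (unfeathered) disk kernel for simplicity, whereas the filters described above are feathered, and it treats the absolute DVS events of a single frame as the contrast signal; the radii and threshold follow the values in the text.

```python
import numpy as np
from scipy.ndimage import convolve

def disk_filter(radius):
    """Normalized circular averaging (disk) filter; values sum to one."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

def oms_frame(dvs_frame, r_center=1, r_surround=5, threshold=0.1):
    """Center-surround OMS computation on one DVS frame."""
    contrast = np.abs(dvs_frame).astype(float)
    center = convolve(contrast, disk_filter(r_center))      # RGC + bipolar cluster
    surround = convolve(contrast, disk_filter(r_surround))  # amacrine surround
    # Subtract the surround (global motion) from the center (local motion)
    return (center - surround) > threshold
```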
### _Integrated Retinal Functionality in Images Sensors (IRIS)_
First proposed in [12], IRIS cameras aim to embed retinal computations inside image sensing platforms, including object motion sensitivity (OMS). IRIS cameras are the next generation of neuromorphic cameras that aim to mimic feature-extraction computations within biological retina from photo-transduction to computations performed in the inner retinal layers, much of which has been recently discovered by the retinal neuroscience community [10]. The initial version of the sensor implements two retinal features [37]: object motion sensitivity and looming detection.
In comparison to state-of-the-art active pixel CMOS image sensors (RGB cameras) [38, 39] that use a plethora of sophisticated and computationally complex algorithms to extract image features, retinal computations for IRIS cameras can be embedded inside an image sensor using low-cost, highly-efficient retina-inspired circuits [40]. Similarly, IRIS cameras go beyond existing DVS cameras that focus on the changing luminance detection aspect of the retina to embed analog spatio-temporal computations of inner retinal layers needed for extracting retinal features by leveraging 3D integration of semiconductor chips [41].
The outer retinal computations (bipolar cell functionality) can be implemented utilizing a modified active pixel sensor (APS) as well as the dynamic vision sensor (DVS) [12] on the back-side illuminated die. In contrast, the inner retinal circuits (e.g., amacrine and ganglion cell functionality of OMS features) can be implemented in a separate die and vertically stacked with the sensor die using pixel-parallel fine-pitched Cu-Cu hybrid bonding while maintaining the pixel density [42]. With respect to the OMS algorithm, IRIS cameras distribute retinal computations from photo-transduction to RGCs using an interleaved center-surround receptive field distributed throughout the camera focal plane [43].
The inner retinal circuits' excitatory (from the center receptive field) and inhibitory (from the surrounding receptive field) connections have been ensured by the opposite direction of current flows inside the CMOS circuits. A thresholding circuit compares the summed signal of the center and surrounding receptive field and generates an OMS feature spike when the summed signal crosses the OMS threshold. IRIS cameras thus form the required underlying hardware substrate that can implement OMS inside state-of-the-art camera manufacturing technology for real-time extraction of OMS spikes in highly SWaP (size, weight, and power) constrained environments.
### _YOLOv5_
Based on the initial version introduced in 2015 [44], YOLOv5 (You Only Look Once, Version 5) [45] stands as a major step forward, introducing a variety of features over the original. Throughout the different versions, multiple features have been added, such as focal loss, batch normalization, and Mosaic data augmentation. This model architecture has been deployed in a variety of scenarios such as autonomous vehicles [46], medical imaging [47], and video surveillance [48]. Accuracy for this model on object detection tasks is measured as the mean average precision (mAP). We chose YOLOv5 for this study due to its impressive performance and user-friendly interface, which facilitates the replication and extension of results.
### _Bandwidth Reduction_
To gain a quantitative perspective of the capabilities of OMS, we established a quantitative metric for bandwidth reduction represented by \(bit\_rate\).
\[bit\_rate=hw\psi \tag{3}\]
Our metric for bandwidth, as shown in Equation 3, is the bit rate per frame, where \(h,w\in\mathbb{N}\) are the height and width of an individual frame in pixels and \(\psi\in\mathbb{R}_{\geq 0}\) is the bit depth of an individual pixel. Our RGB images have \(\psi=24\), while the DVS and OMS representations have \(\psi=1\).
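A small helper reproducing this metric is shown below. The RGB figure in Table I follows directly from Eq. 3 at \(1280\times 720\times 24\) bits; for DVS and OMS we read the table as reporting the average number of nonzero (transmitted) spikes per frame, which is an interpretation on our part rather than an explicit statement in the text.

```python
import numpy as np

def bit_rate_dense(h, w, psi):
    """Eq. 3: bits per frame for a dense readout (height x width x bit depth)."""
    return h * w * psi

def bit_rate_sparse(event_frames):
    """Average number of 1-bit events actually transmitted per frame."""
    return np.count_nonzero(event_frames) / event_frames.shape[0]

print(bit_rate_dense(720, 1280, 24))  # 22118400 ~= 2.21e7 bits/frame for RGB
```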
A lower bit rate indicates a lower data bandwidth requirement while also reducing the total number of bits to be transmitted over a communication channel. Depending on the underlying communication scheme - wired [49], short-distance [50], or long-distance wireless communication [51], the lower data bandwidth translates to lower communication energy while also avoiding data congestion in bandwidth-constrained environments.
## 4 Results
We evaluated the effectiveness of OMS for an object detection task on the Berkeley Deep Drive dataset [35]. All machine learning tests were conducted with the small configuration of YOLOv5 which has 7.2 million parameters and requires 16.5 billion FLOPS. This model was pretrained on the COCO dataset [52] for 300 epochs and results in a final mAP value of 37.4. The following subsections contain information and results from each of the performance comparisons used to evaluate the effectiveness of OMS versus RGB and DVS images. We start off by evaluating the performance delta between RGB, DVS, and OMS in their native state with each image type having a resolution of 1280 by 720. This resolution will remain static through all of the following tests. For each image representation, we fine tune the aforementioned pretrained weights for 100 epochs with a batch size of 128 on a computer equipped with an Intel Xeon W-2295 processor, 128GB of system memory, and an Nvidia RTX A5000. We continue with an evaluation of the average bit rate per frame, analogous to spike rate, of each image representation throughout the Berkeley dataset. We end this subsection by normalizing the performance values of each image type by their respective data rate per frame.
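For completeness, a sketch of the fine-tuning invocation is given below, assuming the standard `ultralytics/yolov5` repository layout; the dataset YAML names (`bdd_rgb.yaml`, etc.) are placeholders for configurations pointing at the converted RGB, DVS, and OMS frames.

```python
import subprocess

# Fine-tune the COCO-pretrained small model on each image representation.
for rep in ["rgb", "dvs", "oms"]:
    subprocess.run(
        [
            "python", "train.py",
            "--img", "1280",            # long side of the 1280x720 frames
            "--batch", "128",
            "--epochs", "100",
            "--weights", "yolov5s.pt",  # pretrained YOLOv5-small checkpoint
            "--data", f"bdd_{rep}.yaml",
        ],
        check=True,
    )
```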
### _Native Performance_
We begin with a look at the native performance of each image type for object detection with YOLOv5.
As shown in Figure 5, RGB outperforms the other image types by a significant margin, with DVS and OMS falling behind by \(62.89\)% and \(69.83\)%, respectively. These performance penalties are to be expected, as RGB image sensors capture the magnitude of different wavelengths of light at every pixel, which results in a drastically higher data rate. The biologically inspired design of DVS deliberately produces less information per frame, and this effect is amplified further by the even greater sparsity engineered into OMS sensors. When applications are not limited by size, weight, and power constraints, it is evident that RGB is the best camera for this particular computer vision task.
### _Data Rate_
Given that OMS is designed to increase the feature-richness of individual spikes along with their sparsity, it is vital that we compare the average data rates of the individual representations across our dataset.
Given the low per-bit entropy of RGB images, we expect RGB to have the highest data rate. This comes in stark contrast to DVS and OMS, which are designed to increase information density while simultaneously decreasing data rates. Table I shows the average data rates of these image representations across the entire BDD100K MOTS dataset. As expected, we see that RGB has the highest data rate at \(2.21\times 10^{7}\) bits per frame, versus DVS and OMS with data rates of \(1.96\times 10^{5}\) bits per frame and \(3.77\times 10^{4}\) bits per frame, respectively.
### _Performance versus Data Rate_
With the drastic reductions in data rate among the more biologically inspired representations, we need to evaluate how much of an impact this will have on the overall F1-Score of our computer vision system. Therefore, we evaluate the performance of each image representation where the F1-Scores are normalized by the data rates.
Given the low information density yet high data rate of RGB, we expect it to have the lowest coefficient of performance versus data rate. The more biologically inspired methods should present ratio increases from RGB to DVS and from DVS to OMS. Figure 6 shows the performance versus data rate for RGB, DVS, and OMS. The performance versus data rate values for each representation are \(1.88\times 10^{-8}\), \(7.91\times 10^{-7}\), and \(3.37\times 10^{-6}\), respectively. These results highlight that, compared to RGB, individual DVS bits contribute \(41.07\)x more F1-Score per bit of information. OMS builds upon the impressive foundation of DVS with \(187.25\)x and \(3.26\)x more information per bit compared to RGB and DVS, respectively.
## 5 Discussion & Future Works
In this work, we conducted a study of the applicability and effectiveness of object motion sensitivity (OMS) versus dynamic vision sensors (DVS) and traditional RGB-based sensors. OMS is a biological computation conducted within animal retinas whose goal is to reduce the dimensionality of visual information from the individual color values perceived at each cell to a more feature-rich and lower-dimensional representation.
Rather than fully evaluating the biological plausibility of this mathematical representation of OMS, we shifted the focus of this paper to evaluating its application on more standard computer vision tasks. We chose object detection on the BDD100K MOTS dataset, with YOLOv5 as our deep learning model. The mathematical representation of OMS is implemented within the Visual Behavioral Environment Simulator (ViBES). While this paper doesn't
Fig. 5: A line chart showing the performance deltas between the three image representations at every training epoch.
Fig. 6: The F1 scores for the YOLOv5 models fine-tuned on RGB, DVS, and OMS where the final scores are normalized by the average bit rate per frame of the given representation.
focus on the physical deployment of this algorithm, the hardware design has been developed and published in [12].
We fine-tuned a pretrained version of YOLOv5 on both DVS and OMS for our performance evaluation on the BDD100K validation set. We used three metrics to evaluate multiple performance attributes between the various image representations: F1-score, data rate, and F1-score versus data rate.
We began by showing that in their native state, without taking into consideration any of their unique performance characteristics, RGB is the most accurate representation with a final F1-score of \(0.4177\). The other types trailed by a significant margin with DVS and OMS having 62.89% and 69.83% less F1-Score, respectively.
This story becomes drastically more interesting when taking into consideration the sparsity and lower bandwidths of DVS and OMS versus RGB. For example, the RGB sensor has an average data rate of \(2.21\times 10^{7}\) bits per frame, versus DVS and OMS with averages of \(1.96\times 10^{5}\) and \(3.77\times 10^{4}\) bits per frame, respectively. When normalizing the F1-Scores from each representation by their respective bit rate per frame, we see that an individual bit of OMS contains orders of magnitude more information versus RGB and DVS, with \(178.25\)x and \(3.26\)x more information per bit, respectively. In other words, a given bit of information within OMS contains \(178.25\)x more information than RGB images and \(3.26\)x more information than DVS.
Although we have demonstrated the promising information density and F1-Score versus data rate achieved by OMS compared to RGB and DVS, we have numerous future objectives and ambitions for this project:
1. Investigate the impact on the overall effectiveness of OMS versus DVS and RGB when changing algorithmic hyper-parameters through a Bayesian optimization scheme
2. Compare the operational characteristics of the OMS simulation algorithm against its biological counterpart to gain a more in-depth understanding of how representative it truly is
3. Implement OMS on a proven DVS simulator such as v2e [53] or ESIM [54]
4. Create and incorporate other fundamental retinal computations within this framework to learn the trade-offs between them and how to incorporate multiple features into a holistic computer vision system
|
2308.08897 | Temperature-transferable tight-binding model using a hybrid-orbital
basis | Finite-temperature calculations are relevant for rationalizing material
properties yet they are computationally expensive because large system sizes or
long simulation times are typically required. Circumventing the need for
performing many explicit first-principles calculations, tight-binding and
machine-learning models for the electronic structure emerged as promising
alternatives, but transferability of such methods to elevated temperatures in a
data-efficient way remains a great challenge. In this work, we suggest a
tight-binding model for efficient and accurate calculations of
temperature-dependent properties of semiconductors. Our approach utilizes
physics-informed modeling of the electronic structure in form of hybrid-orbital
basis functions and numerically integrating atomic orbitals for the distance
dependence of matrix elements. We show that these design choices lead to a
tight-binding model with a minimal amount of parameters which are
straightforwardly optimized using density functional theory or alternative
electronic-structure methods. Temperature-transferability of our model is
tested by applying it to existing molecular-dynamics trajectories without
explicitly fitting temperature-dependent data and comparison to density
functional theory. We utilize it together with machine-learning molecular
dynamics and hybrid density functional theory for the prototypical
semiconductor gallium arsenide. We find that including the effects of thermal
expansion on the onsite terms of the tight-binding model is important in order
to accurately describe electronic properties at elevated temperatures in
comparison to experiment. | Martin Schwade, Maximilian J. Schilcher, Christian Reverón Baecker, Manuel Grumet, David A. Egger | 2023-08-17T10:10:26Z | http://arxiv.org/abs/2308.08897v4 | Dynamic tight binding for large-scale electronic-structure calculations of semiconductors at finite temperatures
###### Abstract
Calculating the electronic structure of materials at finite temperatures is important for rationalizing their physical properties and assessing their technological capabilities. However, finite-temperature calculations typically require large system sizes or long simulation times. This is challenging for non-empirical theoretical methods because the involved bottleneck of performing many first-principles calculations can pose a steep computational barrier for larger systems. While machine-learning molecular dynamics enables large-scale/long-time simulations of the structural properties, the difficulty of computing, in particular, the electronic structure of large and disordered materials still remains. In this work, we suggest an adaptation of the tight-binding formalism which allows for computationally efficient calculations of temperature-dependent properties of semiconductors. Our dynamic tight-binding approach utilizes hybrid-orbital basis functions and a modeling of the distance dependence of matrix elements via numerical integration of atomic orbitals. We show that these design choices lead to a dynamic tight-binding model with a minimal amount of parameters which are straightforwardly optimized using density functional theory. Combining dynamic tight-binding with machine learning molecular dynamics and hybrid density functional theory, we find that it accurately describes finite-temperature electronic properties in comparison to experiment for the prototypical semiconductor gallium arsenide.
## I Introduction
Computational investigations of the microscopic characteristics of functional materials are important for designing and discovering new compounds. Growing computational power and the development of new methods have enabled investigations of materials of ever-increasing complexity and size. Large-scale molecular dynamics (MD) simulations with a high accuracy are now possible because of recent advances in combining machine learning (ML) with MD (ML-MD) [1; 2; 3]. For semiconductors and their use in technological applications it is, however, crucial to be able to characterize the electronic structure in such large-scale/long-time dynamical simulations as well. This is particularly relevant in the context of capturing thermal characteristics of electronic states in structurally disordered materials and the electron-phonon interactions present in such systems. Indeed, structurally large and disordered quantum systems present great computational barriers to pertinent methods in this context, such as density functional theory (DFT). For this reason, implementations of linear-scaling DFT methods have been proposed [4; 5; 6; 7], but combining self-consistency with speed and accuracy remains an on-going challenge despite recent advances [8].
From the perspective of computational efficiency, parameterized methods such as tight binding (TB) have the advantage that the self-consistency loop is skipped. Furthermore, due to the real-space nature of TB, the electronic interactions may be neglected beyond a certain cut-off distance, yielding sparse Hamiltonian matrices. In this context, it is relevant that the TB method can capture the electronic structure of materials using only relatively few parameters, which can be obtained by fitting to any higher-level electronic-structure method. ML models can be powerful tools for such tasks as well [9; 10; 11], but they can also be significantly more complex when learning properties such as the electron density. TB models may offer the advantage of requiring reduced amounts of training data, which is especially important for finite-temperature calculations of the electronic structure that require transferring the model to structurally distorted materials systems.
Perhaps one of the most significant advances in the context of TB was the work by Slater and Koster (SK), providing a table of matrix elements that allowed efficient parameterization [12]. Based on their work, there have been numerous successful applications of the TB method to various materials [13; 14; 15]. However, including thermal effects in a dynamic and transferable model remained challenging. To overcome this, different approaches were suggested, including non-orthogonal TB models [16], environment-dependent (or three-body) TB [17; 18], or, more recently, the combination of TB with ML [19; 20; 21]. These methods succeeded in increasing the transferability of the model but often imply higher complexity and computational cost, especially when more training data are required for optimizing the parameters.
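To make the SK construction concrete, the snippet below evaluates the textbook two-center p-p hopping block from the direction cosines of a bond, e.g. \(E_{x,x}=l_{x}^{2}V_{pp\sigma}+(1-l_{x}^{2})V_{pp\pi}\); the numerical parameter values are arbitrary placeholders, not fitted quantities from this work.

```python
import numpy as np

def sk_pp_block(d_vec, v_pp_sigma, v_pp_pi):
    """3x3 Slater-Koster hopping block between two sets of p orbitals.

    d_vec : bond vector from atom A to atom B
    Returns t[i, j] = <p_i(A)| H |p_j(B)> with i, j in (x, y, z).
    """
    l = np.asarray(d_vec, dtype=float)
    l /= np.linalg.norm(l)                     # direction cosines (l, m, n)
    # Compact form of the SK p-p table: (Vsigma - Vpi) l l^T + Vpi * identity
    return (v_pp_sigma - v_pp_pi) * np.outer(l, l) + v_pp_pi * np.eye(3)

# Example: a nearest-neighbour bond along (1, 1, 0), illustrative parameters
t = sk_pp_block([1.0, 1.0, 0.0], v_pp_sigma=-0.8, v_pp_pi=0.3)
```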
In this work, we present a dynamic TB framework that exhibits a high level of transferability to differently sized systems and temperatures while requiring only a minimal amount of parameters. The approach is general and may be unified with some of the aforementioned techniques, such as ML, to further enhance its performance. The motivation for developing our model |
2301.03211 | Nonlinear Topological Magnon Spin Hall Effect | When a magnon passes through two-dimensional magnetic textures, it will
experience a fictitious magnetic field originating from the $3\times 3$
skew-symmetric gauge fields. To date, only one of the three independent
components of the gauge fields has been found to play a role in generating the
fictitious magnetic field while the rest two are perfectly hidden. In this
work, we show that they are concealed in the nonlinear magnon transport in
magnetic textures. Without loss of generality, we theoretically study the
nonlinear magnon-skyrmion interaction in antiferromagnets. By analyzing the
scattering features of three-magnon processes between the circularly-polarized
incident magnon and breathing skyrmion, we predict a giant Hall angle of both
the confluence and splitting modes. Furthermore, we find that the Hall angle
reverses its sign when one switches the handedness of the incident magnons. We
dub it nonlinear topological magnon spin Hall effect. Our findings are deeply
rooted in the bosonic nature of magnons that the particle number is not
conserved, which has no counterpart in low-energy fermionic systems, and may
open the door for probing gauge fields by nonlinear means. | Zhejunyu Jin, Xianglong Yao, Zhenyu Wang, H. Y. Yuan, Zhaozhuo Zeng, Yunshan Cao, Peng Yan | 2023-01-09T09:08:27Z | http://arxiv.org/abs/2301.03211v1 | # Nonlinear Topological Magnon Spin Hall Effect
###### Abstract
When a magnon passes through two-dimensional magnetic textures, it will experience a fictitious magnetic field originating from the \(3\times 3\) skew-symmetric gauge fields. To date, only one of the three independent components of the gauge fields has been found to play a role in generating the fictitious magnetic field while the rest two are perfectly hidden. In this work, we show that they are concealed in the nonlinear magnon transport in magnetic textures. Without loss of generality, we theoretically study the nonlinear magnon-skyrmion interaction in antiferromagnets. By analyzing the scattering features of three-magnon processes between the circularly-polarized incident magnon and breathing skyrmion, we predict a giant Hall angle of both the confluence and splitting modes. Furthermore, we find that the Hall angle reverses its sign when one switches the handedness of the incident magnons. We dub it nonlinear topological magnon spin Hall effect. Our findings are deeply rooted in the bosonic nature of magnons that the particle number is not conserved, which has no counterpart in low-energy fermionic systems, and may open the door for probing gauge fields by nonlinear means.
_Introduction.--_Topology dictates the particle or wave transport in many branches of physics, ranging from solid state physics to geophysics and astrophysics [1; 2]. One outstanding example in condensed matter physics is the intrinsic spin Hall effect which originates from the momentum-space topology, i.e., the Berry curvature of the band structure [3; 4; 5; 6; 7; 8; 9; 10; 11]. On the other hand, non-collinear spin textures, such as the magnetic vortex, meron, and skyrmion, can give rise to the real-space topology. When a spinful particle propagates through the topological spin texture, it will experience an effective Lorentz force, resulting in the so-called topological (spin-) Hall effect [12; 13; 14; 15; 16; 17].
Magnons, quanta of spin waves, are the collective excitations of ordered magnets [18; 19]. Very recently, magnon-based spintronics has attracted enormous interest due to peculiar advantages of magnons, such as the long-distance transport and low-energy consumption. Magnons carry spin angular momentum as well, so that they can experience an effective Lorentz force from the spin texture, leading to the topological magnon Hall effect [20; 21; 22; 23; 24]. In antiferromagnets, magnons have two degenerate modes with opposite spins, i.e., right- and left-handed magnons [25]. Therefore, when a magnon passes through the antiferromagnetic (AFM) skyrmion, for instance, it will experience a spin-dependent Lorentz force, resulting in the topological magnon spin Hall effect [26; 15; 27]. These topological magnon Hall effects originate from the gauge fields in transforming the non-collinear magnetic texture to the collinear state. The gauge transformation generates the covariant form of the differential operator \(\partial_{\mu}+\mathcal{A}_{\mu}\) with \(\mu=x,y\). Here, the \(3\times 3\) skew-symmetric matrix \(\mathcal{A}_{\mu}=\mathcal{R}^{-1}\partial_{\mu}\mathcal{R}\) with the rotation matrix \(\mathcal{R}\) contains three independent gauge fields [28; 29]. So far, only one of the three elements, i.e., \(\mathcal{A}_{\mu,12}\), has been identified to play a role in the Hall transport of magnons while the rest two (\(\mathcal{A}_{\mu,13}\) and \(\mathcal{A}_{\mu,23}\)) are concealed from the community.
In the past few years, the nonlinear Hall effect due to the momentum-space topology, e.g., Berry curvature dipole, has attracted much attention [30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. However, its counterpart induced by the real-space topology has not been reported till now. It is well known that the geometric phase derived from the adiabatic evolution is crucial for the Hall transport. Notably, in the three-wave mixing process, the accumulation of adiabatic geometric phase takes place not only on incident waves but also on nonlinear ones [42; 43; 44]. One thus expects that magnons generated in the nonlinear three-magnon process in spin textures [45; 46; 47; 48; 49; 50; 51] may also experience a topological Hall effect subject to the conventional gauge field, but it is not clear whether the rest two gauge fields play any role.
In this Letter, we aim to reveal the concealed gauge fields by addressing the nonlinear Hall transport of magnons in spin textures. To this end, we theoretically study the nonlinear interaction between polarized magnons and magnetic skyrmions in antiferromagnets. We show that the two long-sought gauge fields are actually hidden in the nonlinear magnon transport. By analyzing the "bunny ears" scattering pattern of three-magnon processes between the circularly-polarized magnon and breathing skyrmion in an antiferromagnet, we discover a giant Hall angle of both confluence and splitting modes. The Hall angle reverses its sign when one switches the handedness of incident AFM magnons. We dub it nonlinear topological magnon spin Hall effect. Our findings are deeply connected to both the nonconservation of magnon number and the spin-texture-induced Berry curvature in real space, as shown in Fig. 1.
_Model.--_Let us consider a chiral antiferromagnet described by the following Lagrangian [16]
\[\mathcal{L}=\int(\partial_{t}\mathbf{l})^{2}d\mathbf{r}-\mathcal{H}, \tag{1}\]
where \(\mathbf{l}\) is the normalized Néel vector and \(\mathcal{H}=\int\left[J(\nabla\mathbf{l})^{2}+D\mathbf{l}\cdot(\nabla\times\mathbf{l})-Kl_{z}^{2}\right]d\mathbf{r}\) is the system Hamiltonian including the exchange energy, Dzyaloshinskii-Moriya interaction (DMI), and magnetic anisotropy, with \(J\), \(D\), and \(K\) being the exchange stiffness, DMI strength, and anisotropy coefficient, respectively. To facilitate the analysis, we use the \(3\times 3\) matrix \(\mathcal{R}=\exp(\phi L_{z})\exp(\theta L_{y})\) to rotate the \(z\)-axis to the equilibrium
direction of the staggered vector \(\mathbf{l}_{0}\), i.e., \(\mathcal{R}\mathbf{e}_{z}=\mathbf{l}_{0}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)\) with the polar angle \(\theta\) and azimuthal angle \(\phi\). Here, \(L_{z}\) and \(L_{y}\) are generators of the three-dimensional rotations about the \(z\) and \(y\) axes, respectively. To investigate the magnon excitation and transport in spin textures, we introduce the magnon creation (\(a^{\dagger}\)) and annihilation (\(a\)) operators by the Holstein-Primakoff transformation on the rotated vector \(\mathbf{n}=\mathcal{R}^{-1}\mathbf{l}\) [53]. We expand the bosonic operator as \(a=a_{s}e^{i\mathbf{k}_{s}\cdot\mathbf{r}}+a_{p}e^{i\mathbf{k}_{p}\cdot\mathbf{r}}+a_{q}e^{i\mathbf{k}_{q}\cdot\mathbf{r}}+a_{r}\psi_{r}\), where \(a_{s},a_{p},a_{q}\), and \(a_{r}\) are operators of the incident magnon, sum-frequency, difference-frequency, and skyrmion breathing modes [54], respectively. \(\mathbf{k}_{s}\), \(\mathbf{k}_{p}\), and \(\mathbf{k}_{q}\) are the corresponding wave vectors of the three propagating modes in the far-field region, and \(\psi_{r}\) is the wavefunction of the localized breathing mode. Furthermore, we assume that the magnon excitation is in the form of a wave packet, which has a fixed shape and can be described by its central position \(\mathbf{r}_{i}(t)\), with \(i=s,p,q\). In terms of these collective coordinates [55], the Lagrangian can be simplified as a function of the position \(\mathbf{r}_{i}\) and the group velocity \(\mathbf{v}_{i}=\dot{\mathbf{r}}_{i}\) of the magnon wavepacket [24]. Keeping up to third-order terms, the total Hamiltonian can be expressed as \(\mathcal{H}=\mathcal{H}_{2}+\mathcal{H}_{3}\). Here, the quadratic Hamiltonian is
\[\mathcal{H}_{2}=\sum_{i=s,p,q}2a_{i}^{\dagger}a_{i}\int\Big{[}\frac{1}{J}\omega_{i}^{2}\mathbf{v}_{i}^{2}-(2\mathbf{A}_{12}+\frac{D\mathbf{l}_{0}}{J})\cdot\omega_{i}\mathbf{v}_{i}\Big{]}d\mathbf{r}, \tag{2}\]
which determines the magnon dispersion relation. The cubic Hamiltonian \(\mathcal{H}_{3}=\mathcal{H}_{3s}+\mathcal{H}_{3p}+\mathcal{H}_{3q}\) includes contributions from the incident term \(\mathcal{H}_{3s}\), sum-frequency term \(\mathcal{H}_{3p}\), and difference-frequency term \(\mathcal{H}_{3q}\)
\[\mathcal{H}_{3s}=\int\omega_{s}\mathbf{v}_{s}\cdot\bigg{\{}-\frac{1}{\sqrt{2}}[(i\mathbf{A}_{13}+\mathbf{A}_{23})+\frac{D}{2J}(i\mathbf{e}_{\phi}+\mathbf{e}_{\theta})] \tag{3}\] \[\qquad\quad[3a_{q}a_{r}a_{s}^{\dagger}e^{i(-\mathbf{k}_{s}+\mathbf{k}_{q})\cdot\mathbf{r}}+a_{p}^{\dagger}a_{r}a_{s}e^{i(\mathbf{k}_{s}-\mathbf{k}_{p})\cdot\mathbf{r}}]\psi_{r}+\text{H.c.}\bigg{\}}d\mathbf{r},\] \[\mathcal{H}_{3p}=\int\omega_{p}\mathbf{v}_{p}\cdot\bigg{\{}-\frac{3}{\sqrt{2}}[(i\mathbf{A}_{13}+\mathbf{A}_{23})+\frac{D}{2J}(i\mathbf{e}_{\phi}+\mathbf{e}_{\theta})]a_{s}a_{r}a_{p}^{\dagger}e^{i(\mathbf{k}_{s}-\mathbf{k}_{p})\cdot\mathbf{r}}\psi_{r}+\text{H.c.}\bigg{\}}d\mathbf{r},\] \[\mathcal{H}_{3q}=\int\omega_{q}\mathbf{v}_{q}\cdot\bigg{\{}-\frac{1}{\sqrt{2}}[(i\mathbf{A}_{13}+\mathbf{A}_{23})+\frac{D}{2J}(i\mathbf{e}_{\phi}+\mathbf{e}_{\theta})]a_{s}a_{r}^{\dagger}a_{q}^{\dagger}e^{i(\mathbf{k}_{s}-\mathbf{k}_{q})\cdot\mathbf{r}}\psi_{r}^{*}+\text{H.c.}\bigg{\}}d\mathbf{r},\]
where \(\mathbf{A}_{\nu\nu^{\prime}}=\mathcal{A}_{x,\nu\nu^{\prime}}\mathbf{e}_{x}+\mathcal{A}_{y,\nu\nu^{\prime}}\mathbf{e}_{y}\) are the gauge fields (\(\nu,\nu^{\prime}=1,2,3\)), \(\mathbf{e}_{\theta}\) and \(\mathbf{e}_{\phi}\) are two unit vectors in spherical coordinates, and \(\omega_{s}\), \(\omega_{p}\), and \(\omega_{q}\) are, respectively, the frequencies of the incident, confluence, and splitting magnons, which satisfy the law of energy conservation, i.e., \(\omega_{p(q)}=\omega_{s}\pm\omega_{r}\) with \(\omega_{r}\) the skyrmion breathing frequency; see the bottom panel of Fig. 1. Equations (2) and (3) show that the conventional gauge field \(\mathbf{A}_{12}\) only appears in the quadratic term, while the gauge fields \(\mathbf{A}_{13}\) and \(\mathbf{A}_{23}\) emerge in the nonlinear three-magnon processes. To reveal their role in the magnon transport, we employ the Euler-Lagrange formalism to derive the equations of motion of the magnon wavepackets; see Ref. [56] for the derivation of Eq. (4). It is noted that we have ignored the effective electric field associated with the skyrmion static energy due to its negligible role in the magnon Hall effect. The extra force \(\mathbf{F}_{i}^{\text{cubic}}\) originates from the three-magnon process and the newfound gauge fields, with the following expression
\[\mathbf{F}_{i}^{\text{cubic}}=c_{i}\mathbf{v}_{i}\times\mathbf{B}^{\prime},(i=s, p,q), \tag{5}\]
where \(\mathbf{B}^{\prime}=B_{z}^{\prime}\mathbf{e}_{z}\) with \(B_{z}^{\prime}=\frac{\hbar}{e}[\nabla\times(\mathbf{A}_{12}+\frac{D\mathbf{l}_{0}}{2J})]_{z}=\frac{\hbar}{e}[\partial_{y}\mathbf{l}_{0}\cdot\partial_{x}(\mathbf{l}_{0}\times\frac{\mathbf{e}_{z}}{\sin\theta})-\partial_{x}\mathbf{l}_{0}\cdot\partial_{y}(\mathbf{l}_{0}\times\frac{\mathbf{e}_{z}}{\sin\theta})]+\frac{\hbar D}{2Je}(\nabla\times\frac{\mathbf{e}_{z}\cos\theta}{\sin\theta})_{z}\), represents the new fictitious magnetic field playing a role merely when the nonlinear three-magnon process occurs. Due to the circular symmetry of the skyrmion, the \(\mathbf{A}_{13}\) component is absent. Here, \(c_{s}=\frac{\omega_{s}}{4}(g_{p}a_{p}^{\dagger}a_{r}a_{s}+3g_{q}a_{q}^{\dagger}a_{r}a_{s}+\text{H.c.})\), \(c_{p}=\frac{3\omega_{p}}{4}(g_{p}a_{s}a_{r}a_{p}^{\dagger}+\text{H.c.})\), and \(c_{q}=\frac{\omega_{q}}{4}(g_{q}a_{s}a_{r}^{\dagger}a_{q}^{\dagger}+\text{H.c.})\), with overlap integrals \(g_{p}=\frac{1}{\sqrt{2}V}\int e^{i(\mathbf{k}_{s}-\mathbf{k}_{p})\cdot\mathbf{r}}\psi_{r}d\mathbf{r}\), \(g_{q}=\frac{1}{\sqrt{2}V}\int e^{i(\mathbf{k}_{s}-\mathbf{k}_{q})\cdot\mathbf{r}}\psi_{r}^{*}d\mathbf{r}\), and \(V\) being the system volume. As shown in Eq. (4), the spin-wave packet can be regarded as a particle-like object moving in its own parameter space [57] subject to fictitious magnetic fields
Figure 1: Schematic illustration of the nonlinear topological magnon spin Hall effect in magnon-AFM skyrmion scattering. Circles with arrows indicate the handedness of AFM magnons. Incident, skyrmion breathing, sum-frequency, and difference-frequency modes are denoted by black, green, red, and blue colors, respectively. \(\mathbf{v}_{s,p,q}\) represent the velocities of the three propagating magnon wavepackets. The bottom panel shows the magnon splitting (left) and confluence (right) processes. It is noted that magnons with opposite handedness experience equal-magnitude but opposite Lorentz forces, resulting in opposite transverse displacements (not shown).
(**B** and **B\({}^{\prime}\)**). The first term on the left-hand side of Eq. (4) characterizes the acceleration of magnons. The second term represents the effective Lorentz force from the quadratic Hamiltonian \(\mathcal{H}_{2}\), resulting in the conventional topological magnon Hall effect. Interestingly, the third term induces an extra Lorentz force on the wavepacket, leading to the nonlinear topological magnon Hall effect. The spatial distributions of the dimensionless magnetic fields \(B_{z}/B_{0}\) and \(B_{z}^{\prime}/B_{0}\) are shown in Figs. 2(a) and 2(b), respectively, where \(B_{0}=\hbar/a^{2}e\) with \(a\) being the lattice constant. It is noted that \(B_{0}\approx 660\) T for \(a=1\) nm. Due to the rotational symmetry of the Bloch skyrmion, both magnetic fields \(\textbf{B}/B_{0}\) and \(\textbf{B}^{\prime}/B_{0}\) exhibit circular symmetry. The field \(\textbf{B}/B_{0}\) originates mainly from the topological charge density of the skyrmion, and its total magnetic flux is \(4\pi\)[58]. The spatial distribution of \(\textbf{B}^{\prime}/B_{0}\), however, is similar to the fictitious magnetic field distribution of the target skyrmion [59; 60], with a vanishing total flux but a singularity at the skyrmion core.
_Revealing the concealed fictitious magnetic field._--In nonlinear magnon-skyrmion scatterings, the time evolution of the populations of the confluence and splitting modes is governed by the coupled Heisenberg equations: \(i\dot{a}_{p}=(\Delta_{p}-i\alpha\omega_{p})a_{p}+\tilde{g}_{p}a_{s}a_{r}\) and \(i\dot{a}_{q}=(\Delta_{q}-i\alpha\omega_{q})a_{q}+\tilde{g}_{q}a_{s}a_{r}^{\dagger}\). Here, the detuning parameter \(\Delta_{p(q)}=\omega_{p(q)}-\omega_{0}\) with the driving microwave frequency \(\omega_{0}\), \(\tilde{g}_{p}=\int\big[-2g_{1,\mu}ik_{p,\mu}\psi_{r}+g_{2,\mu}^{*}(\partial_{\mu}\psi_{r}+ik_{p,\mu}\psi_{r})-\frac{5}{2}K\sin\theta\cos\theta\big]e^{i(\mathbf{k}_{s}-\mathbf{k}_{p})\cdot\mathbf{r}}d\mathbf{r}\) and \(\tilde{g}_{q}=\int\big[g_{2,\mu}(\partial_{\mu}\psi_{r}^{*}+ik_{q,\mu}\psi_{r}^{*})+2g_{1,\mu}^{*}ik_{q,\mu}\psi_{r}^{*}-\frac{5}{\sqrt{2}}K\sin\theta\cos\theta\big]e^{i(\mathbf{k}_{s}-\mathbf{k}_{q})\cdot\mathbf{r}}d\mathbf{r}\), where the Einstein summation rule is applied, the coefficients \(g_{1,\mu}=\frac{3J}{2\sqrt{2}}(\mathcal{A}_{\mu,13}-i\mathcal{A}_{\mu,23})+\frac{3D}{4\sqrt{2}}(e_{\phi,\mu}-ie_{0,\mu})\) and \(g_{2,\mu}=\frac{J}{\sqrt{2}}(-\mathcal{A}_{\mu,13}-i\mathcal{A}_{\mu,23})+\frac{D}{2\sqrt{2}}(-e_{\phi,\mu}-ie_{0,\mu})\) denote the strength of the three-magnon confluence and splitting, respectively, and \(\alpha\) is the Gilbert damping constant. Then, one can analytically derive the steady-state magnon populations as \(a_{p}=\frac{\tilde{g}_{p}a_{s}a_{r}}{\epsilon+i\alpha(\omega_{s}+\omega_{r})}\) and \(a_{q}=\frac{\tilde{g}_{q}a_{s}a_{r}^{\dagger}}{\epsilon-i\alpha(\omega_{s}-\omega_{r})}\) with \(\epsilon=\omega_{s}-\omega_{r}\). Here, we have adopted the approximation \(\tilde{g}_{p}\approx\tilde{g}_{q}\approx g\), which is justified by the small difference between the confluence and splitting frequencies since \(\omega_{r}\ll\omega_{s}\). We therefore obtain
\[m_{\text{sw},i}\dot{\mathbf{v}}_{i}-e\,\mathbf{v}_{i}\times\sigma(\mathbf{B}+\lambda_{i}\mathbf{B}^{\prime})=0,\quad(i=s,p,q), \tag{6}\]
which is the main result of this work (see Supplemental Material [56] for detailed derivations). Here, \(m_{\text{sw},i}=\hbar\omega_{i}/J\) is the effective mass of the spin-wave packet in antiferromagnets, \(\sigma=\mp 1\) represents the left/right-handed magnon polarizations, \(\lambda_{s}=n_{r}(\frac{gg_{p}}{4\epsilon}+\frac{3gg_{q}}{4\epsilon}+\text{H.c.})\), \(\lambda_{p}=\frac{3}{4}(\frac{\epsilon g_{p}}{g}+\text{H.c.})\), and \(\lambda_{q}=\frac{1}{4}(\frac{\epsilon g_{q}}{g}+\text{H.c.})\), with the particle number of the skyrmion breathing mode \(n_{r}=\langle a_{r}^{\dagger}a_{r}\rangle\). Equation (6) shows that the extra effective Lorentz force \(e\lambda_{i}\mathbf{v}_{i}\times\sigma\mathbf{B}^{\prime}\) (\(i=s,p,q\)) is mode-dependent. For incident magnons, the extra Lorentz force is proportional to the product of the skyrmion breathing number \(n_{r}\) (\(\ll 1\)), the coupling parameter \(g/\epsilon\), and the overlap integrals \(g_{p,q}\). In general, the magnon populations of the confluence and splitting modes are far smaller than that of the incident one, which implies \(g/\epsilon,\,g_{p,q}\ll 1\). The effect of \(\mathbf{B}^{\prime}\) on incident magnons is thus negligible. However, for the confluence and splitting modes, the parameters \(\lambda_{p,q}\) are inversely proportional to \(g/\epsilon\); the additional effective Lorentz force is therefore expected to have a pronounced effect.
To explore the role of the fictitious magnetic fields in the magnon transport, we numerically solve Eq. (6) both without and with the new fictitious magnetic field \(\textbf{B}^{\prime}\). In the calculations, we consider a right-handed magnon wavepacket (\(\sigma=1\)), and set the incident magnon's initial velocity \(v_{s}(t=0)=2.65\) (in units of \(J/a\omega\)) along the \(x\) direction, the magnon mass \(m_{\text{sw}}=0.31\) (in units of \(\hbar\omega/J\)), and the coefficients \(\lambda_{s,p,q}=1\). It is observed that the effective magnetic field \(\textbf{B}^{\prime}\) significantly enhances the magnon Hall effect, as displayed in Figs. 2(c) and 2(d). Due to the singularity of the fictitious magnetic field \(\textbf{B}^{\prime}\), we note anomalous magnon trajectories near the skyrmion center.
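For readers who wish to reproduce trajectories of the kind shown in Figs. 2(c) and 2(d), the following minimal Python sketch integrates an equation of motion of the form of Eq. (6) with a standard fourth-order Runge-Kutta scheme. The Gaussian field profiles below are placeholder assumptions chosen only for illustration, not the actual fields computed from the skyrmion texture; the parameter values \(v_{s}=2.65\), \(m_{\text{sw}}=0.31\), and \(\lambda=1\) are those quoted above.

```python
import numpy as np

def fields(x, y):
    """Toy fictitious fields (assumed Gaussian profiles, in units of B0)."""
    r2 = x**2 + y**2
    Bz  = 4.0 * np.exp(-r2 / 4.0)           # placeholder for the topological field B
    Bpz = 2.0 * (1.0 - r2) * np.exp(-r2)    # placeholder for the new field B'
    return Bz, Bpz

def rhs(state, m_sw=0.31, sigma=1.0, lam=1.0, with_Bp=True):
    """m_sw * dv/dt = sigma * v x (B + lam*B') e_z, cf. Eq. (6), with e and B0 set to 1."""
    x, y, vx, vy = state
    Bz, Bpz = fields(x, y)
    Beff = Bz + (lam * Bpz if with_Bp else 0.0)
    ax =  sigma * vy * Beff / m_sw           # (v x B e_z)_x =  v_y B
    ay = -sigma * vx * Beff / m_sw           # (v x B e_z)_y = -v_x B
    return np.array([vx, vy, ax, ay])

def trajectory(b, steps=4000, dt=2e-3, **kw):
    s = np.array([-8.0, b, 2.65, 0.0])       # launch from the left with impact parameter b
    path = [s[:2].copy()]
    for _ in range(steps):                   # classical fourth-order Runge-Kutta step
        k1 = rhs(s, **kw); k2 = rhs(s + 0.5*dt*k1, **kw)
        k3 = rhs(s + 0.5*dt*k2, **kw); k4 = rhs(s + dt*k3, **kw)
        s = s + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
        path.append(s[:2].copy())
    return np.array(path)

# Compare the Hall deflection with and without B' for one wavepacket:
for flag in (False, True):
    p = trajectory(0.5, with_Bp=flag)
    print("with B'" if flag else "B only ", "final transverse shift:", p[-1, 1] - p[0, 1])
```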
Below, we verify our theoretical predictions by full micromagnetic simulations using the MUMAX3 package [61]. We consider an AFM thin film of dimension 1000 \(\times 1000\)\(\times 1\)nm\({}^{3}\), hosting a Bloch-type skyrmion at the film center [62; 63; 64]. Magnetic parameters of KMnF\({}_{3}\)[65]: \(J=6.59\) pJ/m, \(K=1.16\times 10^{5}\) J/m\({}^{3}\), and \(D=1\) mJ/m\({}^{2}\) are used in the simulations, which give rise to a skyrmion radius \(\approx 11\) nm (defined as the radius of the circle \(l_{z}=0\)). The Gilbert damping is set as \(\alpha=0.001\). To efficiently generate polarized magnons and the three-wave mixing, we apply a microwave field \(\textbf{h}_{\text{RH/LH}}(t)=h_{0}[\cos(\omega_{s}t)\textbf{e}_{x}\mp\sin(\omega_{s}t)\textbf{e}_{y}]\) with amplitude \(h_{0}=50\) mT and frequency \(\omega_{s}/2\pi=1.205\) THz (generating an incident magnon of wavelength \(\approx 15.2\) nm) on one sublattice in a narrow region: \(-401\) nm\(\leq x\leq-399\) nm, and a local field \(\textbf{h}_{r}(t)=h_{r}\sin(\omega_{r}t)\textbf{e}_{z}\) over the skyrmion with amplitude
Figure 2: Spatial distribution of dimensionless field \(B_{z}/B_{0}\) (a) and \(B_{z}^{\prime}/B_{0}\) (b). Spin wave trajectories in real space under fictitious magnetic field \(\textbf{B}\) (c) and \(\textbf{B}+\textbf{B}^{\prime}\) (d), where different black curves represent trajectories of magnon wavepackets with different impact parameters, the red curve indicates the averaged trajectory of 51 magnon wavepackets, and the dashed green circle labels skyrmion’s wall center (\(l_{z}=0\)).
\(h_{r}=5\) mT and \(\omega_{r}/2\pi=0.095\) THz (the skyrmion breathing frequency) [56]. Here, RH and LH stand for microwaves with right and left handedness, respectively. Absorbing boundary conditions are adopted to eliminate spin-wave reflection at the film edges [66].
To analyze the magnon spectrum in the skyrmion area, we perform a fast Fourier transform of the local magnetic moments. Figure 3(a) shows the emerging magnon frequency comb (MFC) [52] in the terahertz region, where the mode spacing of the comb is exactly the skyrmion breathing frequency. Furthermore, we plot the isoline maps of the incident, confluence, and splitting modes to analyze the Hall angle of each mode, as shown in Fig. 3(b). We observe an interesting "bunny ears" pattern of magnons scattering off the AFM skyrmion, with red and blue lines denoting the propagation directions of the two branches. Here, the Hall angle is defined as the included angle between each branch and the horizontal line. Compared with the incident mode, the Hall angle of the nonlinear modes nearly doubles (quintuples) for the main (secondary) branch of the "bunny ears" [see Fig. 3(b)], where the main (secondary) branch is the one with a large (small) Hall angle. More importantly, by flipping the chirality of the incident magnons, we observe an opposite magnon Hall motion [compare the top and bottom panels in Fig. 3(b)]. The small difference in Hall angles between right-handed and left-handed magnons results from the dipolar field [56]. Micromagnetic simulations thus offer solid evidence for the nonlinear topological magnon spin Hall effect predicted above.
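The comb extraction itself amounts to a windowed FFT of a local magnetization time trace followed by peak picking. The Python sketch below applies this procedure to a synthetic trace assembled from the incident frequency and a few sidebands; the trace is an assumed stand-in for actual simulation output, and the recovered line spacing should match the breathing frequency.

```python
import numpy as np

# Assumed synthetic trace: incident mode at f_s plus comb sidebands at f_s + m*f_r;
# in practice m_z(t) would be read from the simulation output at one cell.
f_s, f_r = 1.205e12, 0.095e12          # incident and breathing frequencies (Hz)
dt, N = 1e-14, 2**16                   # sampling interval and number of samples
t = np.arange(N) * dt
mz = np.cos(2 * np.pi * f_s * t)
for m in (-2, -1, 1, 2):               # sidebands with decaying amplitude
    mz += 0.3 ** abs(m) * np.cos(2 * np.pi * (f_s + m * f_r) * t)

spec = np.abs(np.fft.rfft(mz * np.hanning(N)))   # Hann window reduces leakage
freq = np.fft.rfftfreq(N, dt)

# Peak picking: local maxima above 5% of the strongest line.
mask = (spec[1:-1] > spec[:-2]) & (spec[1:-1] > spec[2:]) & (spec[1:-1] > 0.05 * spec.max())
lines = freq[1:-1][mask]
print("comb lines (THz):", lines / 1e12)
print("mean spacing (THz):", np.mean(np.diff(lines)) / 1e12)   # should be ~ f_r
```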
Furthermore, we derive the frequency-dependent Hall angle by fitting the flow direction of the main branch of the isosurface of each mode. Figure 4(a) plots the quantitative comparison between theoretical calculations and micromagnetic simulations for the incident (black), confluence (blue), and splitting (red) modes. It shows that the Hall angle monotonically decreases with increasing mode frequency. The simulation results can be well explained by the analytical model (6) with parameters \(n_{r}=0\), \(g=49\) MHz, and \(g_{p}=\frac{1}{3}g_{q}=9.4\times 10^{-6}\). A vanishing mode number of the skyrmion breathing is justified by its small wave amplitude. The coupling coefficient \(g\) is independently obtained by numerically solving the coupled Heisenberg equations. Acceptable deviations can be attributed to the simplified wavepacket treatment in the present formalism and the neglected topological electric-field component of the gauge fields. Figure 4(b) shows the Hall angle of nonlinear magnons over a broad range of frequencies \(\omega_{s}+m\omega_{r}\) in the MFC, with the integer \(m\) labeling the order of the spectrum line. Their "bunny ears" scattering patterns are plotted in the Supplemental Material [56]. It is found that the nonlinear Hall angle increases linearly with \(|m|\). This monotonic dependence is reminiscent of the refraction of light waves through multilayer media [67], where the refraction angle accumulates upon each scattering layer. It is noted that the slope of the linear trendline decreases as the incident magnon's frequency \(\omega_{s}\) increases.
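The linear trend in Fig. 4(b) is readily quantified with an ordinary least-squares fit. In the sketch below, the (\(|m|\), Hall angle) pairs are hypothetical values chosen only to illustrate the fitting step:

```python
import numpy as np

# Hypothetical data standing in for the measured points of Fig. 4(b):
# comb-line order |m| and the corresponding Hall angle in degrees.
m = np.array([1, 2, 3, 4, 5])
theta = np.array([7.1, 13.8, 21.2, 27.9, 35.0])

slope, intercept = np.polyfit(m, theta, 1)
print(f"Hall angle grows by {slope:.2f} deg per comb line (intercept {intercept:.2f} deg)")
```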
_Discussion.--_In the above calculations, we have considered rotationally symmetric spin textures, for which the \(\mathbf{A}_{13}\) gauge field vanishes. However, the curl of the gauge field induced by the DMI becomes finite when the rotational symmetry is broken. We therefore envision contributions from \(\mathbf{A}_{13}\) to the fictitious magnetic field for elliptical skyrmions [68].
To summarize, we revealed the long-sought gauge fields concealed in the nonlinear magnon transport. By investigating the three-wave mixing between propagating magnons and
Figure 3: (a) MFC in the nonlinear scattering between the incident magnon and AFM skyrmion. (b) Isoline maps for right-handed (top panel) and left-handed (bottom panel) magnons scattered by the skyrmion at the origin. In each panel, modes from left to right correspond to splitting, incident, and confluence magnons, respectively.
Figure 4: (a) The Hall angle of the main branch of “bunny ears” as a function of the driving frequency for the incident (black), confluence (blue), and splitting (red) modes. Symbols are micromagnetic simulations and curves are analytical fitting by solving Eq. (6). (b) The Hall angle as a function of the mode index \(m\) for different incident magnon frequencies. Symbols and lines represent micromagnetic simulations and linear fittings, respectively.
breathing skyrmions, we found giant Hall angles emerging for each nonlinear spectrum line of the MFC. We further identified that the sign of the Hall angle is reversed by switching the chirality of the incident magnons, and we dub this phenomenon the nonlinear topological magnon spin Hall effect. Our findings are intimately connected to the particle-number nonconservation of magnons and are thus applicable to generic bosons; they have no low-energy fermionic counterpart. Our results significantly advance the understanding of the nonlinear Hall effect and pave the way to probing gauge fields by frequency combs.
This work was funded by the National Key Research Development Program under Contract No. 2022YFA1402802 and the National Natural Science Foundation of China (NSFC) (Grant No. 12074057). Z.W. acknowledges financial support from the NSFC (Grant No. 12204089) and the China Postdoctoral Science Foundation under Grant No. 2019M653063. H.Y.Y. acknowledges the European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie Grant Agreement SPINCAT No. 101018193.
|
2303.02776 | A Low-Cost Portable Apparatus to Analyze Oral Fluid Droplets and
Quantify the Efficacy of Masks | Every year, about 4 million people die from upper respiratory infections.
Mask-wearing is crucial in preventing the spread of pathogen-containing
droplets, which is the primary cause of these illnesses. However, most
techniques for mask efficacy evaluation are expensive to set up and complex to
operate. In this work, a novel, low-cost, and quantitative metrology to
visualize, track, and analyze orally-generated fluid droplets is developed. The
project has four stages: setup optimization, data collection, data analysis,
and application development. The metrology was initially developed in a dark
closet as a proof of concept using common household materials and was
subsequently implemented into a portable apparatus. Tonic water and UV
darklight tube lights are selected to visualize fluorescent droplet and aerosol
propagation with automated analysis developed using open-source software. The
dependencies of oral fluid droplet generation and propagation on various
factors are studied in detail and established using this metrology.
Additionally, the smallest detectable droplet size was mathematically
correlated to height and airborne time. The efficacy of different types of
masks is evaluated and associated with fabric microstructures. It is found that
masks with smaller-sized pores and thicker material are more effective. This
technique can easily be constructed at home using materials that total to a
cost of below \$60, thereby enabling a low-cost and accurate metrology. | Ava Tan Bhowmik | 2023-03-05T21:30:49Z | http://arxiv.org/abs/2303.02776v1 | # A Low-Cost Portable Apparatus to Analyze Oral Fluid Droplets and Quantify the Efficacy of Masks
###### Abstract
Every year, about 4 million people die from upper respiratory infections. Mask-wearing is crucial in preventing the spread of pathogen-containing droplets, which is the primary cause of these illnesses. However, most techniques for mask efficacy evaluation are expensive to set up and complex to operate. In this work, a novel, low-cost, and quantitative metrology to visualize, track, and analyze orally-generated fluid droplets is developed. The project has four stages: setup optimization, data collection, data analysis, and application development. The metrology was initially developed in a dark coset as a proof of concept using common household materials and was subsequently implemented into a portable apparatus. Tonic water and UV darklight tube lights are selected to visualize fluorescent droplet and aerosol propagation with automated analysis developed using open-source software. The dependencies of oral fluid droplet generation and propagation on various factors are studied in detail and established using this metrology. Additionally, the smallest detectable droplet size was mathematically correlated to height and airborne time. The efficacy of different types of masks is evaluated and associated with fabric microstructures. It is found that masks with smaller-sized pores and thicker material are more effective. This technique can easily be constructed at home using materials that total to a cost of below $60, thereby enabling a low-cost and accurate metrology.
## I Introduction
The COVID-19 pandemic is the most devastating global health crisis since the 1918 influenza pandemic [1-4]. As of December 2022, the SARS-CoV-2 virus has infected over 656 million people and caused 6.67 million fatalities worldwide [5]. Besides COVID-19, the influenza virus infects approximately one billion people worldwide and inflicts close to 650 thousand deaths annually [6]. Globally, respiratory syncytial virus (RSV) infects 64 million people and causes 160,000 deaths each year [7]. Contagious respiratory diseases spread through the corresponding pathogens in saliva or mucus droplets generated by infected individuals when they breathe, talk, cough, or sneeze. Inhalation of such particles often causes infection [8]. However, the propagation of these infectious droplets can be obstructed by wearing an effective mask [9]. Proper mask usage could potentially prevent 4 million acute upper respiratory infection-induced deaths each year [10].
Standard masks such as N95 respirators are proven to be the most efficient at filtering orally-generated fluid droplets, blocking 95% of airborne particles [11]. Top-grade personal protective equipment (PPE) should be prioritized for medical first responders and other healthcare workers, especially amidst PPE shortages [12]. Additionally, surgical and N95 masks are single-use and are made out of non-recyclable plastics that are harmful to the environment when disposed of [13-15]. Although KN-95 masks have recently become prevalent, many people are not wearing them correctly or frequently [16]. As a result, the general public is encouraged by the Centers for Disease Control and Prevention (CDC) to wear alternate face coverings such as homemade or commercially available fabric masks [17]. However, many such masks are ineffective, and there is no easy and low-cost method for determining the efficacy of specific, individual masks accurately. Quantitative experimental setups that have been reported require a laboratory, complex equipment, and experience to operate. They are generally not accessible and repeatable in home environments. It is both costly and impractical to test every single kind of mask in a lab setting. A lack of accessible methods to verify mask efficacy may result in people unknowingly using masks that do not effectively block droplets.
In this work, a novel, low-cost, portable, and accurate apparatus to visualize orally generated fluid droplets and quantify mask efficacy has been developed. This method is the world's first fluorescence-based technique for oral droplet visualization utilizing common household materials. It is based on the fluorescent properties of tonic water, a common beverage, which makes the oral droplets generated by expiratory events visible under UV darklight. The fluorescent emission is captured in slow-motion using a smartphone camera and processed and analyzed using open-source software. Detected droplet size was estimated based on the test conditions and the flight time of the droplets with a corresponding mathematical model. The results of these analyses are applied to determine the sensitivity of the metrology and the efficacy of masks.
## II Prior Research
Past reports on oral fluid droplet visualization require advanced equipment and are complex to operate. Representative approaches regarding the detection and/or visualization of respiratory droplets are summarized below.
Bourouiba et al. [18] use light scattering with high-speed videography to visualize the cloud of saliva and mucous droplets expelled by a sneeze. Saliva droplets propelled from an individual's mouth or nose scatter photons, making them visible against a black background. This method is accurate and can capture fast-moving droplets, but other foreign particles present in the test chamber would also scatter light, resulting in unwanted signal. To address this problem, a high-efficiency particulate air (HEPA) filter is
used. However, the filtration system still allows some dust particles to enter the test chamber [19]. In addition, a high-power light source is needed to generate sufficient signal from the droplets, and the wavelength of the light scattered by the droplets is the same as that of the source light, which reduces the signal-to-noise ratio (SNR) for detection. A better way to avoid this specific issue is to utilize fluorescence, where the emission wavelength is longer than the excitation wavelength, allowing the source light to be filtered out [20].
Another type of experiment utilizes a laser light sheet rather than large-area illumination to avoid the source light being captured by the camera and increasing the background noise [21]. However, this technique can only visualize droplets passing through the thin light sheet at a given moment. Furthermore, the powerful laser can cause eye damage and visual impairment if not handled properly [22]. In addition, the expensive equipment such as HEPA filters, high-intensity lasers, and high-speed cameras renders these experiments inaccessible in home environments.
To lower the overall cost of mask efficacy evaluation, several low-cost techniques could potentially be used for testing in a home setting. Unfortunately, these simple methods for mask efficacy determination, such as the "candle flame test" [23] or the hydrophobic coating test, are not quantitative and lack accuracy. In the example of the candle test, in which a subject attempts to blow out a candle while wearing a mask, outside variables, such as the type of candle and personal lung strength, can affect the outcome. In addition, blowing may not be a reliable proxy for small aerosols exiting with normal speaking or coughing. In the hydrophobic coating test, the presence of a hydrophobic covering may not necessarily correlate with mask efficacy, as factors like pore size and fabric layering affect mask efficacy too. One other test method that is potentially feasible at home uses colored dyes to quantify the number of droplets generated by an expiratory event [24]. This system has several limitations: the microdroplets that are less than 10 microns in diameter cannot be tracked and recorded while airborne due to the low SNR and the high viscosity of the droplets, thus eliminating the possibility of studying the trajectory and propagation of the particles. The dye droplets leave a distorted mark when landing on the ground, causing some stains to merge and ruining the analysis. Additionally, the dyes could potentially be toxic to ingest, unpleasant to taste, and difficult to clean up.
## III **Metrology Development**
The goal of the metrology development phase was to leverage common household items for quantitative and accurate droplet detection, use smartphone-based high-speed videography, and apply open-source software for image processing and analysis. This development project consists of four major modules: setup optimization, data collection, data processing and analysis, and application development, as illustrated in Figure 1. For setup optimization, various fluorescent liquids, UV light sources, and data collection settings were characterized and the optimal combination of conditions determined. During data collection, recordings of the fluorescent microdroplets are captured with a smartphone. The data is processed and analyzed using an automated macro. This metrology can be applied to a broad range of applications, including the study of droplet generation and propagation, and mask efficacy testing, and has been built into a portable prototype for education purposes to demonstrate the spread of respiratory diseases. Details for each module are described in the following sections.
### _Setup and Material Optimization_
During the setup and material optimization phase of the project, independent variables such as fluorescent liquid choices, UV light source selection and configuration, and test setup conditions were evaluated and determined. For each variable, background research, comparisons, multiple testings, and analysis were performed.
#### Iii-A1 Fluorescent Liquid Selection
The first step of the research is to select an ingestible fluorescent liquid that has a similar viscosity to saliva and fluoresces brightly enough to be detected and captured using an iPhone camera. This liquid is used as a proxy for oral fluid droplets. After studying a wide range of common household products, tonic water was found to be the only liquid that fulfilled all the criteria. Tonic water contains a fluorophore called quinine [25]. The excitation wavelength range of quinine is 270 to 400 nm and the emission spectrum is 380 to 530 nm, giving tonic water its signature blue glow under UV dark light [26].
To optimize fluorescent intensity, various quinine-containing liquids and concentrations were evaluated. Figure 2 shows comparison images of the test liquids with tap water as a control. A notable candidate was East India Tonic Syrup, advertised to contain 5 times the quinine concentration of tonic water. However, as is apparent in Figure 2A, the Schweppes tonic water was significantly brighter than the tonic syrup. This is because the tonic syrup contains cinchona bark, from which quinine is extracted, rather than pure quinine. The impact of tonic water concentration was tested at 33%, 50%, 100%, and 200%, obtained by dilution or evaporation. The results are shown in Figure 2B. Although evaporating tonic water to obtain a higher concentration resulted in a higher brightness level than regular tonic water, regular Schweppes tonic water was found to be the most suitable for the experiment since it introduces the least variability without risking possible quinine decomposition from high temperature [27].
#### Iii-A2 UV Light Source Selection
Both a UV blacklight flashlight (385-395 nm wavelength) and UV darklight party tube lights (397-402 nm wavelength) with different power settings and configurations were tested. The tube light was determined to be more suitable than the flashlight since it provides uniform intensity over the illumination field, whereas the flashlight has a radial intensity decay towards the edge of the field. When a flashlight is used, only droplets passing through the center of the light beam are visible.
Additionally, the party tube light's wavelength is safe for exposure to human eyes and skin [28].
#### Iii-B3 Test Setup Condition Optimization
Figure 3 shows a matrix of different setup conditions experimented with to determine the optimal configuration for final testing. This includes the color of the background and the state of the room light. A spray bottle is used during these calibration tests to ensure low variability in the process of droplet generation. It is determined that data collected with a black background in a dark room are optimal. This is because the black background absorbs the incoming UV light rather than reflecting it. Along with the removal of the background room light, these settings had the lowest noise and resulted in an increased SNR.
#### Iii-B4 Final Metrology Setup
Figure 4 shows the final metrology development setup constructed in a closet as the proof of concept. Besides the conditions outlined in Section III.1.3, several other conditions are optimized. It is found that since the light field from the tube light spreads out in a trapezoidal shape, the tube lights should hang at least 14 inches away from the back wall to minimize UV light reflection appearing in the video. The ideal horizontal distance between the iPhone camera and the UV tube lights is around 16 inches. At this distance, the camera can capture the entire length of the UV tube lights in the field of view to track the trajectory of the droplets without having to digitally zoom in, which can potentially reduce image quality. The optimized vertical distance between the user's mouth and the UV tube light is 6 inches. This ensures that the droplet cloud is as close to the light as possible for the highest illumination without being obstructed.
As seen in Figure 4, the final setup consists of UV tube lights suspended from the closet hanger rod 14 inches away from the back wall, 16 inches away from the camera, and 6 inches below the source of the droplets, an iPhone camera placed on a tripod, a spray bottle filled with quinine-containing tonic water for calibration, and a black poster board or towel to serve as the dark background.
Fig. 1: The flowchart of the metrology development process. The four modules are setup optimization, data collection, data analysis, and application development.
Fig. 2: Comparison of the fluorescence intensity of (A) various quinine-containing liquids with tap water as a control, (B) different concentrations of tonic water under UV illumination.
### _Data Collection_
During data collection, the test subject first wets their mouth with tonic water dispensed from a spray bottle to reduce variability in the tonic water volume. Then, the person performs an expiratory event (speaking, sneezing, or coughing). For each trial, the loudness level of speech is measured by Starkey's SoundCheck App and recorded. When equivalent conditions are compared, the loudness level is calibrated to remain constant. The iPhone camera is set to slow-motion mode ("slow-mo") and used for recording at 240 frames per second (fps). Using this process, the droplets generated are visualized, recorded, and analyzed as described in Section 3.3.
### _Data Analysis_
The video of the aerosol cloud generated by an expiratory event is exported as separate frames using the open-source VLC software package [29]. This is performed using VLC's scene video filter, selectable under the "Preferences" menu. The recording ratio should be set to 1 to ensure that no compression is applied and every single frame in the video is saved. Next, the exported images from each frame are imported into ImageJ from the National Institutes of Health (NIH) [30] as an image stack for processing and analysis. In ImageJ, the frames can be enhanced by histogram adjustment. The enhanced images can be seen in the image montage shown in Figure 5. The montage shows the progression of the microdroplet cloud generated by the word "Fruits" being spoken at 92 decibels (dB) \(\pm\) 3 dB. This technique can capture the microdroplets lingering in the air long after their dispersal. This shows that 6 feet of social distancing alone is not enough to combat COVID-19, as the contagious particles can stay in the air long after a sick individual has left the area. For quantitative analysis, the mean brightness values of each frame are collected using the "Measure..." function on the ImageJ software menu. The mean brightness values can then be graphed in an intensity vs. time plot to quantify and visualize the generation and dissipation of droplets and aerosols, as described in the Results and Discussion section below. This entire analysis process is automated by coding a macro in ImageJ, which reduces the analysis time from 30 minutes to 30 seconds.
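The same per-frame measurement can also be reproduced outside the VLC + ImageJ toolchain. The following Python sketch (OpenCV and matplotlib are assumed substitutes for that toolchain, and the file name is hypothetical) reads a slow-motion clip directly and plots the mean brightness of each frame:

```python
import cv2                        # pip install opencv-python
import numpy as np
import matplotlib.pyplot as plt

def mean_brightness(video_path, fps=240):
    """Per-frame mean gray level of a clip, mirroring ImageJ's 'Measure' step."""
    cap = cv2.VideoCapture(video_path)
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        values.append(gray.mean())
    cap.release()
    t = np.arange(len(values)) / fps      # slo-mo timeline at 240 fps
    return t, np.array(values)

t, b = mean_brightness("fruits_92dB.mov")   # hypothetical file name
plt.plot(t, b)
plt.xlabel("time (s)")
plt.ylabel("mean brightness")
plt.title("Droplet generation and dissipation")
plt.show()
```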
### _Applications_
Once the initial proof of concept metrology was developed in the closet, the setup was implemented into a consolidated, portable prototype in a cardboard box. This prototype can be used for mask or instrument cover evaluation purposes. Furthermore, the simple visualization of the oral fluid droplets can be used for educational purposes around the world on how contagious respiratory diseases spread through aerosols.
#### Iii-D1 Portable Prototype Development and Implementation
A portable prototype, shown in Figure 6, is constructed using a cardboard box, 2 UV tube lights, an iPhone, and black paper. The cost of the setup is less than $60 if a phone is already in possession; the cost breakdown is shown in Table 1.
The size of the prototype displayed in Figure 6 is 38.5 inches x 35.5 inches x 28.5 inches, which can be reduced to as small as 20 inches x 30 inches x 19.5 inches. The UV
Fig. 3: Comparison of different test setup conditions and their impact on the results. The independent variables changed are the color of the background and the state of the room light.
Figure 4: The final experimental setup and materials used for the metrology development.
Figure 5: Propagation of microdroplets generated by “Fruits” spoken at 92 dB. The video is captured in slo-mo mode at 240 fps. The frames extracted from the video are shown in (A)-(L) with 38-millisecond intervals.
tube light is placed 16 inches from the camera mounted on the front wall to ensure that the entire illumination field was captured in the field of view and 14 inches from the back to reduce the reflection of UV light.
In the original single-light setup, the illumination field intensity decreases with distance from the light source; as the droplets fall farther from the light, the bottom half of the image frame receives lower illumination, resulting in a weaker signal. To further optimize the setup, a second tube light is added 20 inches away from the top light at the bottom, facing up, to produce a combined, more uniform illumination field, as shown in Figures 7 and 8A. The analysis illustrates that the bottom-half illumination showed a 4x increase in brightness, indicating not only an overall intensity increase but also better signal uniformity, which can be seen in Figure 8B1. Additionally, with two lights, the droplets did not experience the same drop in intensity, as shown in Figure 8B2.
Adding a second light improves the illumination intensity by 1.5-2x, as shown in Figure 9. In this study, three words were spoken at 92 dB. There were three trials for each word spoken under 1 vs. 2 UV lights. When the words "Fruits", "Phosphorus", and "Philosophy" were spoken under two lights versus the one light test conditions, the intensity of the signal improved 2.1x, 1.5x, and 1.5x, respectively. It was also found that data collected using the new setup has 1-3% lower variability across 3 trials under otherwise identical conditions.
#### Iv-B2 Mask Efficacy Comparison
This portable apparatus can be applied to study the generation, trajectory, and dissipation of orally-generated microdroplets for quantitative and in-depth study of the mechanics of droplet propagation. People use cloth masks for many different reasons: to mitigate the environmental impact of disposable masks, for comfort, or because of limited access to medical masks. Thus, having access to a low-cost and convenient setup as described in this work is important for individuals to evaluate whether the masks that they are using are effective. Better understanding of aerosol physics may permit specialized mask designs specifically suited towards blocking the user's droplets from escaping when speaking, coughing, and sneezing. Furthermore, such a setup would expedite the process of developing and testing revolutionary materials, such as copper foam [31], that would mitigate the environmental impact of widespread face mask use. Additionally, this setup would be useful in areas with limited resources, not only to demonstrate the mechanism of virus spread but also for the aforementioned mask evaluation purposes.
With the prototype described above, a number of trials were repeated with the subject wearing a different mask each time. The masks tested are made of thin cotton, thick cotton, linen, thin polyester, thick polyester, surgical material, and N95 material. The peak brightness values from the recordings of each trial for a given mask were recorded. The efficacy of each tested mask is determined and correlated to its nanostructure using averaged peak intensity values and the scanning electron microscopy (SEM) images of the different fabrics, which can be seen in Figure 10. The larger the pores and the greater their density, the more droplets escape. Thinner material is correlated with more droplets being generated. Thick and thin cotton are both porous and the least effective: not only did they allow small droplets to pass through, but they also allowed larger droplets to be broken up into aerosols when they encountered the holes. Linen blocked many aerosols in addition to small and big droplets but still let a considerable amount through. Polyester barely let any droplets through, and the surgical and N95 masks were very effective.
As compared in Figure 11, of all the face coverings tested (both homemade and standard), the N95 mask (Figure 10F) predictably performs the best. A thick polyester mask (Figure 10D) is the most effective cloth mask at blocking the droplets. It can be observed that the droplets escaping from the cotton masks (Figure 10A, 10B) are significantly smaller than those with the linen face covering (Figure 10C), almost a fine mist compared to the distinctive, individual droplets. This may be due to the initial droplets passing through the holes in the cotton fabric becoming broken up into smaller droplets. However, in linen, due to the larger pore size, it is possible that some droplets passing through the fabric will merge
Fig. 6: A miniaturized and optimized portable prototype based on the original metrology built in a cardboard box. (A) shows the setup with labeled dimensions and (B) during operation.
Fig. 7: Comparison of the setup, light field, and results with one light versus two lights.
on the other side. As a result, while cotton (Figure 10A, 10B) masks can help hold back oral droplets, it is possible that wearing them may be even more dangerous than not wearing a mask at all due to droplets that would otherwise fall to the ground quickly being converted into aerosols that linger in the air.
## IV **Discussions**
Using the experimental setup, data collection, and automated analysis procedures described in Section 3, the effects of variables such as the phonic sound, loudness levels, and type of expiratory event on the amount and propagation characteristics of droplets generated were studied. In this section, the results of these analyses are discussed.
### _Dependency of Droplet Propagation on Droplet Size_
From the data collected with this prototype, the size of the droplets can be modeled and correlated. The results reaffirm the already well-established fact [32, 33] that aerosols, which are droplets smaller than 10 microns in diameter, float in the air for prolonged periods of time and eventually dehydrate, making them particularly infectious. The dehydrated nuclei of the aerosols can stay in the air for minutes or even hours. As a result, they are much more dangerous than larger droplets that immediately fall to the ground following expulsion. The mean time for a particle to reach the ground can be calculated using the equation:

\[\tau_{sed}\;=\;\phi\,\frac{z_{0}}{R^{2}}\;,\]

where \(\tau_{sed}\) is the time that it takes for a droplet of radius \(R\) to reach the ground from a height \(z_{0}\) (in micrometers) [34]. The prefactor, \(\phi=9\eta/(2\rho g)=0.85*10^{-2}\mu m*s\), is calculated based on the viscosity of air at 25 \({}^{\circ}C\), \(\eta=1.86*10^{-8}g/(\mu m*s)\), water density \(\rho=10^{-12}g/\mu m^{3}\), and the gravitational constant \(g=9.8*10^{6}\mu m/s^{2}\). Using this equation, the time it takes for a droplet to fall to the ground in the absence of evaporation can be calculated based on the size of the droplet. For example, droplets placed initially at \(z_{0}=1.5m\) (the average height above ground for the mouth of a standing human adult) with radii of 1, 10, or 100 \(\mu m\) will require \(1.3*10^{4}s(3.5hrs)\), 130 s, and 1.3 s, respectively, to fall to the ground.
Based on the amount of time it took for a droplet to fall out of the frame of the video (10 cm), tracked using ImageJ, the equation is used to estimate the radius of the droplets captured by this metrology. It is determined that this setup is able to capture droplets as small as 9.2 \(\mu m\), which is below the threshold at which particles are considered aerosols. Thus, this setup can be used to thoroughly evaluate a mask's performance at blocking not only droplets but also aerosols. Figure 12 depicts the trajectories of two different droplets. The bigger droplet took 110 ms to fall 10 cm, compared to 167 ms for the small droplet; they have calculated radii of 88 \(\mu m\) and 71 \(\mu m\), respectively. The trajectory of the big droplet is steeper, so it covers a shorter horizontal distance than the relatively smaller droplet.
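A minimal Python sketch of these calculations, with the prefactor and example values taken from the text, is:

```python
PHI = 0.85e-2    # prefactor 9*eta/(2*rho*g) in um*s, for water droplets in air at 25 C

def fall_time(R_um, z0_um):
    """Sedimentation time tau_sed = phi * z0 / R^2 (no evaporation)."""
    return PHI * z0_um / R_um**2

def radius_from_fall(t_s, z0_um):
    """Invert the sedimentation formula to estimate a droplet radius."""
    return (PHI * z0_um / t_s) ** 0.5

# Fall times from z0 = 1.5 m for R = 1, 10, 100 um (expect ~1.3e4 s, 130 s, 1.3 s):
for R in (1, 10, 100):
    print(f"R = {R:3d} um  ->  tau = {fall_time(R, 1.5e6):10.1f} s")

# Radii of the two droplets of Figure 12 (10 cm = 1e5 um fall in 110 ms and 167 ms):
for tau in (0.110, 0.167):
    print(f"tau = {tau:.3f} s  ->  R = {radius_from_fall(tau, 1e5):.0f} um")
```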
### _Dependency of Droplet Generation and Propagation on Phonics_
Testing results on spoken words showed that the "F" and "Ph" phonics generated substantially more droplets compared to vowels and other consonants, as can be seen in Figure 13.
A peak in the mean brightness of the graph indicates a sudden influx of droplets. The timestamps of these peaks correspond to whenever the "F" sound is spoken; thus, the
Fig. 8: (A1) and (A2) show droplets captured under one and two lights. (B1) shows the brightness of the frame’s bottom half over time under the different light configurations. (B2) shows the brightness of a single droplet under the different light configurations.
generation of excess oral fluid droplets can be attributed to the utterance of the "F" sound. This is likely because the vibration of the front teeth against the bottom lip while uttering these sounds propels the droplets outward. The same logic can be extended to the "Th" sound. As a result, a cloud of fluid is expelled, making these phonics more dangerous to utter in public because they can easily spread a respiratory pathogen.
### _Dependency of Droplet Generation and Propagation on Loudness_
It was also observed that the amount of fluid expelled during speech increased with the loudness level of the sound. Figure 14A shows the superimposed mean brightness vs. time graphs for different loudness levels of speech, and the relationship between microdroplet generation and the loudness levels. Louder speech generates more droplets, as indicated by the brightness levels from the data analysis. In addition, the droplets generated by louder speech linger in the air for longer than those from softer speech. This means that not only does louder speech result in greater microdroplet generation, but it also propels droplets further. The correlation between loudness level and the peak intensity of the recording is shown in Figure 14B. The greater the loudness level, the more droplets are generated.
### _Dependency of Droplet Generation and Propagation on Expiratory Event_
In this section, a range of expiratory events, including speech, coughing, and sneezing, were studied. As can be observed in Figure 15, the droplet cloud expelled by a sneeze lingered for much longer than the droplets from other expiratory events. This may be due to variations in the aerosol-to-droplet ratio generated by different expiratory events. Since aerosols remain in the air for prolonged periods of time, a higher aerosol-to-droplet ratio would result in a much longer dissipation time. Orally generated aerosols are considered to be more dangerous than larger droplets in terms of disease spread due to their extended airborne period [35]. Aerosols can travel on air currents for up to hours before settling, during which they remain infectious. So, it can be inferred that sneezing in public is more dangerous than speaking or coughing due to the increased generation of small aerosols.
## V **Conclusions and Future Prospects**
In summary, a novel, home-built, low-cost, and accurate metrology to visualize oral fluid droplets and quantify mask efficacy has been developed. This fluorescence-based technique development process consists of four major blocks: setup optimization, data collection, data analysis, and applications. In this study, droplet size and height were mathematically correlated with the amount of time it took a droplet to fall out of the frame. It is found that this prototype is capable of capturing aerosols as small as 9.2 \(\upmu\)m. The proposed system can be easily built and used in a home environment, and the experiments can be carried out in many different settings. The setup is easily scalable and can be implemented into a portable prototype costing less than $60. Compared to the original closet setup with only one UV tube light at the top, the integrated portable prototype with UV tube lights at both the top and bottom is superior in terms of illumination uniformity and intensity, thus generating data that have high intensity and low variability. Video data collected from the portable prototype displayed a 1.5-2x increase in intensity
Fig. 11: Comparison of the efficacy of the masks as measured by the peak brightness analyzed based on the recordings. The masks analyzed include thin cotton, medium cotton, thick cotton, thin polyester, thick polyester, linen, surgical and N95.
Figure 12: Trajectory of (A) a large droplet with an estimated radius of 88 \(\mu m\) and (B) a small droplet with an estimated radius of 71 \(\mu m\) ejected from an 8-gauge syringe needle. The paths of the droplets were tracked frame-by-frame and overlaid with the final snapshot using ImageJ.
Figure 13: The mean brightness of the tonic water discharge from speaking the words (A) “Fruits”, (B) “Phosphorus”, and (C) “Philosophy” were integrated over each frame and plotted over time.
Figure 14: Graphs of (A) mean brightness over time and (B) the peak brightness values for the word “fruits” spoken at different loudness levels.
Figure 15: Three different expiratory events, including cough, sneeze, and speech, were recorded and analyzed. The normalized brightness of each event was plotted over time.
and 29-53% less variability compared to the closet setup. Since it is easily transportable, this technique can be used for educational and mask evaluation purposes around the globe. The setup can be democratized for widespread use in the fight against contagious respiratory infections. In the future, this could also be used for studying fluid droplet and aerosol dynamics. The use of this method was further simplified by automating the data processing and analysis, which could be packaged into a mobile application.
|
2302.03999 | Total positivity of some polynomial matrices that enumerate labeled
trees and forests. II. Rooted labeled trees and partial functional digraphs | We study three combinatorial models for the lower-triangular matrix with
entries $t_{n,k} = \binom{n}{k} n^{n-k}$: two involving rooted trees on the
vertex set $[n+1]$, and one involving partial functional digraphs on the vertex
set $[n]$. We show that this matrix is totally positive and that the sequence
of its row-generating polynomials is coefficientwise Hankel-totally positive.
We then generalize to polynomials $t_{n,k}(y,z)$ that count improper and proper
edges, and further to polynomials $t_{n,k}(y,\mathbf{\phi})$ in infinitely many
indeterminates that give a weight $y$ to each improper edge and a weight $m! \,
\phi_m$ for each vertex with $m$ proper children. We show that if the weight
sequence $\mathbf{\phi}$ is Toeplitz-totally positive, then the two foregoing
total-positivity results continue to hold. Our proofs use production matrices
and exponential Riordan arrays. | Xi Chen, Alan D. Sokal | 2023-02-08T11:19:06Z | http://arxiv.org/abs/2302.03999v3 | # Total positivity of some polynomial matrices
###### Abstract
We study three combinatorial models for the lower-triangular matrix with entries \(t_{n,k}=\binom{n}{k}n^{n-k}\): two involving rooted trees on the vertex set \([n+1]\), and one involving partial functional digraphs on the vertex set \([n]\). We show that this matrix is totally positive and that the sequence of its row-generating polynomials is coefficientwise Hankel-totally positive. We then generalize to polynomials \(t_{n,k}(y,z)\) that count improper and proper edges, and further to polynomials \(t_{n,k}(y,\boldsymbol{\phi})\) in infinitely many indeterminates that give a weight \(y\) to each improper edge and a weight \(m!\,\phi_{m}\) for each vertex with \(m\) proper children. We show that if the weight sequence \(\boldsymbol{\phi}\) is Toeplitz-totally positive, then the two foregoing total-positivity results continue to hold. Our proofs use production matrices and exponential Riordan arrays.
**Key Words:** Tree, labeled tree, rooted tree, functional digraph, partial functional digraph, tree function, Lambert \(W\) function, Ramanujan polynomials, Riordan array, exponential Riordan array, production matrix, Toeplitz matrix, Hankel matrix, totally positive matrix, total positivity, Toeplitz-total positivity, Hankel-total positivity, Stieltjes moment problem.
**Mathematics Subject Classification (MSC 2010) codes:** 05A15 (Primary); 05A19, 05A20, 05C05, 05C30, 15B05, 15B36, 15B48, 30E05, 44A60 (Secondary).
###### Contents
* 1 Introduction and statement of results
* 2 Preliminaries
* 2.1 Partially ordered commutative rings and total positivity
* 2.2 Production matrices
* 2.3 Production matrices and total positivity
* 2.4 Binomial row-generating matrices
* 2.5 Riordan arrays
* 2.6 Exponential Riordan arrays
* 2.7 A lemma on diagonal scaling
* 2.8 Lagrange inversion
* 3 Bijective proofs
* 3.1 Proof of Propositions 1.3 and 1.6
* 3.2 Proof of Propositions 1.5 and 1.7
* 4 The matrices \(\mathsf{T}\), \(\mathsf{T}(y,z)\) and \(\mathsf{T}(y,\phi)\) as exponential Riordan arrays
* 4.1 The matrix \(\mathsf{T}\)
* 4.2 The matrix \(\mathsf{T}(y,z)\)
* 4.3 The matrix \(\mathsf{T}(y,\phi)\)
* 5 Proof of Theorems 1.1, 1.2, 1.4 and 1.8
* 5.1 The matrix \(\mathsf{T}\)
* 5.2 The matrix \(\mathsf{T}(y,z)\)
* 5.3 The matrix \(\mathsf{T}(y,\phi)\)
* 5.4 More on the production matrix for \(\mathsf{T}\)
* A Interpretation of \(t_{n,k}(y,z)\) in our first combinatorial model
## 1 Introduction and statement of results
It is well known [59, 79] that the number of rooted trees on the vertex set \([n+1]\stackrel{{\rm def}}{{=}}\{1,\ldots,n+1\}\) is \(t_{n}=(n+1)^{n}\); and it is also known (though perhaps less well so) [12, 13, 75] that the number of rooted trees on the vertex set \([n+1]\) in which exactly \(k\) children of the root are lower-numbered than the root is
\[t_{n,k}\;=\;\binom{n}{k}\,n^{n-k}\;. \tag{1.1}\]
The first few \(t_{n,k}\) and \(t_{n}\) are
\[\begin{array}{c|ccccccccc|c}n\setminus k&0&1&2&3&4&5&6&7&8&(n+1)^{n}\\ \hline 0&1&&&&&&&&&&1\\ 1&1&1&&&&&&&&2\\ 2&4&4&1&&&&&&&&9\\ 3&27&27&9&1&&&&&&64\\ 4&256&256&96&16&1&&&&625\\ 5&3125&3125&1250&250&25&1&&&&7776\\ 6&46656&46656&19440&4320&540&36&1&&117649\\ 7&823543&823543&352947&84035&12005&1029&49&1&2097152\\ 8&16777216&16777216&7340032&1835008&286720&28672&1792&64&1&43046721\\ \end{array}\]
[61, A071207 and A000169].
There is a second combinatorial interpretation of the numbers \(t_{n,k}\), also in terms of rooted trees: namely, \(t_{n,k}\) is the number of rooted trees on the vertex set \([n+1]\) in which some specified vertex \(i\) has \(k\) children.1
Footnote 1: This fact ought to be well known, but to our surprise we have been unable to find any published reference. Let us therefore give two proofs:
First proof. Let \(\mathcal{T}_{n}^{\bullet}\) denote the set of rooted trees on the vertex set \([n]\), and let \(\deg_{T}(i)\) denote the number of children of the vertex \(i\) in the rooted tree \(T\). Rooted trees \(T\in\mathcal{T}_{n+1}^{\bullet}\) are associated bijectively to _Prüfer sequences_\((s_{1},\ldots,s_{n})\in[n+1]^{n}\), in which each index \(i\in[n+1]\) appears \(\deg_{T}(i)\) times [79, pp. 25–26]. There are \(\binom{n}{k}n^{n-k}\) sequences in which the index \(i\) appears exactly \(k\) times.
Equivalently, by [79, Theorem 5.3.4, eq. (5.47)],
\[\sum_{T\in\mathcal{T}_{n}^{\bullet}}\;\prod_{j=1}^{n}x_{j}^{\deg_{T}(j)}\;=\; (x_{1}+\ldots+x_{n})^{n-1}\;.\]
Replacing \(n\to n+1\) and then setting \(x_{i}=x\) and \(x_{j}=1\) for \(j\neq i\), we have
\[\sum_{T\in\mathcal{T}_{n+1}^{\bullet}}x^{\deg_{T}(i)}\;=\;(x+n)^{n}\;.\]
Extracting the coefficient of \(x^{k}\) yields \(\binom{n}{k}n^{n-k}\).
Second proof. There are \(f_{n,k}=\binom{n}{k}\,k\,n^{n-k-1}\)\(k\)-component forests of rooted trees on \(n\) labeled vertices (see the references cited in [76, footnote 1]). By adding a new vertex \(0\) and connecting it to the roots of all the trees, we see that \(f_{n,k}\) is also the number of unrooted trees on \(n+1\) labeled vertices in which some specified vertex (here vertex \(0\)) has degree \(k\). Now choose a root: if this root is \(0\), then vertex \(0\) has \(k\) children; otherwise vertex \(0\) has \(k-1\) children. It follows that the number of rooted trees on \(n+1\) labeled vertices in which some specified vertex has \(k\) children is \(f_{n,k}+nf_{n,k+1}=\binom{n}{k}n^{n-k}\).
The second proof was found independently by Ira Gessel (private communication).
And finally, there is a third combinatorial interpretation of the numbers \(t_{n,k}\)[20] that is even simpler than the preceding two. Recall first that a _functional digraph_ is a directed graph in which every vertex has out-degree 1; the terminology comes from the fact that such digraphs are in obvious bijection with functions \(f\) from the vertex set to itself [namely, \(\overrightarrow{ij}\) is an edge if and only if \(f(i)=j\)]. Let us now define a _partial functional digraph_ to be a directed graph in which every vertex has out-degree 0 or 1; and let us write \(\mathbf{PFD}_{n,k}\) for the set of partial functional digraphs on the vertex set \([n]\) in which exactly \(k\) vertices have out-degree 0. (So \(\mathbf{PFD}_{n,0}\) is the set of functional digraphs.) A digraph in \(\mathbf{PFD}_{n,k}\) has \(n-k\) edges. It is easy to see that \(|\mathbf{PFD}_{n,k}|=t_{n,k}\): there are \(\binom{n}{k}\) choices for the out-degree-0 vertices, and \(n^{n-k}\) choices for the edges emanating from the remaining vertices.
We will use all three combinatorial models at various points in this paper.
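The third model is particularly easy to check by brute force. The following Python sketch, included only as an illustrative verification, enumerates all partial functional digraphs on \([n]\) for small \(n\) and compares the counts against (1.1):

```python
from itertools import product
from math import comb

def pfd_counts(n):
    """Count partial functional digraphs on [n] by number of out-degree-0 vertices.
    Each vertex independently maps to one of the n vertices or to None (no edge)."""
    counts = [0] * (n + 1)
    for f in product(list(range(n)) + [None], repeat=n):
        k = sum(1 for v in f if v is None)   # vertices with out-degree 0
        counts[k] += 1
    return counts

for n in range(1, 6):
    brute = pfd_counts(n)
    formula = [comb(n, k) * n**(n - k) for k in range(n + 1)]
    assert brute == formula, (n, brute, formula)
    print(n, brute)   # reproduces the rows of the triangle above
```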
The unit-lower-triangular matrix \((t_{n,k})_{n,k\geq 0}\) has the exponential generating function
\[\sum_{n=0}^{\infty}\sum_{k=0}^{n}t_{n,k}\,\frac{t^{n}}{n!}\,x^{k}\ =\ \frac{e^{xT(t)}}{1-T(t)} \tag{1.2}\]
where
\[T(t)\ \stackrel{{\rm def}}{{=}}\ \sum_{n=1}^{\infty}n^{n-1}\, \frac{t^{n}}{n!} \tag{1.3}\]
is the _tree function_[19].2 An equivalent statement is that the unit-lower-triangular matrix \((t_{n,k})_{n,k\geq 0}\) is the exponential Riordan array [3, 22, 24, 70]\(\mathcal{R}[F,G]\) with \(F(t)=\sum_{n=0}^{\infty}n^{n}\,t^{n}/n!=1/[1-T(t)]\) and \(G(t)=T(t)\); we will discuss this connection in Section 4.1.
Footnote 2: In the analysis literature, expressions involving the tree function are often written in terms of the _Lambert_\(W\)_function_\(W(t)=-T(-t)\), which is the inverse function to \(w\mapsto we^{w}\)[19, 45].
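The generating-function identity (1.2) can be verified to any finite order with a computer algebra system; a minimal sympy sketch (truncating at order \(t^{5}\)) is:

```python
import sympy as sp

t, x = sp.symbols('t x')
N = 6  # verify the identity through order t^(N-1)

# Tree function T(t) = sum_{n>=1} n^(n-1) t^n / n!, truncated:
Ttree = sum(n**(n - 1) * t**n / sp.factorial(n) for n in range(1, N + 1))

rhs = sp.series(sp.exp(x * Ttree) / (1 - Ttree), t, 0, N).removeO()
lhs = sum(sp.binomial(n, k) * n**(n - k) * x**k * t**n / sp.factorial(n)
          for n in range(N) for k in range(n + 1))

assert sp.expand(lhs - rhs) == 0
print("identity (1.2) verified through order t^%d" % (N - 1))
```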
The principal purpose of this paper is to prove the total positivity of some matrices related to (and generalizing) \(t_{n}\) and \(t_{n,k}\). Recall first that a finite or infinite matrix of real numbers is called _totally positive_ (TP) if all its minors are nonnegative, and _strictly totally positive_ (STP) if all its minors are strictly positive.3 Background information on totally positive matrices can be found in [28, 34, 46, 64]; they have applications to many areas of pure and applied mathematics.4
Footnote 3: **Warning:** Many authors (e.g. [28, 33, 34, 35]) use the terms “totally nonnegative” and “totally positive” for what we have termed “totally positive” and “strictly totally positive”, respectively. So it is very important, when seeing any claim about “totally positive” matrices, to ascertain which sense of “totally positive” is being used! (This is especially important because many theorems in this subject require _strict_ total positivity for their validity.)
Footnote 4: Including combinatorics [8, 9, 10, 33, 72], stochastic processes [46, 47], statistics [46], the mechanics of oscillatory systems [34, 35], the zeros of polynomials and entire functions [2, 27, 43, 46, 48, 64], spline interpolation [37, 46, 68], Lie theory [32, 54, 55, 56] and cluster algebras [30, 31], the representation theory of the infinite symmetric group [6, 83], the theory of immanants [80], planar discrete potential theory [21, 29] and the planar Ising model [53], and several other areas [37].
Our first result is the following:
**Theorem 1.1**.:
1. _The unit-lower-triangular matrix_ \(\mathsf{T}=(t_{n,k})_{n,k\geq 0}\) _is totally positive._
2. _The Hankel matrix_ \(H_{\infty}(\mathbf{t}^{(0)})=(t_{n+n^{\prime},0})_{n,n^{\prime}\geq 0}\) _is totally positive._
It is known [64, 35] that a Hankel matrix of real numbers is totally positive if and only if the underlying sequence is a Stieltjes moment sequence, i.e. the moments of a positive measure on \([0,\infty)\). And it is also known that \((n^{n})_{n\geq 0}\) is a Stieltjes moment sequence.5
Footnote 5: The integral representation [7][45, Corollary 2.4]
\[\frac{n^{n}}{n!}\;=\;\frac{1}{\pi}\int\limits_{0}^{\pi}\!\left(\frac{\sin\nu}{ \nu}\,e^{\nu\cot\nu}\right)^{n}d\nu\]
shows that \(n^{n}/n!\) is a Stieltjes moment sequence. Moreover, \(n!=\int_{0}^{\infty}x^{n}\,e^{-x}\,dx\) is a Stieltjes moment sequence. Since the entrywise product of two Stieltjes moment sequences is easily seen to be a Stieltjes moment sequence, it follows that \(n^{n}\) is a Stieltjes moment sequence. But we do not know any simple formula (i.e. one involving only a single integral over a real variable) for its Stieltjes integral representation. So Theorem 1.1(b) is equivalent to this known result. But our proof here is combinatorial and linear-algebraic, not analytic.
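The integral representation in footnote 5 is easy to test numerically. The sketch below (ours, using naive midpoint quadrature on a uniform grid, which avoids the endpoint singularities) confirms it for \(n=1,\ldots,5\) to within \(10^{-5}\):

```python
from math import sin, tan, exp, pi, factorial

def integrand(nu, n):
    # (sin(nu)/nu * e^{nu cot nu})^n, with nu cot nu written as nu/tan(nu)
    return ((sin(nu) / nu) * exp(nu / tan(nu))) ** n

for n in range(1, 6):
    steps = 100_000
    h = pi / steps
    approx = sum(integrand((i + 0.5) * h, n) for i in range(steps)) * h / pi
    exact = n**n / factorial(n)
    assert abs(approx - exact) < 1e-5, (n, approx, exact)
print("integral representation confirmed numerically for n = 1,...,5")
```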
However, this is only the beginning of the story, because our main interest [73, 74, 77] is not with sequences and matrices of real numbers, but rather with sequences and matrices of _polynomials_ (with integer or real coefficients) in one or more indeterminates \(\mathbf{x}\): in applications they will typically be generating polynomials that enumerate some combinatorial objects with respect to one or more statistics. We equip the polynomial ring \(\mathbb{R}[\mathbf{x}]\) with the coefficientwise partial order: that is, we say that \(P\) is nonnegative (and write \(P\succeq 0\)) in case \(P\) is a polynomial with nonnegative coefficients. We then say that a matrix with entries in \(\mathbb{R}[\mathbf{x}]\) is _coefficientwise totally positive_ if all its minors are polynomials with nonnegative coefficients; and we say that a sequence \(\boldsymbol{a}=(a_{n})_{n\geq 0}\) with entries in \(\mathbb{R}[\mathbf{x}]\) is _coefficientwise Hankel-totally positive_ if its associated infinite Hankel matrix \(H_{\infty}(\boldsymbol{a})=(a_{n+n^{\prime}})_{n,n^{\prime}\geq 0}\) is coefficientwise totally positive.
Returning now to the matrix \(\mathsf{T}=(t_{n,k})_{n,k\geq 0}\), let us define its _row-generating polynomials_ in the usual way:
\[T_{n}(x)\;=\;\sum_{k=0}^{n}t_{n,k}\,x^{k}\;. \tag{1.4}\]
From the definition (1.1) we obtain the explicit formula
\[T_{n}(x)\;=\;(x+n)^{n}\;. \tag{1.5}\]
Our second result is then:
**Theorem 1.2**.: _The polynomial sequence \(\boldsymbol{T}=\big{(}T_{n}(x)\big{)}_{n\geq 0}\) is coefficientwise Hankel-totally positive. [That is, the Hankel matrix \(H_{\infty}(\boldsymbol{T})=\big{(}T_{n+n^{\prime}}(x)\big{)}_{n,n^{\prime} \geq 0}\) is coefficientwise totally positive.]_
Theorem 1.2 strengthens Theorem 1.1(b), and reduces to it when \(x=0\). The proof of Theorem 1.2 will be based on studying the _binomial row-generating matrix_\(\mathsf{T}B_{x}\), where \(B_{x}\) is the weighted binomial matrix
\[(B_{x})_{ij}\;=\;\binom{i}{j}\,x^{i-j} \tag{1.6}\]
(see Sections 2.4 and 2.6).
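Theorem 1.2 concerns all minors of an infinite matrix, so no finite computation can prove it; still, a finite check is reassuring. The following sympy sketch (our own illustration) expands every minor of size at most \(3\) of the \(6\times 6\) truncation of \(H_{\infty}(\boldsymbol{T})=\big((x+n+n^{\prime})^{n+n^{\prime}}\big)\) and verifies that all coefficients are nonnegative:

```python
import sympy as sp
from itertools import combinations

x = sp.symbols('x')
N = 6
H = sp.Matrix(N, N, lambda n, m: (x + n + m)**(n + m))   # truncation of H_inf(T)
for size in (1, 2, 3):
    for rows in combinations(range(N), size):
        for cols in combinations(range(N), size):
            minor = sp.expand(sp.Matrix([[H[i, j] for j in cols] for i in rows]).det())
            assert all(c >= 0 for c in sp.Poly(minor, x).all_coeffs())
print("all minors of size <= 3 of the 6x6 truncation are coefficientwise nonnegative")
```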
But this is not the end of the story, because we want to generalize these polynomials further by adding further variables. Given a rooted tree \(T\) and two vertices \(i,j\) of \(T\), we say that \(j\) is a _descendant_ of \(i\) if the unique path from the root of \(T\) to \(j\) passes through \(i\). (Note in particular that every vertex is a descendant of itself.) Now suppose that the vertex set of \(T\) is totally ordered (for us it will be \([n+1]\)), and let \(e=ij\) be an edge of \(T\), ordered so that \(j\) is a descendant of \(i\). We say that the edge \(e=ij\) is _improper_ if there exists a descendant of \(j\) (possibly \(j\) itself) that is lower-numbered than \(i\); otherwise we say that \(e=ij\) is _proper_. We denote by \(\operatorname{imprope}(T)\) [resp. \(\operatorname{prope}(T)\)] the number of improper (resp. proper) edges in the tree \(T\).
We now introduce these statistics into our second combinatorial model. Let \(\mathcal{T}_{n}^{\langle i;k\rangle}\) denote the set of rooted trees on the vertex set \([n]\) in which the vertex \(i\) has \(k\) children. For the identity \(|\mathcal{T}_{n+1}^{\langle i;k\rangle}|=t_{n,k}\), we can use any \(i\in[n+1]\); but for the following we specifically want to take \(i=1\). With this choice we observe that the \(k\) edges from the vertex \(1\) to its children are automatically proper. We therefore define
\[t_{n,k}(y,z)\ =\ \sum_{T\in\mathcal{T}_{n+1}^{\langle i;k\rangle}}y^{ \operatorname{imprope}(T)}z^{\operatorname{prope}(T)-k}\:. \tag{1.7}\]
Clearly \(t_{n,k}(y,z)\) is a homogeneous polynomial of degree \(n-k\) with nonnegative integer coefficients; it is a polynomial refinement of \(t_{n,k}\) in the sense that \(t_{n,k}(1,1)=t_{n,k}\). (Of course, it was redundant to introduce the two variables \(y\) and \(z\) instead of just one of them; we did it because it makes the formulae more symmetric.) The first few polynomials \(t_{n,k}(y,1)\) are
\begin{tabular}{c|lllll}
\(n\setminus k\) & 0 & 1 & 2 & 3 & 4 \\
\hline
0 & 1 & & & & \\
1 & \(y\) & 1 & & & \\
2 & \(y+3y^{2}\) & \(1+3y\) & 1 & & \\
3 & \(2y+10y^{2}+15y^{3}\) & \(2+10y+15y^{2}\) & \(3+6y\) & 1 & \\
4 & \(6y+40y^{2}+105y^{3}+105y^{4}\) & \(6+40y+105y^{2}+105y^{3}\) & \(11+40y+45y^{2}\) & \(6+10y\) & 1 \\
\end{tabular}

The coefficient matrix of the zeroth-column polynomials \(t_{n,0}(y,1)\) is [61, A239098/A075856]. This table also suggests the following result, for which we will give a bijective proof:
**Proposition 1.3**.: _For \(n\geq 1\), \(t_{n,0}(y,z)=y\,t_{n,1}(y,z)\)._
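The table can be checked by brute force. The following Python/sympy sketch (our own illustration, not part of the paper's arguments) enumerates all \(64\) rooted trees on the vertex set \([4]\) via parent maps, records the number \(k\) of children of vertex \(1\) and the number of improper edges, and reproduces the row \(n=3\) of the table at \(z=1\); it also confirms Proposition 1.3 for \(n=3\):

```python
from itertools import product
import sympy as sp

y = sp.symbols('y')
m = 4                                     # vertex set {1,...,4}, so n = 3

def chain_reaches_root(v, parent, root):
    steps = 0
    while v != root:
        v = parent[v]
        steps += 1
        if steps > m:                     # a cycle: not a tree
            return False
    return True

def descendants(j, parent):
    # vertices whose path to the root passes through j (including j itself)
    result = set()
    for v in parent:                      # non-root vertices
        w = v
        while True:
            if w == j:
                result.add(v)
                break
            if w not in parent:           # reached the root
                break
            w = parent[w]
    return result

row = {}
for root in range(1, m + 1):
    others = [v for v in range(1, m + 1) if v != root]
    for choice in product(range(1, m + 1), repeat=len(others)):
        parent = dict(zip(others, choice))
        if not all(chain_reaches_root(v, parent, root) for v in others):
            continue
        k = sum(1 for j in others if parent[j] == 1)
        improper = sum(1 for j in others              # edge parent[j] -> j
                       if min(descendants(j, parent)) < parent[j])
        row[k] = row.get(k, 0) + y**improper

assert sp.expand(row[0] - (2*y + 10*y**2 + 15*y**3)) == 0
assert sp.expand(row[1] - (2 + 10*y + 15*y**2)) == 0
assert sp.expand(row[2] - (3 + 6*y)) == 0 and row[3] == 1
assert sp.expand(row[0] - y*row[1]) == 0              # Proposition 1.3 at n = 3
print({k: sp.expand(v) for k, v in sorted(row.items())})
```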
In Section 4.2 we will show that the unit-lower-triangular matrix \(\mathsf{T}(y,z)=\big{(}t_{n,k}(y,z)\big{)}_{n,k\geq 0}\) is an exponential Riordan array \(\mathcal{R}[F,G]\), and we will compute \(F(t)\) and \(G(t)\).
We now generalize (1.4) by defining the row-generating polynomials
\[T_{n}(x,y,z)\ =\ \sum_{k=0}^{n}t_{n,k}(y,z)\,x^{k} \tag{1.8}\]
or in other words
\[T_{n}(x,y,z)\ =\ \sum_{T\in\mathcal{T}_{n+1}^{*}}x^{\deg_{T}(1)}y^{\operatorname{ imprope}(T)}z^{\operatorname{prope}(T)-\deg_{T}(1)} \tag{1.9}\]
where \(\deg_{T}(1)\) is the number of children of the vertex \(1\) in the rooted tree \(T\). Note that \(T_{n}(x,y,z)\) is a homogeneous polynomial of degree \(n\) in \(x,y,z\), with nonnegative integer coefficients; it reduces to \(T_{n}(x)\) when \(y=z=1\). Our third result is then:
**Theorem 1.4**.:
1. _The unit-lower-triangular polynomial matrix_ \(\mathsf{T}(y,z)=\big{(}t_{n,k}(y,z)\big{)}_{n,k\geq 0}\) _is coefficientwise totally positive (jointly in_ \(y,z\)_)._
2. _The polynomial sequence_ \(\boldsymbol{T}=\big{(}T_{n}(x,y,z)\big{)}_{n\geq 0}\) _is coefficientwise Hankel-totally positive (jointly in_ \(x,y,z\)_)._
Theorem 1.4 strengthens Theorems 1.1(a) and 1.2, and reduces to them when \(y=z=1\). The proof of Theorem 1.4(b) will be based on studying the binomial row-generating matrix \(\mathsf{T}(y,z)B_{x}\), using the representation of \(\mathsf{T}(y,z)\) as an exponential Riordan array.
Finally, let us consider our third combinatorial model, which is based on partial functional digraphs. Recall that a _functional digraph_ (resp. _partial functional digraph_) is a directed graph in which every vertex has out-degree \(1\) (resp. \(0\) or \(1\)). Each weakly connected component of a functional digraph consists of a directed cycle (possibly of length \(1\), i.e. a loop) together with a collection of (possibly trivial) directed trees rooted at the vertices of the cycle (with edges pointing towards the root). The weakly connected components of a partial functional digraph are trees rooted at the out-degree-\(0\) vertices (with edges pointing towards the root) together with components of the same form as in a functional digraph. We say that a vertex of a partial functional digraph is _recurrent_ (or _cyclic_) if it lies on one of the cycles; otherwise we call it _transient_ (or _acyclic_). If \(j\) and \(k\) are vertices of a digraph, we say that \(k\) is a _predecessor_ of \(j\) if there exists a directed path from \(k\) to \(j\) (in particular, every vertex is a predecessor of itself).6 Note that "predecessor" in a digraph generalizes the notion of "descendant" in a rooted tree, if we make the convention that all edges in the tree are oriented towards the root. Indeed, if \(j\) is a transient vertex in a partial functional digraph, then the predecessors of \(j\) are precisely the descendants of \(j\) in the rooted tree (rooted at either a recurrent vertex or an out-degree-\(0\) vertex) to which \(j\) belongs. On the other hand, if \(j\) is a recurrent vertex, then the predecessors of \(j\) are all the vertices in the weakly connected component containing \(j\).
Footnote 6: In a functional digraph, Dumont and Ramamonjisoa [26, p. 11] use the term “ascendance”, and the notation \(A(j)\), to denote the set of all predecessors of \(j\).
Now consider a partial functional digraph on a totally ordered vertex set (which for us will be \([n]\)). We say that an edge \(\overrightarrow{ji}\) (pointing from \(j\) to \(i\)) is _improper_ if there exists a predecessor of \(j\) (possibly \(j\) itself) that is \(\leq i\); otherwise we say that the edge \(\overrightarrow{ji}\) is _proper_. When \(j\) is a transient vertex, this coincides with the notion of improper/proper edge in a rooted tree. When \(j\) is a recurrent vertex, the edge \(\overrightarrow{ji}\) is always improper, because one of the predecessors of \(j\) is \(i\). (This includes the case \(i=j\): a loop is always an improper edge.) We denote by \(\operatorname{imprope}(G)\) [resp. \(\operatorname{prope}(G)\)] the number of improper (resp. proper) edges in the digraph \(G\), and we define
\[\widetilde{t}_{n,k}(y,z)\ =\ \sum_{G\in\mathbf{PFD}_{n,k}}y^{\operatorname{imprope}(G)}\,z^{\operatorname{prope}(G)}\;. \tag{1.10}\]
Since \(G\in{\bf PFD}_{n,k}\) has \(n-k\) edges, \(\widetilde{t}_{n,k}(y,z)\) is a homogeneous polynomial of degree \(n-k\) with nonnegative integer coefficients. By a bijection between our second and third combinatorial models, we will prove:
**Proposition 1.5**.: \(t_{n,k}(y,z)\;=\;\widetilde{t}_{n,k}(y,z)\;.\)
The row-generating polynomials (1.8)/(1.9) thus have the alternate combinatorial interpretation
\[T_{n}(x,y,z)\;=\;\sum_{G\in{\bf PFD}_{n}}x^{\operatorname{deg0}(G)}\,y^{\operatorname{imprope}(G)}\,z^{\operatorname{prope}(G)} \tag{1.11}\]
where \(\operatorname{deg0}(G)\) is the number of out-degree-0 vertices in \(G\), and \(\mathbf{PFD}_{n}\stackrel{\rm def}{=}\bigcup_{k=0}^{n}\mathbf{PFD}_{n,k}\) is the set of all partial functional digraphs on \([n]\).
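Again by brute force, and as a check of Proposition 1.5 in advance of its proof, the following sketch (ours) enumerates \(\mathbf{PFD}_{3,k}\) and weights each digraph by \(y^{\operatorname{imprope}(G)}\); at \(z=1\) this reproduces the same row \(n=3\) of the table above:

```python
from itertools import product
import sympy as sp

y = sp.symbols('y')
n = 3

def predecessors(j, f):
    # vertices with a directed path to j (including j itself)
    preds, frontier = {j}, {j}
    while frontier:
        frontier = {v for v in range(1, n + 1)
                    if f[v] in frontier and v not in preds}
        preds |= frontier
    return preds

row = {}
for images in product([None] + list(range(1, n + 1)), repeat=n):
    f = dict(zip(range(1, n + 1), images))
    k = sum(1 for v in f if f[v] is None)
    improper = sum(1 for j in f if f[j] is not None       # edge j -> f(j)
                   and min(predecessors(j, f)) <= f[j])
    row[k] = row.get(k, 0) + y**improper

assert sp.expand(row[0] - (2*y + 10*y**2 + 15*y**3)) == 0
assert sp.expand(row[1] - (2 + 10*y + 15*y**2)) == 0
assert sp.expand(row[2] - (3 + 6*y)) == 0 and row[3] == 1
print({k: sp.expand(v) for k, v in sorted(row.items())})
```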
We also have an interpretation of the polynomials \(t_{n,k}(y,z)\) in our first combinatorial model (rooted trees in which the root has \(k\) lower-numbered children); but since this interpretation is rather complicated, we defer it to Appendix A.
But this is _still_ not the end of the story, because we can add even more variables into our second combinatorial model -- in fact, an infinite set. Given a rooted tree \(T\) on a totally ordered vertex set and vertices \(i,j\in T\) such that \(j\) is a child of \(i\), we say that \(j\) is a _proper child_ of \(i\) if the edge \(e=ij\) is proper (that is, \(j\) and all its descendants are higher-numbered than \(i\)). Now let \(\boldsymbol{\phi}=(\phi_{m})_{m\geq 0}\) be indeterminates, and let \(t_{n,k}(y,\boldsymbol{\phi})\) be the generating polynomial for rooted trees \(T\in\mathcal{T}_{n+1}^{\langle 1;k\rangle}\) with a weight \(y\) for each improper edge and a weight \(\widehat{\phi}_{m}\stackrel{\rm def}{=}m!\,\phi_{m}\) for each vertex \(i\neq 1\) that has \(m\) proper children:
\[t_{n,k}(y,\boldsymbol{\phi})\ =\ \sum_{T\in\mathcal{T}_{n+1}^{\langle 1;k\rangle}}y^{\operatorname{imprope}(T)}\,\prod_{i=2}^{n+1}\widehat{\phi}_{\operatorname{pdeg}_{T}(i)} \tag{1.12}\]
where \(\operatorname{pdeg}_{T}(i)\) denotes the number of _proper_ children of the vertex \(i\) in the rooted tree \(T\). We will see later why it is convenient to introduce the factors \(m!\) in this definition. Observe also that the variable \(z\) is now redundant and therefore omitted, because it would simply scale \(\phi_{m}\to z^{m}\phi_{m}\). And note finally that, in conformity with (1.7), we have chosen to suppress the weight \(\widehat{\phi}_{k}\) that would otherwise be associated to the vertex \(1\). We call the polynomials \(t_{n,k}(y,\boldsymbol{\phi})\) the _generic rooted-tree polynomials_, and the lower-triangular matrix \({\sf T}(y,\boldsymbol{\phi})=\big{(}t_{n,k}(y,\boldsymbol{\phi})\big{)}_{n,k\geq 0}\) the _generic rooted-tree matrix_. Here \(\boldsymbol{\phi}=(\phi_{m})_{m\geq 0}\) are in the first instance indeterminates, so that \(t_{n,k}(y,\boldsymbol{\phi})\) belongs to the polynomial ring \(\mathbb{Z}[y,\boldsymbol{\phi}]\); but we can then, if we wish, substitute specific values for \(\boldsymbol{\phi}\) in any commutative ring \(R\), leading to values \(t_{n,k}(y,\boldsymbol{\phi})\in R[y]\). (Similar substitutions can of course also be made for \(y\).) When doing this we will use the same notation \(t_{n,k}(y,\boldsymbol{\phi})\), as the desired interpretation for \(\boldsymbol{\phi}\) should be clear from the context. The polynomial \(t_{n,k}(y,\boldsymbol{\phi})\) is homogeneous of degree \(n\) in \(\boldsymbol{\phi}\); it is also quasi-homogeneous of degree \(n-k\) in \(y\) and \(\boldsymbol{\phi}\) when \(\phi_{m}\) is assigned weight \(m\) and \(y\) is assigned weight \(1\). By specializing \(t_{n,k}(y,\boldsymbol{\phi})\) to \(\phi_{m}=z^{m}/m!\) and hence \(\widehat{\phi}_{m}=z^{m}\), we recover \(t_{n,k}(y,z)\).
We remark that the matrix \(\mathsf{T}(y,\boldsymbol{\phi})\), unlike \(\mathsf{T}(y,z)\), is not _unit_-lower-triangular: rather, it has diagonal entries \(t_{n,n}(y,\boldsymbol{\phi})=\phi_{0}^{n}\), corresponding to the tree in which \(1\) is the root and has all the vertices \(2,\ldots,n+1\) as children. More generally, the polynomial \(t_{n,k}(y,\boldsymbol{\phi})\) is divisible by \(\phi_{0}^{k}\), since the vertex \(1\) always has at least \(k\) leaf descendants. So we could define a unit-lower-triangular matrix by \(t^{\flat}_{n,k}(y,\boldsymbol{\phi})=t_{n,k}(y,\boldsymbol{\phi})/\phi_{0}^{k}\). (Alternatively, we could simply choose to normalize to \(\phi_{0}=1\).)
In Section 4.3 we will show that \(\mathsf{T}(y,\boldsymbol{\phi})\) is an exponential Riordan array \(\mathcal{R}[F,G]\), and we will compute \(F(t)\) and \(G(t)\).
Also, generalizing Proposition 1.3, we will prove:
**Proposition 1.6**.: _For \(n\geq 1\), \(t_{n,0}(y,\boldsymbol{\phi})=y\,t_{n,1}(y,\boldsymbol{\phi})\)._
We can also define the corresponding polynomials \(\widetilde{t}_{n,k}(y,\boldsymbol{\phi})\) in the partial-functional-digraph model, as follows: If \(G\) is a partial functional digraph on a totally ordered vertex set, and \(i\) is a vertex of \(G\), we define the _proper in-degree_ of \(i\), \(\operatorname{pindeg}_{G}(i)\), to be the number of proper edges \(\overrightarrow{ji}\) in \(G\). We then define
\[\widetilde{t}_{n,k}(y,\boldsymbol{\phi})\ =\ \sum_{G\in\mathbf{PFD}_{n,k}}y^{\operatorname{imprope}(G)}\,\prod_{i=1}^{n}\widehat{\phi}_{\operatorname{pindeg}_{G}(i)}\;. \tag{1.13}\]
Generalizing Proposition 1.5, we will prove:
**Proposition 1.7**.: \(t_{n,k}(y,\boldsymbol{\phi})\;=\;\widetilde{t}_{n,k}(y,\boldsymbol{\phi})\;.\)
We also generalize (1.8) by defining the row-generating polynomials \(T_{n}(x,y,\boldsymbol{\phi})=\sum_{k=0}^{n}t_{n,k}(y,\boldsymbol{\phi})\,x^{k}\). Our final result, which generalizes Theorem 1.4, is then:
**Theorem 1.8**.:
1. _The lower-triangular polynomial matrix_ \(\mathsf{T}(y,\boldsymbol{\phi})=\big{(}t_{n,k}(y,\boldsymbol{\phi})\big{)}_{n,k\geq 0}\) _is coefficientwise totally positive (jointly in_ \(y,\boldsymbol{\phi}\)_)._
2. _The polynomial sequence_ \(\boldsymbol{T}=\big{(}T_{n}(x,y,\boldsymbol{\phi})\big{)}_{n\geq 0}\) _is coefficientwise Hankel-totally positive (jointly in_ \(x,y,\boldsymbol{\phi}\)_)._
The proofs of Theorems 1.1, 1.2, 1.4 and 1.8 will be based on the theory of production matrices [23, 24],
combined with the theory of exponential Riordan arrays [3, 22, 24, 70]. Therefore, in Section 2 we review some facts about total positivity, production matrices and exponential Riordan arrays that will play a central role in our arguments. This development culminates in Corollary 2.28; it is the fundamental theoretical result that underlies all our proofs. In Section 3 we give bijective proofs of Propositions 1.3, 1.5, 1.6 and 1.7. In Section 4 we show that the matrices \(\mathsf{T}\), \(\mathsf{T}(y,z)\) and \(\mathsf{T}(y,\boldsymbol{\phi})\) are exponential Riordan arrays \(\mathcal{R}[F,G]\), and we compute their generating functions \(F\) and \(G\). In Section 5 we combine the results of Sections 2 and 4 to complete the proofs of Theorems 1.1, 1.2, 1.4 and 1.8.
This paper is a sequel to our paper [76] on the total positivity of matrices that enumerate forests of rooted labeled trees. The methods here are basically the same as in this previous paper, but generalized nontrivially to handle exponential Riordan arrays \(\mathcal{R}[F,G]\) with \(F\neq 1\). Zhu [87, 89] has employed closely related methods. See also Gilmore [39] for some total-positivity results for \(q\)-generalizations of tree and forest matrices, using very different methods.
## 2 Preliminaries
Here we review some definitions and results from [62, 77] that will be needed in the sequel. We also include a brief review of ordinary and exponential Riordan arrays [3, 22, 24, 69, 70, 78] and Lagrange inversion [38].
The treatment of exponential Riordan arrays in Section 2.6 contains one novelty: namely, the rewriting of the production matrix in terms of new series \(\Phi\) and \(\Psi\) (see (2.23) ff. and Proposition 2.23). This is the key step that leads to Corollary 2.28.
### Partially ordered commutative rings and total positivity
In this paper all rings will be assumed to have an identity element \(1\) and to be nontrivial (\(1\neq 0\)).
A _partially ordered commutative ring_ is a pair \((R,\mathcal{P})\) where \(R\) is a commutative ring and \(\mathcal{P}\) is a subset of \(R\) satisfying
* \(0,1\in\mathcal{P}\).
* If \(a,b\in\mathcal{P}\), then \(a+b\in\mathcal{P}\) and \(ab\in\mathcal{P}\).
* \(\mathcal{P}\cap(-\mathcal{P})=\{0\}\).
We call \(\mathcal{P}\) the _nonnegative elements_ of \(R\), and we define a partial order on \(R\) (compatible with the ring structure) by writing \(a\leq b\) as a synonym for \(b-a\in\mathcal{P}\). Please note that, unlike the practice in real algebraic geometry [50, 57, 11, 65], we do _not_ assume here that squares are nonnegative; indeed, this property fails completely for our prototypical example, the ring of polynomials with the coefficientwise order, since \((1-x)^{2}=1-2x+x^{2}\not\succeq 0\).
Now let \((R,\mathcal{P})\) be a partially ordered commutative ring and let \(\mathbf{x}=\{x_{i}\}_{i\in I}\) be a collection of indeterminates. In the polynomial ring \(R[\mathbf{x}]\) and the formal-power-series ring \(R[[\mathbf{x}]]\), let \(\mathcal{P}[\mathbf{x}]\) and \(\mathcal{P}[[\mathbf{x}]]\) be the subsets consisting of polynomials (resp. series)
with nonnegative coefficients. Then \((R[\mathbf{x}],\mathcal{P}[\mathbf{x}])\) and \((R[[\mathbf{x}]],\mathcal{P}[[\mathbf{x}]])\) are partially ordered commutative rings; we refer to this as the _coefficientwise order_ on \(R[\mathbf{x}]\) and \(R[[\mathbf{x}]]\).
A (finite or infinite) matrix with entries in a partially ordered commutative ring is called _totally positive_ (TP) if all its minors are nonnegative; it is called _totally positive of order r_ (TP\({}_{r}\)) if all its minors of size \(\leq r\) are nonnegative. It follows immediately from the Cauchy-Binet formula that the product of two TP (resp. TP\({}_{r}\)) matrices is TP (resp. TP\({}_{r}\)).7 This fact is so fundamental to the theory of total positivity that we shall henceforth use it without comment.
Footnote 7: For infinite matrices, we need some condition to ensure that the product is well-defined. For instance, the product \(AB\) is well-defined whenever \(A\) is row-finite (i.e. has only finitely many nonzero entries in each row) or \(B\) is column-finite.
We say that a sequence \(\boldsymbol{a}=(a_{n})_{n\geq 0}\) with entries in a partially ordered commutative ring is _Hankel-totally positive_ (resp. _Hankel-totally positive of order r_) if its associated infinite Hankel matrix \(H_{\infty}(\boldsymbol{a})=(a_{i+j})_{i,j\geq 0}\) is TP (resp. TP\({}_{r}\)). We say that \(\boldsymbol{a}\) is _Toeplitz-totally positive_ (resp. _Toeplitz-totally positive of order r_) if its associated infinite Toeplitz matrix \(T_{\infty}(\boldsymbol{a})=(a_{i-j})_{i,j\geq 0}\) (where \(a_{n}\stackrel{{\rm def}}{{=}}0\) for \(n<0\)) is TP (resp. TP\({}_{r}\)).8
Footnote 8: When \(R=\mathbb{R}\), Toeplitz-totally positive sequences are traditionally called _Pólya frequency sequences_ (PF), and Toeplitz-totally positive sequences of order \(r\) are called _Pólya frequency sequences of order \(r\)_ (PF\({}_{r}\)). See [46, chapter 8] for a detailed treatment.
When \(R=\mathbb{R}\), Hankel- and Toeplitz-total positivity have simple analytic characterizations. A sequence \((a_{n})_{n\geq 0}\) of real numbers is Hankel-totally positive if and only if it is a Stieltjes moment sequence [35, Théorème 9][64, section 4.6]. And a sequence \((a_{n})_{n\geq 0}\) of real numbers is Toeplitz-totally positive if and only if its ordinary generating function can be written as
\[\sum_{n=0}^{\infty}a_{n}t^{n}\ =\ Ce^{\gamma t}t^{m}\prod_{i=1}^{\infty}\frac{1+ \alpha_{i}t}{1-\beta_{i}t} \tag{2.1}\]
with \(m\in\mathbb{N}\), \(C,\gamma,\alpha_{i},\beta_{i}\geq 0\), \(\sum\alpha_{i}<\infty\) and \(\sum\beta_{i}<\infty\): this is the celebrated Aissen-Schoenberg-Whitney-Edrei theorem [46, Theorem 5.3, p. 412]. However, in a general partially ordered commutative ring \(R\), the concepts of Hankel- and Toeplitz-total positivity are more subtle.
We will need a few easy facts about the total positivity of special matrices:
**Lemma 2.1** (Bidiagonal matrices).: _Let \(A\) be a matrix with entries in a partially ordered commutative ring, with the property that all its nonzero entries belong to two consecutive diagonals. Then \(A\) is totally positive if and only if all its entries are nonnegative._
Proof. The nonnegativity of the entries (i.e. TP\({}_{1}\)) is obviously a necessary condition for TP. Conversely, for a matrix of this type it is easy to see that every nonzero minor is simply a product of some entries. \(\square\)
**Lemma 2.2** (Toeplitz matrix of powers).: _Let \(R\) be a partially ordered commutative ring, let \(x\in R\), and consider the infinite Toeplitz matrix_
\[T_{x}\ \stackrel{{\rm def}}{{=}}\ T_{\infty}(x^{\mathbb{N}})\ =\ \left[\begin{matrix}1&&&&\\ x&1&&\\ x^{2}&x&1&\\ x^{3}&x^{2}&x&1&\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{matrix}\right]. \tag{2.2}\]
_Then every minor of \(T_{x}\) is either zero or else a power of \(x\). Hence \(T_{x}\) is TP \(\iff\) \(T_{x}\) is TP\({}_{1}\) \(\iff\) \(x\geq 0\)._
_In particular, if \(x\) is an indeterminate, then \(T_{x}\) is totally positive in the ring \(\mathbb{Z}[x]\) equipped with the coefficientwise order._
Proof. Consider a submatrix \(A=(T_{x})_{IJ}\) with rows \(I=\{i_{1}<\ldots<i_{k}\}\) and columns \(J=\{j_{1}<\ldots<j_{k}\}\). We will prove by induction on \(k\) that \(\det A\) is either zero or a power of \(x\). It is trivial if \(k=0\) or \(1\). If \(A_{12}=A_{22}=0\), then \(A_{1s}=A_{2s}=0\) for all \(s\geq 2\) by definition of \(T_{x}\), and \(\det A=0\). If \(A_{12}\) and \(A_{22}\) are both nonzero, then the first column of \(A\) is \(x^{j_{2}-j_{1}}\) times the second column, and again \(\det A=0\). Finally, if \(A_{12}=0\) and \(A_{22}\neq 0\) (by definition of \(T_{x}\) this is the only other possibility), then \(A_{1s}=0\) for all \(s\geq 2\); we then replace the first column of \(A\) by the first column minus \(x^{j_{2}-j_{1}}\) times the second column, so that the new first column has \(x^{i_{1}-j_{1}}\) in its first entry (or zero if \(i_{1}<j_{1}\)) and zeroes elsewhere. Then \(\det A\) equals \(x^{i_{1}-j_{1}}\) (or zero if \(i_{1}<j_{1}\)) times the determinant of its last \(k-1\) rows and columns, so the claim follows from the inductive hypothesis. \(\square\)
See also Example 2.9 below for a second proof of the total positivity of \(T_{x}\), using production matrices.
**Lemma 2.3** (Binomial matrix).: _In the ring \(\mathbb{Z}\), the binomial matrix \(B=\big{(}\binom{n}{k}\big{)}_{n,k\geq 0}\) is totally positive. More generally, the weighted binomial matrix \(B_{x,y}=\big{(}x^{n-k}y^{k}\binom{n}{k}\big{)}_{n,k\geq 0}\) is totally positive in the ring \(\mathbb{Z}[x,y]\) equipped with the coefficientwise order._
Proof. It is well known that the binomial matrix \(B\) is totally positive, and this can be proven by a variety of methods: e.g. using production matrices [46, pp. 136-137, Example 6.1][64, pp. 108-109], by diagonal similarity to a totally positive Toeplitz matrix [64, p. 109], by exponentiation of a nonnegative lower-subdiagonal matrix [28, p. 63], or by an application of the Lindström-Gessel-Viennot lemma [33, p. 24].
Then \(B_{x,y}=DBD^{\prime}\) where \(D=\operatorname{diag}\bigl{(}(x^{n})_{n\geq 0}\bigr{)}\) and \(D^{\prime}=\operatorname{diag}\bigl{(}(x^{-k}y^{k})_{k\geq 0}\bigr{)}\). By Cauchy-Binet, \(B_{x,y}\) is totally positive in the ring \(\mathbb{Z}[x,x^{-1},y]\) equipped with the coefficientwise order. But because \(B\) is lower-triangular, the elements of \(B_{x,y}\) actually lie in the subring \(\mathbb{Z}[x,y]\). \(\square\)
See also Example 2.10 below for an _ab initio_ proof of Lemma 2.3 using production matrices.
Finally, let us show that the sufficiency half of the Aissen-Schoenberg-Whitney-Edrei theorem holds (with a slight modification to avoid infinite products) in a general partially ordered commutative ring. We give two versions, depending on whether or not it is assumed that the ring \(R\) contains the rationals:
**Lemma 2.4** (Sufficient condition for Toeplitz-total positivity).: _Let \(R\) be a partially ordered commutative ring, let \(N\) be a nonnegative integer, and let \(\alpha_{1},\ldots,\alpha_{N}\), \(\beta_{1},\ldots,\beta_{N}\) and \(C\) be nonnegative elements in \(R\). Define the sequence \(\boldsymbol{a}=(a_{n})_{n\geq 0}\) in \(R\) by_
\[\sum_{n=0}^{\infty}a_{n}t^{n}\;=\;C\,\prod_{i=1}^{N}\frac{1+\alpha_{i}t}{1- \beta_{i}t}\;. \tag{2.3}\]
_Then the Toeplitz matrix \(T_{\infty}(\boldsymbol{a})\) is totally positive._
Of course, it is no loss of generality to have the same number \(N\) of alphas and betas, since some of the \(\alpha_{i}\) or \(\beta_{i}\) could be zero.
**Lemma 2.5** (Sufficient condition for Toeplitz-total positivity, with rationals).: _Let \(R\) be a partially ordered commutative ring containing the rationals, let \(N\) be a nonnegative integer, and let \(\alpha_{1},\ldots,\alpha_{N}\), \(\beta_{1},\ldots,\beta_{N}\), \(\gamma\) and \(C\) be nonnegative elements in \(R\). Define the sequence \(\boldsymbol{a}=(a_{n})_{n\geq 0}\) in \(R\) by_
\[\sum_{n=0}^{\infty}a_{n}t^{n}\;=\;C\,e^{\gamma t}\,\prod_{i=1}^{N}\frac{1+ \alpha_{i}t}{1-\beta_{i}t}\;. \tag{2.4}\]
_Then the Toeplitz matrix \(T_{\infty}(\boldsymbol{a})\) is totally positive._
Proof of Lemma 2.4. We make a series of elementary observations:
1) The sequence \(\boldsymbol{a}=(1,\alpha,0,0,0,\ldots)\), corresponding to the generating function \(A(t)=1+\alpha t\), is Toeplitz-totally positive if and only if \(\alpha\geq 0\). The "only if" is trivial, and the "if" follows from Lemma 2.1 because the Toeplitz matrix \(T_{\infty}(\boldsymbol{a})\) is bidiagonal.
2) The sequence \(\boldsymbol{a}=(1,\beta,\beta^{2},\beta^{3},\ldots)\), corresponding to the generating function \(A(t)=1/(1-\beta t)\), is Toeplitz-totally positive if and only if \(\beta\geq 0\). The "only if" is again trivial, and the "if" follows from Lemma 2.2.
3) If \(\boldsymbol{a}\) and \(\boldsymbol{b}\) are sequences with ordinary generating functions \(A(t)\) and \(B(t)\), then the convolution \(\boldsymbol{c}=\boldsymbol{a}\star\boldsymbol{b}\), defined by \(c_{n}=\sum_{k=0}^{n}a_{k}b_{n-k}\), has ordinary generating function \(C(t)=A(t)\,B(t)\); moreover, the Toeplitz matrix \(T_{\infty}(\boldsymbol{c})\) is simply the matrix product \(T_{\infty}(\boldsymbol{a})\,T_{\infty}(\boldsymbol{b})\). It thus follows from the Cauchy-Binet formula that if \(\boldsymbol{a}\) and \(\boldsymbol{b}\) are Toeplitz-totally positive, then so is \(\boldsymbol{c}\).
4) A Toeplitz-totally positive sequence can be multiplied by a nonnegative constant \(C\), and it is still Toeplitz-totally positive.
Combining these observations proves the lemma. \(\square\)
Proof of Lemma 2.5. We add to the proof of Lemma 2.4 the following additional observation:
5) The sequence \(\boldsymbol{a}=(\gamma^{n}/n!)_{n\geq 0}\), corresponding to the generating function \(A(t)=e^{\gamma t}\), is Toeplitz-totally positive if and only if \(\gamma\geq 0\). The "only if" is again trivial, and the "if" follows from Lemma 2.3 because \(\gamma^{n-k}/(n-k)!=\binom{n}{k}\gamma^{n-k}\times k!/n!\) and hence \(T_{\infty}(\boldsymbol{a})=D^{-1}B_{\gamma,1}D\) where \(D=\operatorname{diag}(\,(n!)_{n\geq 0})\). \(\square\)
### Production matrices
The method of production matrices [23, 24] has become in recent years an important tool in enumerative combinatorics. In the special case of a tridiagonal production matrix, this construction goes back to Stieltjes' [81, 82] work on continued fractions: the production matrix of a classical S-fraction or J-fraction is tridiagonal. In the present paper, by contrast, we shall need production matrices that are lower-Hessenberg (i.e. vanish above the first superdiagonal) but are not in general tridiagonal. We therefore begin by reviewing briefly the basic theory of production matrices. The important connection of production matrices with total positivity will be treated in the next subsection.
Let \(P=(p_{ij})_{i,j\geq 0}\) be an infinite matrix with entries in a commutative ring \(R\). In order that powers of \(P\) be well-defined, we shall assume that \(P\) is either row-finite (i.e. has only finitely many nonzero entries in each row) or column-finite.
Let us now define an infinite matrix \(A=(a_{nk})_{n,k\geq 0}\) by
\[a_{nk}\;=\;(P^{n})_{0k} \tag{2.5}\]
(in particular, \(a_{0k}=\delta_{0k}\)). Writing out the matrix multiplications explicitly, we have
\[a_{nk}\;=\;\sum_{i_{1},\ldots,i_{n-1}}p_{0i_{1}}\,p_{i_{1}i_{2}}\,p_{i_{2}i_{ 3}}\;\cdots\;p_{i_{n-2}i_{n-1}}\,p_{i_{n-1}k}\;, \tag{2.6}\]
so that \(a_{nk}\) is the total weight for all \(n\)-step walks in \(\mathbb{N}\) from \(i_{0}=0\) to \(i_{n}=k\), in which the weight of a walk is the product of the weights of its steps, and a step from \(i\) to \(j\) gets a weight \(p_{ij}\). Yet another equivalent formulation is to define the entries \(a_{nk}\) by the recurrence
\[a_{nk}\;=\;\sum_{i=0}^{\infty}a_{n-1,i}\,p_{ik}\qquad\mbox{for $n\geq 1$} \tag{2.7}\]
with the initial condition \(a_{0k}=\delta_{0k}\).
We call \(P\) the _production matrix_ and \(A\) the _output matrix_, and we write \(A={\cal O}(P)\). Note that if \(P\) is row-finite, then so is \({\cal O}(P)\); if \(P\) is lower-Hessenberg, then \({\cal O}(P)\) is lower-triangular; if \(P\) is lower-Hessenberg with invertible superdiagonal entries, then \({\cal O}(P)\) is lower-triangular with invertible diagonal entries; and if \(P\) is unit-lower-Hessenberg (i.e. lower-Hessenberg with entries \(1\) on the superdiagonal), then \({\cal O}(P)\) is unit-lower-triangular. In all the applications in this paper, \(P\) will be lower-Hessenberg.
The matrix \(P\) can also be interpreted as the adjacency matrix for a weighted directed graph on the vertex set \(\mathbb{N}\) (where the edge \(ij\) is omitted whenever \(p_{ij}=0\)). Then \(P\) is row-finite (resp. column-finite) if and only if every vertex has finite out-degree (resp. finite in-degree).
This iteration process can be given a compact matrix formulation. Let us define the _augmented production matrix_
\[\widetilde{P}\;\stackrel{\rm def}{=}\;\left[\begin{array}{ccccc}1&0&0&0&\cdots\\ \hline\multicolumn{5}{c}{P}\end{array}\right]\;. \tag{2.8}\]
Then the recurrence (2.7) together with the initial condition \(a_{0k}=\delta_{0k}\) can be written as
\[A\;=\;\left[\begin{array}{ccccc}1&0&0&0&\cdots\\ \hline\multicolumn{5}{c}{AP}\end{array}\right]\;=\;\left[\begin{array}{c|c}1&{\bf 0}\\ \hline{\bf 0}&A\end{array}\right]\left[\begin{array}{ccccc}1&0&0&0&\cdots\\ \hline\multicolumn{5}{c}{P}\end{array}\right]\;=\;\left[\begin{array}{c|c}1&{\bf 0}\\ \hline{\bf 0}&A\end{array}\right]\widetilde{P}\;. \tag{2.9}\]
This identity can be iterated to give the factorization
\[A\;=\;\cdots\,\left[\begin{array}{c|c}I_{3}&\mathbf{0}\\ \hline\mathbf{0}&\widetilde{P}\end{array}\right]\,\left[\begin{array}{c|c}I_{2} &\mathbf{0}\\ \hline\mathbf{0}&\widetilde{P}\end{array}\right]\,\left[\begin{array}{c|c}I_{1} &\mathbf{0}\\ \hline\mathbf{0}&\widetilde{P}\end{array}\right]\widetilde{P} \tag{2.10}\]
where \(I_{k}\) is the \(k\times k\) identity matrix; and conversely, (2.10) implies (2.9).
Now let \(\Delta=(\delta_{i+1,j})_{i,j\geq 0}\) be the matrix with \(1\) on the superdiagonal and \(0\) elsewhere. Then for any matrix \(M\) with rows indexed by \(\mathbb{N}\), the product \(\Delta M\) is simply \(M\) with its zeroth row removed and all other rows shifted upwards. (Some authors use the notation \(\overline{M}\stackrel{{\mathrm{def}}}{{=}}\Delta M\).) The recurrence (2.7) can then be written as
\[\Delta\,\mathcal{O}(P)\;=\;\mathcal{O}(P)\,P\;. \tag{2.11}\]
It follows that if \(A\) is a row-finite matrix that has a row-finite inverse \(A^{-1}\) and has first row \(a_{0k}=\delta_{0k}\), then \(P=A^{-1}\Delta A\) is the unique matrix such that \(A=\mathcal{O}(P)\). This holds, in particular, if \(A\) is lower-triangular with invertible diagonal entries and \(a_{00}=1\); then \(A^{-1}\) is lower-triangular and \(P=A^{-1}\Delta A\) is lower-Hessenberg. And if \(A\) is unit-lower-triangular, then \(P=A^{-1}\Delta A\) is unit-lower-Hessenberg.
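As a concrete finite illustration of the inversion \(P=A^{-1}\Delta A\) (our own sketch; the last row of the truncated computation is a truncation artifact), take \(A\) to be the binomial matrix: the computation returns \(I+\Delta\), in agreement with Example 2.10 below at \(x=y=1\).

```python
import sympy as sp

N = 6
B = sp.Matrix(N, N, lambda n, k: sp.binomial(n, k))              # binomial matrix
Delta = sp.Matrix(N, N, lambda i, j: 1 if j == i + 1 else 0)
P = B.inv() * Delta * B
assert P[:N - 1, :] == (sp.eye(N) + Delta)[:N - 1, :]            # P = I + Delta
print(P[:N - 1, :])
```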
We shall repeatedly use the following easy fact:
**Lemma 2.6** (Production matrix of a product).: _Let \(P=(p_{ij})_{i,j\geq 0}\) be a row-finite matrix (with entries in a commutative ring \(R\)), with output matrix \(A=\mathcal{O}(P)\); and let \(B=(b_{ij})_{i,j\geq 0}\) be a lower-triangular matrix with invertible (in \(R\)) diagonal entries. Then_
\[AB\;=\;b_{00}\,\mathcal{O}(B^{-1}PB)\;. \tag{2.12}\]
_That is, up to a factor \(b_{00}\), the matrix \(AB\) has production matrix \(B^{-1}PB\)._
Proof.: Since \(P\) is row-finite, so is \(A=\mathcal{O}(P)\); then the matrix products \(AB\) and \(B^{-1}PB\) arising in the lemma are well-defined. Now
\[a_{nk}\;=\;\sum_{i_{1},\ldots,i_{n-1}}p_{0i_{1}}\,p_{i_{1}i_{2}}\,p_{i_{2}i_{ 3}}\,\cdots\,p_{i_{n-2}i_{n-1}}\,p_{i_{n-1}k}\;, \tag{2.13}\]
while
\[\mathcal{O}(B^{-1}PB)_{nk}\;=\;\sum_{j,i_{1},\ldots,i_{n-1},i_{n}}(B^{-1})_{0 j}\,p_{ji_{1}}\,p_{i_{1}i_{2}}\,p_{i_{2}i_{3}}\,\cdots\,p_{i_{n-2}i_{n-1}}\,p_{i_{ n-1}i_{n}}\,b_{i_{n}k}\;. \tag{2.14}\]
But \(B\) is lower-triangular with invertible diagonal entries, so \(B\) is invertible and \(B^{-1}\) is lower-triangular, with \((B^{-1})_{0j}=b_{00}^{-1}\delta_{j0}\). It follows that \(AB=b_{00}\,\mathcal{O}(B^{-1}PB)\).
### Production matrices and total positivity
Let \(P=(p_{ij})_{i,j\geq 0}\) be a matrix with entries in a partially ordered commutative ring \(R\). We will use \(P\) as a production matrix; let \(A=\mathcal{O}(P)\) be the corresponding output matrix. As before, we assume that \(P\) is either row-finite or column-finite.
When \(P\) is totally positive, it turns out [77] that the output matrix \(\mathcal{O}(P)\) has _two_ total-positivity properties: firstly, it is totally positive; and secondly, its zeroth column is Hankel-totally positive. Since [77] is not yet publicly available, we shall present briefly here (with proof) the main results that will be needed in the sequel.
The fundamental fact that drives the whole theory is the following:
**Proposition 2.7** (Minors of the output matrix).: _Every \(k\times k\) minor of the output matrix \(A=\mathcal{O}(P)\) can be written as a sum of products of minors of size \(\leq k\) of the production matrix \(P\)._
In this proposition the matrix elements \(\mathbf{p}=\{p_{ij}\}_{i,j\geq 0}\) should be interpreted in the first instance as indeterminates: for instance, we can fix a row-finite or column-finite set \(S\subseteq\mathbb{N}\times\mathbb{N}\) and define the matrix \(P^{S}=(p^{S}_{ij})_{i,j\in\mathbb{N}}\) with entries
\[p^{S}_{ij}\;=\;\begin{cases}p_{ij}&\text{if }(i,j)\in S\\ 0&\text{if }(i,j)\notin S\end{cases} \tag{2.15}\]
Then the entries (and hence also the minors) of both \(P\) and \(A\) belong to the polynomial ring \(\mathbb{Z}[\mathbf{p}]\), and the assertion of Proposition 2.7 makes sense. Of course, we can subsequently specialize the indeterminates \(\mathbf{p}\) to values in any commutative ring \(R\).
Proof of Proposition 2.7. For any infinite matrix \(X=(x_{ij})_{i,j\geq 0}\), let us write \(X_{N}=(x_{ij})_{0\leq i\leq N-1,\,j\geq 0}\) for the submatrix consisting of the first \(N\) rows (and _all_ the columns) of \(X\). Every \(k\times k\) minor of \(A\) is of course a \(k\times k\) minor of \(A_{N}\) for some \(N\), so it suffices to prove that the claim about minors holds for all the \(A_{N}\). But this is easy: the fundamental identity (2.9) implies
\[A_{N}\;=\;\left[\begin{array}{c|c}1&\mathbf{0}\\ \hline\mathbf{0}&A_{N-1}\end{array}\right]\;\left[\begin{array}{c|ccc}1&0&0& 0&\cdots\\ \hline P&\end{array}\right]\;. \tag{2.16}\]
So the result follows by induction on \(N\), using the Cauchy-Binet formula. \(\square\)
If we now specialize the indeterminates \(\mathbf{p}\) to values in some partially ordered commutative ring \(R\), we can immediately conclude:
**Theorem 2.8** (Total positivity of the output matrix).: _Let \(P\) be an infinite matrix that is either row-finite or column-finite, with entries in a partially ordered commutative ring \(R\). If \(P\) is totally positive of order \(r\), then so is \(A=\mathcal{O}(P)\)._
**Remarks.** 1. In the case \(R=\mathbb{R}\), Theorem 2.8 is due to Karlin [46, pp. 132-134]; see also [64, Theorem 1.11]. Karlin's proof is different from ours.
2. Our quick inductive proof of Proposition 2.7 follows an idea of Zhu [85, proof of Theorem 2.1], which was in turn inspired in part by Aigner [1, pp. 45-46]. The same idea recurs in recent work of several authors [86, Theorem 2.1][16, Theorem 2.1(i)][17, Theorem 2.3(i)][51, Theorem 2.1][18, Theorems 2.1 and 2.3][36]. However, all of these results concerned only special cases: [1, 17, 51, 85] treated the case in which the production matrix \(P\) is tridiagonal; [86] treated a (special) case in which \(P\) is upper bidiagonal; [16] treated the case in which \(P\) is the production matrix of a Riordan array; [18, 36] treated (implicitly) the case in which \(P\) is upper-triangular and Toeplitz. But the argument is in fact completely general, as we have just seen; there is no need to assume any special form for the matrix \(P\).
3. A slightly different version of this proof was presented in [62, 63]. The simplified reformulation given here, using the augmented production matrix, is due to Mu and Wang [60]. \(\blacksquare\)
**Example 2.9** (Toeplitz matrix of powers).: Let \(P=x{\bf e}_{00}+y\Delta\), where \(x\) and \(y\) are indeterminates (here \({\bf e}_{ij}\) denotes the matrix with an entry \(1\) in position \(ij\) and \(0\) elsewhere). By Lemma 2.1, \(P\) is TP in the ring \(\mathbb{Z}[x,y]\) equipped with the coefficientwise order. An easy computation shows that \(\mathcal{O}(x{\bf e}_{00}+y\Delta)_{nk}=x^{n-k}y^{k}\,\mathrm{I}[k\leq n]\). When \(y=1\), this is the Toeplitz matrix of powers (2.2). So Theorem 2.8 implies that \(T_{x}\) is TP in the ring \(\mathbb{Z}[x]\) equipped with the coefficientwise order. This gives a second proof of the total positivity stated in Lemma 2.2. \(\blacksquare\)
**Example 2.10** (Binomial matrix).: Let \(P\) be the upper-bidiagonal Toeplitz matrix \(xI+y\Delta\), where \(x\) and \(y\) are indeterminates. By Lemma 2.1, \(P\) is TP in the ring \(\mathbb{Z}[x,y]\) equipped with the coefficientwise order. An easy computation shows that \(\mathcal{O}(xI+y\Delta)=B_{x,y}\), the weighted binomial matrix with entries \((B_{x,y})_{nk}=x^{n-k}y^{k}\binom{n}{k}\). So Theorem 2.8 implies that \(B_{x,y}\) is TP in the ring \(\mathbb{Z}[x,y]\) equipped with the coefficientwise order. This gives an _ab initio_ proof of Lemma 2.3. \(\blacksquare\)
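Both examples are easy to verify mechanically. The following sympy sketch (ours) iterates the recurrence (2.7) for the two production matrices and compares with the stated output matrices:

```python
import sympy as sp

x, y = sp.symbols('x y')
N = 6

def output_matrix(P):
    # rows 0..N-1 of O(P) via (2.7); exact here since P is lower-Hessenberg,
    # so n-step walks from 0 never leave {0,...,n}
    A = sp.zeros(N, N)
    A[0, 0] = 1
    for n in range(1, N):
        for k in range(N):
            A[n, k] = sp.expand(sum(A[n - 1, i] * P[i, k] for i in range(N)))
    return A

Delta = sp.Matrix(N, N, lambda i, j: 1 if j == i + 1 else 0)
e00 = sp.Matrix(N, N, lambda i, j: 1 if i == j == 0 else 0)

A1 = output_matrix(x * e00 + y * Delta)                          # Example 2.9
assert A1 == sp.Matrix(N, N, lambda n, k:
                       x**(n - k) * y**k if k <= n else 0)
A2 = output_matrix(x * sp.eye(N) + y * Delta)                    # Example 2.10
assert A2 == sp.Matrix(N, N, lambda n, k:
                       sp.binomial(n, k) * x**(n - k) * y**k if k <= n else 0)
print("Examples 2.9 and 2.10 confirmed up to N =", N)
```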
Now define \(\mathcal{O}_{0}(P)\) to be the zeroth-column sequence of \(\mathcal{O}(P)\), i.e.
\[\mathcal{O}_{0}(P)_{n}\ \stackrel{{\mathrm{def}}}{{=}}\ \mathcal{O}(P)_{n0}\ \stackrel{{\mathrm{def}}}{{=}}\ (P^{n})_{00}\;. \tag{2.17}\]
Then the Hankel matrix of \(\mathcal{O}_{0}(P)\) has matrix elements
\[H_{\infty}(\mathcal{O}_{0}(P))_{nn^{\prime}}\ =\ \mathcal{O}_{0}(P)_{n+n^{\prime}}\ =\ (P^{n+n^{\prime}})_{00}\ =\ \sum_{k=0}^{\infty}(P^{n})_{0k}\,(P^{n^{\prime}})_{k0}\ =\ \sum_{k=0}^{\infty}(P^{n})_{0k}\,((P^{\mathrm{T}})^{n^{\prime}})_{0k}\ =\ \sum_{k=0}^{\infty}\mathcal{O}(P)_{nk}\,\mathcal{O}(P^{\mathrm{T}})_{n^{\prime}k}\ =\ \big{[}\mathcal{O}(P)\,\mathcal{O}(P^{\mathrm{T}})^{\mathrm{T}}\big{]}_{nn^{\prime}}\;. \tag{2.18}\]
(Note that the sum over \(k\) has only finitely many nonzero terms: if \(P\) is row-finite, then there are finitely many nonzero \((P^{n})_{0k}\), while if \(P\) is column-finite, there are finitely many nonzero \((P^{n^{\prime}})_{k0}\).) We have therefore proven:
**Lemma 2.11** (Identity for Hankel matrix of the zeroth column).: _Let \(P\) be a row-finite or column-finite matrix with entries in a commutative ring \(R\). Then_
\[H_{\infty}(\mathcal{O}_{0}(P))\ =\ \mathcal{O}(P)\,\mathcal{O}(P^{\mathrm{T}} )^{\mathrm{T}}\;. \tag{2.19}\]
**Remark.** If \(P\) is row-finite, then \(\mathcal{O}(P)\) is row-finite; \(\mathcal{O}(P^{\mathrm{T}})\) need not be row- or column-finite, but the product \(\mathcal{O}(P)\,\mathcal{O}(P^{\mathrm{T}})^{\mathrm{T}}\) is anyway well-defined. Similarly, if \(P\) is column-finite, then \(\mathcal{O}(P^{\mathrm{T}})^{\mathrm{T}}\) is column-finite; \(\mathcal{O}(P)\) need not be row- or column-finite, but the product \(\mathcal{O}(P)\,\mathcal{O}(P^{\mathrm{T}})^{\mathrm{T}}\) is again well-defined. \(\blacksquare\)
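Identity (2.19) can be tested with a fully symbolic lower-Hessenberg production matrix. In the sketch below (ours), every walk contributing to the top-left \(3\times 3\) Hankel block stays inside a \(5\times 5\) truncation (up-steps are at most \(+1\), and any excursion must be undone within the walk), so the comparison is exact:

```python
import sympy as sp

N = 5
# generic lower-Hessenberg matrix: symbolic entries on and below the superdiagonal
P = sp.Matrix(N, N, lambda i, j: sp.Symbol(f'p{i}{j}') if j <= i + 1 else 0)

def output_matrix(M):
    A = sp.zeros(N, N)
    A[0, 0] = 1
    for n in range(1, N):
        for k in range(N):
            A[n, k] = sp.expand(sum(A[n - 1, i] * M[i, k] for i in range(N)))
    return A

H = output_matrix(P) * output_matrix(P.T).T      # right-hand side of (2.19)
for n in range(3):
    for m in range(3):
        assert sp.expand(H[n, m] - (P**(n + m))[0, 0]) == 0
print("(2.19) holds symbolically on the top-left 3x3 block")
```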
Combining Proposition 2.7 with Lemma 2.11 and the Cauchy-Binet formula, we obtain:
**Corollary 2.12** (Hankel minors of the zeroth column).: _Every \(k\times k\) minor of the infinite Hankel matrix \(H_{\infty}(\mathcal{O}_{0}(P))=((P^{n+n^{\prime}})_{00})_{n,n^{\prime}\geq 0}\) can be written as a sum of products of the minors of size \(\leq k\) of the production matrix \(P\)._
And specializing the indeterminates \(\mathbf{p}\) to nonnegative elements in a partially ordered commutative ring, in such a way that \(P\) is row-finite or column-finite, we deduce:
**Theorem 2.13** (Hankel-total positivity of the zeroth column).: _Let \(P=(p_{ij})_{i,j\geq 0}\) be an infinite row-finite or column-finite matrix with entries in a partially ordered commutative ring \(R\), and define the infinite Hankel matrix \(H_{\infty}(\mathcal{O}_{0}(P))=((P^{n+n^{\prime}})_{00})_{n,n^{\prime}\geq 0}\). If \(P\) is totally positive of order \(r\), then so is \(H_{\infty}(\mathcal{O}_{0}(P))\)._
One might hope that Theorem 2.13 could be strengthened to show not only Hankel-TP of the zeroth column of the output matrix \(A=\mathcal{O}(P)\), but in fact Hankel-TP of the row-generating polynomials \(A_{n}(x)\) for all \(x\geq 0\) (at least when \(R=\mathbb{R}\)) -- or even more strongly, coefficientwise Hankel-TP of the row-generating polynomials. Alas, this hope is vain, for these properties do not hold _in general_:
**Example 2.14** (Failure of Hankel-TP of the row-generating polynomials).: Let \(P=\mathbf{e}_{00}+\Delta\) be the upper-bidiagonal matrix with \(1\) on the superdiagonal and \(1,0,0,0,\ldots\) on the diagonal; by Lemma 2.1 it is TP. Then \(A=\mathcal{O}(P)\) is the lower-triangular matrix with all entries \(1\) (see Example 2.9), so that \(A_{n}(x)=\sum_{k=0}^{n}x^{k}\). Since \(A_{0}(x)\,A_{2}(x)-A_{1}(x)^{2}=-x\), the sequence \((A_{n}(x))_{n\geq 0}\) is not even log-convex (i.e. Hankel-TP\({}_{2}\)) for any real number \(x>0\).
Nevertheless, in one important special case -- namely, exponential Riordan arrays \(\mathcal{R}[1,G]\) -- the total positivity of the production matrix _does_ imply the coefficientwise Hankel-TP of the row-generating polynomials of the output matrix: this was shown in [76, Theorem 2.20]. That result will be generalized here, in Corollary 2.28, to provide a more general _sufficient_ (but not necessary) condition for the coefficientwise Hankel-TP of the row-generating polynomials of the output matrix.
### Binomial row-generating matrices
Let \(A=(a_{nk})_{n,k\geq 0}\) be a row-finite matrix with entries in a commutative ring \(R\). (In most applications, including all those in the present paper, the matrix \(A\) will be lower-triangular.) We define its _row-generating polynomials_ in the usual way:
\[A_{n}(x)\;\stackrel{{\mathrm{def}}}{{=}}\;\sum_{k=0}^{\infty}a_{ nk}\,x^{k}\;, \tag{2.20}\]
where the sum is actually finite because \(A\) is row-finite. More generally, let us define its _binomial partial row-generating polynomials_
\[A_{n,k}(x) \stackrel{{\mathrm{def}}}{{=}} \sum_{\ell=k}^{\infty}a_{n\ell}\,{\ell\choose k}\,x^{\ell-k} \tag{2.21a}\] \[= \frac{1}{k!}\,\frac{d^{k}}{dx^{k}}\,A_{n}(x)\;. \tag{2.21b}\]
(Note that the operator \((1/k!)\,d^{k}\!/\!dx^{k}\) has a well-defined action on the polynomial ring \(R[x]\) even if \(R\) does not contain the rationals, since \((1/k!)(d^{k}\!/\!dx^{k})x^{n}={n\choose k}x^{n-k}\).)
The polynomials \(A_{n,k}(x)\) are the matrix elements of the _binomial row-generating matrix_\(AB_{x}\):
\[(AB_{x})_{nk}\;=\;A_{n,k}(x)\;, \tag{2.22}\]
where \(B_{x}=B_{x,1}\) is the weighted binomial matrix defined in (1.6). The zeroth column of the matrix \(AB_{x}\) consists of the row-generating polynomials \(A_{n}(x)=A_{n,0}(x)\).
In this paper the matrix \(A\) will be either the matrix \(\mathsf{T}=(t_{n,k})_{n,k\geq 0}\) or one of its polynomial generalizations.
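As a small illustration (ours) of (2.20)-(2.22) for the tree matrix \(\mathsf{T}\): the zeroth column of the truncated product \(\mathsf{T}B_{x}\) reproduces the row-generating polynomials \(T_{n}(x)=(x+n)^{n}\) of (1.5).

```python
import sympy as sp

x = sp.symbols('x')
N = 6
T = sp.Matrix(N, N, lambda n, k: sp.binomial(n, k) * sp.Integer(n)**(n - k)
              if k <= n else 0)                           # t_{n,k} of (1.1)
Bx = sp.Matrix(N, N, lambda i, j: sp.binomial(i, j) * x**(i - j))   # (1.6)
TBx = T * Bx
for n in range(N):
    assert sp.expand(TBx[n, 0] - (x + n)**n) == 0
print([sp.factor(TBx[n, 0]) for n in range(N)])   # [1, x + 1, (x + 2)**2, ...]
```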
We can now explain the method that we will use to prove Theorems 1.2 and 1.4:
**Proposition 2.15**.: _Let \(P\) be a row-finite matrix with entries in a partially ordered commutative ring \(R\), and let \(A=\mathcal{O}(P)\)._
1. _If_ \(P\) _is totally positive of order_ \(r\)_, then so is_ \(A\)_._
2. _If the matrix_ \(B_{x}^{-1}PB_{x}\) _is totally positive of order_ \(r\) _in the ring_ \(R[x]\) _equipped with the coefficientwise order, then the sequence_ \((A_{n}(x))_{n\geq 0}\) _of row-generating polynomials is Hankel-totally positive of order_ \(r\) _in the ring_ \(R[x]\) _equipped with the coefficientwise order._
Indeed, (a) is just a restatement of Theorem 2.8; and (b) is an immediate consequence of Lemma 2.6 and Theorem 2.13 together with the fact that the zeroth column of the matrix \(AB_{x}\) consists of the row-generating polynomials \(A_{n}(x)\).
### Riordan arrays
Let \(R\) be a commutative ring, and let \(f(t)=\sum_{n=0}^{\infty}f_{n}t^{n}\) and \(g(t)=\sum_{n=1}^{\infty}g_{n}t^{n}\) be formal power series with coefficients in \(R\); note that \(g\) has zero constant term (for clarity we set \(g_{0}=0\)). Then the (ordinary) _Riordan array_ associated to the pair \((f,g)\) is the infinite lower-triangular matrix \(\mathcal{R}(f,g)=(\mathcal{R}(f,g)_{nk})_{n,k\geq 0}\) defined by
\[\mathcal{R}(f,g)_{nk}\;=\;[t^{n}]\,f(t)g(t)^{k}\;. \tag{2.23}\]
That is, the \(k\)th column of \(\mathcal{R}(f,g)\) has ordinary generating function \(f(t)g(t)^{k}\). Note that \(\mathcal{R}(f,g)\) is invertible in the ring \(R_{\text{lt}}^{\mathbb{N}\times\mathbb{N}}\) of lower-triangular matrices \(\iff\) the diagonal elements \(\mathcal{R}(f,g)_{nn}=f_{0}g_{1}^{n}\) are invertible elements of the ring \(R\iff\)\(f_{0}\) and \(g_{1}\) are invertible elements of \(R\iff\)\(f(t)\) has a multiplicative inverse \(f(t)^{-1}\) in the ring \(R[[t]]\) and \(g(t)\) has a compositional inverse \(\bar{g}(t)\) in the ring \(R[[t]]\).
**Warning.** We have interchanged the letters \(f\) and \(g\) compared to the notation of Shapiro _et al._[69, 70] and Barry [3]. This notation seems to us more natural, but the reader should be warned.
We shall use an easy but important result that is sometimes called the _fundamental theorem of Riordan arrays_ (FTRA):
**Lemma 2.16** (Fundamental theorem of Riordan arrays).: _Let \(\mathbf{b}=(b_{n})_{n\geq 0}\) be a sequence with ordinary generating function \(B(t)=\sum_{n=0}^{\infty}b_{n}t^{n}\). Considering \(\mathbf{b}\) as a column vector and letting \(\mathcal{R}(f,g)\) act on it by matrix multiplication, we obtain a sequence \(\mathcal{R}(f,g)\mathbf{b}\) whose ordinary generating function is \(f(t)\,B(g(t))\)._
Proof. We compute
\[\sum_{k=0}^{n}{\cal R}(f,g)_{nk}\,b_{k} = \sum_{k=0}^{\infty}[t^{n}]\,f(t)g(t)^{k}\,b_{k} \tag{2.24a}\] \[= [t^{n}]\,f(t)\sum_{k=0}^{\infty}b_{k}\,g(t)^{k}\] (2.24b) \[= [t^{n}]\,f(t)\,B(g(t))\;. \tag{2.24c}\]
\(\Box\)
We can now determine the production matrix of a Riordan array \({\cal R}(f,g)\). Let \(\boldsymbol{a}=(a_{n})_{n\geq 0}\) and \(\boldsymbol{z}=(z_{n})_{n\geq 0}\) be sequences in a commutative ring \(R\), with ordinary generating functions \(A(t)=\sum_{n=0}^{\infty}a_{n}t^{n}\) and \(Z(t)=\sum_{n=0}^{\infty}z_{n}t^{n}\). We then define the _AZ matrix_ associated to the sequences \(\boldsymbol{a}\) and \(\boldsymbol{z}\) by
\[\mathrm{AZ}(\boldsymbol{a},\boldsymbol{z})_{ij}\;=\;\begin{cases}z_{i}&\text{ if }j=0\\ a_{i-j+1}&\text{if }j\geq 1\end{cases} \tag{2.25}\]
or in other words
\[\mathrm{AZ}(\boldsymbol{a},\boldsymbol{z})\;=\;\begin{bmatrix}z_{0}&a_{0}&0& 0&0\\ z_{1}&a_{1}&a_{0}&0&0\\ z_{2}&a_{2}&a_{1}&a_{0}&0\\ z_{3}&a_{3}&a_{2}&a_{1}&a_{0}\\ \vdots&\vdots&\cdots&&\ddots\end{bmatrix}\;. \tag{2.26}\]
We also write \(\mathrm{AZ}(A,Z)\) as a synonym for \(\mathrm{AZ}(\boldsymbol{a},\boldsymbol{z})\). It is convenient to define also
\[Y(t)\;=\;\frac{A(t)}{A(t)-tZ(t)}\;, \tag{2.27}\]
which is well-defined if \(a_{0}\) is invertible in \(R\). We then have [42][3, pp. 148-149][70, Theorems 4.15 and 6.29]9:
Footnote 9: This theorem is also essentially contained in [58, Theorems 3.2, 3.6 and 3.7], though those authors do not use the terminology of production matrices.
**Theorem 2.17** (Production matrices of Riordan arrays).: _Let \(L\) be a lower-triangular matrix (with entries in a commutative ring \(R\)) with invertible diagonal entries and \(L_{00}=1\), and let \(P=L^{-1}\Delta L\) be its production matrix. Then \(L\) is a Riordan array if and only if \(P\) is an AZ-matrix._
_More precisely, \(L={\cal R}(f,g)\) if and only if \(P=\mathrm{AZ}(\boldsymbol{a},\boldsymbol{z})\), where the generating functions \(\big{(}f(t),g(t)\big{)}\) and \(\big{(}A(t),Z(t)\big{)}\) are connected by_
\[g(t)\;=\;t\,A(g(t))\;,\qquad f(t)\;=\;\frac{1}{1\,-\,tZ(g(t))}\;=\;Y(g(t)) \tag{2.28}\]
_or equivalently_
\[A(t)\;=\;\frac{t}{\bar{g}(t)}\;,\qquad Z(t)\;=\;\frac{f(\bar{g}(t))\,-\,1}{ \bar{g}(t)\,f(\bar{g}(t))}\;. \tag{2.29}\]
Proof[42, p. 18]. Suppose that \(L={\cal R}(f,g)\). The hypotheses on \(L\) imply that \(f_{0}=1\) and that \(g_{1}\) is invertible in \(R\); so \(g(t)\) has a compositional inverse \(\bar{g}(t)\). Now let \((p_{k}(t))_{k\geq 0}\) be the column generating functions of \(P=L^{-1}\Delta L\). Applying the FTRA to each column of \(P\), we see that \({\cal R}(f,g)P\) is a matrix whose column generating functions are \(\big{(}f(t)\,p_{k}(g(t))\big{)}_{k\geq 0}\). On the other hand, \(\Delta\,{\cal R}(f,g)\) is the matrix \({\cal R}(f,g)\) with its zeroth row removed and all other rows shifted upwards, so it has column generating functions \([f(t)-1]/t\) for column \(0\) and \(f(t)g(t)^{k}/t\) for columns \(k\geq 1\). Comparing these two results, we see that \(\Delta\,{\cal R}(f,g)={\cal R}(f,g)\,P\) if and only if
\[f(t)\,p_{0}(g(t))\;=\;\frac{f(t)-1}{t} \tag{2.30}\]
and
\[p_{k}(g(t))\;=\;\frac{g(t)^{k}}{t}\qquad\mbox{for $k\geq 1$}\;. \tag{2.31}\]
The latter equation can be rewritten as
\[p_{k}(t)\;=\;\frac{t^{k}}{\bar{g}(t)}\;, \tag{2.32}\]
which means that the columns \(k\geq 1\) of the production matrix \(P\) are identical with those of AZ\((\mathbf{a},\mathbf{z})\), when \(\mathbf{a}\) is given by (2.29). And (2.30) then states that column \(0\) of the production matrix \(P\) is identical with that of AZ\((\mathbf{a},\mathbf{z})\), when \(\mathbf{z}\) is given by (2.29). Therefore, \(L={\cal R}(f,g)\) implies that \(L^{-1}\Delta L=\mbox{AZ}(\mathbf{a},\mathbf{z})\) where \(\mathbf{a}\) and \(\mathbf{z}\) are given by (2.29).
Conversely, suppose that \(P=\mbox{AZ}(\mathbf{a},\mathbf{z})\). Let \(g(t)\) be the unique formal power series in \(R[[t]]\) with \(g(0)=0\) that satisfies the functional equation \(g(t)=t\,A(g(t))\), and then let \(f(t)=1/[1-tZ(g(t))]\). Then running the foregoing computation backwards shows that \(\Delta\,{\cal R}(f,g)={\cal R}(f,g)\,P\). Since by hypothesis \(L_{00}=1\), it follows that \(L={\cal O}(P)={\cal R}(f,g)\). \(\Box\)
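A concrete instance of Theorem 2.17 (our illustration): for \(f(t)=1/(1-t)\) and \(g(t)=t/(1-t)\), the Riordan array \({\cal R}(f,g)\) is the binomial matrix, and (2.29) gives \(A(t)=1+t\) and \(Z(t)=1\), so the production matrix \(\mathrm{AZ}(\boldsymbol{a},\boldsymbol{z})\) is \(I+\Delta\), in agreement with Example 2.10 at \(x=y=1\). The sympy sketch below checks (2.28)/(2.29) symbolically:

```python
import sympy as sp

t = sp.symbols('t')
f, g = 1/(1 - t), t/(1 - t)
gbar = t/(1 + t)                          # compositional inverse of g
assert sp.simplify(g.subs(t, gbar) - t) == 0
A = sp.simplify(t / gbar)                 # first formula of (2.29)
Z = sp.simplify((f.subs(t, gbar) - 1) / (gbar * f.subs(t, gbar)))  # second
assert A == 1 + t and Z == 1
# and the functional equations (2.28) hold:
assert sp.simplify(g - t * A.subs(t, g)) == 0
assert sp.simplify(f - 1/(1 - t * Z.subs(t, g))) == 0
print("A(t) =", A, "  Z(t) =", Z)
```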
### Exponential Riordan arrays
Let \(R\) be a commutative ring containing the rationals, and let \(F(t)=\sum_{n=0}^{\infty}f_{n}t^{n}/n!\) and \(G(t)=\sum_{n=1}^{\infty}g_{n}t^{n}/n!\) be formal power series with coefficients in \(R\); we set \(g_{0}=0\). Then the _exponential Riordan array_[3, 22, 24, 70] associated to the pair \((F,G)\) is the infinite lower-triangular matrix \({\cal R}[F,G]=({\cal R}[F,G]_{nk})_{n,k\geq 0}\) defined by
\[{\cal R}[F,G]_{nk}\;=\;\frac{n!}{k!}\,[t^{n}]\,F(t)G(t)^{k}\;. \tag{2.33}\]
That is, the \(k\)th column of \({\cal R}[F,G]\) has exponential generating function \(F(t)G(t)^{k}/k!\). Equivalently, the bivariate exponential generating function of \({\cal R}[F,G]\) is
\[\sum_{n,k=0}^{\infty}{\cal R}[F,G]_{nk}\,\frac{t^{n}}{n!}\,x^{k}\;=\;F(t)\,e^ {xG(t)}\;. \tag{2.34}\]
The diagonal elements of \({\cal R}[F,G]\) are \({\cal R}[F,G]_{nn}=f_{0}g_{1}^{n}\), so the matrix \({\cal R}[F,G]\) is invertible in the ring \(R_{\rm lt}^{\mathbb{N}\times\mathbb{N}}\) of lower-triangular matrices if and only if \(f_{0}\) and \(g_{1}\) are invertible in \(R\).
Please note that the exponential Riordan array \(\mathcal{R}[F,G]\) is nothing other than a diagonal similarity transform of the ordinary Riordan array \(\mathcal{R}(F,G)\) associated to the same power series \(F\) and \(G\): that is,
\[\mathcal{R}[F,G]\;=\;D\,\mathcal{R}(F,G)\,D^{-1} \tag{2.35}\]
where \(D=\mathrm{diag}\big{(}(n!)_{n\geq 0}\big{)}\).
**Lemma 2.18** (Fundamental theorem of exponential Riordan arrays).: _Let \(\boldsymbol{b}=(b_{n})_{n\geq 0}\) be a sequence with exponential generating function \(B(t)=\sum_{n=0}^{\infty}b_{n}t^{n}/n!\). Considering \(\boldsymbol{b}\) as a column vector and letting \(\mathcal{R}[F,G]\) act on it by matrix multiplication, we obtain a sequence \(\mathcal{R}[F,G]\boldsymbol{b}\) whose exponential generating function is \(F(t)\,B(G(t))\)._
Proof. We compute
\[\sum_{k=0}^{n}\mathcal{R}[F,G]_{nk}\,b_{k} = \sum_{k=0}^{\infty}\frac{n!}{k!}\,[t^{n}]\,F(t)G(t)^{k}\,b_{k} \tag{2.36a}\] \[= n!\;[t^{n}]\,F(t)\sum_{k=0}^{\infty}b_{k}\,\frac{G(t)^{k}}{k!}\] (2.36b) \[= n!\;[t^{n}]\,F(t)\,B(G(t))\;. \tag{2.36c}\]
\(\square\)
Let us now consider the product of two exponential Riordan arrays \(\mathcal{R}[F_{1},G_{1}]\) and \(\mathcal{R}[F_{2},G_{2}]\). Applying the FTERA to the \(k\)th column of \(\mathcal{R}[F_{2},G_{2}]\), whose exponential generating function is \(F_{2}(t)G_{2}(t)^{k}/k!\), we readily obtain:
**Lemma 2.19** (Product of two exponential Riordan arrays).: _We have_
\[\mathcal{R}[F_{1},G_{1}]\,\mathcal{R}[F_{2},G_{2}]\;=\;\mathcal{R}[(F_{2} \circ G_{1})F_{1},\,G_{2}\circ G_{1}]\;. \tag{2.37}\]
In particular, if we let \(\mathcal{R}[F_{2},G_{2}]\) be the weighted binomial matrix \(B_{\xi}=\mathcal{R}[e^{\xi t},t]\) defined by (1.6), we obtain:
**Corollary 2.20** (Binomial row-generating matrix of an exponential Riordan array).: _We have_
\[\mathcal{R}[F,G]\,B_{\xi}\;=\;\mathcal{R}[e^{\xi G}F,G]\;. \tag{2.38}\]
Similarly, letting \(\mathcal{R}[F_{1},G_{1}]\) be the weighted binomial matrix \(B_{\xi}\), we obtain:
**Corollary 2.21** (Left binomial transform of an exponential Riordan array).: _We have_
\[B_{\xi}\,\mathcal{R}[F,G]\;=\;\mathcal{R}[e^{\xi t}F,G]\;. \tag{2.39}\]
We can now determine the production matrix of an exponential Riordan array \(\mathcal{R}[F,G]\). Let \(\boldsymbol{a}=(a_{n})_{n\geq 0}\) and \(\boldsymbol{z}=(z_{n})_{n\geq 0}\) be sequences in a commutative ring \(R\), with ordinary generating functions \(A(s)=\sum_{n=0}^{\infty}a_{n}s^{n}\) and \(Z(s)=\sum_{n=0}^{\infty}z_{n}s^{n}\). We then define the _exponential AZ matrix_ associated to the sequences \(\boldsymbol{a}\) and \(\boldsymbol{z}\) by
\[\mathrm{EAZ}(\boldsymbol{a},\boldsymbol{z})_{nk}\;=\;\frac{n!}{k!}\,(z_{n-k} \,+\,k\,a_{n-k+1})\;, \tag{2.40}\]
or equivalently (if \(R\) contains the rationals)
\[{\rm EAZ}(\boldsymbol{a},\boldsymbol{z})\ =\ D\,T_{\infty}(\boldsymbol{z})\,D^{-1} \,+\,D\,T_{\infty}(\boldsymbol{a})\,D^{-1}\,\Delta \tag{2.41}\]
where \(D={\rm diag}\big{(}(n!)_{n\geq 0}\big{)}\). We also write \({\rm EAZ}(A,Z)\) as a synonym for \({\rm EAZ}(\boldsymbol{a},\boldsymbol{z})\).
**Remark.** We have the exponential generating functions
\[\sum_{n,k=0}^{\infty}{\rm EAZ}(\boldsymbol{a},\boldsymbol{z})_{nk}\,\frac{s^{ n}}{n!}\,u^{k}\ =\ e^{su}\,\big{[}Z(s)\,+\,uA(s)\big{]} \tag{2.42}\]
and
\[\sum_{n,k=0}^{\infty}{\rm EAZ}(\boldsymbol{a},\boldsymbol{z})_{nk}\,\frac{s^{ n}}{n!}\,k!\,u^{k}\ =\ \frac{Z(s)}{1-su}\,+\,\frac{u\,A(s)}{(1-su)^{2}}\;. \tag{2.43}\]
\(\blacksquare\)
**Theorem 2.22** (Production matrices of exponential Riordan arrays).: _Let \(L\) be a lower-triangular matrix (with entries in a commutative ring \(R\) containing the rationals) with invertible diagonal entries and \(L_{00}=1\), and let \(P=L^{-1}\Delta L\) be its production matrix. Then \(L\) is an exponential Riordan array if and only if \(P\) is an exponential AZ matrix._
_More precisely, \(L={\cal R}[F,G]\) if and only if \(P={\rm EAZ}(A,Z)\), where the generating functions \(\big{(}F(t),G(t)\big{)}\) and \(\big{(}A(s),Z(s)\big{)}\) are connected by_
\[G^{\prime}(t)\;=\;A(G(t))\;,\qquad\frac{F^{\prime}(t)}{F(t)}\;=\;Z(G(t)) \tag{2.44}\]
_or equivalently_
\[A(s)\;=\;G^{\prime}(\bar{G}(s))\;,\qquad Z(s)\;=\;\frac{F^{\prime}(\bar{G}(s) )}{F(\bar{G}(s))} \tag{2.45}\]
_where \(\bar{G}(s)\) is the compositional inverse of \(G(t)\)._
Proof (mostly contained in [3, pp. 217-218]). Suppose that \(L={\cal R}[F,G]\). The hypotheses on \(L\) imply that \(f_{0}=1\) and that \(g_{1}\) is invertible in \(R\); so \(G(t)\) has a compositional inverse \(\bar{G}(s)\). Now let \(P=(p_{nk})_{n,k\geq 0}\) be a matrix; its column exponential generating functions are, by definition, \(P_{k}(t)=\sum_{n=0}^{\infty}p_{nk}\,t^{n}/n!\). Applying the FTERA to each column of \(P\), we see that \({\cal R}[F,G]P\) is a matrix whose column exponential generating functions are \(\big{(}F(t)\,P_{k}(G(t))\big{)}_{k\geq 0}\). On the other hand, \(\Delta\,{\cal R}[F,G]\) is the matrix \({\cal R}[F,G]\) with its zeroth row removed and all other rows shifted upwards, so it has column exponential generating functions
\[\frac{d}{dt}\,\big{(}F(t)\,G(t)^{k}/k!\big{)}\ =\ \frac{1}{k!}\,\Big{[}F^{\prime}(t) \,G(t)^{k}\:+\:k\,F(t)\,G(t)^{k-1}\,G^{\prime}(t)\Big{]}\;. \tag{2.46}\]
Comparing these two results, we see that \(\Delta\,{\cal R}[F,G]={\cal R}[F,G]\,P\) if and only if
\[P_{k}(G(t))\;=\;\frac{1}{k!}\,\frac{F^{\prime}(t)\,G(t)^{k}\:+\:k\,F(t)\,G(t)^ {k-1}\,G^{\prime}(t)}{F(t)}\;, \tag{2.47}\]
or in other words
\[P_{k}(t)\;=\;\frac{1}{k!}\left[\frac{F^{\prime}(\bar{G}(t))}{F(\bar{G}(t))}\,t^{k }\;+\;k\,t^{k-1}\,G^{\prime}(\bar{G}(t))\right]\,. \tag{2.48}\]
Therefore
\[p_{nk} = \frac{n!}{k!}\left[t^{n}\right]\left[\frac{F^{\prime}(\bar{G}(t)) }{F(\bar{G}(t))}\,t^{k}\;+\;k\,t^{k-1}\,G^{\prime}(\bar{G}(t))\right] \tag{2.49a}\] \[= \frac{n!}{k!}\left[\left[t^{n-k}\right]\frac{F^{\prime}(\bar{G}(t ))}{F(\bar{G}(t))}\;+\;k\left[t^{n-k+1}\right]G^{\prime}(\bar{G}(t))\right]\] (2.49b) \[= \frac{n!}{k!}\left(z_{n-k}\,+\,k\,a_{n-k+1}\right) \tag{2.49c}\]
where \(\mathbf{a}=(a_{n})_{n\geq 0}\) and \(\mathbf{z}=(z_{n})_{n\geq 0}\) are given by (2.45).
Conversely, suppose that \(P=\mbox{\rm EAZ}(A,Z)\). Define \(F(t)\) and \(G(t)\) as the unique solutions (in the formal-power-series ring \(R[[t]]\)) of the differential equations (2.44) with initial conditions \(F(0)=1\) and \(G(0)=0\). Then running the foregoing computation backwards shows that \(\Delta\,{\cal R}[F,G]={\cal R}[F,G]\,P\). Since by hypothesis \(L_{00}=1\), it follows that \(L={\cal R}[F,G]\). \(\;\Box\)
We refer to \(A(s)=\sum_{n=0}^{\infty}a_{n}s^{n}\) and \(Z(s)=\sum_{n=0}^{\infty}z_{n}s^{n}\) as the _A-series_ and _Z-series_ associated to the exponential Riordan array \({\cal R}[F,G]\).
**Remark.** The identity \(A(s)=G^{\prime}(\bar{G}(s))\) can equivalently be written as \(A(s)=1/(\bar{G})^{\prime}(s)\). This is useful in comparing our work with that of Zhu [87, 89], who uses the latter formulation. \(\;\blacksquare\)
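As a concrete illustration of Theorem 2.22, one can check by machine (a sketch under our own conventions, assuming Python with SymPy; not part of the formal development) that the production matrix of \(L={\cal R}[e^{t},e^{t}-1]\) is \({\rm EAZ}(A,Z)\) with \(A(s)=1+s\) and \(Z(s)=1\), as (2.45) predicts, since here \(\bar{G}(s)=\log(1+s)\).

```python
# Sanity check of Theorem 2.22 for L = R[e^t, e^t - 1]: here
# G-bar(s) = log(1+s), so A(s) = G'(G-bar(s)) = 1 + s and Z(s) = 1.
import sympy as sp

t = sp.symbols('t')
N = 7

L = sp.zeros(N, N)
for k in range(N):
    col = sp.expand(sp.series(sp.exp(t) * (sp.exp(t) - 1)**k, t, 0, N).removeO())
    for n in range(k, N):
        L[n, k] = sp.factorial(n) / sp.factorial(k) * col.coeff(t, n)

Delta = sp.Matrix(N, N, lambda i, j: 1 if j == i + 1 else 0)
P = L.inv() * Delta * L          # production matrix; exact except in row N-1

a = [1, 1] + [0] * N             # coefficients of A(s) = 1 + s
z = [1] + [0] * (N + 1)          # coefficients of Z(s) = 1
def eaz(n, k):                   # eq. (2.40); out-of-range coefficients are 0
    zz = z[n - k] if n - k >= 0 else 0
    aa = a[n - k + 1] if n - k + 1 >= 0 else 0
    return sp.factorial(n) / sp.factorial(k) * (zz + k * aa)

print(P[:N-1, :] == sp.Matrix(N, N, eaz)[:N-1, :])   # True
```

The last row of the truncated product is discarded, since it is contaminated by the finite truncation.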
Let us now show how to rewrite the production matrix (2.41) in a new way, which will be useful in what follows. Define
\[\Psi(s)\;\stackrel{{\rm def}}{{=}}\;F(\bar{G}(s))\;, \tag{2.50}\]
so that \(F(t)=\Psi(G(t))\) and \(\Psi(0)=F(0)=1\). Then a simple computation using (2.44)/(2.45) shows that
\[Z(s)\;=\;\frac{\Psi^{\prime}(s)}{\Psi(s)}\,A(s)\;. \tag{2.51}\]
And let us define \(\Phi(s)\stackrel{{\rm def}}{{=}}A(s)/\Psi(s)\). Then the pair \((\Phi,\Psi)\) is related to the pair \((A,Z)\) by
\[A(s) = \Phi(s)\,\Psi(s) \tag{2.52a}\] \[Z(s) = \Phi(s)\,\Psi^{\prime}(s) \tag{2.52b}\]
And conversely, given any pair \((A,Z)\) of formal power series (over a commutative ring \(R\) containing the rationals) such that \(A(0)\) is invertible in \(R\), there is a unique pair \((\Phi,\Psi)\) satisfying (2.52) together with the normalization \(\Psi(0)=1\), namely
\[\Psi(s) = \exp\!\left[\int\frac{Z(s)}{A(s)}\,ds\right] \tag{2.53a}\] \[\Phi(s) = A(s)\,\exp\!\left[-\int\frac{Z(s)}{A(s)}\,ds\right] \tag{2.53b}\]
[Here the integral of a formal power series is defined by
\[\int\biggl{(}\sum_{n=0}^{\infty}\alpha_{n}s^{n}\biggr{)}\,ds\ \stackrel{{ \rm def}}{{=}}\ \sum_{n=0}^{\infty}\alpha_{n}\,\frac{s^{n+1}}{n+1}. \tag{2.54}\]
It is the unique formal power series with zero constant term whose derivative is the given series.] We refer to \(\Phi(s)\) and \(\Psi(s)\) as the \(\boldsymbol{\Phi}\)_-series_ and \(\boldsymbol{\Psi}\)_-series_ associated to the exponential Riordan array \(\mathcal{R}[F,G]\).
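As a quick illustration of (2.53) (a sketch, not from the paper; it assumes SymPy and works with truncated series, using the termwise integral (2.54)), take \(A(s)=e^{s}/(1-s)\) and \(Z(s)=e^{s}/(1-s)^{2}\), which will reappear in Corollary 4.2; one recovers \(\Psi(s)=1/(1-s)\) and \(\Phi(s)=e^{s}\).

```python
# Recovering (Phi, Psi) from (A, Z) via (2.53), order by order.
import sympy as sp

s = sp.symbols('s')
N = 8
A = sp.exp(s) / (1 - s)
Z = sp.exp(s) / (1 - s)**2

ratio = sp.expand(sp.series(Z / A, s, 0, N).removeO())  # Z/A = 1/(1-s) + O(s^N)
integ = sp.integrate(ratio, s)     # termwise antiderivative, zero constant term
Psi = sp.series(sp.exp(integ), s, 0, N).removeO()
Phi = sp.series(A * sp.exp(-integ), s, 0, N).removeO()

print(sp.expand(Psi - sp.series(1/(1 - s), s, 0, N).removeO()))   # 0
print(sp.expand(Phi - sp.series(sp.exp(s), s, 0, N).removeO()))   # 0
```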
Rewriting the production matrix (2.41) in terms of the pair \((\Phi,\Psi)\) provides a beautiful -- and as we shall see, very useful -- factorization. For reasons that shall become clear shortly (see Lemma 2.27 below), it is convenient to study the more general quantity \(\mathrm{EAZ}(A,Z+\xi A)\):
**Proposition 2.23**.: _Let \(R\) be a commutative ring containing the rationals, let \(\Phi(s)=\sum\limits_{n=0}^{\infty}\phi_{n}s^{n}\) and \(\Psi(s)=\sum\limits_{n=0}^{\infty}\psi_{n}s^{n}\) be formal power series with coefficients in \(R\), and let \(A(s)\) and \(Z(s)\) be defined by (2.52). Now let \(\xi\) be any element of \(R\) (or an indeterminate). Then_
\[\mathrm{EAZ}(A,Z+\xi A)\ =\ [D\,T_{\infty}(\boldsymbol{\phi})\,D^{-1}]\,( \Delta\,+\,\xi I)\,[D\,T_{\infty}(\boldsymbol{\psi})\,D^{-1}] \tag{2.55}\]
_where \(D=\mathrm{diag}\bigl{(}(n!)_{n\geq 0}\bigr{)}\)._
To prove Proposition 2.23, we need a lemma. Given a sequence \(\boldsymbol{\psi}=(\psi_{n})_{n\geq 0}\) in \(R\) with ordinary generating function \(\Psi(s)=\sum_{n=0}^{\infty}\psi_{n}s^{n}\), we define \(\boldsymbol{\psi}^{\prime}=(\psi^{\prime}_{n})_{n\geq 0}\) by \(\psi^{\prime}_{n}=(n+1)\psi_{n+1}\), so that \(\Psi^{\prime}(s)=\sum_{n=0}^{\infty}\psi^{\prime}_{n}s^{n}\). We then have:
**Lemma 2.24**.: _Let \(\boldsymbol{\psi}\) and \(\boldsymbol{\psi}^{\prime}\) be as above, and let \(D=\mathrm{diag}\bigl{(}(n!)_{n\geq 0}\bigr{)}\). Then_
\[T_{\infty}(\boldsymbol{\psi}^{\prime})\,+\,T_{\infty}(\boldsymbol{\psi})\,D^ {-1}\Delta D\ =\ D^{-1}\Delta D\,T_{\infty}(\boldsymbol{\psi})\;. \tag{2.56}\]
Proof. All three matrices in (2.56) are lower-Hessenberg, and their \((n,k)\) matrix elements are (for \(0\leq k\leq n+1\))
\[(n-k+1)\psi_{n-k+1}\,+\,k\psi_{n-(k-1)}\ =\ (n+1)\psi_{(n+1)-k}\;. \tag{2.57}\]
\(\square\)
**Remarks.** 1. The identity (2.56) can also be written as \([D^{-1}\Delta D,\,T_{\infty}(\boldsymbol{\psi})]=T_{\infty}(\boldsymbol{\psi }^{\prime})\), where \([A,B]\stackrel{{\rm def}}{{=}}AB-BA\) is the matrix commutator. Thus, \([D^{-1}\Delta D,\,\cdot\,]\) is the "differentiation operator" for Toeplitz matrices. Note that \(D^{-1}\Delta D\) is the matrix with \(1,2,3,\dots\) on the superdiagonal and zeroes elsewhere.
2. Lemma 2.24 was found independently by Ding, Mu and Zhu [25, proof of Theorem 2.1].
Proof of Proposition 2.23. From (2.41) we have
\[\mathrm{EAZ}(A,Z+\xi A)\ =\ D\,T_{\infty}(\boldsymbol{z}+\xi\boldsymbol{a})\,D^ {-1}\ +\ D\,T_{\infty}(\boldsymbol{a})\,D^{-1}\,\Delta. \tag{2.58}\]
The definitions (2.52) imply
\[T_{\infty}(\boldsymbol{a}) = T_{\infty}(\boldsymbol{\phi})\,T_{\infty}(\boldsymbol{\psi}) \tag{2.59a}\] \[T_{\infty}(\boldsymbol{z}+\xi\boldsymbol{a}) = T_{\infty}(\boldsymbol{\phi})\,T_{\infty}(\boldsymbol{\psi}^{ \prime})\,+\,\xi\,T_{\infty}(\boldsymbol{\phi})\,T_{\infty}(\boldsymbol{\psi}) \tag{2.59b}\]
Hence
\[\mathrm{EAZ}(A,Z+\xi A)\] \[=\;D\left[T_{\infty}(\boldsymbol{\phi})\,T_{\infty}(\boldsymbol{ \psi}^{\prime})\,+\,\xi\,T_{\infty}(\boldsymbol{\phi})\,T_{\infty}(\boldsymbol {\psi})\right]D^{-1}\,+\,D\,T_{\infty}(\boldsymbol{\phi})\,T_{\infty}( \boldsymbol{\psi})\,D^{-1}\,\Delta \tag{2.60a}\] \[=\;D\,T_{\infty}(\boldsymbol{\phi})\left[\xi T_{\infty}( \boldsymbol{\psi})\,+\,T_{\infty}(\boldsymbol{\psi}^{\prime})\,+\,T_{\infty} (\boldsymbol{\psi})\,D^{-1}\Delta D\right]D^{-1}\] (2.60b) \[=\;D\,T_{\infty}(\boldsymbol{\phi})\left[\xi T_{\infty}( \boldsymbol{\psi})\,+\,D^{-1}\Delta D\,T_{\infty}(\boldsymbol{\psi})\right]D ^{-1}\] (2.60c) \[=\;[D\,T_{\infty}(\boldsymbol{\phi})\,D^{-1}]\,(\Delta\,+\,\xi I) \,[D\,T_{\infty}(\boldsymbol{\psi})\,D^{-1}]\;, \tag{2.60d}\]
where the next-to-last step used Lemma 2.24. \(\square\)
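For concreteness, here is a small machine verification of the factorization (2.55), taking \(\xi=0\), \(\Phi(s)=e^{s}\) and \(\Psi(s)=1/(1-s)\). This is a sketch with our own helper names (`coeffs`, `eaz`, `T`), assuming SymPy; it is not part of the formal development.

```python
# Checking the factorization (2.55) with xi = 0 on a truncated block:
# phi_n = 1/n!  (Phi = e^s)  and  psi_n = 1  (Psi = 1/(1-s)), so that
# A = Phi*Psi = e^s/(1-s) and Z = Phi*Psi' = e^s/(1-s)^2.
import sympy as sp

s = sp.symbols('s')
N = 7

def coeffs(expr):
    p = sp.expand(sp.series(expr, s, 0, N + 1).removeO())
    return [p.coeff(s, n) for n in range(N + 1)]

Phi, Psi = sp.exp(s), 1 / (1 - s)
phi, psi = coeffs(Phi), coeffs(Psi)
a = coeffs(Phi * Psi)                  # A-series coefficients
z = coeffs(Phi * sp.diff(Psi, s))      # Z-series coefficients

D     = sp.diag(*[sp.factorial(n) for n in range(N)])
Delta = sp.Matrix(N, N, lambda i, j: 1 if j == i + 1 else 0)
T     = lambda c: sp.Matrix(N, N, lambda i, j: c[i - j] if i >= j else 0)

def eaz(n, k):                         # eq. (2.40)
    zz = z[n - k] if n - k >= 0 else 0
    aa = a[n - k + 1] if n - k + 1 >= 0 else 0
    return sp.factorial(n) / sp.factorial(k) * (zz + k * aa)

lhs = sp.Matrix(N, N, eaz)
rhs = (D * T(phi) * D.inv()) * Delta * (D * T(psi) * D.inv())
print(lhs[:N-1, :] == rhs[:N-1, :])    # True (last row lost to truncation)
```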
As an immediate consequence of Proposition 2.23, we have:
**Corollary 2.25**.: _Fix \(1\leq r\leq\infty\). Let \(R\) be a partially ordered commutative ring containing the rationals, and let \(\boldsymbol{\phi}=(\phi_{n})_{n\geq 0}\) and \(\boldsymbol{\psi}=(\psi_{n})_{n\geq 0}\) be sequences in \(R\) that are Toeplitz-totally positive of order \(r\). Let \(\xi\) be an indeterminate. With the definitions (2.52), the matrix \(\mathrm{EAZ}(A,Z+\xi A)\) is totally positive of order \(r\) in the ring \(R[\xi]\) equipped with the coefficientwise order._
Proof. By Lemma 2.1, the matrix \(\Delta+\xi I\) is totally positive (of order \(\infty\)) in the ring \(\mathbb{Z}[\xi]\) equipped with the coefficientwise order. By hypothesis the matrices \(T_{\infty}(\boldsymbol{\phi})\) and \(T_{\infty}(\boldsymbol{\psi})\) are totally positive of order \(r\) in the ring \(R\); so Lemma 2.30 implies that also \(D\,T_{\infty}(\boldsymbol{\phi})\,D^{-1}\) and \(D\,T_{\infty}(\boldsymbol{\psi})\,D^{-1}\) are totally positive of order \(r\) in \(R\). The result then follows from Proposition 2.23 and the Cauchy-Binet formula. \(\square\)
**Remark.** The hypothesis that the ring \(R\) contains the rationals can be removed, by using Lemma 2.30 (see Section 2.7) together with the reasoning used in the proof of Theorem 1.8 (see Section 5.3). \(\blacksquare\)
It is worth observing that the converse to Corollary 2.25 is false:
**Example 2.26**.: Let \(A(s)=1+s\) and \(Z(s)=(\lambda+\mu)+\mu s\). Then \(P=\mathrm{EAZ}(A,Z)\) is the tridiagonal matrix with \(p_{n,n+1}=1\), \(p_{n,n}=\lambda+\mu+n\) and \(p_{n,n-1}=n\mu\), which can be written in the form \(P=LU+\lambda I\), where \(L\) is the lower-bidiagonal matrix with \(1\) on the diagonal and \(1,2,3,\ldots\) on the subdiagonal, \(U\) is the upper-bidiagonal matrix with \(1\) on the superdiagonal and \(\mu\) on the diagonal, and \(I\) is the identity matrix; so by the tridiagonal comparison theorem [77][88, Proposition 3.1], \(P\) is totally positive, coefficientwise in \(\lambda\) and \(\mu\). [When \(\mu=0\) the total positivity is even more elementary, by Lemma 2.1.] Note also that, in this example, \(\mathrm{EAZ}(A,Z+\xi A)\) is simply \(P=\mathrm{EAZ}(A,Z)\) with \(\mu\) replaced by \(\mu+\xi\).
But this pair \((A,Z)\) corresponds to
\[\Phi(s)\;=\;e^{-\mu s}\,(1+s)^{1-\lambda}\qquad\Psi(s)\;=\;e^{\mu s}\,(1+s)^{\lambda} \tag{2.61}\]
which are _not_ Toeplitz-TP coefficientwise in \(\lambda\) and \(\mu\). Indeed, even for real \(\lambda\) and \(\mu\), the sequence \(\boldsymbol{\phi}\) (resp. \(\boldsymbol{\psi}\)) is Toeplitz-TP only for \(\lambda\in\{0,1\}\) and \(\mu\leq 0\) (resp. \(\lambda\in\{0,1\}\) and \(\mu\geq 0\)). So all nonnegative \(\lambda,\mu\) other than \(\lambda\in\{0,1\}\) and \(\mu=0\) yield counterexamples to the converse to Corollary 2.25, and even to its restriction to the case \(\xi=0\).
In this example \(F(t)=e^{\lambda t+\mu(e^{t}-1)}\) and \(G(t)=e^{t}-1\), so the exponential Riordan array is \(\mathcal{R}[F,G]=\mathcal{R}[e^{\lambda t+\mu(e^{t}-1)},e^{t}-1]=B_{\lambda} \mathcal{R}[1,e^{t}-1]B_{\mu}\) by Corollaries 2.20 and 2.21; here \(\mathcal{R}[1,e^{t}-1]\) is the Stirling subset matrix [61, A008277]. \(\quad\blacksquare\)
So the condition of Corollary 2.25 is sufficient but not necessary for its conclusion.
Finally, a central role will be played in this paper by a simple but remarkable identity for \(B_{\xi}^{-1}\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{z})\,B_{\xi}\), where \(B_{\xi}\) is the \(\xi\)-binomial matrix defined in (1.6) and \(\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{z})\) is the exponential AZ matrix defined in (2.40)/(2.41).
**Lemma 2.27** (Identity for \(B_{\xi}^{-1}\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{z})\,B_{\xi}\)).: _Let \(\boldsymbol{a}=(a_{n})_{n\geq 0}\), \(\boldsymbol{z}=(z_{n})_{n\geq 0}\) and \(\xi\) be indeterminates. Then_
\[B_{\xi}^{-1}\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{z})\,B_{\xi}\;= \;\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{z}+\xi\boldsymbol{a})\;. \tag{2.62}\]
The special case \(\boldsymbol{z}=\boldsymbol{0}\) of this lemma was proven in [62, Lemma 3.6]; a simpler proof was given in [76, Lemma 2.16]. Here we give the easy generalization to include \(\boldsymbol{z}\). We will give two proofs: a first proof by direct computation from the definition (2.40)/(2.41), and a second proof using exponential Riordan arrays.
First Proof. We use the matrix definition (2.41):
\[\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{z})\;=\;D\,T_{\infty}( \boldsymbol{z})\,D^{-1}\;+\;D\,T_{\infty}(\boldsymbol{a})\,D^{-1}\,\Delta \tag{2.63}\]
where \(D=\operatorname{diag}\bigl{(}(n!)_{n\geq 0}\bigr{)}\). Since \(\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{z})=\operatorname{EAZ}( \boldsymbol{a},\boldsymbol{0})+\operatorname{EAZ}(\boldsymbol{0},\boldsymbol{ z})\), it suffices to consider separately the two contributions.
The key observation is that \(B_{\xi}=D\,T_{\infty}\bigl{(}(\xi^{n}/n!)_{n\geq 0}\bigr{)}\,D^{-1}\). Now two Toeplitz matrices always commute: \(T_{\infty}(\boldsymbol{a})\,T_{\infty}(\boldsymbol{b})=T_{\infty}( \boldsymbol{a}\star\boldsymbol{b})=T_{\infty}(\boldsymbol{b})\,T_{\infty}( \boldsymbol{a})\). It follows that \(DT_{\infty}(\boldsymbol{z})D^{-1}\) and \(DT_{\infty}(\boldsymbol{a})D^{-1}\) commute with \(B_{\xi}\). Therefore
\[B_{\xi}^{-1}\operatorname{EAZ}(\boldsymbol{0},\boldsymbol{z})\,B_{\xi}\;= \;\operatorname{EAZ}(\boldsymbol{0},\boldsymbol{z})\;. \tag{2.64}\]
On the other hand, the classic recurrence for binomial coefficients implies
\[\Delta B_{\xi}\;=\;B_{\xi}\,(\xi I+\Delta) \tag{2.65}\]
(cf. Example 2.10). Therefore
\[B_{\xi}^{-1}\operatorname{EAZ}(\boldsymbol{a},\boldsymbol{0})\,B_{\xi} = B_{\xi}^{-1}\,DT_{\infty}(\boldsymbol{a})D^{-1}\,\Delta\,B_{\xi} \tag{2.66a}\] \[= B_{\xi}^{-1}\,DT_{\infty}(\boldsymbol{a})D^{-1}\,B_{\xi}\,(\xi I+\Delta) \tag{2.66b}\] \[= DT_{\infty}(\boldsymbol{a})D^{-1}\,(\xi I+\Delta) \tag{2.66c}\] \[= \operatorname{EAZ}(\boldsymbol{a},\xi\boldsymbol{a})\;, \tag{2.66d}\]

where (2.66b) used (2.65), and (2.66c) used the fact that \(DT_{\infty}(\boldsymbol{a})D^{-1}\) commutes with \(B_{\xi}\). Adding this to (2.64) yields (2.62). \(\square\)
Second Proof. Let \({\cal R}[F,G]\) be the exponential Riordan array generated by the production matrix \({\rm EAZ}(A,Z)\) according to Theorem 2.22, so that
\[G^{\prime}(t)\;=\;A(G(t))\;,\qquad\frac{F^{\prime}(t)}{F(t)}\;=\;Z(G(t))\;. \tag{2.67}\]
By Corollary 2.20 we have
\[{\cal R}[F,G]\,B_{\xi}\;=\;{\cal R}[e^{\xi G}F,G]\;. \tag{2.68}\]
Since
\[\frac{d}{dt}\,\log[e^{\xi G(t)}F(t)]\;=\;\frac{F^{\prime}(t)}{F(t)}\,+\,\xi G^ {\prime}(t)\;, \tag{2.69}\]
Theorem 2.22 implies that the exponential Riordan array \({\cal R}[e^{\xi G}F,G]\) has production matrix \({\rm EAZ}(\widehat{A},\widehat{Z})\) where
\[\widehat{A}(s,\xi)\;=\;A(s)\;,\qquad\widehat{Z}(s,\xi)\;=\;Z(s)\,+\,\xi A(s)\;. \tag{2.70}\]
On the other hand, by Lemma 2.6 the production matrix of \({\cal R}[e^{\xi G}F,G]={\cal R}[F,G]\,B_{\xi}\) is \(B_{\xi}^{-1}\,{\rm EAZ}(A,Z)\,B_{\xi}\). \(\;\;\Box\)
**Remark.** A special case of the ideas in the second proof can be found in [4, Proposition 4]. \(\blacksquare\)
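A direct machine check of Lemma 2.27, with fully symbolic \(\boldsymbol{a}\), \(\boldsymbol{z}\) and \(\xi\), is also easy (a sketch assuming SymPy; the truncated last row is discarded, as before).

```python
# Symbolic check of B_xi^{-1} EAZ(a,z) B_xi = EAZ(a, z + xi*a) on a block.
import sympy as sp

N = 6
xi = sp.symbols('xi')
a = sp.symbols('a0:7')   # a_0, ..., a_6  (indices up to N are needed)
z = sp.symbols('z0:7')

def EAZ(aa, zz):
    def entry(n, k):     # eq. (2.40), out-of-range coefficients treated as 0
        t1 = zz[n - k] if n - k >= 0 else 0
        t2 = aa[n - k + 1] if n - k + 1 >= 0 else 0
        return sp.factorial(n) / sp.factorial(k) * (t1 + k * t2)
    return sp.Matrix(N, N, entry)

B = sp.Matrix(N, N, lambda n, k: sp.binomial(n, k) * xi**(n - k))   # B_xi
lhs = B.inv() * EAZ(a, z) * B
rhs = EAZ(a, tuple(z[i] + xi * a[i] for i in range(7)))
print((lhs[:N-1, :] - rhs[:N-1, :]).expand())   # zero matrix
```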
**Lemma 2.29**.: _Consider an exponential Riordan array \({\cal R}[F,G]\) with \(F(0)=1\) and corresponding series \(A(s)\), \(Z(s)\), \(\Phi(s)\), \(\Psi(s)\). Then, for any constant \(c\), the following are equivalent:_
* (a) \({\cal R}[F,G]_{n,0}=c\,{\cal R}[F,G]_{n,1}\) _for all_ \(n\geq 1\)_._
* (b) \({\rm EAZ}(A,Z)_{n,0}=c\,{\rm EAZ}(A,Z)_{n,1}\) _for all_ \(n\geq 0\)_._
* (b\({}^{\prime}\)) \({\rm EAZ}(A,Z)={\rm EAZ}(A,Z)\,\Delta^{\rm T}\,(c\,{\bf e}_{00}+\Delta)\)_, where_ \({\bf e}_{00}\) _denotes the matrix with an entry_ \(1\) _in position_ \((0,0)\) _and all other entries zero._
* (c) \(\Psi(s)=1/(1-cs)\)_._
Proof. (a) \(\Longleftrightarrow\) (c): (a) holds if and only if \(F(t)=1+cF(t)G(t)\), or in other words \(F(t)=1/[1-cG(t)]\), or in other words \(\Psi(s)=1/(1-cs)\).
(b) \(\Longleftrightarrow\) (c): By (2.40), (b) holds if and only if \(z_{n}=c(z_{n-1}+a_{n})\), or in other words \(Z(s)=c[sZ(s)+A(s)]\), or in other words
\[\frac{\Psi^{\prime}(s)}{\Psi(s)}\ =\ \frac{Z(s)}{A(s)}\ =\ \frac{c}{1-cs}\;. \tag{2.71}\]
Since \(\Psi(0)=1\), this is equivalent to \(\Psi(s)=1/(1-cs)\).
(b\({}^{\prime}\)) \(\Longrightarrow\) (b): The zeroth column of the matrix \(c\,{\bf e}_{00}+\Delta\) equals \(c\) times its first column; so for any matrix \(M\), the zeroth column of the matrix \(M\,(c\,{\bf e}_{00}+\Delta)\) equals \(c\) times its first column.
(b) \(\Longrightarrow\) (b\({}^{\prime}\)): The matrix \({\rm EAZ}(A,Z)\,\Delta^{\rm T}\) is obtained from \({\rm EAZ}(A,Z)\) by removing its zeroth column; it is lower-triangular. And since, by hypothesis, the zeroth column of \({\rm EAZ}(A,Z)\) is \(c\) times its first column, \({\rm EAZ}(A,Z)\) can be recovered from \({\rm EAZ}(A,Z)\,\Delta^{\rm T}\) by right-multiplying by \(c\,{\bf e}_{00}+\Delta\). \(\Box\)
The case \(c=0\) (that is, \(\Psi=1\) and hence \(F=1\)) corresponds to the _associated subgroup_ (or _Lagrange subgroup_) of exponential Riordan arrays; it arose in our earlier work [62, 76] on generic Lah and rooted-forest polynomials. Using criterion (a), we can already see that the matrix \({\sf T}\) defined in (1.1) will correspond to \(c=1\), while the matrices \({\sf T}(y,z)\) and \({\sf T}(y,\mathbf{\phi})\) defined in (1.7)/(1.12) will correspond, according to Propositions 1.3 and 1.6, to \(c=y\). Of course, in order to apply Lemma 2.29 we will first need to prove that these matrices are indeed exponential Riordan arrays: that will be done in Section 4. But we can see now that, once we do this, the \(\Psi\)-series will be \(\Psi(s)=1/(1-cs)\).
### A lemma on diagonal scaling
Given a lower-triangular matrix \(A=(a_{nk})_{n,k\geq 0}\) with entries in a commutative ring \(R\), let us define the matrix \(A^{\sharp}=(a^{\sharp}_{nk})_{n,k\geq 0}\) by
\[a^{\sharp}_{nk}\ =\ \frac{n!}{k!}\,a_{nk}\;; \tag{2.72}\]
this is well-defined since \(a_{nk}\neq 0\) only when \(n\geq k\), in which case \(n!/k!\) is an integer.
If \(R\) contains the rationals, we can of course write \(A^{\sharp}=DAD^{-1}\) where \(D=\operatorname{diag}\big{(}(n!)_{n\geq 0}\big{)}\). And if \(R\) is a partially ordered commutative ring that contains the rationals and \(A\) is \(\operatorname{TP}_{r}\), then we deduce immediately from \(A^{\sharp}=DAD^{-1}\) that also \(A^{\sharp}\) is \(\operatorname{TP}_{r}\). The following simple lemma [62, Lemma 3.7] shows that this conclusion holds even when \(R\) does not contain the rationals:
**Lemma 2.30**.: _Let \(A=(a_{ij})_{i,j\geq 0}\) be a lower-triangular matrix with entries in a partially ordered commutative ring \(R\), and let \(\boldsymbol{d}=(d_{i})_{i\geq 1}\). Define the lower-triangular matrix \(A^{\sharp\boldsymbol{d}}=(a^{\sharp\boldsymbol{d}}_{ij})_{i,j\geq 0}\) by_
\[a^{\sharp\boldsymbol{d}}_{ij}\ =\ d_{j+1}d_{j+2}\cdots d_{i}\,a_{ij}\;. \tag{2.73}\]
_Then:_
1. _If_ \(A\) _is_ \(\operatorname{TP}_{r}\) _and_ \(\boldsymbol{d}\) _are indeterminates, then_ \(A^{\sharp\boldsymbol{d}}\) _is_ \(\operatorname{TP}_{r}\) _in the ring_ \(R[\boldsymbol{d}]\) _equipped with the coefficientwise order._
2. _If_ \(A\) _is_ \(\operatorname{TP}_{r}\) _and_ \(\boldsymbol{d}\) _are nonnegative elements of_ \(R\)_, then_ \(A^{\sharp\boldsymbol{d}}\) _is_ \(\operatorname{TP}_{r}\) _in the ring_ \(R\)_._
Proof. (a) Let \(\boldsymbol{d}=(d_{i})_{i\geq 1}\) be commuting indeterminates, and let us work in the ring \(R[\boldsymbol{d},\boldsymbol{d}^{-1}]\) equipped with the coefficientwise order. Let \(D=\operatorname{diag}(1,\,d_{1},\,d_{1}d_{2},\,\ldots)\). Then \(D\) is invertible, and both \(D\) and \(D^{-1}=\operatorname{diag}(1,\,d_{1}^{-1},\,d_{1}^{-1}d_{2}^{-1},\,\ldots)\) have nonnegative elements. It follows that \(A^{\sharp\boldsymbol{d}}=DAD^{-1}\) is \(\operatorname{TP}_{r}\) in the ring \(R[\boldsymbol{d},\boldsymbol{d}^{-1}]\) equipped with the coefficientwise order. But the matrix elements \(a^{\sharp\boldsymbol{d}}_{ij}\) actually belong to the subring \(R[\boldsymbol{d}]\subseteq R[\boldsymbol{d},\boldsymbol{d}^{-1}]\). So \(A^{\sharp\boldsymbol{d}}\) is \(\operatorname{TP}_{r}\) in the ring \(R[\boldsymbol{d}]\) equipped with the coefficientwise order.
(b) follows from (a) by specializing indeterminates. \(\square\)
The special case \(A^{\sharp\boldsymbol{d}}=A^{\sharp}\) corresponds to taking \(d_{i}=i\).
Lemma 2.30 will be important for proving Theorem 1.8 in the case where the ring \(R\) does not contain the rationals (see Section 5.3).
### Lagrange inversion
We will use Lagrange inversion in the following form [38]: If \(\phi(u)\) is a formal power series with coefficients in a commutative ring \(R\) containing the rationals, then there exists a unique formal power series \(f(t)\) with zero constant term satisfying
\[f(t)\ =\ t\,\phi(f(t))\;, \tag{2.74}\]
and it is given by
\[[t^{n}]\,f(t)\ =\ \frac{1}{n}\,[u^{n-1}]\,\phi(u)^{n}\quad\text{for}\ n\geq 1\;; \tag{2.75}\]
and more generally, if \(H(u)\) is any formal power series, then
\[[t^{n}]\,H(f(t))\ =\ \frac{1}{n}\,[u^{n-1}]\,H^{\prime}(u)\,\phi(u)^{n}\quad \text{for}\ n\geq 1\;. \tag{2.76}\]
In particular, taking \(H(u)=u^{k}\) with integer \(k\geq 0\), we have
\[[t^{n}]\,f(t)^{k}\ =\ \frac{k}{n}\,[u^{n-k}]\,\phi(u)^{n}\quad\text{for}\ n\geq 1\;. \tag{2.77}\]
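As a quick illustration (a sketch, not from the paper; it assumes SymPy), taking \(\phi(u)=e^{u}\) in (2.74) gives the tree function, and (2.75) predicts \([t^{n}]\,f(t)=n^{n-1}/n!\).

```python
# Lagrange inversion for phi(u) = e^u: f = t*e^f is the tree function,
# and (2.75) gives [t^n] f = (1/n) [u^{n-1}] e^{nu} = n^{n-1}/n!.
import sympy as sp

t = sp.symbols('t')
N = 8

f = sp.Integer(0)                      # solve f = t*exp(f) by iteration;
for _ in range(N):                     # each pass fixes one more order
    f = sp.expand(sp.series(t * sp.exp(f), t, 0, N).removeO())

for n in range(1, N):
    assert f.coeff(t, n) == sp.Rational(n**(n - 1), sp.factorial(n))
print("tree-function coefficients agree with (2.75) for n <", N)
```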
## 3 Bijective proofs
In this section we give bijective proofs of Propositions 1.3, 1.5, 1.6 and 1.7. This section can be skipped on a first reading, as it is not needed for proving the main theorems of the paper.
### Proof of Propositions 1.3 and 1.6
Here we will prove Proposition 1.3, which asserts that the polynomials \(t_{n,k}(y,z)\) defined in (1.7) satisfy \(t_{n,0}(y,z)=y\,t_{n,1}(y,z)\) for all \(n\geq 1\); and more generally Proposition 1.6, which asserts that the polynomials \(t_{n,k}(y,\boldsymbol{\phi})\) defined in (1.12) satisfy \(t_{n,0}(y,\boldsymbol{\phi})=y\,t_{n,1}(y,\boldsymbol{\phi})\) for all \(n\geq 1\).
We will prove these results by constructing, for each \(n\geq 1\), a bijection from the set \(\mathcal{T}_{n+1}^{(1;1)}\) of rooted trees on the vertex set \([n+1]\) in which the vertex \(1\) has exactly one child, to the set \(\mathcal{T}_{n+1}^{(1;0)}\) of rooted trees on the vertex set \([n+1]\) in which vertex \(1\) is a leaf, with the properties that
1. the number of improper edges is increased by \(1\), and
2. for each \(m\), the number of vertices with \(m\) proper children is preserved, provided that in \(T\in\mathcal{T}_{n+1}^{(1;1)}\) one ignores the vertex \(1\) (which has one child).
This construction is illustrated in Figure 1. Since the weight in (1.12) is \(y\) for each improper edge and \(\widehat{\phi}_{m}=m!\,\phi_{m}\) for each vertex \(i\neq 1\) with \(m\) proper children, this proves \(t_{n,0}(y,\boldsymbol{\phi})=y\,t_{n,1}(y,\boldsymbol{\phi})\). Specializing to \(\phi_{m}=z^{m}/m!\) then yields \(t_{n,0}(y,z)=y\,t_{n,1}(y,z)\).
Proof of Proposition 1.6. Fix \(n\geq 1\), and let \(T\) be a rooted tree on the vertex set \([n+1]\) in which \(r\) is the root and the vertex \(1\) has precisely one child \(a\). Let \(T_{a}\) be the subtree rooted at \(a\), and let \(T_{r}\) be the subtree obtained from \(T\) by removing \(T_{a}\) and the edge \(1a\). The vertex \(1\) is a leaf in \(T_{r}\).

Figure 1: Bijection between \(T\) and \(T^{\prime}\).
Now we create a new tree \(T^{\prime}\), rooted at \(a\), as follows: we start with \(T_{a}\) and then graft \(T_{r}\) onto it by making \(r\) a child of \(a\). In the tree \(T^{\prime}\), the vertex \(1\) is a leaf. The map \(T\mapsto T^{\prime}\) is a bijection, since this construction can be reversed. (The vertex \(r\) can be identified in \(T^{\prime}\) as the child of \(a\) that has \(1\) as a descendant.)
Clearly, all the proper (resp. improper) edges in \(T\) are still proper (resp. improper) in \(T^{\prime}\), except that:
* The edge \(1a\) of \(T\), which is proper, is deleted in \(T^{\prime}\); and
* The edge \(ar\) in \(T^{\prime}\) is new and improper, since the vertex \(1\) is a descendant of \(r\).
In particular, the number of vertices with \(m\) proper children is the same in \(T\) and \(T^{\prime}\), provided that in \(T\) one ignores the vertex \(1\). \(\Box\)
### Proof of Propositions 1.5 and 1.7
Now we will prove Proposition 1.5, which asserts the equality of the polynomials \(t_{n,k}(y,z)\) defined in (1.7) using rooted trees and the polynomials \(\widetilde{t}_{n,k}(y,z)\) defined in (1.10) using partial functional digraphs. We will then show that the same argument proves the more general Proposition 1.7, which asserts the equivalence of the polynomials \(t_{n,k}(y,\boldsymbol{\phi})\) defined in (1.12) and the polynomials \(\widetilde{t}_{n,k}(y,\boldsymbol{\phi})\) defined in (1.13).
We recall that \(\mathcal{T}_{n}^{\bullet}\) denotes the set of rooted trees on the vertex set \([n]\), while \(\mathcal{T}_{n}^{(1;k)}\) denotes the subset in which the vertex \(1\) has \(k\) children. Similarly, \(\mathbf{PFD}_{n}\) denotes the set of partial functional digraphs on the vertex set \([n]\), while \(\mathbf{PFD}_{n,k}\) denotes the subset in which there are exactly \(k\) vertices of out-degree \(0\).
To prove Proposition 1.5, we will construct, for each fixed \(n\), a bijection \(\phi\colon\,\mathcal{T}_{n+1}^{\bullet}\to\mathbf{PFD}_{n}\) with the following properties:
* \(\phi\) maps \(\mathcal{T}_{n+1}^{(1;k)}\) onto \(\mathbf{PFD}_{n,k}\).
* \(\phi\) preserves the number of improper edges.
* \(\phi|_{\mathcal{T}_{n+1}^{(1;k)}}\) reduces the number of proper edges by \(k\).
We observe that (c) is an immediate consequence of (a) and (b), since trees in \(\mathcal{T}_{n+1}^{\bullet}\) have \(n\) edges, while digraphs in \(\mathbf{PFD}_{n,k}\) have \(n-k\) edges.
Proof of Proposition 1.5. (The reader may wish to follow, along with this proof, the example shown in Figure 2.)
Let \(T\) be a rooted tree on the vertex set \([n+1]\) in which the vertex \(1\) has \(k\) children. Note that the \(k\) edges from vertex \(1\) to its children are all proper. Now let \(P=v_{1}\cdots v_{\ell+1}\) (\(\ell\geq 0\)) be the unique path in \(T\) from the root \(v_{1}=r\) to the vertex \(v_{\ell+1}=1\); we call it the "backbone". (Here \(\ell=0\) corresponds to the case in which vertex \(1\) is the root.) Removing from \(T\) the edges of the path \(P\), we obtain a collection of (possibly trivial) trees \(T_{1},\ldots,T_{\ell+1}\) rooted at the vertices \(v_{1},\ldots,v_{\ell+1}\).
Figure 2: (a) Tree \(T\) in the second model, where \(r=v_{1}=6\), \(v_{\max}=9\), \(\sigma=638951=(169)(3)(58)\), and vertex 1 has two children. The backbone edges are shown in red and are improper; the other improper edges are shown in black; the proper edges are shown in blue. (b\({}_{1}\),b\({}_{2}\)) Partial functional digraphs \(D_{P}\) and \(D^{\prime}\). Improper edges arising from the cycles of the permutation \(\sigma\) are shown in red; the other improper edges are shown in black; the proper edges are shown in blue. (c\({}_{1}\),c\({}_{2}\)) Partial functional digraphs \(G^{\prime}\) and \(G\) in the third model, where the two vertices 10 and 12 (resp. 9 and 11) have out-degree 0. Improper edges arising from the cycles of the permutation \(\sigma\) are shown in red; the other improper edges are shown in black; the proper edges are shown in blue.
Now regard \(P\) as a permutation \(\sigma\) (written in word form) of its elements written in increasing order.10 In particular, \(\sigma(1)=r\) and \(\sigma(v_{\max})=1\) where \(v_{\max}=\max(v_{1},\ldots,v_{\ell+1})\). Let \(D_{P}\) be the digraph whose vertex set is \(\{v_{1},\ldots,v_{\ell+1}\}\), with edges \(\overrightarrow{ij}\) whenever \(j=\sigma(i)\). Then \(D_{P}\) consists of disjoint directed cycles (possibly of length 1); it is the representation in cycle form of the permutation \(\sigma\).
Footnote 10: That is, let \(v^{\prime}_{1}<\ldots<v^{\prime}_{\ell+1}\) be the elements of the set \(S=\{v_{1},\ldots,v_{\ell+1}\}\) written in increasing order. Then \(\sigma\) is the permutation of \(S\) defined by \(\sigma(v^{\prime}_{i})=v_{i}\).
Now let \(D^{\prime}\) be the digraph obtained from \(D_{P}\) by attaching the trees \(T_{1},\ldots,T_{\ell+1}\) to \(D_{P}\) (identifying vertices with the same label) and directing all edges of those trees towards the root. Then \(D^{\prime}\) is a functional digraph on the vertex set \([n+1]\). Furthermore, the map \(T\mapsto D^{\prime}\) is a bijection, since all the above steps can be reversed.
Now let \(G^{\prime}\) be the digraph obtained from \(D^{\prime}\) by deleting the vertex 1 and the \(k\) tree edges incident on vertex 1, and contracting the edges \(\overrightarrow{v_{\max}1}\) and \(\overrightarrow{1r}\) into a single edge \(\overrightarrow{v_{\max}r}\). Then \(G^{\prime}\) is a digraph on the vertex set \(\{2,\ldots,n+1\}\) in which every vertex has out-degree 1 except for the \(k\) children of vertex 1 in \(T\), which have out-degree 0. Relabeling all vertices \(i\to i-1\), we obtain a partial functional digraph \(G=\phi(T)\in\mathbf{PFD}_{n,k}\).
The step from \(D^{\prime}\) to \(G\) can also be reversed: given a partial functional digraph \(G\in\mathbf{PFD}_{n,k}\), we relabel the vertices \(i\to i+1\) and then insert the vertex 1 immediately after the largest cyclic vertex of \(G\) (if any; otherwise 1 becomes a loop in \(D^{\prime}\)); all the vertices of out-degree 0 in \(G\) are made to point to the vertex 1 in \(D^{\prime}\).
It follows that the map \(\phi\colon T\mapsto G\) is a bijection from \(\mathcal{T}^{\bullet}_{n+1}\) to \(\mathbf{PFD}_{n}\) that maps \(\mathcal{T}^{(1;k)}_{n+1}\) onto \(\mathbf{PFD}_{n,k}\).
Clearly, in the rooted tree \(T\), all the edges in the path \(P=v_{1}\cdots v_{\ell+1}\) are improper, since each vertex in \(P\) has \(v_{\ell+1}=1\) as its descendant. These \(\ell\) edges correspond, after relabeling, to \(\ell+1\) cyclic edges in the functional digraph \(D^{\prime}\). These latter edges in turn correspond, after removal of vertex 1 and contraction of its edges, to \(\ell\) cyclic edges in the partial functional digraph \(G^{\prime}\) (and hence also \(G\)). Because they are cyclic edges, they are necessarily improper. All the other improper/proper edges in \(T\) coincide with improper/proper edges \(\overrightarrow{ij}\) in the partial functional digraph \(G^{\prime}\) (and hence \(G\)) where \(i\) is a transient vertex. \(\square\)
**Remark.** The first part of this proof (namely, the map \(T\mapsto D^{\prime}\)) is the well-known bijection from doubly-rooted trees to functional digraphs on the same vertex set [49, pp. 224-225][79, p. 26]. In our application we need the second step to remove the vertex 1 and thereby obtain a map from rooted trees on the vertex set \([n+1]\) to partial functional digraphs on the vertex set \([n]\). \(\blacksquare\)
Proof of Proposition 1.7. In the preceding proof, each vertex \(i\neq 1\) in the rooted tree \(T\) corresponds to a vertex \(i-1\) in the partial functional digraph \(G=\phi(T)\). And for each proper child \(j\) of \(i\) in \(T\), the proper edge \(ij\) in \(T\) corresponds to a proper edge \(\overrightarrow{j-1\,i-1}\) in \(G\); and those are the only proper edges in \(G\). Therefore, if the vertex \(i\neq 1\) in \(T\) has \(m\) proper children, then the vertex \(i-1\) in \(G\) has \(m\) proper incoming edges. This proves that \(t_{n,k}(y,\boldsymbol{\phi})=\widetilde{t}_{n,k}(y,\boldsymbol{\phi})\). \(\square\)
## 4 The matrices \(\mathsf{T}\), \(\mathsf{T}(y,z)\) and \(\mathsf{T}(y,\boldsymbol{\phi})\) as exponential Riordan arrays

In this section we show that the matrices \(\mathsf{T}\), \(\mathsf{T}(y,z)\) and \(\mathsf{T}(y,\boldsymbol{\phi})\) are exponential Riordan arrays \(\mathcal{R}[F,G]\), and we compute their generating functions \(F\) and \(G\) as well as their \(A\)-, \(Z\)-, \(\Phi\)- and \(\Psi\)-series.
### The matrix \(\mathsf{T}\)
**Proposition 4.1**.: _Define_
\[t_{n,k}\;=\;\binom{n}{k}\,n^{n-k}\;. \tag{4.1}\]
_Then the unit-lower-triangular matrix \(\mathsf{T}=(t_{n,k})_{n,k\geq 0}\) is the exponential Riordan array \(\mathcal{R}[F,G]\) with \(F(t)=\sum_{n=0}^{\infty}n^{n}\,t^{n}/n!\) and \(G(t)=\sum_{n=1}^{\infty}n^{n-1}\,t^{n}/n!\)._
Before proving Proposition 4.1, let us use it to compute the \(A\)-, \(Z\)-, \(\Phi\)- and \(\Psi\)-series:
**Corollary 4.2**.: _The exponential Riordan array \(\mathsf{T}=\mathcal{R}[F,G]\) has_
\[A(s)\;=\;\frac{e^{s}}{1-s}\,,\quad Z(s)\;=\;\frac{e^{s}}{(1-s)^{2}} \tag{4.2}\]
_and_
\[\Phi(s)\;=\;e^{s}\,,\quad\Psi(s)\;=\;\frac{1}{1-s}\;. \tag{4.3}\]
Proof.: We observe that \(G(t)\) is the tree function \(T(t)\)[19], which satisfies the functional equation \(T(t)=te^{T(t)}\). Furthermore, we have \(F(t)=1/[1-T(t)]\): this well-known fact can be proven using the Lagrange inversion formula [see (4.4) below specialized to \(x=0\)] or by various other methods.11 We now apply Theorem 2.22 to determine the functions \(A(s)\) and \(Z(s)\). Implicit differentiation of the functional equation yields \(T^{\prime}(t)=e^{T(t)}/[1-T(t)]\), which implies that \(A(s)=e^{s}/(1-s)\). On the other hand, it follows immediately from the relation between \(F\) and \(G\) that \(\Psi(s)=1/(1-s)\). This implies that \(\Phi(s)=e^{s}\) and \(Z(s)=e^{s}/(1-s)^{2}\).
Footnote 11: Algebraic proof. \(F(t)\;=\;1\,+\,tT^{\prime}(t)\;=\;1\,+\,\frac{te^{T(t)}}{1-T(t)}\;=\;1\,+\, \frac{T(t)}{1-T(t)}\;=\;\frac{1}{1-T(t)}\,,\) where the first equality used the power series defining \(F(t)\) and \(T(t)\), the second equality used the identity \(T^{\prime}(t)=\frac{e^{T(t)}}{1-T(t)}\) arising from implicit differentiation of the functional equation, and the third equality used the functional equation.
We will give five proofs of Proposition 4.1: a direct algebraic proof using Lagrange inversion and an Abel identity; an inductive algebraic proof, using a different Abel identity; a third algebraic proof using the \(A\)- and \(Z\)-sequences of an ordinary Riordan array; a combinatorial proof using exponential generating functions based on the
interpretation of \(t_{n,k}\) as counting partial functional digraphs; and a bijective combinatorial proof based on the interpretation of \(t_{n,k}\) as counting rooted labeled trees according to the number of children of the root that are lower-numbered than the root. In Section 4.2 we will give yet another combinatorial proof (also using exponential generating functions), this time based on the interpretation of \(t_{n,k}\) as counting rooted labeled trees according to the number of children of a specified vertex \(i\); but this proof will be given in the more general context of the polynomials \(t_{n,k}(y,z)\).
First Proof of Proposition 4.1. The tree function \(T(t)\) satisfies the functional equation \(T(t)=te^{T(t)}\). We use Lagrange inversion (2.76) with \(\phi(u)=e^{u}\) and \(H(u)=e^{xu}/(1-u)\): this gives
\[[t^{n}]\;\frac{e^{xT(t)}}{1-T(t)} = \frac{1}{n}\,[u^{n-1}]\left(\frac{x}{1-u}\,+\,\frac{1}{(1-u)^{2}} \right)e^{(x+n)u} \tag{4.4a}\] \[= \frac{1}{n}\,\sum_{k=0}^{n-1}(x+k+1)\;\frac{(x+n)^{n-1-k}}{(n-1- k)!}\] (4.4b) \[= \frac{1}{n!}\,\sum_{k=0}^{n-1}\binom{n-1}{k}\,k!\;(x+k+1)\;\frac {(x+n)^{n-1-k}}{(n-1-k)!}\] (4.4c) \[= \frac{(x+n)^{n}}{n!}\;, \tag{4.4d}\]
where the last step used an Abel identity [67, p. 21, eq. (25) with \(n\to n-1\) and \(x\to x+1\)]. In view of (1.5), this proves (1.2), which by (2.34) proves that \({\sf T}={\cal R}[F,G]\). \(\Box\)
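The identity just proved is also easy to confirm by machine (a sketch, assuming SymPy; the convention \(0^{0}=1\) is used for the \(n=0\) term of \(F\), and SymPy adopts the same convention).

```python
# Series check of Proposition 4.1: (n!/k!) [t^n] F G^k = binomial(n,k) n^{n-k}
# with F = sum n^n t^n/n! and G = sum n^{n-1} t^n/n!, truncated at order N.
import sympy as sp

t = sp.symbols('t')
N = 7
F = sum(sp.Integer(n)**n * t**n / sp.factorial(n) for n in range(N))        # 0^0 = 1
G = sum(sp.Integer(n)**(n - 1) * t**n / sp.factorial(n) for n in range(1, N))

for k in range(N):
    col = sp.expand(F * G**k)
    for n in range(k, N):
        assert sp.factorial(n) / sp.factorial(k) * col.coeff(t, n) \
               == sp.binomial(n, k) * sp.Integer(n)**(n - k)
print("Proposition 4.1 verified for n <", N)
```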
Second Proof of Proposition 4.1. It is immediate that the zeroth column of \({\sf T}\) has exponential generating function \(F(t)=\sum_{n=0}^{\infty}n^{n}\,t^{n}/n!\). We now show by induction on \(k\) that the \(k\)th column has egf \(F(t)\,G(t)^{k}/k!\) where \(G(t)=\sum_{n=1}^{\infty}n^{n-1}\,t^{n}/n!\): that is, we need to show that the \(k\)th column has egf equal to \(G(t)/k\) times the egf of the \((k-1)\)st column, or in other words
\[k!\,t_{n,k}\;=\;\sum_{j=1}^{n-k+1}\binom{n}{j}\,j^{j-1}\;(k-1)!\,t_{n-j,k-1} \tag{4.5}\]
for \(k\geq 1\). We start from the Abel identity [67, p. 18, eq. (13a)]
\[x^{-1}\,(x+y+n)^{n}\;=\;\sum_{i=0}^{n}\binom{n}{i}\,(x+i)^{i-1}\,(y+n-i)^{n-i}\;. \tag{4.6}\]
Now substitute \(x=1\) and \(y\to y-n-1\), divide both sides by \(n!\), and relabel \(i=j-1\): the result is
\[\frac{y^{n}}{n!}\;=\;\sum_{j=1}^{n+1}\frac{j^{j-1}}{j!}\;\frac{(y-j)^{n+1-j}} {(n-j+1)!}\;. \tag{4.7}\]
Next substitute \(n\to n-k\) and then set \(y=n\):
\[\frac{n^{n-k}}{(n-k)!}\ =\ \sum_{j=1}^{n-k+1}\frac{j^{j-1}}{j!}\,\frac{(n-j)^{n-j-k+ 1}}{(n-j-k+1)!}\;. \tag{4.8}\]
Multiplying this by \(n!\) yields
\[\frac{n!}{(n-k)!}\,n^{n-k}\ =\ \sum_{j=1}^{n-k+1}\binom{n}{j}\,j^{j-1}\,\frac{(n -j)!}{(n-j-k+1)!}\,(n-j)^{n-j-k+1}\;, \tag{4.9}\]
which is (4.5). \(\;\;\Box\)
Third Proof of Proposition 4.1. Showing that \((t_{n,k})_{n,k\geq 0}\) equals the exponential Riordan array \({\cal R}[F,G]\) is equivalent to showing that \(((k!/n!)\,t_{n,k})_{n,k\geq 0}\) equals the ordinary Riordan array \({\cal R}(F,G)\). We write \(r_{n,k}\stackrel{{\rm def}}{{=}}(k!/n!)\,t_{n,k}=n^{n-k}/(n-k)!\) and \(R=(r_{n,k})_{n,k\geq 0}\). By the binomial theorem we have
\[r_{n+1,k+1}\ \stackrel{{\rm def}}{{=}}\ \frac{(n+1)^{n-k}}{(n-k)!}\ =\ \sum_{j=0}^{n-k}\frac{1}{j!}\,\frac{n^{n-k-j}}{(n-k-j)!}\ \stackrel{{\rm def}}{{=}}\ \sum_{j=0}^{n-k}\frac{1}{j!}\,r_{n,k+j} \tag{4.10}\]
for all \(k\geq 0\). So the matrix \(R\) satisfies the appropriate identities to have the \(A\)-sequence \(a_{j}=1/j!\). Similarly, by the binomial theorem we have
\[r_{n+1,0}\ \stackrel{{\rm def}}{{=}}\ \frac{(n+1)^{n+1}}{(n+1)!}\ =\ \frac{(n+1)^{n}}{n!}\ =\ \sum_{j=0}^{n}\frac{1}{j!}\,\frac{n^{n-j}}{(n-j)!}\ \stackrel{{\rm def}}{{=}}\ \sum_{j=0}^{n}\frac{1}{j!}\,r_{n,j}\;. \tag{4.11}\]
So the matrix \(R\) satisfies the appropriate identities to have the \(Z\)-sequence \(z_{j}=1/j!\). It follows from Theorem 2.17 that \(R\) is an ordinary Riordan array \({\cal R}(F,G)\) where \(F\) and \(G\) are given by (2.28) with \(A(t)=Z(t)=e^{t}\). By the Lagrange inversion formula we find
\[[t^{n}]\,G(t)\ =\ \frac{1}{n}\,[t^{n-1}]\,A(t)^{n}\ =\ \frac{n^{n-1}}{n!}\;. \tag{4.12}\]
And the ordinary generating function of the zeroth column of \(R\) is obviously \(F(t)=\sum_{n=0}^{\infty}n^{n}\,t^{n}/n!\). \(\;\;\Box\)
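The recurrences (4.10)-(4.11) underlying this proof are also easy to test numerically (a sketch; exact rational arithmetic via `Fraction` keeps the check free of rounding).

```python
# Exact check of the A- and Z-sequence recurrences (4.10)-(4.11) for
# r_{n,k} = n^{n-k}/(n-k)!  (with the convention 0^0 = 1).
from fractions import Fraction
from math import factorial

def r(n, k):
    return Fraction(n**(n - k), factorial(n - k))

N = 10
for n in range(N):
    # (4.11): Z-sequence relation for the zeroth column
    assert r(n + 1, 0) == sum(r(n, j) / factorial(j) for j in range(n + 1))
    for k in range(n + 1):
        # (4.10): A-sequence relation for the other columns
        assert r(n + 1, k + 1) == sum(r(n, k + j) / factorial(j)
                                      for j in range(n - k + 1))
print("recurrences (4.10)-(4.11) hold for n <", N)
```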
**Remark.** The identity (4.8) in the second proof can be written for \(k\geq 1\) as
\[r_{n,k}\ =\ \sum_{j=1}^{n-k+1}\frac{j^{j-1}}{j!}\;r_{n-j,k-1}\;, \tag{4.13}\]
which shows that the ordinary generating function of the \(k\)th column of \(R\) equals \(G(t)=\sum_{j=1}^{\infty}j^{j-1}\,t^{j}/j!\) times the ordinary generating function of the \((k-1)\)st column. Combining this with the fact that the ordinary generating function of the zeroth column of \(R\) is \(F(t)=\sum_{n=0}^{\infty}n^{n}\,t^{n}/n!\) gives an alternate proof that \(R={\cal R}(F,G)\). \(\;\;\blacksquare\)
Fourth Proof of Proposition 4.1. We begin from the fact that \(t_{n,k}=\binom{n}{k}n^{n-k}\) counts partial functional digraphs on \(n\) labeled vertices that have \(k\) vertices of out-degree \(0\). Such a partial functional digraph is the disjoint union of \(k\) rooted trees
(rooted at the vertices of out-degree \(0\)) together with a functional digraph on the remaining vertices. Standard enumerative arguments then imply that the exponential generating function for the numbers \(t_{n,k}\) is
\[\sum_{n=k}^{\infty}t_{n,k}\,\frac{t^{n}}{n!}\;=\;F(t)\,\frac{T(t)^{k}}{k!} \tag{4.14}\]
where \(F(t)=\sum_{n=0}^{\infty}n^{n}\,t^{n}/n!\) is the exponential generating function for functional digraphs and \(T(t)=\sum_{n=1}^{\infty}n^{n-1}\,t^{n}/n!\) is the exponential generating function for rooted trees. \(\;\;\Box\)
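The combinatorial starting point of this proof is easy to confirm by brute force for small \(n\) (a sketch; a partial functional digraph on \([n]\) is encoded as a map \([n]\to[n]\cup\{\textsf{None}\}\), i.e., every vertex has out-degree at most 1).

```python
# Brute-force count of partial functional digraphs on [n] by the number
# of out-degree-0 vertices; should give t_{n,k} = binomial(n,k) * n^{n-k}.
from itertools import product
from math import comb

n = 5
counts = [0] * (n + 1)
for f in product(list(range(n)) + [None], repeat=n):
    counts[sum(1 for v in f if v is None)] += 1

assert counts == [comb(n, k) * n**(n - k) for k in range(n + 1)]
print(counts)   # [3125, 3125, 1250, 250, 25, 1]
```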
Fifth Proof of Proposition 4.1. We begin from the fact [12, 13, 75] that \(t_{n,k}\) equals the number of rooted trees on the vertex set \([n+1]\) in which exactly \(k\) children of the root are lower-numbered than the root. We will prove (4.5) in the form
\[k\,t_{n,k}\;=\;\sum_{j=1}^{n-k+1}\binom{n}{j}\,j^{j-1}\,t_{n-j,k-1} \tag{4.15}\]
for \(k\geq 1\). We interpret \(k\,t_{n,k}\) as the number of triplets \((T,r,v_{\star})\) in which \((T,r)\) is a rooted tree on the vertex set \([n+1]\) in which exactly \(k\) children of the root are lower-numbered than the root, and \(v_{\star}\) is one of those lower-numbered children (we call it the "marked vertex"). See Figure 3. We interpret \(j^{j-1}\) as the number of rooted trees on \(j\) labeled vertices. So the summand on the right-hand side of (4.15) enumerates quintuplets \((A,T_{1},r_{1},T_{2},r_{2})\) where \(A\) is a subset of \([n]\) of cardinality \(j\), \((T_{1},r_{1})\) is a rooted tree on the vertex set \(A\), and \((T_{2},r_{2})\) is a rooted tree on the vertex set \([n+1]\setminus A\) in which exactly \(k-1\) children of the root are lower-numbered than the root. See Figure 4.
**Bijection RHS \(\Longrightarrow\) LHS.** Given the quintuplet \((A,T_{1},r_{1},T_{2},r_{2})\), we construct a triplet \((T,r,v_{\star})\) as follows. We distinguish two cases:
* **Case I: \(\boldsymbol{r_{1}<r_{2}}\).** We let \(r_{2}\) be the new root and add an edge making \(r_{1}\) a child of \(r_{2}\); this gives \((T,r)\). We then mark the vertex \(v_{\star}=r_{1}\). Please note that in this case the vertex \(n+1\) is not a descendant of \(v_{\star}\) (see Figure 5a).
* **Case II: \(\boldsymbol{r_{1}>r_{2}}\).** We let \(r_{1}\) be the new root and add an edge making \(r_{2}\) a child of \(r_{1}\); we then interchange the lower-numbered children of \(r_{1}\) (together with all their descendants) with the lower-numbered children of \(r_{2}\) (and their descendants). This gives \((T,r)\). We observe (see Figure 5b) that \(r_{2}\) is the largest-numbered among all the lower-numbered children of \(r\) in \(T\). We observe also that the vertex \(n+1\) must be a descendant of some lower-numbered child of \(r_{1}\) in \(T\); we set the marked vertex \(v_{\star}\) to be this lower-numbered child. Note that \(v_{\star}\) must either belong to the set \(S_{2}\) (consisting of the lower-numbered children of \(r_{2}\) in \(T_{2}\), which became lower-numbered children of \(r_{1}\) in \(T\)) or else be the vertex \(r_{2}\).
In both cases, in the rooted tree \((T,r)\), exactly \(k\) children of the root are lower-numbered than the root.
Figure 6: Bijection LHS \(\Rightarrow\) RHS: Case I.
Figure 7: Bijection LHS \(\Rightarrow\) RHS: Case II.
We now describe the inverse bijection:
**Bijection LHS \(\Longrightarrow\) RHS.** Given the triplet \((T,r,v_{\star})\), we reconstruct the quintuplet \((A,T_{1},r_{1},T_{2},r_{2})\) as follows. We distinguish two cases:
* **Case I: \(\boldsymbol{n+1}\) is not a descendant of \(\boldsymbol{v_{\star}}\).** We delete the edge between \(r\) (the root of \(T\)) and \(v_{\star}\). Then \((T_{1},r_{1})\) is the tree whose root is \(v_{\star}\), and \(A\) is its vertex set; \((T_{2},r_{2})\) is the tree whose root is \(r\), and its vertex set is \([n+1]\setminus A\). Since \(n+1\) is not a descendant of \(v_{\star}\), the vertex \(n+1\) belongs to \(T_{2}\), so that \(A\subseteq[n]\). And we have \(r_{1}=v_{\star}<r=r_{2}\). See Figure 6.
* **Case II: \(\boldsymbol{n+1}\) is a descendant of \(\boldsymbol{v_{\star}}\).** The root \(r\) of \(T\) has \(k\) (\(\geq 1\)) lower-numbered children; let \(v_{\bullet}\) be the largest-numbered of these. We delete the edge between \(r\) and \(v_{\star}\); then we interchange the lower-numbered children of \(r\) (together with all their descendants) with the lower-numbered children of \(v_{\bullet}\) (and their descendants). Then \((T_{1},r_{1})\) is the tree whose root is \(r\), and \(A\) is its vertex set; \((T_{2},r_{2})\) is the tree whose root is \(v_{\bullet}\), and its vertex set is \([n+1]\setminus A\). Please observe that the marked vertex \(v_{\star}\) was a lower-numbered child of \(r\) in \(T\); therefore, it is either equal to \(v_{\bullet}=r_{2}\) or else becomes a lower-numbered child of \(v_{\bullet}=r_{2}\) in \(T_{2}\). Since the vertex \(n+1\) was a descendant of \(v_{\star}\), it must belong to \(T_{2}\); therefore \(A\subseteq[n]\). And we have \(r_{1}=r>v_{\bullet}=r_{2}\). See Figure 7.
In both cases, in the rooted tree \((T_{2},r_{2})\), exactly \(k-1\) children of the root are lower-numbered than the root. \(\square\)
### The matrix \(\mathsf{T}(y,z)\)
We now prove that the matrix \(\boldsymbol{\mathsf{T}(y,z)}=(t_{n,k}(y,z))_{n,k\geq 0}\) is an exponential Riordan array \(\mathcal{R}[F,G]\), and we compute \(F\) and \(G\). Most of this computation was done a quarter-century ago by Dumont and Ramamonjisoa [26]: their arguments handled the case \(k=0\), and we extend those arguments slightly to handle the case of general \(k\). Our presentation follows the notation of [76].
Let \(\mathcal{T}_{n}^{\bullet}\) denote the set of rooted trees on the vertex set \([n]\); let \(\mathcal{T}_{n}^{[i]}\) denote the subset of \(\mathcal{T}_{n}^{\bullet}\) in which the root vertex is \(i\); and let \(\mathcal{T}_{n}^{\langle i;k\rangle}\) denote the subset of \(\mathcal{T}_{n}^{\bullet}\) in which the vertex \(i\) has \(k\) children. Given a tree \(T\in\mathcal{T}_{n}^{\bullet}\), we write \(\mathrm{imprope}(T)\) for the number of improper edges of \(T\). Now define the generating polynomials
\[R_{n}(y,z) = \sum_{T\in\mathcal{T}_{n}^{\bullet}}y^{\mathrm{imprope}(T)}z^{n-1-\mathrm{imprope}(T)} \tag{4.16}\] \[S_{n}(y,z) = \sum_{T\in\mathcal{T}_{n+1}^{[1]}}y^{\mathrm{imprope}(T)}z^{n-\mathrm{imprope}(T)} \tag{4.17}\] \[A_{n,k}(y,z)\;=\;t_{n,k}(y,z) = \sum_{T\in\mathcal{T}_{n+1}^{\langle 1;k\rangle}}y^{\mathrm{imprope}(T)}z^{n-k-\mathrm{imprope}(T)} \tag{4.18}\]
in which each improper (resp. proper) edge gets a weight \(y\) (resp. \(z\)) except that in \(A_{n,k}\) the \(k\) proper edges connecting the vertex \(1\) to its children are unweighted. And
then define the exponential generating functions
\[{\cal R}(t;y,z) = \sum_{n=1}^{\infty}R_{n}(y,z)\,\frac{t^{n}}{n!} \tag{4.19}\] \[{\cal S}(t;y,z) = \sum_{n=0}^{\infty}S_{n}(y,z)\,\frac{t^{n}}{n!}\] (4.20) \[{\cal A}_{k}(t;y,z) = \sum_{n=0}^{\infty}A_{n,k}(y,z)\,\frac{t^{n}}{n!} \tag{4.21}\]
We will then prove the following key result, which is a slight extension of [26, Proposition 7] to handle the case \(k\neq 0\):
**Proposition 4.3**.: _The series \({\cal R}\), \({\cal S}\) and \({\cal A}_{k}\) satisfy the following identities:_

* (a) \({\cal S}(t;y,z)\:=\:\exp\big[z\,{\cal R}(t;y,z)\big]\)
* (b) \({\cal A}_{k}(t;y,z)\:=\:\frac{{\cal R}(t;y,z)^{k}/k!}{1-y{\cal R}(t;y,z)}\)
* (c) \(\frac{d}{dt}{\cal R}(t;y,z)\:=\:{\cal A}_{0}(t;y,z)\,{\cal S}(t;y,z)\)

_and hence_

* (d) \(\frac{d}{dt}{\cal R}(t;y,z)\:=\:\frac{\exp\big[z\,{\cal R}(t;y,z)\big]}{1-y{\cal R}(t;y,z)}\)
Solving the differential equation of Proposition 4.3(d) with the initial condition \({\cal R}(0;y,z)=0\), we obtain:
**Corollary 4.4**.: _The series \({\cal R}(t;y,z)\) satisfies the functional equation_
\[y-z+yz{\cal R}\:=\:(y-z+z^{2}t)\,e^{z{\cal R}} \tag{4.22}\]
_and hence has the solution_
\[{\cal R}(t;y,z)\:=\:\frac{1}{z}\bigg{[}T\Big{(}\Big{(}1-\frac{z}{y}+\frac{z^{ 2}}{y}t\Big{)}\,e^{-\,\big{(}1-\frac{z}{y}\big{)}}\Big{)}\:-\:\Big{(}1-\frac{ z}{y}\Big{)}\bigg{]} \tag{4.23}\]
_where \(T(t)\) is the tree function (1.3)._
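As a cross-check on Proposition 4.3(d) and Corollary 4.4, one can compute the Taylor polynomial of \({\cal R}(t;y,z)\) by Picard iteration on the differential equation and then test the functional equation (4.22) order by order. A minimal sympy sketch (ours; the truncation order \(N\) is arbitrary):

```python
# Picard iteration on R' = exp(zR)/(1-yR), R(0)=0, then verify (4.22) mod t^N.
import sympy as sp

t, y, z = sp.symbols('t y z')
N = 6  # number of Taylor coefficients to pin down

R = sp.Integer(0)
for _ in range(N):  # each iteration fixes one more coefficient
    rhs = sp.series(sp.exp(z * R) / (1 - y * R), t, 0, N).removeO()
    R = sp.expand(sp.integrate(rhs, t))

residual = (y - z + y * z * R) - (y - z + z**2 * t) * sp.exp(z * R)
assert sp.expand(sp.series(residual, t, 0, N).removeO()) == 0
print(sp.expand(sp.factorial(3) * R.coeff(t, 3)))  # 3*y**2 + 4*y*z + 2*z**2
```

The printed value of \(3!\,[t^{3}]\,{\cal R}\) recovers \(R_{3}(y,z)\), matching the brute-force tally sketched earlier.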
Comparing Proposition 4.3(b) with the definition (2.33) of exponential Riordan arrays, we conclude:
**Corollary 4.5**.: _The matrix \({\sf T}(y,z)\) is the exponential Riordan array \({\cal R}[F,G]\) where_
\[F(t)\:=\:\frac{1}{1-y{\cal R}(t;y,z)}\,,\quad G(t)\:=\:{\cal R}(t;y,z) \tag{4.24}\]
_and \({\cal R}(t;y,z)\) is given by (4.23)._
And comparing Proposition 4.3(b,d) with the definitions (2.44)/(2.50)/(2.52) of the \(A\)-series, \(Z\)-series, \(\Phi\)-series and \(\Psi\)-series of an exponential Riordan array, we conclude:
**Corollary 4.6**.: _The exponential Riordan array \(\mathsf{T}(y,z)\) has_
\[A(s)\;=\;\frac{e^{zs}}{1-ys}\,,\quad Z(s)\;=\;\frac{ye^{zs}}{(1-ys)^{2}} \tag{4.25}\]
_and_
\[\Phi(s)\;=\;e^{zs}\,,\quad\Psi(s)\;=\;\frac{1}{1-ys}. \tag{4.26}\]
The proof of Proposition 4.3 follows the elegant argument of Jiang Zeng that was presented in [26, section 7], and extends it in part (b) to handle \(k\neq 0\):
Proof of Proposition 4.3. (a) Consider a tree \(T\in\mathcal{T}_{n+1}^{[1]}\), and suppose that the root vertex \(1\) has \(k\) (\(\geq 0\)) children. All \(k\) edges emanating from the root vertex are proper and thus get a weight \(z\) each. Deleting these edges and the vertex \(1\), one obtains an _unordered_ partition of \(\{2,\ldots,n+1\}\) into blocks \(B_{1},\ldots,B_{k}\) and a rooted tree \(T_{j}\) on each block \(B_{j}\). Standard enumerative arguments then yield the relation (a) for the exponential generating functions.
(b) Consider a tree \(T\in\mathcal{T}_{n+1}^{\langle 1;k\rangle}\) with root \(r\), and let \(r_{1},\ldots,r_{l+1}\) (\(l\geq 0\)) be the path in \(T\) from the root \(r_{1}=r\) to the vertex \(r_{l+1}=1\).12 All \(l\) edges of this path are improper, and all \(k\) edges from the vertex \(1\) to its children are proper (and unweighted). Deleting these edges and the vertex \(1\), one obtains a partition of \(\{2,\ldots,n+1\}\) into an _ordered_ collection of blocks \(B_{1},\ldots,B_{l}\) and an _unordered_ collection of blocks \(B^{\prime}_{1},\ldots,B^{\prime}_{k}\), together with a rooted tree on each block. Standard enumerative arguments then yield the relation (b) for the exponential generating functions.
Footnote 12: Here \(l=0\) corresponds to the case in which the vertex \(1\) is the root.
(c) In a tree \(T\in\mathcal{T}_{n}^{\bullet}\), focus on the vertex \(1\) (which might be the root, a leaf, both or neither). Let \(T^{\prime}\) be the subtree rooted at \(1\), and let \(T^{\prime\prime}\) be the tree obtained from \(T\) by deleting all the vertices of \(T^{\prime}\) except the vertex \(1\) (it thus has the vertex \(1\) as a leaf). The vertex set \([n]\) is then partitioned as \(\{1\}\cup V^{\prime}\cup V^{\prime\prime}\), where \(\{1\}\cup V^{\prime}\) is the vertex set of \(T^{\prime}\) and \(\{1\}\cup V^{\prime\prime}\) is the vertex set of \(T^{\prime\prime}\); and \(T\) is obtained by joining \(T^{\prime}\) and \(T^{\prime\prime}\) at the common vertex \(1\). Standard enumerative arguments then yield the relation (c) for the exponential generating functions. \(\square\)
**Remarks.** 1. Dumont and Ramamonjisoa also gave [26, sections 2-5] a second (and very interesting) proof of the \(k=0\) case of Proposition 4.3, based on a context-free grammar [14] and its associated differential operator.
2. We leave it as an open problem to find a direct combinatorial proof of the functional equation (4.22), without using the differential equation of Proposition 4.3(d).
3. The polynomials \(R_{n}(y,z)\) enumerate rooted trees according to the number of improper and proper edges; they are homogenized versions of the celebrated _Ramanujan polynomials_[15, 26, 40, 41, 44, 52, 66, 71, 76, 84][61, A054589].
4. The polynomials \(R_{n}\) and \(A_{n,0}\) also arise [44] as derivative polynomials for the tree function: in the notation of [44] we have \(R_{n}(y,1)=G_{n}(y-1)\) and \(A_{n,0}(y,1)=y\,F_{n}(y-1)\) for \(n\geq 1\). The formula (4.23) is then equivalent to [44, Theorem 4.2, equation for \(G_{n}\)]. \(\blacksquare\)
### The matrix \(\mathsf{T}(y,\boldsymbol{\phi})\)
We now show how Proposition 4.3 can be generalized to incorporate the additional indeterminates \(\boldsymbol{\phi}=(\phi_{m})_{m\geq 0}\). We define \(\mathcal{T}_{n}^{\bullet}\), \(\mathcal{T}_{n}^{[i]}\) and \(\mathcal{T}_{n}^{\langle i;k\rangle}\) as before, and then define the obvious generalizations of (4.16)-(4.18):
\[R_{n}(y,\boldsymbol{\phi}) = \sum_{T\in\mathcal{T}_{n}^{\bullet}}y^{\mathrm{imprope}(T)}\,\prod_{i=1}^{n}\widehat{\phi}_{\mathrm{pdeg}_{T}(i)} \tag{4.27}\] \[S_{n}(y,\boldsymbol{\phi}) = \sum_{T\in\mathcal{T}_{n+1}^{[1]}}y^{\mathrm{imprope}(T)}\,\prod_{i=1}^{n+1}\widehat{\phi}_{\mathrm{pdeg}_{T}(i)} \tag{4.28}\] \[A_{n,k}(y,\boldsymbol{\phi})\;=\;t_{n,k}(y,\boldsymbol{\phi}) = \sum_{T\in\mathcal{T}_{n+1}^{\langle 1;k\rangle}}y^{\mathrm{imprope}(T)}\,\prod_{i=2}^{n+1}\widehat{\phi}_{\mathrm{pdeg}_{T}(i)} \tag{4.29}\]
where \(\mathrm{pdeg}_{T}(i)\) denotes the number of proper children of the vertex \(i\) in the rooted tree \(T\), and \(\widehat{\phi}_{m}=m!\,\phi_{m}\). (Note that in \(R_{n}\) and \(S_{n}\) we give weights to all the vertices, while in \(A_{n,k}\) we do _not_ give any weight to the vertex \(1\).13) We then define the exponential generating functions
Footnote 13: This differs from the convention used in [76, eq. (3.24)], where \(A_{n}=A_{n,0}\) included a factor \(\phi_{0}=\widehat{\phi}_{0}\) associated to the leaf vertex \(1\).
\[\mathcal{R}(t;y,\boldsymbol{\phi}) = \sum_{n=1}^{\infty}R_{n}(y,\boldsymbol{\phi})\,\frac{t^{n}}{n!} \tag{4.30}\] \[\mathcal{S}(t;y,\boldsymbol{\phi}) = \sum_{n=0}^{\infty}S_{n}(y,\boldsymbol{\phi})\,\frac{t^{n}}{n!}\] (4.31) \[\mathcal{A}_{k}(t;y,\boldsymbol{\phi}) = \sum_{n=0}^{\infty}A_{n,k}(y,\boldsymbol{\phi})\,\frac{t^{n}}{n!} \tag{4.32}\]
Let us also define the generating function
\[\Phi(s)\;\stackrel{{\mathrm{def}}}{{=}}\;\sum_{m=0}^{\infty}\phi _{m}\,s^{m}\;=\;\sum_{m=0}^{\infty}\widehat{\phi}_{m}\,\frac{s^{m}}{m!}\;. \tag{4.33}\]
We then have:
**Proposition 4.7**.: _The series \(\mathcal{R}\), \(\mathcal{S}\) and \(\mathcal{A}_{k}\) defined in (4.30)-(4.32) satisfy the following identities:_

* (a) \(\mathcal{S}(t;y,\boldsymbol{\phi})\,=\,\Phi\big(\mathcal{R}(t;y,\boldsymbol{\phi})\big)\)
* (b) \(\mathcal{A}_{k}(t;y,\boldsymbol{\phi})\;=\;\frac{\mathcal{R}(t;y,\boldsymbol{\phi})^{k}/k!}{1-y\mathcal{R}(t;y,\boldsymbol{\phi})}\)
* (c) \(\frac{d}{dt}\mathcal{R}(t;y,\boldsymbol{\phi})\;=\;\mathcal{A}_{0}(t;y,\boldsymbol{\phi})\,\mathcal{S}(t;y,\boldsymbol{\phi})\)

_and hence_

* (d) \(\frac{d}{dt}\mathcal{R}(t;y,\boldsymbol{\phi})\;=\;\frac{\Phi\big(\mathcal{R}(t;y,\boldsymbol{\phi})\big)}{1-y\mathcal{R}(t;y,\boldsymbol{\phi})}\)
Proof. The proof is identical to that of Proposition 4.3, with the following modifications:
(a) Consider a tree \(T\in{\cal T}^{[1]}_{n+1}\) in which the root vertex \(1\) has \(k\) children. Since all \(k\) edges emanating from the root vertex are proper, we get here a factor \(\widehat{\phi}_{k}/k!\) in place of the \(z^{k}/k!\) that was seen in Proposition 4.3. Therefore, the function \(e^{zs}\) in Proposition 4.3 is replaced here by the generating function \(\Phi(s)\).
(b) No change is needed.
(c) No change is needed. (The tree \(T^{\prime\prime}\) has vertex \(1\) as a leaf, but in \(A_{n,0}\) the vertex \(1\) is anyway unweighted.) \(\Box\)
Comparing Proposition 4.7(b) with the definition (2.33) of exponential Riordan arrays, we conclude:
**Corollary 4.8**.: _The matrix \({\sf T}(y,\mathbf{\phi})\) is the exponential Riordan array \({\cal R}[F,G]\) where_
\[F(t)\;=\;\frac{1}{1-y{\cal R}(t;y,\mathbf{\phi})}\,,\quad G(t)\;=\;{ \cal R}(t;y,\mathbf{\phi}) \tag{4.34}\]
_and \({\cal R}(t;y,\mathbf{\phi})\) is the solution of the differential equation of Proposition 4.7(d) with initial condition \({\cal R}(0;y,\mathbf{\phi})=0\)._
We observe that (4.34) is identical in form to (4.24); only \({\cal R}\) is different.
Comparing Proposition 4.7(b,d) with the definitions (2.44)/(2.50)/(2.52) of the \(A\)-series, \(Z\)-series, \(\Phi\)-series and \(\Psi\)-series of an exponential Riordan array, we conclude:
**Corollary 4.9**.: _The exponential Riordan array \({\sf T}(y,\mathbf{\phi})\) has_
\[A(s)\;=\;\frac{\Phi(s)}{1-ys}\,,\quad Z(s)\;=\;\frac{y\,\Phi(s)}{(1-ys)^{2}} \tag{4.35}\]
_and_
\[\Psi(s)\;=\;\frac{1}{1-ys} \tag{4.36}\]
_where \(\Phi(s)\) is given by (4.33)._
We see that \(\Psi(s)\) is the same here as in (4.26); only \(\Phi\) is different. Proposition 4.7 and Corollaries 4.8-4.9 reduce to Proposition 4.3 and Corollaries 4.5-4.6 if we take \(\phi_{m}=z^{m}/m!\) and hence \(\widehat{\phi}_{m}=z^{m}\), \(\Phi(s)=e^{zs}\).
## Proof of Theorems 1.1, 1.2, 1.4 and 1.8
In this section we will prove Theorems 1.1, 1.2, 1.4 and 1.8. The proofs are now very easy: we combine the general theory of total positivity in exponential Riordan arrays developed in Section 2 (culminating in Corollary 2.28) with the specific computations of \(\Phi\)- and \(\Psi\)-series carried out in Section 4.
It suffices of course to prove Theorem 1.8, since Theorems 1.1, 1.2 and 1.4 are contained in it as special cases: take \(\phi_{m}=z^{m}/m!\) to get Theorem 1.4; then take \(y=z=1\) to get Theorems 1.1 and 1.2. However, we find it instructive to work our way up, starting with Theorems 1.1 and 1.2 and then gradually adding extra parameters.
### The matrix \(\mathsf{T}\)
Proof of Theorems 1.1 and 1.2. In order to employ the theory of exponential Riordan arrays, we work here in the ring \(\mathbb{Q}\), even though the matrix elements actually lie in \(\mathbb{Z}\).
By Corollary 4.2, the exponential Riordan array \(\mathsf{T}\) has \(\Phi(s)=e^{s}\) and \(\Psi(s)=1/(1-s)\). By Lemma 2.5, the corresponding sequences \(\boldsymbol{\phi}\) and \(\boldsymbol{\psi}\) (namely, \(\phi_{m}=1/m!\) and \(\psi_{m}=1\)) are Toeplitz-totally positive in \(\mathbb{Q}\). Corollary 2.28 then yields Theorems 1.1(a) and 1.2. Theorem 1.1(b) is obtained from Theorem 1.2 by specializing to \(x=0\). \(\square\)
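Although the theorem is proved in full generality above, a direct finite check is easy and reassuring. The following sketch (ours, illustrative only; exact rational arithmetic on the \(6\times 6\) leading block) verifies that every minor of the truncated matrix \(\mathsf{T}=\big(\binom{n}{k}n^{n-k}\big)\) is nonnegative, consistent with the total positivity asserted in Theorem 1.1:

```python
# Check all minors of the 6x6 leading block of T = (binom(n,k) n^(n-k)).
from fractions import Fraction
from itertools import combinations
from math import comb

N = 6
T = [[Fraction(comb(n, k) * n**(n - k)) if k <= n else Fraction(0)
      for k in range(N)] for n in range(N)]

def det(M):
    """Exact determinant by Gaussian elimination over Fraction."""
    M = [row[:] for row in M]
    d = Fraction(1)
    for c in range(len(M)):
        piv = next((r for r in range(c, len(M)) if M[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, len(M)):
            f = M[r][c] / M[c][c]
            M[r] = [M[r][j] - f * M[c][j] for j in range(len(M))]
    return d

for size in range(1, N + 1):
    for rows in combinations(range(N), size):
        for cols in combinations(range(N), size):
            assert det([[T[r][c] for c in cols] for r in rows]) >= 0
print("all minors of the 6x6 leading block of T are nonnegative")
```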
Since this proof employed the production-matrix method (hidden inside Corollary 2.28), it is worth making explicit what the production matrix is:
**Proposition 5.1** (Production matrix for \(\mathsf{T}\)).: _The production matrix \(P=\mathsf{T}^{-1}\Delta\mathsf{T}\) is the unit-lower-Hessenberg matrix_
\[P\;=\;B_{1}\,\Delta\,DT_{1}D^{-1} \tag{5.1}\]
_where \(B_{1}\) is the binomial matrix [i.e. (1.6) at \(x=1\)], \(T_{1}\) is the lower-triangular matrix of all ones [i.e. (2.2) at \(x=1\)], and \(D=\mathrm{diag}\big{(}(n!)_{n\geq 0}\big{)}\). More generally, we have_
\[B_{\xi}^{-1}P\,B_{\xi}\;=\;B_{1}\,(\Delta+\xi I)\,DT_{1}D^{-1}\;. \tag{5.2}\]
Proof. Since \(\phi_{m}=1/m!\) and \(\psi_{m}=1\), Proposition 2.23 implies
\[P\;=\;DT_{\infty}\big{(}(1/m!)_{m\geq 0}\big{)}D^{-1}\,\Delta\,DT_{1}D^{-1}\;= \;B_{1}\,\Delta\,DT_{1}D^{-1}\;, \tag{5.3}\]
and Lemma 2.27 implies (5.2). \(\square\)
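Proposition 5.1 can also be confirmed numerically on a finite block. The sketch below (ours) builds the \((N{+}1)\times(N{+}1)\) truncations and checks that \(\mathsf{T}^{-1}\Delta\mathsf{T}\) and \(B_{1}\,\Delta\,DT_{1}D^{-1}\) agree on the rows unaffected by truncation (the last row of each finite product is truncation noise):

```python
# Finite-block check of P = T^{-1} Delta T = B_1 Delta D T_1 D^{-1}  (5.1).
from fractions import Fraction
from math import comb, factorial

N = 7
rng = range(N + 1)

def mat(f):
    return [[Fraction(f(n, k)) for k in rng] for n in rng]

T       = mat(lambda n, k: comb(n, k) * n**(n - k) if k <= n else 0)
Delta   = mat(lambda n, k: int(k == n + 1))
B1      = mat(lambda n, k: comb(n, k))
DT1Dinv = mat(lambda n, k: Fraction(factorial(n), factorial(k)) if k <= n else 0)

def mul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in rng) for k in rng] for i in rng]

def inv_unit_lower(A):
    """Inverse of a unit lower-triangular matrix, by forward substitution."""
    X = [[Fraction(int(i == j)) for j in rng] for i in rng]
    for i in rng:
        for j in range(i):
            X[i] = [X[i][k] - A[i][j] * X[j][k] for k in rng]
    return X

P   = mul(inv_unit_lower(T), mul(Delta, T))
RHS = mul(B1, mul(Delta, DT1Dinv))
assert all(P[n][k] == RHS[n][k] for n in range(N) for k in rng)
print([P[n][0] for n in range(N)])  # [1, 3, 11, 49, 261, 1631, 11743] -- cf. (5.27)
```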
**Remarks.** 1. The zeroth and first columns of the matrix \(P\) are identical: that is, \(p_{n,0}=p_{n,1}\). This can be seen from Lemma 2.29 with \(c=1\), by noting either that \(t_{n,0}=t_{n,1}\) for \(n\geq 1\) or that \(\Psi(s)=1/(1-s)\). Alternatively, it can be seen directly from (5.1): the zeroth and first columns of the matrix \(\Delta\,DT_{1}D^{-1}\) are identical (namely, they are both equal to \(1/(n+1)!\)); so the zeroth and first columns of \(M\,\Delta\,DT_{1}D^{-1}\) are identical, for _any_ row-finite matrix \(M\). (Indeed, this would be the case if \(D=\mathrm{diag}(\,(n!)_{n\geq 0})\)
were replaced by _any_ diagonal matrix \(\operatorname{diag}(d_{0},d_{1},d_{2},\ldots)\) satisfying \(d_{0}=d_{1}\).) We will also see that \(p_{n,0}=p_{n,1}\) in the explicit formula (5.16).
The equality \(p_{n,0}=p_{n,1}\) implies, by Lemma 2.29(b) \(\Longleftrightarrow\) (b\({}^{\prime}\)), the factorization
\[P\;=\;P\Delta^{\mathrm{T}}\,(\mathbf{e}_{00}+\Delta) \tag{5.4}\]
where \(\mathbf{e}_{00}\) denotes the matrix with an entry \(1\) in position \((0,0)\) and all other entries zero, and \(P\Delta^{\mathrm{T}}\) is the lower-triangular matrix obtained from \(P\) by deleting its zeroth column.
2. Closely related to the production matrix \(P=B_{1}\,\Delta\,DT_{1}D^{-1}\) are
\[\widehat{P}\,=\,B_{1}\,DT_{1}D^{-1}\,\Delta\qquad\text{and}\qquad\widehat{P}^{ \prime}\,=\,\Delta\,B_{1}\,DT_{1}D^{-1}\;. \tag{5.5}\]
It was shown in [76, Section 4.1] that \(\widehat{P}\) is the production matrix for the forest matrix \(\mathsf{F}=(f_{n,k})_{n,k\geq 0}\) where \(f_{n,k}=\binom{n}{k}\,k\,n^{n-k-1}\) counts \(k\)-component forests of rooted trees on \(n\) labeled vertices; and that \(\widehat{P}^{\prime}=\Delta\widehat{P}\Delta^{\mathrm{T}}\) is the production matrix for \(\mathsf{F}^{\prime}=\Delta\mathsf{F}\Delta^{\mathrm{T}}=(f_{n+1,k+1})_{n,k\geq 0}\). All three production matrices correspond to the same \(A\)-series \(A(s)=e^{s}/(1-s)\), but with different splittings into \(\Phi\) and \(\Psi\).
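The quoted fact from [76] is likewise easy to test on a finite block. The sketch below (ours) takes \(f_{n,k}=\binom{n}{k}\,k\,n^{n-k-1}\) (with the conventions \(f_{0,0}=1\) and \(f_{n,n}=1\)) and checks \(\mathsf{F}^{-1}\Delta\mathsf{F}=\widehat{P}\) on the rows unaffected by truncation:

```python
# Finite-block check that Phat = B_1 D T_1 D^{-1} Delta is the production
# matrix of the forest matrix F, i.e. F^{-1} Delta F = Phat.
from fractions import Fraction
from math import comb, factorial

N = 7
rng = range(N + 1)

def f(n, k):
    if n == k:
        return 1                      # includes the empty forest f_{0,0} = 1
    if not 1 <= k < n:
        return 0
    return comb(n, k) * k * n**(n - k - 1)

F     = [[Fraction(f(n, k)) for k in rng] for n in rng]
Delta = [[Fraction(int(k == n + 1)) for k in rng] for n in rng]
BDTD  = [[sum(Fraction(comb(n, j) * factorial(j), factorial(k))
              for j in range(k, n + 1)) for k in rng] for n in rng]  # B_1 D T_1 D^{-1}

def mul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in rng) for k in rng] for i in rng]

def inv_unit_lower(A):
    X = [[Fraction(int(i == j)) for j in rng] for i in rng]
    for i in rng:
        for j in range(i):
            X[i] = [X[i][k] - A[i][j] * X[j][k] for k in rng]
    return X

Phat = mul(BDTD, Delta)
LHS  = mul(inv_unit_lower(F), mul(Delta, F))
assert all(LHS[n][k] == Phat[n][k] for n in range(N) for k in rng)
```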
We have more to say about this production matrix \(P\), but in order to avoid disrupting the flow of the argument we defer it to Section 5.4.
### The matrix \(\mathsf{T}(y,z)\)
Proof of Theorem 1.4. In order to employ the theory of exponential Riordan arrays, we work here in the ring \(\mathbb{Q}[y,z]\), even though the matrix elements actually lie in \(\mathbb{Z}[y,z]\).
By Corollary 4.6, the exponential Riordan array \(\mathsf{T}(y,z)\) has \(\Phi(s)=e^{zs}\) and \(\Psi(s)=1/(1-ys)\). By Lemma 2.5, the corresponding sequences \(\boldsymbol{\phi}\) and \(\boldsymbol{\psi}\) (namely, \(\phi_{m}=z^{m}/m!\) and \(\psi_{m}=y^{m}\)) are Toeplitz-totally positive in the ring \(\mathbb{Q}[y,z]\) equipped with the coefficientwise order. Corollary 2.28 then yields Theorem 1.4. \(\square\)
Analogously to Proposition 5.1, we have:
**Proposition 5.2** (Production matrix for \(\mathsf{T}(y,z)\)).: _The production matrix \(P(y,z)=\mathsf{T}(y,z)^{-1}\Delta\mathsf{T}(y,z)\) is the unit-lower-Hessenberg matrix_
\[P(y,z)\;=\;B_{z}\,\Delta\,DT_{y}D^{-1} \tag{5.6}\]
_where \(B_{z}\) is the weighted binomial matrix (1.6), \(T_{y}\) is the Toeplitz matrix of powers (2.2), and \(D=\operatorname{diag}\bigl{(}(n!)_{n\geq 0}\bigr{)}\). More generally,_
\[B_{\xi}^{-1}P(y,z)\,B_{\xi}\;=\;B_{z}\,(\Delta+\xi I)\,DT_{y}D^{-1}\;. \tag{5.7}\]
**Remarks.** 1. The zeroth and first columns of the matrix \(P(y,z)\) satisfy \(p_{n,0}=yp_{n,1}\). This can be seen from Lemma 2.29 with \(c=y\), by noting either that \(t_{n,0}(y,z)=yt_{n,1}(y,z)\) for \(n\geq 1\) (Proposition 1.3) or that \(\Psi(s)=1/(1-ys)\). Alternatively, it can be seen directly from (5.6): the zeroth column of the matrix \(\Delta\,DT_{y}D^{-1}\) is \(y\) times
the first column (they are, respectively, \(y^{n+1}/(n+1)!\) and \(y^{n}/(n+1)!\)); so the zeroth column of \(M\,\Delta\,DT_{y}D^{-1}\) is \(y\) times the first column, for _any_ row-finite matrix \(M\).
The equality \(p_{n,0}=yp_{n,1}\) implies, by Lemma 2.29(b)\(\Longleftrightarrow\) (b\({}^{\prime}\)), the factorization
\[P(y,z)\;=\;P(y,z)\,\Delta^{\rm T}\left(y\,{\bf e}_{00}+\Delta\right)\,. \tag{5.8}\]
2. Closely related to the production matrix \(P(y,z)=B_{z}\,\Delta\,DT_{y}D^{-1}\) are
\[\widehat{P}(y,z)\;=\;B_{z}\,DT_{y}D^{-1}\,\Delta\qquad\mbox{and}\qquad\widehat {P}^{\prime}(y,z)\;=\;\Delta\,B_{z}\,DT_{y}D^{-1}\;. \tag{5.9}\]
It was shown in [76, Section 4.3] that \(\widehat{P}(y,z)\) is the production matrix for \({\sf F}(y,z)=\left(f_{n,k}(y,z)\right)_{n,k\geq 0}\) where \(f_{n,k}(y,z)\) counts \(k\)-component forests of rooted trees on the vertex set \([n]\) with a weight \(y\) (resp. \(z\)) for each improper (resp. proper) edge. Likewise, \(\widehat{P}^{\prime}(y,z)=\Delta\widehat{P}(y,z)\Delta^{\rm T}\) is the production matrix for \({\sf F}^{\prime}(y,z)=\Delta{\sf F}(y,z)\Delta^{\rm T}=\left(f_{n+1,k+1}(y,z) \right)_{n,k\geq 0}\). All three production matrices correspond to the same \(A\)-series \(A(s)=e^{zs}/(1-ys)\), but with different splittings into \(\Phi\) and \(\Psi\). \(\quad\blacksquare\)
### The matrix \({\sf T}(y,\boldsymbol{\phi})\)
The proof is similar to that in the preceding subsections, but a bit of care is needed to handle the case in which the ring \(R\) does not contain the rationals.
Proof of Theorem 1.8. We start by letting \(\boldsymbol{\phi}=(\phi_{m})_{m\geq 0}\) be indeterminates, and working in the ring \(\mathbb{Q}[y,\boldsymbol{\phi}]\).
By Corollary 4.9, the exponential Riordan array \({\sf T}(y,\boldsymbol{\phi})\) has \(\Phi(s)=\sum_{m=0}^{\infty}\phi_{m}s^{m}\) and \(\Psi(s)=1/(1-ys)\), so \(\psi_{m}=y^{m}\). We therefore have \({\sf T}(y,\boldsymbol{\phi})={\cal O}(P)\) and more generally \({\sf T}(y,\boldsymbol{\phi})B_{x}={\cal O}(B_{x}^{-1}PB_{x})\), where Proposition 2.23 and Lemma 2.27 tell us that
\[B_{x}^{-1}PB_{x}\;=\;[D\,T_{\infty}(\boldsymbol{\phi})\,D^{-1}]\,(\Delta\,+\, xI)\,[D\,T_{\infty}(\boldsymbol{\psi})\,D^{-1}]\;. \tag{5.10}\]
We now use the definition (2.72) to rewrite this as
\[B_{x}^{-1}PB_{x}\;=\;T_{\infty}(\boldsymbol{\phi})^{\sharp}\,(\Delta\,+\,xI) \,T_{\infty}(\boldsymbol{\psi})^{\sharp}\;. \tag{5.11}\]
Having done this, the equality \({\sf T}(y,\boldsymbol{\phi})B_{x}={\cal O}(B_{x}^{-1}PB_{x})\) is now a valid identity in the ring \(\mathbb{Z}[y,\boldsymbol{\phi}]\). We can therefore now substitute elements \(\boldsymbol{\phi}\) in any commutative ring \(R\) for the indeterminates \(\boldsymbol{\phi}\), and the identity still holds.
By hypothesis the sequence \(\boldsymbol{\phi}\) is Toeplitz-totally positive in the ring \(R\). By Lemma 2.4, the sequence \(\boldsymbol{\psi}\) is Toeplitz-totally positive in the ring \(\mathbb{Z}[y]\) equipped with the coefficientwise order. By Lemma 2.30, the matrices \(T_{\infty}(\boldsymbol{\phi})^{\sharp}\) and \(T_{\infty}(\boldsymbol{\psi})^{\sharp}\) are also totally positive. Therefore \(B_{x}^{-1}PB_{x}\) is totally positive in the ring \(R[x,y]\) equipped with the coefficientwise order. Proposition 2.15 then yields Theorem 1.8. \(\quad\Box\)
**Proposition 5.3** (Production matrix for \({\sf T}(y,\boldsymbol{\phi})\)).: _The production matrix \(P(y,\boldsymbol{\phi})={\sf T}(y,\boldsymbol{\phi})^{-1}\Delta{\sf T}(y, \boldsymbol{\phi})\) is the unit-lower-Hessenberg matrix_
\[P(y,\boldsymbol{\phi})\;=\;T_{\infty}(\boldsymbol{\phi})^{\sharp}\,\Delta\,T_{ y}^{\sharp} \tag{5.12}\]
_where \(T_{y}\) is the Toeplitz matrix of powers (2.2), and \({}^{\sharp}\) is defined in (2.72). More generally,_
\[B_{\xi}^{-1}P(y,\boldsymbol{\phi})\,B_{\xi}\;=\;T_{\infty}(\boldsymbol{\phi})^ {\sharp}\,(\Delta+\xi I)\,T_{y}^{\sharp}\;. \tag{5.13}\]
**Remarks.** 1. The zeroth and first columns of the matrix \(P(y,\boldsymbol{\phi})\) satisfy \(p_{n,0}=yp_{n,1}\), for exactly the same reasons as were observed for \(P(y,z)\). This implies the factorization
\[P(y,\boldsymbol{\phi})\;=\;P(y,\boldsymbol{\phi})\,\Delta^{\mathrm{T}}\left(y \,\mathbf{e}_{00}+\Delta\right)\,. \tag{5.14}\]
2. Closely related to the production matrix \(P(y,\boldsymbol{\phi})=T_{\infty}(\boldsymbol{\phi})^{\sharp}\,\Delta\,T_{y}^ {\sharp}\) are
\[\widehat{P}(y,\boldsymbol{\phi})\;=\;T_{\infty}(\boldsymbol{\phi})^{\sharp}\,T _{y}^{\sharp}\,\Delta\qquad\mbox{and}\qquad\widehat{P}^{\prime}(y,\boldsymbol {\phi})\;=\;\Delta\,T_{\infty}(\boldsymbol{\phi})^{\sharp}\,T_{y}^{\sharp}\;. \tag{5.15}\]
It was shown in [76, Section 4.4] that \(\widehat{P}(y,\boldsymbol{\phi})\) is the production matrix for \(\mathsf{F}(y,\boldsymbol{\phi})=\left(f_{n,k}(y,\boldsymbol{\phi})\right)_{n, k\geq 0}\) where \(f_{n,k}(y,\boldsymbol{\phi})\) counts \(k\)-component forests of rooted trees on the vertex set \([n]\) with a weight \(y\) for each improper edge and a weight \(\widehat{\phi}_{m}\stackrel{{\mathrm{def}}}{{=}}m!\,\phi_{m}\) for each vertex with \(m\) proper children. Likewise, \(\widehat{P}^{\prime}(y,\boldsymbol{\phi})=\Delta\widehat{P}(y,\boldsymbol{ \phi})\Delta^{\mathrm{T}}\) is the production matrix for \(\mathsf{F}^{\prime}(y,\boldsymbol{\phi})=\Delta\mathsf{F}(y,\boldsymbol{\phi })\Delta^{\mathrm{T}}=\left(f_{n+1,k+1}(y,\boldsymbol{\phi})\right)_{n,k\geq 0}\). All three production matrices correspond to the same \(A\)-series \(A(s)=\Phi(s)/(1-ys)\), but with different splittings into \(\Phi\) and \(\Psi\).
### More on the production matrix for \(\mathsf{T}\)
We now wish to say a bit more about the production matrix \(P\) for the tree matrix \(\mathsf{T}\). We begin by giving an explicit formula:
**Proposition 5.4**.: _The production matrix \(P=\mathsf{T}^{-1}\Delta\mathsf{T}\) is the unit-lower-Hessenberg matrix with entries_
\[p_{n,k} = n{n\choose k}S_{n-k}\;+\;{n+1\choose k} \tag{5.16a}\] \[= \frac{n!}{k!\,(n-k+1)!}\;(nS_{n-k+1}\,+\,1) \tag{5.16b}\]
_where \(S_{m}\) denotes the ordered subset number [61, A000522]_
\[S_{m}\;\stackrel{{\mathrm{def}}}{{=}}\;\sum_{k=0}^{m}\frac{m!}{k!}\;. \tag{5.17}\]
_These matrix elements satisfy in particular \(p_{n,0}=p_{n,1}=nS_{n}+1\) for all \(n\geq 0\)._
The formula (5.16) has a very easy proof, based on the theory of exponential Riordan arrays together with our formulae for \(A(s)\) and \(Z(s)\); we begin by giving this proof. On the other hand, it is also of some interest to see that this production matrix can be found by "elementary" algebraic methods, without relying on the machinery of exponential Riordan arrays or on any combinatorial interpretation; this will be our second proof.
First Proof of Proposition 5.4. From \(A(s)=e^{s}/(1-s)\) we have
\[a_{n}\;=\;\sum_{j=0}^{n}\frac{1}{j!}\;=\;\frac{S_{n}}{n!}\;. \tag{5.18}\]
From \(Z(s)=e^{s}/(1-s)^{2}\) we have
\[z_{n}\;=\;\sum_{j=0}^{n}\frac{n+1-j}{j!}\;=\;n\sum_{j=0}^{n}\frac{1}{j!}\,+\, \frac{1}{n!}\;=\;\frac{nS_{n}+1}{n!}\;. \tag{5.19}\]
Theorem 2.22 and (2.40) give
\[p_{n,k}\;=\;\frac{n!}{k!}\,(z_{n-k}\,+\,k\,a_{n-k+1})\;, \tag{5.20}\]
and a little algebra leads to (5.16a,b). It is then easy to see that \(p_{n,0}=p_{n,1}=nS_{n}+1\). \(\square\)
Second Proof of Proposition 5.4. An Abel inverse relation [67, p. 96, unnumbered equation after (3b)] says that the inverse matrix to \({\sf T}=(t_{n,k})_{n,k\geq 0}=\left({n\choose k}n^{n-k}\right)_{n,k\geq 0}\) is
\[({\sf T}^{-1})_{n,k}\;=\;(-1)^{n-k}\,{n\choose k}\,n\,k^{n-k-1}\;. \tag{5.21}\]
It follows that \(P={\sf T}^{-1}\Delta{\sf T}\) has matrix elements
\[p_{n,k}\;=\;\sum_{j=k-1}^{n}(-1)^{n-j}\,{n\choose j}\,n\,j^{n-j-1}\,{j+1 \choose k}\,(j+1)^{j+1-k}\;. \tag{5.22}\]
Setting \(N=n-k+1\) and \(j=k-1+\ell\) gives
\[p_{n,k}\;=\;\sum_{\ell=0}^{N}(-1)^{N-\ell}\,{n\choose k-1+\ell}\,n\,(k-1+\ell )^{N-\ell-1}\,{k+\ell\choose k}\,(k+\ell)^{\ell}\;, \tag{5.23}\]
which after a bit of playing with the binomial coefficients gives
\[p_{n,k}\;=\;-\,\frac{n\,\cdot\,n!}{k!\,(n-k+1)!}\;\sum_{\ell=0}^{N}{N\choose \ell}\,(1-k-\ell)^{N-\ell-1}\,(k+\ell)^{\ell+1}\;. \tag{5.24}\]
We now use the Abel identity [67, p. 22, eq. (27)]
\[\sum_{\ell=0}^{N}{N\choose\ell}\,(x+\ell)^{\ell+1}\,(y+N-\ell)^{N-\ell-1}\;=\; y^{-1}\sum_{\ell=0}^{N}{N\choose\ell}\,\ell!\,(x+\ell)\,(x+y+N)^{N-\ell} \tag{5.25}\]
with \(x=k\) and \(y=1-N-k=-n\): this gives
\[p_{n,k} = -\,\frac{n\,\cdot\,n!}{k!\,(n-k+1)!}\;(-1/n)\,\sum_{\ell=0}^{N} \binom{N}{\ell}\,\ell!\,(k+\ell)\,1^{N-\ell} \tag{5.26a}\] \[= \frac{n!}{k!\,(n-k+1)!}\;\sum_{\ell=0}^{N}\frac{N!}{(N-\ell)!}\,(k +\ell)\] \[= \frac{n!}{k!\,(n-k+1)!}\;\sum_{m=0}^{N}\frac{N!}{m!}\,(k+N-m)\] \[= \frac{n!}{k!\,(n-k+1)!}\;\biggl{[}(n+1)S_{n-k+1}\,-\,\sum_{m=0}^{ n-k+1}\frac{(n-k+1)!}{(m-1)!}\biggr{]}\] \[= \frac{n!}{k!\,(n-k+1)!}\;[nS_{n-k+1}\,+\,1]\;, \tag{5.26e}\]
which is (5.16b). \(\square\)
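The Abel inverse relation (5.21) itself can be checked directly; since both matrices are lower-triangular, truncation causes no error. A minimal sketch (ours):

```python
# Check that (5.21) inverts t_{n,k} = binom(n,k) n^(n-k) on a leading block.
from math import comb

N = 8

def t(n, k):
    return comb(n, k) * n**(n - k) if 0 <= k <= n else 0

def tinv(n, k):
    if n == k:
        return 1
    if not 0 <= k < n:
        return 0
    return (-1)**(n - k) * comb(n, k) * n * k**(n - k - 1)

prod = [[sum(tinv(n, j) * t(j, k) for j in range(N)) for k in range(N)]
        for n in range(N)]
assert prod == [[int(n == k) for k in range(N)] for n in range(N)]
print("the Abel inverse (5.21) is verified on the 8x8 leading block")
```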
**Remarks.** 1. The first few rows of this production matrix are
\[P\;=\;\begin{bmatrix} 1&1&&&&&&&\\ 3&3&1&&&&&&\\ 11&11&5&1&&&&&\\ 49&49&24&7&1&&&&\\ 261&261&130&42&9&1&&&\\ 1631&1631&815&270&65&11&1&&\\ 11743&11743&5871&1955&485&93&13&1&\\ 95901&95901&47950&15981&3990&791&126&15&\ddots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots \end{bmatrix}. \tag{5.27}\]
This matrix \(P\) (or its lower-triangular variant \(P\Delta^{\mathrm{T}}\) in which the zeroth column is deleted) is not currently in [61]. However, the zeroth and first columns are [61, A001339], and the second column \(p_{n,2}=nS_{n}/2\) is [61, A036919].
2. As mentioned earlier, it is not an accident that \(p_{n,0}=p_{n,1}\): by Lemma 2.29 this reflects the fact that \(\Psi(s)=1/(1-s)\), or equivalently that \(t_{n,0}=t_{n,1}\). For the same reason, the production matrices \(P(y,z)\) and \(P(y,\boldsymbol{\phi})\) satisfy \(p_{n,0}=yp_{n,1}\).
3. The ordered subset numbers satisfy the recurrence \(S_{m}=mS_{m-1}+1\). \(\blacksquare\)
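For readers who want to reproduce (5.27), here is a short sketch (ours) evaluating the closed form (5.16) and checking (5.16a) = (5.16b) together with the recurrence \(S_{m}=mS_{m-1}+1\) along the way:

```python
# Evaluate (5.16) and reproduce the rows of the production matrix (5.27).
from math import comb, factorial

def S(m):
    return sum(factorial(m) // factorial(j) for j in range(m + 1))

assert all(S(m) == m * S(m - 1) + 1 for m in range(1, 12))

def p(n, k):
    a = n * comb(n, k) * S(n - k) + comb(n + 1, k)               # (5.16a)
    b = factorial(n) * (n * S(n - k + 1) + 1) \
        // (factorial(k) * factorial(n - k + 1))                 # (5.16b)
    assert a == b
    return a

for n in range(6):
    print([p(n, k) for k in range(n + 2)])
# [1, 1]
# [3, 3, 1]
# [11, 11, 5, 1]
# ...
```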
Let us now state some further properties of the matrix elements \(p_{n,k}\):
**Proposition 5.5**.: _Define the matrix \(P=(p_{n,k})_{n,k\geq 0}\) by (5.16)/(5.17). Then:_
1. _The_ \(p_{n,k}\) _are nonnegative integers that satisfy the backward recurrence_ \[p_{n,k}\;=\;(k+1)p_{n,k+1}\,+\,\binom{n}{k-1}\] (5.28) _with initial condition_ \(p_{n,n+1}=1\)_._
2. _The_ \(p_{n,k}\) _are also given by_ \[p_{n,k}\;=\;\frac{nS_{n}\,-\,Q_{k}(n)}{k!}\;,\] (5.29) _where_ \[Q_{k}(n) = -1\,+\,\sum_{j=2}^{k}(j-1)!\,\binom{n}{j-2}\] (5.30a) \[= -1\,+\,\sum_{j=2}^{k}(j-1)\,n^{\underline{j-2}}\] (5.30b) _are polynomials in_ \(n\) _with integer coefficients. In particular,_ \(Q_{0}(n)=Q_{1}(n)=-1\) _and_ \(Q_{2}(n)=0\)_, so that_ \(p_{n,0}=p_{n,1}=nS_{n}+1\) _and_ \(p_{n,2}=nS_{n}/2\)_._
Proof. (a) It is immediate from (5.16a)/(5.17) that the \(p_{n,k}\) are nonnegative integers. And it is easy to verify, using the recurrence \(S_{m}=mS_{m-1}+1\), that the quantities (5.16) indeed satisfy the recurrence (5.28).
(b) Introducing the Ansatz (5.29), a simple computation shows that the recurrence (5.28) for \(p_{n,k}\) is equivalent to the recurrence
\[Q_{k+1}(n)\;=\;Q_{k}(n)\,+\,k!\,\binom{n}{k-1} \tag{5.31}\]
for \(Q_{k}(n)\). Furthermore, simple computations show that \(p_{n,0}=p_{n,1}=nS_{n}+1\), so that \(Q_{0}(n)=Q_{1}(n)=-1\). It is then easy to see that the unique solution of the recurrence (5.31) with initial condition \(Q_{0}(n)=-1\) is (5.30). \(\Box\)
**Remarks.** 1. The first few polynomials \(Q_{k}(n)\) are
\[Q_{0}(n) = -1 \tag{5.32a}\] \[Q_{1}(n) = -1\] (5.32b) \[Q_{2}(n) = 0\] (5.32c) \[Q_{3}(n) = 2n\] (5.32d) \[Q_{4}(n) = 3n^{2}-n\] (5.32e) \[Q_{5}(n) = 4n^{3}-9n^{2}+7n\] (5.32f) \[Q_{6}(n) = 5n^{4}-26n^{3}+46n^{2}-23n\] (5.32g) \[Q_{7}(n) = 6n^{5}-55n^{4}+184n^{3}-254n^{2}+121n \tag{5.32h}\]
This triangular array is apparently not in [61]. In any case it follows immediately from (5.30b) that for \(k\geq 3\) the leading term in \(Q_{k}(n)\) is \((k-1)n^{k-2}\). And it also follows from (5.30b) that for \(k\geq 4\) the next-to-leading term in \(Q_{k}(n)\) is \(-[(k-2)(k^{2}-4k+1)/2]\,n^{k-3}\)[61, A154560].
2. Before we found either of the two proofs of Proposition 5.4, we initially _guessed_ the formulae (5.16) for \(p_{n,k}\), as follows: Comparison of successive columns of (5.27)
suggested the backwards recurrence (5.28) for each row of (5.27), with initial condition \(p_{n,n+1}=1\). On the other hand, by looking at the diagonals (\(n-k=\mbox{constant}\)) successively for \(n-k=-1,0,1,2,\dots\), a little experimentation led to the formula (5.16b).
3. The factorization (5.4) implies that the unit-lower-Hessenberg matrix \(P\) is totally positive if and only if the unit-lower-triangular matrix \(\widetilde{P}\stackrel{{\rm def}}{{=}}P\Delta^{\rm T}\), obtained from \(P\) by deleting its zeroth column, is totally positive. Now, the production matrix of \(\widetilde{P}\) -- namely, the unit-lower-Hessenberg matrix \(Q=\widetilde{P}^{-1}\Delta\widetilde{P}\) -- appears to have a very simple form:
\[Q\ \stackrel{{\rm def}}{{=}}\ \widetilde{P}^{-1}\Delta \widetilde{P}\ =\ \left[\begin{array}{ccccccccc}3&1&&&&\\ 2&2&1&&&\\ 6&3&2&1&&\\ 24&12&4&2&1&&\\ 120&60&20&5&2&1&\\ 720&360&120&30&6&2&\ddots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{array}\right] \tag{5.33}\]
or in other words
\[q_{n,k} = \frac{(n+1)!}{(k+1)!}\ \ \mbox{for $k<n$} \tag{5.34a}\] \[q_{0,0} = 3\] (5.34b) \[q_{n,n} = 2\ \ \mbox{for $n\geq 1$}\] (5.34c) \[q_{n,n+1} = 1\] (5.34d) \[q_{n,k} = 0\ \ \mbox{for $k>n+1$} \tag{5.34e}\]
(We have not proven this formula for \(Q\), but it is probably not too difficult.) Alas, this matrix \(Q\) is not even \({\rm TP}_{2}\) (for instance, \(q_{10}q_{21}-q_{11}q_{20}=2\cdot 3-2\cdot 6=-6\)), so we cannot use this method to prove the total positivity of \(P\). Nor does it help to subtract a multiple of the identity matrix from \(Q\) (which would correspond to factoring out a binomial matrix from \(\widetilde{P}\) on the left): there does not exist any \(c\in\mathbb{R}\) for which the leading \(3\times 3\) principal submatrix of \(Q-cI\) is \({\rm TP}_{2}\). \(\blacksquare\)
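The observations of this remark are also easy to reproduce computationally. The sketch below (ours) deletes the zeroth column of \(P\) to form \(\widetilde{P}\), computes \(Q=\widetilde{P}^{-1}\Delta\widetilde{P}\) on a finite block, confirms the (unproven) pattern (5.34) there, and exhibits the negative \(2\times 2\) minor:

```python
# Compute Q = Ptilde^{-1} Delta Ptilde and compare with the pattern (5.34).
from fractions import Fraction
from math import comb, factorial

N = 7
rng = range(N + 1)

def S(m):
    return sum(factorial(m) // factorial(j) for j in range(m + 1))

def p(n, k):
    return n * comb(n, k) * S(n - k) + comb(n + 1, k)

Pt    = [[Fraction(p(n, k + 1)) for k in rng] for n in rng]   # Ptilde = P Delta^T
Delta = [[Fraction(int(k == n + 1)) for k in rng] for n in rng]

def mul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in rng) for k in rng] for i in rng]

def inv_unit_lower(A):
    X = [[Fraction(int(i == j)) for j in rng] for i in rng]
    for i in rng:
        for j in range(i):
            X[i] = [X[i][k] - A[i][j] * X[j][k] for k in rng]
    return X

Q = mul(inv_unit_lower(Pt), mul(Delta, Pt))
assert Q[0][0] == 3 and all(Q[n][n] == 2 for n in range(1, N))
assert all(Q[n][k] == Fraction(factorial(n + 1), factorial(k + 1))
           for n in range(1, N) for k in range(n))
assert Q[1][0] * Q[2][1] - Q[1][1] * Q[2][0] == -6   # Q is not even TP_2
```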
## Acknowledgments
We wish to thank Tomack Gilmore for many helpful conversations.
This research was supported in part by Engineering and Physical Sciences Research Council grant EP/N025636/1 and by National Natural Science Foundation of China grant 12271078.
## Appendix A Interpretation of \(t_{n,k}(y,z)\) in our first combinatorial model
In this appendix we give an interpretation of the polynomials \(t_{n,k}(y,z)\), which were defined in (1.7), in our first combinatorial model (rooted trees in which the root
has \(k\) lower-numbered children). However, in order to make this interpretation most natural, we modify the model slightly, by now considering rooted trees in which the root has \(k\)_higher_-numbered children (this is of course equivalent by reversing all the labels). We denote by \(\mathcal{T}_{n+1,k}^{\bullet}\) the set of rooted trees on the vertex set \([n+1]\) in which exactly \(k\) children of the root are higher-numbered than the root.
We will therefore be defining a bijection between two models on the vertex set \([n+1]\):
**Model 1.** Rooted trees in which the root has \(k\) higher-numbered children.
**Model 2.** Rooted trees in which the vertex \(1\) has \(k\) children.
We begin with some definitions.
Let \(T\) be a tree on a totally ordered vertex set (for us it will be \([n+1]\)), and let \(e=ij\) be an edge of \(T\), where \(i\) is the parent and \(j\) is the child. We say that the edge \(e=ij\) is _increasing_ if \(i<j\), and _decreasing_ if \(i>j\). We recall that the edge \(e=ij\) is _improper_ if there exists a descendant of \(j\) (possibly \(j\) itself) that is lower-numbered than \(i\); otherwise it is _proper_. Clearly, every decreasing edge is necessarily improper; an increasing edge can be either proper or improper, depending on the behavior of the descendants of \(j\).
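These classifications are straightforward to implement. The following small helper (ours, illustrative only; the tree is given by its children lists) computes, for each edge, whether it is increasing or decreasing and whether it is proper or improper:

```python
# Classify each edge (parent i -> child j) of a rooted tree as
# increasing/decreasing and proper/improper.
def classify_edges(children, root):
    """children: dict vertex -> list of children; returns {(i, j): (dir, propriety)}."""
    min_desc = {}
    def walk(u):                     # minimum label in the subtree rooted at u
        m = u
        for w in children.get(u, []):
            m = min(m, walk(w))
        min_desc[u] = m
        return m
    walk(root)
    return {(i, j): ('increasing' if i < j else 'decreasing',
                     'improper' if min_desc[j] < i else 'proper')
            for i in children for j in children[i]}

# Root 2 with child 3, which has child 1: the edge (2,3) is increasing but
# improper (vertex 1 < 2 sits below 3); the edge (3,1) is decreasing, hence improper.
print(classify_edges({2: [3], 3: [1], 1: []}, 2))
```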
We now classify edges in a tree \(T\in\mathcal{T}_{n+1,k}^{\bullet}\) (that is, Model 1) as either _regular_ or _irregular_, as follows:
**Definition A.1**.: _Let \(e=ij\) be an edge in a tree \(T\in\mathcal{T}_{n+1,k}^{\bullet}\), where \(i\) is the parent and \(j\) is the child. We classify this edge as follows:_

* (I1) _If_ \(ij\) _is decreasing, then it is irregular._
* (I2) _If_ \(ij\) _is increasing and improper, and_ \(i\) _is not the root, then_ \(ij\) _is irregular._
* (I3) _If_ \(ij\) _is increasing and_ \(i\) _is the root, then_ \(ij\) _is regular. (That is, the_ \(k\) _increasing edges emanating from the root are all regular.)_
* (I4) _Suppose that all the children of vertex_ \(1\) _are higher-numbered than the root. If_ \(i=1\) _and there is a descendant of_ \(j\) _that is lower-numbered than the root, then_ \(ij\) _is irregular. (Note that in this case the root cannot be vertex_ \(1\)_; so this rule does not contradict rule (I3).)_
* (I5) _Suppose that vertex_ \(1\) _has at least one child that is lower-numbered than the root_ \(\rho\)_. (Note that this implies_ \(\rho\neq 1\)_.) Let_ \(T_{1}\) _be the maximal increasing subtree of_ \(T\) _rooted at vertex_ \(1\)_, whose vertices are_ \(1=v_{1}<\cdots<v_{\ell+1}<v_{\ell+2}<\cdots<v_{m}\)_, where_ \(v_{\ell+1}<\rho<v_{\ell+2}\) _(of course_ \(\rho\notin T_{1}\)_). Then:_
  * (I5a) _all the edges on the path from vertex_ \(1\) _to_ \(v_{\ell+1}\) _are irregular; and_
  * (I5b) _an edge_ \(ij\in T_{1}\) _with parent_ \(i=v_{s}\) _and child_ \(j=v_{t}\) _(_\(s<t\)_) is irregular in case one of the following is satisfied:_
    * (I5b1) \(\ell+2\leq s<t\) _and there is a descendant of_ \(v_{t}\) _in_ \(T\) _that is_ \(<v_{s}\)_;_
    * (I5b2) \(s\leq\ell<\ell+2\leq t\) _and there is a descendant of_ \(v_{t}\) _in_ \(T\) _that is_ \(<v_{s+1}\)_;_
    * (I5b3) \(s=\ell+1<\ell+2\leq t\) _and there is a descendant of_ \(v_{t}\) _in_ \(T\) _that is_ \(<\rho\)_;_
    * (I5b4) \(s<t\leq\ell\) _and there is a descendant_ \(v_{\tau}\) _of_ \(v_{t}\) _in_ \(T_{1}\) _such that_ \(v_{\tau+1}<\rho\) _and there is a descendant of_ \(v_{\tau+1}\) _in_ \(T\) _that is_ \(<v_{s+1}\)_;_
    * (I5b5) \(s<t\leq\ell\) _and there is a descendant_ \(v_{\tau}\) _of_ \(v_{t}\) _in_ \(T_{1}\) _that is_ \(>\rho\)_, and a descendant of_ \(v_{\tau}\) _in_ \(T\) _that is_ \(<v_{s+1}\)_._
* (I6) _All other edges are regular._

(We apologize for the complexity of this definition; but these are the cases that seem to be needed.)
We recall that the polynomials \(t_{n,k}(y,z)\) enumerate trees in Model 2 with a weight \(y\) (resp. \(z\)) for each improper (resp. proper) edge, except that the \(k\) proper edges emanating from vertex 1 are unweighted. We now assert -- and this is the main result of this appendix -- that the same polynomials \(t_{n,k}(y,z)\) enumerate trees in Model 1 with a weight \(y\) (resp. \(z\)) for each irregular (resp. regular) edge, except that the \(k\) regular edges emanating from the root are unweighted:
**Proposition A.2**.: _The polynomials \(t_{n,k}(y,z)\) defined in (1.7) satisfy_
\[t_{n,k}(y,z)\;=\;\sum_{T\in\mathcal{T}^{\bullet}_{n+1,k}}y^{\mathrm{irreg}(T)}z^{\mathrm{reg}(T)-k}\;.\] (A.1)
To prove Proposition A.2, we will construct, for each fixed \(n\) and \(k\), a bijection \(\sigma\) from Model 2 (namely, the set \(\mathcal{T}^{\langle 1;k\rangle}_{n+1}\)) to Model 1 (namely, the set \(\mathcal{T}^{\bullet}_{n+1,k}\)), with the property that the number of proper (resp. improper) edges in \(T\) equals the number of regular (resp. irregular) edges in \(\sigma(T)\). Moreover, we will be able to say which edge in \(T\) corresponds to which edge in \(\sigma(T)\): that is, for each \(T\in\mathcal{T}^{\langle 1;k\rangle}_{n+1}\) we will construct a bijection \(\psi_{T}\colon\, E(T)\to E(\sigma(T))\) such that \(e\in E(T)\) is proper (resp. improper) if and only if \(\psi_{T}(e)\in E(\sigma(T))\) is regular (resp. irregular). We summarize this as follows:
**Proposition A.3**.: _There are bijections \((\sigma,\psi_{T})\) from Model 2 to Model 1 that map proper (resp. improper) edges in Model 2 to regular (resp. irregular) edges in Model 1._
The remainder of this appendix is devoted to proving Proposition A.3.
Given a tree \(T\) rooted at \(r\) in Model 2, let \(v_{1}<v_{2}<\dots<v_{k}\) be the \(k\) children of the vertex 1. If vertex 1 is the root, then its \(k\) children are obviously higher-numbered than the root. In this situation we define \(\sigma(T)=T\), which also belongs to Model 1; and we define \(\psi_{T}(e)=e\) for all \(e\in E(T)\).
Now suppose that the vertex \(1\) is not the root. First consider the case \(k=0\). Since we will use this special case as a tool in handling the general case, we here denote the bijection by \(\phi\) instead of \(\sigma\).
**Lemma A.4**.: _For \(k=0\), there are bijections \((\phi,\psi_{T})\) from Model 2 to Model 1 that map proper (resp. improper) edges in Model 2 to regular (resp. irregular) edges in Model 1._
The following construction is inspired by [13, proof of Lemma 1].
Proof of Lemma A.4. Let \(T\) be a tree in Model 2, in which \(r\) is the root and vertex 1 is a leaf. Since we have already handled the case \(r=1\), we assume henceforth
that \(r\neq 1\). Let \(L\) (resp. \(H\)) denote the set of lower- (resp. higher-) numbered children of the root; and let \(D_{L}\) (resp. \(D_{H}\)) denote the set of all descendants of the vertices in \(L\) (resp. \(H\)), excluding those in \(L\) (resp. \(H\)) itself. See Figure 8(a).
Let \(T_{\max}\) be the maximal increasing subtree of \(T\) rooted at \(r\), and let \(T_{0},\ldots,T_{p}\) be the trees obtained from \(T\) by deleting all the edges in \(T_{\max}\). Let \(r_{j}\) be the root of \(T_{j}\) for \(0\leq j\leq p\). In particular, we choose \(r_{0}\) to be the root \(r\). Note that each \(r_{j}\) is a vertex in \(T_{\max}\) (otherwise it would not become a root when we delete the edges in \(T_{\max}\)); and conversely, every vertex in \(T_{\max}\) becomes a root \(r_{j}\) (though its tree \(T_{j}\) might be trivial). Therefore, all of the higher-numbered children of \(r_{j}\) belong to \(T_{\max}\), while all of the lower-numbered children of \(r_{j}\) belong to \(T_{j}\). We denote the set of those lower-numbered children by \(L_{j}\). Of course, \(L_{0}=L\). See Figure 8(b). Furthermore, since \(T_{\max}\) is an increasing tree, it is rooted at its smallest label (namely, \(r\)); therefore \(r_{j}>r\) for \(1\leq j\leq p\).
**Notation:** Let \(S\) be an increasing sequence of numbers. For \(a\not\in S\) and \(b\in S\), define \(\rho_{+a}^{-b}\) as an operator acting on \(S\) such that \(\rho_{+a}^{-b}(S):=S\cup\{a\}\backslash\{b\}\) is still increasingly ordered. For example, \(\rho_{+2}^{-5}(1,3,5,7)=(1,2,3,7)\). We observe that the inverse of \(\rho_{+a}^{-b}\) is \(\rho_{+b}^{-a}\). Further, if \(T\) is a tree whose vertex set is \(S\), we write \(\rho_{+a}^{-b}(T)\) to denote the tree with vertex set \(\rho_{+a}^{-b}(S)\) that is obtained from \(T\) by relabeling the vertices according to the map \(\rho_{+a}^{-b}\).
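For concreteness, here is a tiny implementation (ours, illustrative only) of the relabeling map underlying \(\rho_{+a}^{-b}\), namely the order-preserving bijection from \(S\) onto \(S\cup\{a\}\setminus\{b\}\):

```python
# The order-preserving relabeling behind rho_{+a}^{-b}.
def rho_map(S, a, b):
    """Return the relabeling dict (old label -> new label)."""
    S = sorted(S)
    assert b in S and a not in S
    return dict(zip(S, sorted(set(S) - {b} | {a})))

print(rho_map([1, 3, 5, 7], 2, 5))  # {1: 1, 3: 2, 5: 3, 7: 7}
```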
We now make the trivial observation that, since \(r>1\) by assumption, the vertex \(1\) is not in \(T_{\max}\). Let \(T_{i}\) be the tree containing vertex \(1\); since \(k=0\), the vertex \(1\) is a leaf in \(T_{i}\). The bijection \(\phi\) is defined in three steps:
Step 1. Take the tree \(T_{\max}\), and relabel its vertices to obtain \(\rho_{+1}^{-r_{i}}(T_{\max})\). Since \(T_{\max}\) is an increasing tree, it is rooted at its smallest label (namely, \(r\)); therefore, the relabeled tree \(\rho_{+1}^{-r_{i}}(T_{\max})\) is rooted at its smallest label, which is the vertex \(1\). [If \(i=0\), we relabeled \(r\to 1\) and left all other labels unaffected. If \(i\neq 0\), we relabeled \(r\to 1\) and relabeled the second-smallest label of \(T_{\max}\) (that is, the lowest-numbered child of \(r\)) to \(r\) -- among other relabelings, the details of which will be worked out below.]

Step 2. Graft \(\rho_{+1}^{-r_{i}}(T_{\max})\) onto \(T_{i}\) by identifying the two vertices \(1\); call the result \(T_{i}^{\prime}\).

Step 3. Graft each tree \(T_{j}\) (\(j\neq i\)) onto \(T_{i}^{\prime}\) by identifying the two vertices \(r_{j}\); call the result \(\phi(T)\). See Figure 9(a).

Figure 8: (a) The tree \(T\) in Model 2 when \(r\neq 1\). (b) Subtrees of \(T\).
In this way we obtain a tree \(\phi(T)\) rooted at \(r_{i}\), in which all the children of \(r_{i}\) are lower-numbered, and in which \(\rho_{+1}^{-r_{i}}(T_{\max})\) is the maximal increasing subtree of \(\phi(T)\) rooted at the vertex \(1\). Furthermore, if \(i\neq 0\), then the lowest-numbered child of the vertex \(1\) is \(r\), which is smaller than the root \(r_{i}\) of \(\phi(T)\); while if \(i=0\), then, as shown in Figure 9(b), the children of vertex \(1\) are precisely the set \(H\) (which did not undergo any relabeling in Step 1), which are all larger than the root \(r\) of \(\phi(T)\).
These observations allow us to obtain the inverse of \(\phi\). If the smallest-numbered child of vertex \(1\) is smaller than the root of \(\phi(T)\), then that child is \(r\), and we are in the case \(i\neq 0\); otherwise we are in the case \(i=0\), and the root of \(\phi(T)\) is \(r\). If we delete the edges of \(\rho_{+1}^{-r_{i}}(T_{\max})\) from \(\phi(T)\), we recover the trees \(T_{i}\); this undoes Step 3. We then undo Step 2 by separating the subtree rooted at vertex \(1\). And finally, we undo Step 1 by relabeling \(\rho_{+1}^{-r_{i}}(T_{\max})\) using the map \(\rho_{+r_{i}}^{-1}\); this yields \(T_{\max}\). We can then reassemble the pieces \(T_{\max}\) and \(T_{0},\ldots,T_{p}\) to obtain \(T\).
It is also clear how the map \(\psi_{T}\) is defined, since each edge of \(T\) corresponds, via relabeling and grafting, to a well-defined edge of \(\phi(T)\).
We now look at how the map \(\psi_{T}\) acts on proper and improper edges of \(T\). We first observe that the edges in \(T_{0},\ldots,T_{p}\) do not undergo any relabeling; so an edge in one of these subtrees is increasing (resp. decreasing) according as the edge \(\psi_{T}(e)\) in \(\phi(T)\) is increasing (resp. decreasing). Furthermore, the descendants in these subtrees are the same as the descendants of their images in \(\phi(T)\), except that the vertex \(1\) in \(\phi(T)\) has acquired extra descendants, which are anyway higher-numbered and therefore do not affect properness or improperness. Therefore, an edge \(e\) in one of these subtrees is proper (resp. improper) according as the edge \(\psi_{T}(e)\) in \(\phi(T)\) is proper (resp. improper). This means, by rules (I1), (I2) and (I6) of Definition A.1, that an edge \(e\) in one of these subtrees is proper (resp. improper) according as the edge \(\psi_{T}(e)\) in \(\phi(T)\) is regular (resp. irregular). [Note that rule (I3) plays no role here, because \(k=0\). Rule (I4) does not apply because all the edges in \(\phi(T)\) emanating from vertex \(1\) lie in \(\rho_{+1}^{-r_{i}}(T_{\max})\). And rule (I5) applies only within \(\rho_{+1}^{-r_{i}}(T_{\max})\).]

Figure 9: The tree \(\phi(T)\) in Model 1 when \(r\neq 1\). (a) General case, but when \(i=0\) the vertex \(r\) and subtree \(T_{0}\) should be removed. (b) Redrawing of the special case \(i=0\), where \(\rho_{+1}^{-r_{i}}(T_{\max})\) is the entire subtree of \(\phi(T)\) rooted at the vertex \(1\).
We now need to consider the edges in \(\rho_{+1}^{-r_{i}}(T_{\max})\). We divide the proof into two cases:
**Case 1: \(i=0\).** Let \(r=u_{1}<u_{2}<\cdots<u_{m}\) be the vertices in \(T_{\max}\); then \(\rho_{+1}^{-r_{i}}\) acts as follows:
\[\rho_{+1}^{-r_{i}}\colon\ (r,u_{2},\ldots,u_{m})\mapsto(1,u_{2},\ldots,u_{m}).\]
That is, as previously observed, \(\rho_{+1}^{-r_{i}}(T_{\max})\) is obtained from \(T_{\max}\) by only relabeling the root \(r\) as the vertex \(1\); and \(\phi(T)\) is as shown in Figure 9(b). Therefore, for all edges in \(T_{\max}\) other than those emanating from the root, properness/improperness in \(T\) corresponds to properness/improperness of their images in \(\phi(T)\); and by rules (I1), (I2) and (I6), this corresponds to regularity/irregularity of the images in \(\phi(T)\). [Rule (I4) does not apply because the parent is not vertex \(1\); and rule (I5) does not apply because all the children of vertex \(1\) are higher-numbered than the root \(r\).]
Now consider an edge \(e\) in \(T\) that emanates from the root \(r\) to a higher-numbered child \(h\in H\). The bijection \(\psi_{T}\) maps \(e\) to an edge \(e^{\prime}\) in \(\phi(T)\) that emanates from vertex \(1\) to \(h\in H\). The edge \(e\) is improper in case there is a descendant of \(h\) in \(T\) that is \(<r\); and this is equivalent to the existence of a descendant of \(h\) in \(\phi(T)\) that is \(<r\). Since in this case \(r\) is the root of \(\phi(T)\), and all the children of vertex \(1\) in \(\phi(T)\) are \(>r\), rule (I4) of Definition A.1 specifies that the edge \(e^{\prime}\) is irregular whenever \(e\) is improper; otherwise, by rule (I6), it is regular. [Once again, rule (I5) does not apply here.]
**Case 2: \(i\neq 0\).** Let \(r=u_{1}<u_{2}<\cdots<u_{\ell}<u_{\ell+1}=r_{i}<u_{\ell+2}<\cdots<u_{m}\) be the vertices in \(T_{\max}\); then \(\rho_{+1}^{-r_{i}}\) acts as follows:
\[\rho_{+1}^{-r_{i}}\colon\ (r,u_{2},u_{3},\ldots,u_{\ell},r_{i},u_{ \ell+2},\ldots,u_{m}) \mapsto (1,r,u_{2},\ldots,u_{\ell-1},u_{\ell},u_{\ell+2},\ldots,u_{m})\] \[:= (v_{1},v_{2},v_{3},\ldots,v_{\ell},v_{\ell+1},v_{\ell+2},\ldots, v_{m})\]
Set \(u_{0}:=1\). Then \(\rho_{+1}^{-r_{i}}(T_{\max})\) is obtained from \(T_{\max}\) by relabeling each vertex \(u_{s}\) that is \(\leq r_{i}\) by \(u_{s-1}\), and leaving all vertices \(>r_{i}\) unchanged; in other words, \(v_{s}=u_{s-1}\) for \(s\leq\ell+1\) and \(v_{s}=u_{s}\) for \(s\geq\ell+2\). Therefore, each edge \(e=u_{s}u_{t}\) in \(T_{\max}\subseteq T\) maps onto \(\psi_{T}(e)=v_{s}v_{t}\) in \(\rho_{+1}^{-r_{i}}(T_{\max})\subseteq\phi(T)\); and the descendants of \(u_{t}\) in \(T_{\max}\) map via \(\psi_{T}\) onto the descendants of \(v_{t}\) in \(\rho_{+1}^{-r_{i}}(T_{\max})\).
Note that in \(\phi(T)\), vertex \(1\) has at least one child (namely, \(r\)) that is lower-numbered than the root \(r_{i}\). Therefore rule (I5) applies, with \(\rho=r_{i}\) and \(T_{1}=\rho_{+1}^{-r_{i}}(T_{\max})\):
* (a) All the edges on the path from the root \(r\) to vertex \(r_{i}\) in \(T_{\max}\subseteq T\) are improper, since vertex \(1\) is a descendant of \(r_{i}\) in \(T\). These edges map, under the relabeling \(\rho_{+1}^{-r_{i}}\), onto the path from vertex \(1\) to \(v_{\ell+1}\) in \(\phi(T)\). By rule (I5a) of Definition A.1, all the edges in this path are irregular.
The foregoing case needed to be treated separately, because the vertices in the path from \(r\) to \(r_{i}\) in \(T_{\max}\subseteq T\) have descendants (in particular, the vertex \(1\)) that do _not_ correspond (via the relabeling) to descendants in their images in \(\phi(T)\), because the tree \(T_{i}\) was moved from its position in \(T\) to the root in \(\phi(T)\). This problem does not arise in the remaining cases:
* (b1) Consider an edge \(e=u_{s}u_{t}\) in \(T_{\max}\), where \(\ell+2\leq s<t\). These vertices do not get relabeled, so \(\psi_{T}(e)=e\). This edge is improper in \(T\) in case there is a descendant of \(u_{t}\) in \(T\) that is \(<u_{s}\). By rule (I5b1) of Definition A.1, this edge is irregular in \(\phi(T)\) in exactly the same situation.
* (b2,3) Now consider an edge \(e=u_{s}u_{t}\) in \(T_{\max}\), where \(s\leq\ell+1<\ell+2\leq t\). Then vertex \(u_{s}\) gets relabeled to \(u_{s-1}\), while \(u_{t}\) does not get relabeled; so \(\psi_{T}(e)=v_{s}v_{t}=u_{s-1}u_{t}\). The edge \(e\) is improper in \(T\) in case there is a descendant of \(u_{t}\) in \(T\) that is \(<u_{s}\). Now \(u_{s}=v_{s+1}\) in case \(s\leq\ell\), while \(u_{s}=r_{i}=\rho\) in case \(s=\ell+1\). By rules (I5b2,3) of Definition A.1, the edge \(v_{s}v_{t}\) is irregular in \(\phi(T)\) exactly when \(e\) is improper in \(T\).
Note that in cases (b1-3), the descendants of \(u_{t}\) in \(T\) are the same as the descendants of \(v_{t}\) in \(\phi(T)\), because the relevant trees \(T_{j}\) were grafted in the same place (since their roots \(r_{j}\) did not get relabeled). Things will be slightly more complicated in the remaining cases:
* (b4,5) Consider an edge \(e=u_{s}u_{t}\) in \(T_{\max}\), where \(s<t\leq\ell\). Then vertices \(u_{s}\) and \(u_{t}\) both get relabeled, so \(\psi_{T}(e)=v_{s}v_{t}=u_{s-1}u_{t-1}\). The edge \(e\) is improper in \(T\) in case there is a descendant of \(u_{t}\) in \(T\) that is \(<u_{s}\). (Note that \(u_{s}=v_{s+1}\) and \(u_{t}=v_{t+1}\) because \(s,t\leq\ell\).) Such a descendant cannot lie in \(T_{\max}\), because \(T_{\max}\) is increasing, but it can lie in one of the trees \(T_{j}\) that is attached to \(T_{\max}\). So consider all of the descendants \(u_{\tau}\) of \(u_{t}\) in \(T_{\max}\). If one of these descendants is \(r_{i}\), then we are in the already-treated case (a); so we can assume that they are all either \(<r_{i}\) or \(>r_{i}\). The images of the vertices \(u_{\tau}\) under \(\rho_{+1}^{-r_{i}}\) are the descendants \(v_{\tau}=\rho_{+1}^{-r_{i}}(u_{\tau})\) of \(v_{t}\) in \(T_{1}=\rho_{+1}^{-r_{i}}(T_{\max})\). Now consider the two cases \(u_{\tau}<r_{i}\) and \(u_{\tau}>r_{i}\) (recalling that \(r_{i}=\rho\)):
  * (b4) \(u_{\tau}<\rho\) is equivalent to \(u_{\tau}=v_{\tau+1}<\rho\). The edge \(u_{s}u_{t}\) is improper in \(T\) in case there is a descendant of \(u_{\tau}=v_{\tau+1}\) in \(T\) that is \(<u_{s}=v_{s+1}\); and the descendants of \(u_{\tau}=r_{j}\) in the tree \(T_{j}\subseteq T\) are the same as the descendants of \(v_{\tau+1}=r_{j}\) in the tree \(T_{j}\subseteq\phi(T)\). By rule (I5b4) of Definition A.1, the edge \(v_{s}v_{t}\) is irregular in \(\phi(T)\) exactly when \(e\) is improper in \(T\).
  * (b5) \(u_{\tau}>\rho\) is equivalent to \(u_{\tau}=v_{\tau}>\rho\). The edge \(u_{s}u_{t}\) is improper in \(T\) in case there is a descendant of \(u_{\tau}=v_{\tau}\) in \(T\) that is \(<u_{s}=v_{s+1}\); and the descendants of \(u_{\tau}=r_{j}\) in the tree \(T_{j}\subseteq T\) are the same as the descendants of \(v_{\tau}=r_{j}\) in the tree \(T_{j}\subseteq\phi(T)\). By rule (I5b5) of Definition A.1, the edge \(v_{s}v_{t}\) is irregular in \(\phi(T)\) exactly when \(e\) is improper in \(T\).
See Figure 10 for an example illustrating the cases (b4,5).
There is, _a priori_, one additional case for an edge \(e=u_{s}u_{t}\) in \(T_{\max}\), namely, \(s<t=\ell+1\). But this corresponds to the last edge on the path from \(r\) to \(r_{i}\) in \(T_{\max}\), and hence was already treated in case (a).
We have now considered all the cases in which an edge \(e\in T\) can be improper; so by rule (I6) of Definition A.1, this completes the proof. \(\Box\)
**Remark.** In case (b4,5) one might worry what happens when \(v_{\tau}<\rho\) while \(v_{\tau+1}>\rho\), which was not included in either (b4) or (b5). This happens if and only if \(\tau=\ell+1\), i.e. \(u_{\tau}=r_{i}\), in which case all the edges having \(v_{\tau}\) as a descendant are irregular by case (a). \(\blacksquare\)
Now we consider the general case \(k\geq 1\). The following construction is inspired by [13, proof of Lemma 2].
Proof of Proposition A.3. Given a tree \(T\) rooted at \(r\) in Model 2, let \(v_{1}<v_{2}<\cdots<v_{k}\) be the \(k\) children of the vertex \(1\). For any vertex \(i\) other than the root, we define its _top ancestor_ to be the ancestor of \(i\) (possibly \(i\) itself) that is a child of the root. We construct the bijection \(\sigma\) in the following three cases:
Case I: \(v_{1}<r\).
Case II: \(v_{1}>r\) and the top ancestor of vertex \(1\) is lower-numbered than the root.

Case III: \(v_{1}>r\) and the top ancestor of vertex \(1\) is higher-numbered than the root.

Figure 10: An example illustrating cases (b4,5) in the proof of Lemma A.4, showing the trees \(T_{\max}\subseteq T\) and \(\rho_{+1}^{-r_{i}}(T_{\max})\subseteq\phi(T)\) along with some of the trees \(T_{j}\) hanging off them (namely, those trees attached to the descendants of \(u_{t}\) are shown). The edge \(e=u_{s}u_{t}\) and its image \(\psi_{T}(e)=v_{s}v_{t}\) are shown in thick red. Note that tree \(T_{j}\) is attached at vertex \(u_{j}\), which equals \(v_{j}=\psi_{T}(u_{j})\) or \(v_{j+1}=\psi_{T}(u_{j+1})\) according as \(u_{j}>r_{i}\) or \(u_{j}<r_{i}\).
**Case I: \(v_{1}<r\).**
Let \(L\) (resp. \(H\)) denote the lower (resp. higher)-numbered children of vertex \(v_{1}\), and let \(D_{L}\) (resp. \(D_{H}\)) denote their descendants excluding those in \(L\) (resp. \(H\)) itself. We construct the tree \(\sigma(T)\) as follows:
Step 1. Delete the \(k\) edges emanating from vertex \(1\) to its children, and the edges from \(v_{1}\) to its higher-numbered children \(H\).

Step 2. Attach the trees rooted at the vertices of \(H\) onto vertex \(1\) via new edges.

Step 3. Attach the trees rooted at \(r,v_{2},\ldots,v_{k}\) onto vertex \(v_{1}\) via new edges.
See Figure 11. We obtain thereby a tree \(\sigma(T)\) rooted at \(v_{1}\) with \(k\) higher-numbered children \(r,v_{2},\ldots,v_{k}\), which also has the following properties:
1. all the children of vertex 1 are higher-numbered than the root;
2. the top ancestor of vertex 1 is higher-numbered than the root.
The \(k\) edges in \(T\) that emanate from vertex 1 to its children (shown in blue in Figure 11) are clearly proper. These edges are mapped to the \(k\) edges in \(\sigma(T)\) that emanate from the root \(v_{1}\) to its higher-numbered children, which are regular by rule (I3) of Definition A.1.
An edge \(e\in T\) that emanates from vertex \(v_{1}\) to a higher-numbered child \(h\in H\) (shown in red in Figure 11) is improper if (and only if) there is a descendant of \(h\) that is \(<v_{1}\). Such an edge is mapped to the edge \(\psi_{T}(e)\in\sigma(T)\) that emanates from vertex 1 to its child \(h\in H\); and since all the children of vertex 1 are higher-numbered than the root \(v_{1}\), rule (I4) of Definition A.1 applies and says that the edge \(\psi_{T}(e)\) is irregular if and only if \(e\) is improper.
All other edges \(e\in T\) (shown in black in Figure 11) have the property that \(\psi_{T}(e)=e\). Moreover, if \(e\notin T_{r}\), then the descendants of \(e\) in \(T\) are the same as its descendants in \(\sigma(T)\), so by rules (I1), (I2) and (I6) of Definition A.1, the edge \(e\) is proper/improper in \(T\) exactly when it is regular/irregular in \(\sigma(T)\). Finally, all the edges \(e\in T_{r}\) are improper in \(T\) (because vertex 1 is a descendant), and they are irregular in \(\sigma(T)\) by rules (I1) and (I2).
We have therefore shown that the bijection \(\psi_{T}\) maps proper/improper edges in \(T\) onto regular/irregular edges in \(\sigma(T)\).
**Case II: \(v_{1}>r\) and the top ancestor of vertex 1 is lower-numbered than the root.**
Let \(L\) (resp. \(H\)) denote the lower (resp. higher)-numbered children of the root \(r\), and let \(D_{L}\) (resp. \(D_{H}\)) denote their descendants excluding those in \(L\) (resp. \(H\)) itself. We construct the tree \(\sigma(T)\) as follows:
Step 1. Delete the \(k\) edges emanating from vertex \(1\) to its children, and the edges from \(r\) to its higher-numbered children \(H\).
Step 2. Attach the trees rooted at the vertices of \(H\) onto vertex \(1\) via new edges.
Step 3. Attach the trees rooted at \(v_{1},v_{2},\ldots,v_{k}\) onto the root \(r\) via new edges.
(Note that this is identical to Case I but with the roles of \(v_{1}\) and \(r\) interchanged.) See Figure 12. We obtain thereby a tree \(\sigma(T)\) rooted at \(r\) with \(k\) higher-numbered children \(v_{1},v_{2},\ldots,v_{k}\), which also has the following properties:
1. all the children of vertex \(1\) are higher-numbered than the root;
2. the top ancestor of vertex \(1\) is lower-numbered than the root.
The \(k\) edges in \(T\) that emanate from vertex \(1\) to its children (shown in blue in Figure 12) are clearly proper. These edges are mapped to the \(k\) edges in \(\sigma(T)\) that emanate from the root \(r\) to its higher-numbered children, which are regular by rule (I3) of Definition A.1.
An edge \(e\in T\) that emanates from the root \(r\) to a higher-numbered child \(h\in H\) (shown in red in Figure 12) is improper if (and only if) there is a descendant of \(h\) that is \(<r\). Such an edge is mapped to the edge \(\psi_{T}(e)\in\sigma(T)\) that emanates from vertex \(1\) to its child \(h\in H\); and since all the children of vertex \(1\) are higher-numbered than the root \(r\), rule (I4) of Definition A.1 applies and says that the edge \(\psi_{T}(e)\) is irregular if and only if \(e\) is improper.
All other edges \(e\in T\) (shown in black in Figure 12) have the property that \(\psi_{T}(e)=e\). Moreover, if \(e\notin T\upharpoonright(\{r\}\cup L\cup D_{L})\), then the descendants of \(e\) in \(T\) are the same as its descendants in \(\sigma(T)\), so by rules (I1), (I2) and (I6) of Definition A.1, the edge \(e\) is proper/improper in \(T\) exactly when it is regular/irregular in \(\sigma(T)\). Finally, all the edges \(e\in T\upharpoonright(\{r\}\cup L\cup D_{L})\) are improper in \(T\) (because vertex \(1\) is a descendant), and they are irregular in \(\sigma(T)\) by rules (I1) and (I2).
We have therefore shown that the bijection \(\psi_{T}\) maps proper/improper edges in \(T\) onto regular/irregular edges in \(\sigma(T)\).
**Case III: \(\boldsymbol{v_{1}>r}\) and the top ancestor of vertex 1 is higher-numbered than the root.**
Let \(L\) (resp. \(H\)) denote the lower- (resp. higher-) numbered children of the root \(r\), and let \(D_{L}\) (resp. \(D_{H}\)) denote their descendants excluding those in \(L\) (resp. \(H\)) itself. Let \(u\) be the vertex on the path from the root \(r\) to vertex \(1\) such that the path from \(r\) to \(u\) is maximal increasing. Clearly, \(r<u\). The first two steps in constructing the tree \(\sigma(T)\) are as follows:
Step 1. Delete the \(k\) edges from vertex \(1\) to its children, and denote by \(T_{0}\) the subtree rooted at \(r\) in which \(1\) is a leaf.
Step 2. Use the bijection \(\phi\) constructed in Lemma A.4 to yield a tree \(\phi(T_{0})\). Note that the vertex \(u\) plays the role of \(r_{i}\) in Lemma A.4, so that the operator \(\rho_{+1}^{-u}\) acts on the maximal increasing subtree \((T_{0})_{\max}\) of \(T_{0}\) rooted at \(r\). Note also that \(\phi(T_{0})\) is rooted at \(u\), which has no higher-numbered children. See Figure 13.
Then we distinguish two subcases, according as \(u<v_{1}\) or \(u>v_{1}\):
**Case III(a): \(\boldsymbol{u<v_{1}}\).**
Step 3. Attach the trees rooted at \(v_{1},v_{2},\ldots,v_{k}\) onto the root \(u\) of \(\phi(T_{0})\) via new edges.
See Figure 14. We obtain thereby a tree \(\sigma(T)\) rooted at \(u\) with \(k\) higher-numbered children \(v_{1},v_{2},\ldots,v_{k}\), which also has the following properties:
* vertex \(1\) has at least one child that is lower-numbered than the root;
* the top ancestor of vertex \(1\) is lower-numbered than the root.
The \(k\) edges in \(T\) that emanate from vertex \(1\) to its children (shown in blue in Figure 14) are clearly proper. These edges are mapped to the \(k\) edges in \(\sigma(T)\) that emanate from the root \(u\) to its higher-numbered children, which are regular by rule (I3) of Definition A.1. By the discussion in Lemma A.4, the proper/improper edges in \(T_{0}\) are all mapped to regular/irregular edges in \(\phi(T_{0})\). Finally, the proper/improper edges in \(T_{1},\ldots,T_{k}\subseteq T\) are mapped to regular/irregular edges in \(T_{1},\ldots,T_{k}\subseteq\sigma(T)\) by rules (I1), (I2) and (I6). We have therefore shown that the bijection \(\psi_{T}\) maps proper/improper edges in \(T\) onto regular/irregular edges in \(\sigma(T)\).
**Case III(b): \(\boldsymbol{u>v_{1}}\).**
Consider the subtree \(T_{1}\subseteq T\) rooted at \(v_{1}\). Let \(L_{1}\) (resp. \(H_{1}\)) denote the lower-(resp. higher-) numbered children of the vertex \(v_{1}\), and let \(D_{L_{1}}\) (resp. \(D_{H_{1}}\)) denote their descendants excluding those in \(L_{1}\) (resp. \(H_{1}\)) itself. Let \(w\) be the smallest vertex in \(H_{1}\). Clearly, \(w>v_{1}\). We then proceed as follows:
Step 3. Let \((T_{1})_{\max}\) be the maximal increasing subtree of \(T_{1}\) rooted at \(v_{1}\). Relabel its vertices to obtain \(\rho_{+u}^{-v_{1}}((T_{1})_{\max})\), which is a tree rooted at \(w\), since \(w\) is the second-smallest vertex in \((T_{1})_{\max}\).
Step 4. Create a new tree \(T_{1}^{\prime}\), rooted at \(v_{1}\), as follows:
* Attach the subtrees in \(L_{1}\cup D_{L_{1}}\) to vertex \(v_{1}\) just as they are in \(T_{1}\).
* Attach the tree \(\rho_{+u}^{-v_{1}}((T_{1})_{\max})\) onto \(v_{1}\) via a new edge from \(v_{1}\) to \(w\).
Step 5. Create a new tree \(\phi^{\prime}(T_{1})\), rooted at \(v_{1}\), by grafting the remaining trees in \(T_{1}\setminus(L_{1}\cup D_{L_{1}}\cup(T_{1})_{\max})\) onto \(T_{1}^{\prime}\) by identifying vertices with the same label. Note that, in \(\phi^{\prime}(T_{1})\), the root \(v_{1}\) has only one higher-numbered child, namely \(w\). See Figure 15.
Step 6. Graft \(\phi(T_{0})\) onto \(\phi^{\prime}(T_{1})\) by identifying vertex \(u\) to obtain \(T^{\prime}\).
Step 7. Attach the trees rooted at \(v_{2},\ldots,v_{k}\) onto the root \(v_{1}\) of \(T^{\prime}\) via new edges, to obtain \(\sigma(T)\). See Figure 16.
Note that \(\phi^{\prime}(T_{1})\) has one more vertex than \(T_{1}\); but one vertex is lost in Step 6 in identifying the vertices \(u\) of \(\phi(T_{0})\) and \(\phi^{\prime}(T_{1})\), so \(\sigma(T)\) has the same number of vertices as \(T\).
We obtain thereby a tree \(\sigma(T)\) rooted at \(v_{1}\) with \(k\) higher-numbered children \(w,v_{2},\ldots,v_{k}\), which also has the following properties:
* vertex 1 has at least one child (namely, \(r\)) that is lower-numbered than the root \(v_{1}\);
* the top ancestor of vertex 1 (namely, \(w\)) is higher-numbered than the root \(v_{1}\).
The \(k\) edges in \(T\) that emanate from vertex 1 to its children (shown in blue in Figure 16) are clearly proper. These edges are mapped to the \(k\) edges in \(\sigma(T)\) that emanate from the root \(v_{1}\) to its higher-numbered children, which are regular by rule (I3) of Definition A.1. By the discussion in Lemma A.4, the proper/improper edges in \(T_{0}\) are all mapped to regular/irregular edges in \(\phi(T_{0})\). By similar reasoning, the proper/improper edges in \(T_{1}\) are all mapped to regular/irregular edges in \(\phi^{\prime}(T_{1})\). Finally, the proper/improper edges in \(T_{2},\ldots,T_{k}\subseteq T\) are mapped to regular/irregular edges in \(T_{2},\ldots,T_{k}\subseteq\sigma(T)\) by rules (I1), (I2) and (I6). We have therefore shown that the bijection \(\psi_{T}\) maps proper/improper edges in \(T\) onto regular/irregular edges in \(\sigma(T)\).
We complete the proof of Proposition A.3 by remarking that the map \(\sigma\) can be reversed, since every tree in Model 1 must satisfy one of the four properties consisting of a pair (A1)/(A2) and (B1)/(B2) stated in Case I, Case II, Case III(a) and Case III(b).
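As an illustrative aid (our own sketch, independent of the formal proof, and assuming the standard definition that labels increase along every path leaving the root), the maximal increasing subtree used in Case III can be computed as follows.

```python
def maximal_increasing_subtree(children, root):
    """Vertices reachable from root along paths with strictly increasing labels."""
    keep = {root}
    stack = [root]
    while stack:
        v = stack.pop()
        for c in children.get(v, []):
            if c > v:  # follow only label-increasing edges
                keep.add(c)
                stack.append(c)
    return keep

# Example: children maps each vertex to the list of its children.
children = {2: [5, 1], 5: [7, 3], 1: [4], 7: [8]}
print(sorted(maximal_increasing_subtree(children, 2)))  # [2, 5, 7, 8]
```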
Figure 11: The trees \(T\) and \(\sigma(T)\) in Case I. The blue edges in \(T\) [resp. \(\sigma(T)\)] are proper (resp. regular). The red edges in \(T\) [resp. \(\sigma(T)\)] could be either proper or improper (resp. regular or irregular), depending on the behavior of their descendants.
Figure 12: The trees \(T\) and \(\sigma(T)\) in Case II. The blue edges in \(T\) [resp. \(\sigma(T)\)] are proper (resp. regular). The red edges in \(T\) [resp. \(\sigma(T)\)] could be either proper or improper (resp. regular or irregular), depending on the behavior of their descendants.
Figure 13: The trees \(T\), \(T_{0}\) and \(\phi(T_{0})\) in Case III. Here the vertex \(u\) can be either in \(H\) or in \(D_{H}\).
Figure 14: The trees \(T\) and \(\sigma(T)\) in Case III(a).
Figure 15: The trees \(T\), \(T_{1}\) and \(\phi^{\prime}(T_{1})\) in Case III(b). Note that here \(T\) is the same as in Figure 13, but the subtree \(T_{1}\) is shown in more detail.
Figure 16: The trees \(T\) and \(\sigma(T)\) in Case III(b). |
2310.05121 | Homogenization of some evolutionary non-Newtonian flows in porous media | In this paper, we consider the homogenization of evolutionary incompressible purely viscous non-Newtonian flows of Carreau-Yasuda type in porous media with small perforation parameter $0<\varepsilon\ll 1$, where the small holes are periodically distributed. Darcy's law is recovered in the homogenization limit. Applying Poincar\'e type inequality in porous media allows us to derive the uniform estimates on velocity field, of which the gradient is small of size $\varepsilon$ in $L^{2}$ space. This indicates the nonlinear part in the viscosity coefficient does not contribute in the limit and a linear model (Darcy's law) is obtained. The estimates of the pressure rely on a proper extension from the perforated domain to the homogeneous non-perforated domain. By integrating the equations in time variable such that each term in the resulting equations has certain continuity in time, we can establish the extension of the pressure by applying the dual formula with the restriction operator. | Yong Lu, Zhengmao Qian | 2023-10-08T11:24:25Z | http://arxiv.org/abs/2310.05121v1

# Homogenization of some evolutionary non-Newtonian flows in porous media
###### Abstract
In this paper, we consider the homogenization of evolutionary incompressible purely viscous non-Newtonian flows of Carreau-Yasuda type in porous media with small perforation parameter \(0<\varepsilon\ll 1\), where the small holes are periodically distributed. Darcy's law is recovered in the homogenization limit. Applying a Poincare type inequality in porous media allows us to derive uniform estimates on the velocity field, whose gradient is small, of size \(\varepsilon\), in the \(L^{2}\) space. This indicates that the nonlinear part of the viscosity coefficient does not contribute in the limit, and a linear model (Darcy's law) is obtained. The estimates of the pressure rely on a proper extension from the perforated domain to the homogeneous non-perforated domain. By integrating the equations in the time variable, so that each term in the resulting equations has certain continuity in time, we can establish the extension of the pressure by applying a dual formula with the restriction operator.
## 1 Introduction
In this paper we consider the homogenization of evolutionary incompressible viscous non-Newtonian flows in porous media. Non-Newtonian fluids arise in a number of applied problems involving the production of oil and gas from underground reservoirs; there are at least two typical situations: the flow of heavy oils and enhanced oil recovery. Therefore, it is important to establish filtration laws governing non-Newtonian flows through porous media. In this paper, we consider only quasi-Newtonian fluids, where the viscosity can be expressed as a function of the shear rate. In particular, we focus on the Carreau-Yasuda model in the space-time cylinder \((0,T)\times\Omega_{\varepsilon}\):
\[\begin{cases}\varepsilon^{2}\partial_{t}\mathbf{u}_{\varepsilon}-\mathrm{div} \left(\eta_{r}(D\mathbf{u}_{\varepsilon})D\mathbf{u}_{\varepsilon}\right)+( \mathbf{u}_{\varepsilon}\cdot\nabla)\mathbf{u}_{\varepsilon}+\nabla p_{ \varepsilon}=\mathbf{f},&\text{in }(0,T)\times\Omega_{\varepsilon},\\ \mathrm{div}\,\mathbf{u}_{\varepsilon}=0,&\text{in }(0,T)\times\Omega_{ \varepsilon},\\ \mathbf{u}_{\varepsilon}=0,&\text{in }(0,T)\times\partial\Omega_{ \varepsilon},\\ \mathbf{u}_{\varepsilon}|_{t=0}=\mathbf{u}_{0},&\text{in }\Omega_{ \varepsilon}.\end{cases} \tag{1.1}\]
Here \(\mathbf{u}_{\varepsilon}\) is the velocity, \(\nabla\mathbf{u}_{\varepsilon}\) is the gradient velocity tensor, \(D\mathbf{u}_{\varepsilon}=\frac{1}{2}(\nabla\mathbf{u}_{\varepsilon}+\nabla^ {T}\mathbf{u}_{\varepsilon})\) denotes the rate-of-strain tensor, \(p_{\varepsilon}\) denotes the pressure, \(\mathbf{f}\) is the density of the external force and \(\mathbf{u}_{0}\) is the initial velocity. In this paper, we assume \(\mathbf{f}\) and \(\mathbf{u}_{0}\) are independent of \(\varepsilon\) and are both in \(L^{2}((0,T)\times\Omega;\mathbb{R}^{3})\). Nevertheless, our main results still hold if \(\mathbf{f}\) and \(\mathbf{u}_{0}\) depend on \(\varepsilon\) and converge strongly in \(L^{2}((0,T)\times\Omega;\mathbb{R}^{3})\). The shear-dependent viscosity \(\eta_{r}(D\mathbf{u}_{\varepsilon})\) is determined by the Carreau-Yasuda law:
\[\eta_{r}(D\mathbf{u}_{\varepsilon})=(\eta_{0}-\eta_{\infty})(1+\lambda|D \mathbf{u}_{\varepsilon}|^{2})^{\frac{r}{2}-1}+\eta_{\infty},\quad\eta_{0} \geq\eta_{\infty},\ \lambda>0,\ r>1,\]
where \(\eta_{0}\) is the zero-shear-rate viscosity, \(\eta_{\infty}\) is the infinite-shear-rate viscosity, \(\lambda\) is a time constant, and \((r-1)\) is a dimensionless constant describing the slope in the _power law region_ of \(\log\,\eta_{r}\) versus \(\log\,\left(|D\mathbf{u}_{\varepsilon}|\right)\).
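To make the rheology concrete, here is a minimal Python sketch (the parameter values \(\eta_{0}=1\), \(\eta_{\infty}=0.1\), \(\lambda=1\) are our own illustrative choices, not part of the model above) evaluating \(\eta_{r}\): for \(1<r<2\) the viscosity decreases with the shear rate (shear-thinning), while for \(r>2\) it increases (shear-thickening).

```python
import numpy as np

def eta_r(shear, eta0=1.0, eta_inf=0.1, lam=1.0, r=1.5):
    """Carreau-Yasuda viscosity (eta0-eta_inf)*(1+lam*|s|^2)^(r/2-1)+eta_inf."""
    return (eta0 - eta_inf) * (1.0 + lam * shear**2) ** (r / 2.0 - 1.0) + eta_inf

shear = np.array([0.0, 1.0, 10.0, 100.0])
print(eta_r(shear, r=1.5))  # decreasing in |s|: shear-thinning, eta -> eta_inf
print(eta_r(shear, r=3.0))  # increasing in |s|: shear-thickening
```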
The perforated domain \(\Omega_{\varepsilon}\) under consideration is described as follows. Let \(\Omega\) be a bounded domain of class \(C^{2,\mu},0<\mu<1\). The holes in \(\Omega\) are denoted by \(T_{\varepsilon,k}\) which are assumed to satisfy
\[T_{\varepsilon,k}=\varepsilon x_{k}+\varepsilon T\subset\subset\varepsilon Q _{k},\]
where the cube \(Q_{k}=(-\frac{1}{2},\frac{1}{2})^{3}+k\) and \(x_{k}=x_{0}+k\) with \(x_{0}\in(-\frac{1}{2},\frac{1}{2})^{3},\ k\in\mathbb{Z}^{3}\); \(T\) is a model hole which is assumed to be a closed domain contained in \(Q_{0}\) with \(C^{2,\mu}\) boundary. The perforation parameter \(\varepsilon\) is used to measure the mutual distance and size of the holes, and \(\varepsilon x_{k}=\varepsilon x_{0}+\varepsilon k\) are the locations of the holes.
The perforated domain \(\Omega_{\varepsilon}\) is then defined as:
\[\Omega_{\varepsilon}=\Omega\backslash\bigcup_{k\in K_{\varepsilon}}T_{ \varepsilon,k},\quad\text{where }K_{\varepsilon}=\{k\in\mathbb{Z}^{3}: \varepsilon\overline{Q}_{k}\subset\Omega\}. \tag{1.2}\]
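For illustration only, the following Python sketch enumerates the index set \(K_{\varepsilon}\) and the hole centers \(\varepsilon x_{k}\) for the model choice \(\Omega=(0,1)^{3}\) and \(x_{0}=0\) (both are assumptions made for this example, not requirements of the setting above).

```python
import itertools

def holes(eps, x0=(0.0, 0.0, 0.0)):
    """Return K_eps = {k : eps*closure(Q_k) in (0,1)^3} and the centers eps*x_k."""
    kmax = int(1.0 / eps) + 1
    K = [k for k in itertools.product(range(-kmax, kmax + 1), repeat=3)
         # eps*closure(Q_k) = prod_j [eps*(k_j - 1/2), eps*(k_j + 1/2)]
         if all(0.0 < eps * (kj - 0.5) and eps * (kj + 0.5) < 1.0 for kj in k)]
    centers = [tuple(eps * (c + kj) for c, kj in zip(x0, k)) for k in K]
    return K, centers

K, centers = holes(0.25)
print(len(K))  # 27 cells for eps = 1/4; the count grows like eps^{-3}
```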
The study of homogenization problems in fluid mechanics has gained a lot of interest. In particular, the homogenization of the Stokes system in perforated domains has been systematically studied. In [24], Tartar considered the case where the size of the holes is proportional to the mutual distance of the holes, and Darcy's law was derived. Then Allaire [1, 2] considered general cases and showed that the homogenized equations are determined by the ratio \(\sigma_{\varepsilon}\) between the size and the mutual distance of the holes:
\[\sigma_{\varepsilon}=\big{(}\frac{\varepsilon^{d}}{a_{\varepsilon}^{d-2}} \big{)}^{\frac{1}{2}},\quad d\geq 3;\qquad\sigma_{\varepsilon}=\varepsilon\big{|} \text{log}\frac{a_{\varepsilon}}{\varepsilon}\big{|}^{\frac{1}{2}},\quad d=2,\]
where \(\varepsilon\) and \(a_{\varepsilon}\) are used to measure the mutual distance of the holes and the size of the holes, respectively. In particular, if \(\lim\limits_{\varepsilon\to 0}\sigma_{\varepsilon}=0\), corresponding to the case of large holes, the homogenized system is Darcy's law; if \(\lim\limits_{\varepsilon\to 0}\sigma_{\varepsilon}=\infty\), corresponding to the case of small holes, the same Stokes equations arise in the homogeneous domain; if \(\lim\limits_{\varepsilon\to 0}\sigma_{\varepsilon}=\sigma_{*}\in(0,+\infty)\), corresponding to the case of critical size of holes, the homogenized equations are governed by Brinkman's law, a combination of Darcy's law and the original Stokes equations. The same results were shown in [18] by employing a generalized cell problem inspired by Tartar [24].
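As a quick worked example (our own, with \(d=3\) and holes of size \(a_{\varepsilon}=\varepsilon^{\alpha}\), \(\alpha\geq 1\)), the three regimes can be read off directly:

\[\sigma_{\varepsilon}=\Big{(}\frac{\varepsilon^{3}}{a_{\varepsilon}}\Big{)}^{\frac{1}{2}}=\varepsilon^{\frac{3-\alpha}{2}}\longrightarrow\begin{cases}0&\text{if }1\leq\alpha<3\quad\text{(large holes: Darcy's law)},\\ 1&\text{if }\alpha=3\quad\text{(critical size: Brinkman's law)},\\ \infty&\text{if }\alpha>3\quad\text{(small holes: unchanged Stokes equations)}.\end{cases}\]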
Later, the homogenization study was extended to more complicated models describing fluid flows: Mikelic [20] for the nonstationary incompressible Navier-Stokes equations, Masmoudi [21] for the compressible Navier-Stokes equations, and Feireisl, Novotny and Takahashi [12] for the complete Navier-Stokes-Fourier equations. In all these studies, only the case where the size of the holes is proportional to the mutual distance of the holes was considered, and Darcy's law was recovered in the limit.
Recently, cases with different sizes of holes have been studied. Feireisl, Namlyeyeva and Necasova [10] studied the case of critical size of holes for the incompressible Navier-Stokes equations and derived Brinkman's law; Yang and the first author [19] studied the homogenization of the evolutionary incompressible Navier-Stokes system with large and small sizes of holes. In [8, 11, 16], with collaborators, the first author considered the case of small holes for the compressible Navier-Stokes equations, and it is shown that the homogenized equations remain the same as the original ones. Oschmann and Pokorny [22] also considered the case of small holes for the unsteady compressible Navier-Stokes equations for adiabatic exponent \(\gamma>3\), which improved the condition \(\gamma>6\) of [16], and they showed that the homogenized equations remain unchanged. Bella and Oschmann [6] considered the homogenization of the compressible Navier-Stokes equations in randomly perforated domains with small holes and obtained the same limiting equations.
Hofer, Kowalczyk and Schwarzacher [13] studied the case of large holes for the compressible Navier-Stokes equations at low Mach number and derived Darcy's law; Bella and Oschmann [5] studied the case of critical size of holes for the compressible Navier-Stokes equations at low Mach number and derived the incompressible Navier-Stokes equations with a Brinkman term. Bella, Feireisl and Oschmann [7] considered the case of the unsteady compressible Navier-Stokes equations at low Mach number under the assumption \(\Omega_{\varepsilon}\to\Omega\) in the sense of Mosco convergence and derived the incompressible Navier-Stokes equations. Oschmann and Necasova [23] studied the homogenization of the two-dimensional evolutionary compressible Navier-Stokes equations with very small holes, and the limiting equations remain unchanged.
There are not many mathematical studies concerning the homogenization of non-Newtonian flows. Mikelic and Bourgeat [4] considered the stationary case of Carreau-Yasuda type flows under the assumption \(a_{\varepsilon}\sim\varepsilon\) and derived Darcy's law. Mikelic summarized some theory of stationary non-Newtonian flows in Chapter 4 of [14]. For evolutionary non-Newtonian fluid equations, to the authors' knowledge, there are no rigorous mathematical analysis results. In this paper, we justify that, in the porous media setting, the evolutionary Carreau-Yasuda model converges to Darcy's law.
### Notations and weak solutions
We recall some notations of Sobolev spaces. Let \(1\leq r\leq\infty\) and \(\Omega\) be a bounded domain. We use the notation \(L^{r}_{0}(\Omega)\) to denote the space of \(L^{r}(\Omega)\) functions with zero mean value:
\[L^{r}_{0}(\Omega)=\Big{\{}f\in L^{r}(\Omega)\ :\ \int_{\Omega}f\,\mathrm{d}x=0 \Big{\}}.\]
We use \(W^{1,r}(\Omega)\) to denote classical Sobolev space, and \(W^{1,r}_{0}(\Omega)\) denotes the completion of \(C^{\infty}_{c}(\Omega)\) in \(W^{1,r}(\Omega)\). Here \(C^{\infty}_{c}(\Omega)\) is the space of smooth functions compactly supported in \(\Omega\). We use \(W^{-1,r^{\prime}}(\Omega)\) to denote the dual space of \(W^{1,r}_{0}(\Omega)\). For \(1\leq r<\infty\), \(W^{1,r}(\mathbb{R}^{3})=W^{1,r}_{0}(\mathbb{R}^{3})\). We introduce the functional space \(W^{1,r}_{0,\mathrm{div}}(\Omega),\ 1\leq r\leq\infty\) by
\[W^{1,r}_{0,\mathrm{div}}(\Omega)=\Big{\{}u\in W^{1,r}_{0}(\Omega;\mathbb{R}^{3} ):\ \mathrm{div}\,u=0\ \mathrm{in}\,\Omega\Big{\}}\,.\]
Now we introduce the definition of finite energy weak solutions to (1.1):
**Definition 1.1**.: _Let \(T>0\). We say that \(\mathbf{u}_{\varepsilon}\) is a finite energy weak solution of (1.1) in \((0,T)\times\Omega_{\varepsilon}\) provided_
* \(\mathbf{u}_{\varepsilon}\in C_{\mathrm{weak}}([0,T);L^{2}(\Omega_{\varepsilon}; \mathbb{R}^{3}))\cap L^{2}(0,T;W^{1,2}_{0,\mathrm{div}}(\Omega_{\varepsilon}) )\cap L^{r}(0,T;W^{1,r}_{0,\mathrm{div}}(\Omega_{\varepsilon}))\)_._
* _The integral identity_ \[\int_{0}^{T}\int_{\Omega_{\varepsilon}}-\varepsilon^{2}\mathbf{u }_{\varepsilon}\cdot\partial_{t}\varphi-\mathbf{u}_{\varepsilon}\otimes \mathbf{u}_{\varepsilon}:\nabla\varphi+\eta_{r}(D\mathbf{u}_{\varepsilon})D \mathbf{u}_{\varepsilon}:D\varphi\,\mathrm{d}x\mathrm{d}t\] \[=\int_{0}^{T}\int_{\Omega_{\varepsilon}}\mathbf{f}\cdot\varphi\, \mathrm{d}x\mathrm{d}t+\varepsilon^{2}\int_{\Omega_{\varepsilon}}\mathbf{u}_{0} \cdot\varphi(0,\cdot)\,\mathrm{d}x\] _holds for any test function_ \(\varphi\in C^{\infty}_{c}([0,T)\times\Omega_{\varepsilon};\mathbb{R}^{3}),\ \mathrm{div}_{x}\varphi=0\)_._
* _The energy inequality_ \[\int_{\Omega_{\varepsilon}}\frac{\varepsilon^{2}}{2}\mathbf{u}_{\varepsilon}^ {2}\,\mathrm{d}x+\int_{0}^{t}\int_{\Omega_{\varepsilon}}\eta_{r}(D\mathbf{u}_ {\varepsilon})|D\mathbf{u}_{\varepsilon}|^{2}\,\mathrm{d}x\mathrm{d}t\leq\int_ {\Omega_{\varepsilon}}\frac{\varepsilon^{2}}{2}\mathbf{u}_{0}^{2}\,\mathrm{d}x+ \int_{0}^{t}\int_{\Omega_{\varepsilon}}\mathbf{f}\cdot\mathbf{u}_{\varepsilon} \,\mathrm{d}x\mathrm{d}t\] (1.3) _holds for a.a._ \(t\in(0,T)\)_._
The classical theory from Ladyzhenskaya [15], Theorem 1.1 in [3] and Theorem 1.3 in [26] gives the existence of at least one weak solution \({\bf u}_{\varepsilon}\in C_{\rm weak}([0,T);L^{2}(\Omega_{\varepsilon};\mathbb{R }^{3}))\cap L^{r}(0,T;W^{1,r}_{0,{\rm div}}(\Omega_{\varepsilon}))\) for \(r>2\) and \({\bf u}_{\varepsilon}\in C_{\rm weak}([0,T);L^{2}(\Omega_{\varepsilon}; \mathbb{R}^{3}))\cap L^{2}(0,T;W^{1,2}_{0,{\rm div}}(\Omega_{\varepsilon}))\) for \(1<r\leq 2\).
For brevity we use \(C\) to denote a constant independent of \(\varepsilon\) throughout the paper, while the value of \(C\) may differ from line to line.
### Restriction, extension, and some useful lemmas
Our goal is to obtain the limit system in homogeneous domains without holes, so we need to extend \((\mathbf{u}_{\varepsilon},p_{\varepsilon})\) to the whole of \(\Omega\). Due to the zero boundary conditions on \(\mathbf{u}_{\varepsilon}\), it is natural to extend \(\mathbf{u}_{\varepsilon}\) by zero to the holes. However, the extension of the pressure is more delicate. It is defined by the restriction operator due to Tartar [24] for the case where the size of the holes is proportional to their mutual distance, and extended to general sizes of holes by Allaire [1, 2]. For \(\Omega_{\varepsilon}\) defined in (1.2), there exists a linear operator, named the restriction operator, \(R_{\varepsilon}:W^{1,q}_{0}(\Omega;\mathbb{R}^{3})\to W^{1,q}_{0}(\Omega_{ \varepsilon};\mathbb{R}^{3})\)\((1<q<\infty)\) such that:
\[u\in W^{1,q}_{0}(\Omega_{\varepsilon};\mathbb{R}^{3})\Longrightarrow R_{ \varepsilon}(\tilde{u})=u\ \mbox{in}\ \Omega_{\varepsilon},\ \mbox{where}\ \tilde{u}\ \mbox{is the zero extension of}\ u,\] \[{\rm div}\,u=0\ \mbox{in}\ \Omega\Longrightarrow{\rm div}\,R_{ \varepsilon}(u)=0\ \mbox{in}\ \Omega_{\varepsilon}, \tag{1.4}\] \[\|\nabla R_{\varepsilon}(u)\|_{L^{q}(\Omega_{\varepsilon})}\leq C (\varepsilon^{-1}\|u\|_{L^{q}(\Omega)}+\|\nabla u\|_{L^{q}(\Omega)}).\]
The construction of such a restriction operator can be found in [20]. Later Allaire [1, 2] constructed such type of restriction operators for general sizes of holes in \(L^{2}\) framework. Recently, following the construction of Allaire, the first author [17] gave a construction of restriction operators for general sizes of holes in general \(L^{q}\) framework.
The extension \(\tilde{p}_{\varepsilon}\) of the pressure \(p_{\varepsilon}\) with \(\nabla p_{\varepsilon}\in W^{-1,q^{\prime}}(\Omega_{\varepsilon};\mathbb{R}^{ 3})\) is then defined through the following dual formulation:
\[\langle\nabla\tilde{p}_{\varepsilon},\varphi\rangle_{\Omega}=\langle\nabla p_{ \varepsilon},R_{\varepsilon}(\varphi)\rangle_{\Omega_{\varepsilon}},\qquad \forall\,\varphi\in W^{1,q}_{0}(\Omega;\mathbb{R}^{3}).\]
Such an extension \(\tilde{p}_{\varepsilon}\) is well defined due to the three properties in (1.4) of the restriction operator.
Now we introduce several useful conclusions which will be frequently used throughout this paper. Let us first recall the Poincare inequality in porous media; its proof can be found in [24, 20].
**Lemma 1.2**.: _Let \(u\in W^{1,q}_{0}(\Omega_{\varepsilon};\mathbb{R}^{3}),\ 1<q<\infty\), where \(\Omega_{\varepsilon}\) is defined in (1.2). Then there holds_
\[\|u\|_{L^{q}(\Omega_{\varepsilon})}\leq C\varepsilon\|\nabla u\|_{L^{q}( \Omega_{\varepsilon})}. \tag{1.5}\]
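The scaling behind (1.5) can be sketched as follows (a standard cell-by-cell argument, recalled here for convenience; the cells near \(\partial\Omega\) are handled similarly using the zero trace of \(u\) on \(\partial\Omega\)). Since \(u\) vanishes on the hole in each cell \(\varepsilon Q_{k}\), the Poincare inequality on the unit cell \(Q_{0}\setminus T\) and the change of variables \(x=\varepsilon y\) give

\[\int_{\varepsilon Q_{k}}|u|^{q}\,\mathrm{d}x=\varepsilon^{3}\int_{Q_{0}}|u(\varepsilon y)|^{q}\,\mathrm{d}y\leq C\varepsilon^{3}\int_{Q_{0}}\big{|}\nabla_{y}\big{(}u(\varepsilon y)\big{)}\big{|}^{q}\,\mathrm{d}y=C\varepsilon^{q}\int_{\varepsilon Q_{k}}|\nabla u|^{q}\,\mathrm{d}x,\]

and summing over \(k\) yields (1.5).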
Next we extend the restriction operator to functions depending also on the time variable. For each \(\varphi\in L^{p}(0,T;W^{1,q}_{0}(\Omega))\) with \(1\leq p\leq\infty\), \(1<q<\infty\), the restriction \(R_{\varepsilon}(\varphi)\) is taken only in the spatial variable:
\[R_{\varepsilon}(\varphi)(\cdot,t)=R_{\varepsilon}\big{(}\varphi(\cdot,t) \big{)}(\cdot),\quad\mbox{for}\ \mbox{each}\ t\in(0,T). \tag{1.6}\]
Then we can get the same properties as in (1.4) for each \(t\in(0,T)\). Moreover, it is rather straightforward to deduce from (1.4) and (1.5) the following lemma:
**Lemma 1.3**.: _Let \(\Omega\) be a bounded domain of class \(C^{1}\) and \(\Omega_{\varepsilon}\) be defined in (1.2). Let \(\varphi\in L^{p}(0,T;W^{1,q}_{0}(\Omega;\mathbb{R}^{3})),\ 1\leq p\leq\infty,\ 1<q<\infty\). Then we have_
\[\|R_{\varepsilon}(\varphi)\|_{L^{p}(0,T;L^{q}(\Omega_{\varepsilon}))}+ \varepsilon\|\nabla R_{\varepsilon}(\varphi)\|_{L^{p}(0,T;L^{q}(\Omega_{ \varepsilon}))}\leq C(\|\varphi\|_{L^{p}(0,T;L^{q}(\Omega))}+\varepsilon\| \nabla\varphi\|_{L^{p}(0,T;L^{q}(\Omega))}). \tag{1.7}\]
We finally give the following Korn type inequality, see for example Chapter 10 in [9]:
**Lemma 1.4**.: _(Korn inequality) Let \(\Omega\) be a bounded domain of class \(C^{1}\) and \(\Omega_{\varepsilon}\) be defined in (1.2). Let \(1<q<\infty\). For arbitrary \(u\in W_{0}^{1,q}(\Omega_{\varepsilon};\mathbb{R}^{3})\), there holds_
\[\|\nabla u\|_{L^{q}(\Omega_{\varepsilon})}\leq C(q)\|Du\|_{L^{q}(\Omega_{ \varepsilon})}, \tag{1.8}\]
_where \(C(q)\) is independent of \(\varepsilon\)._
### Main results
We now state our homogenization results, where the limits are taken up to possible extractions of subsequences. We shall follow the idea of Mikelic [20] and Temam [25] and consider new equations obtained by integrating the original equations with respect to the time variable. Let \((\mathbf{u}_{\varepsilon},p_{\varepsilon})\) be a finite energy weak solution of equations (1.1). Introduce
\[U_{\varepsilon}=\int_{0}^{t}\mathbf{u}_{\varepsilon}\,\mathrm{d}s,\;G_{ \varepsilon}=\int_{0}^{t}(\mathbf{u}_{\varepsilon}\cdot\nabla)\mathbf{u}_{ \varepsilon}\,\mathrm{d}s,\;H_{\varepsilon}=\int_{0}^{t}(1+\lambda|D\mathbf{u }_{\varepsilon}|^{2})^{\frac{r}{2}-1}D\mathbf{u}_{\varepsilon}\,\mathrm{d}s, \;F=\int_{0}^{t}\mathbf{f}\,\mathrm{d}s. \tag{1.9}\]
Then we have \(U_{\varepsilon}\in C([0,T];\;W_{0,\mathrm{div}}^{1,2}(\Omega_{\varepsilon})), \;G_{\varepsilon}\in C([0,T];L^{\frac{3}{2}}(\Omega_{\varepsilon})),\;F\in C( [0,T];L^{2}(\Omega_{\varepsilon}))\) and
\[H_{\varepsilon}\in\begin{cases}C([0,T];L^{2}(\Omega_{\varepsilon}))&1<r\leq 2,\\ C([0,T];L^{\frac{r}{r-1}}(\Omega_{\varepsilon}))&r>2.\end{cases}\]
The classical theory of the Stokes equations ensures the existence of
\[P_{\varepsilon}\in\begin{cases}C_{\mathrm{weak}}([0,T];L^{2}(\Omega_{ \varepsilon}))&1<r\leq 2,\\ C_{\mathrm{weak}}([0,T];L^{\frac{r}{r-1}}(\Omega_{\varepsilon}))&r>2,\end{cases}\]
such that for each \(t\in[0,T]\),
\[\nabla P_{\varepsilon}=F-\varepsilon^{2}(\mathbf{u}_{\varepsilon}-\mathbf{u}_ {0})+\frac{\eta_{\infty}}{2}\Delta U_{\varepsilon}-G_{\varepsilon}+(\eta_{0}- \eta_{\infty})\mathrm{div}\,H_{\varepsilon}\quad\mbox{in }\mathcal{D}^{\prime}(\Omega_{ \varepsilon}). \tag{1.10}\]
Now we are ready to state the main theorem:
**Theorem 1.5**.: _Let \(1<r<\infty\). Let \((\mathbf{u}_{\varepsilon},p_{\varepsilon})\) be a finite energy weak solution of equations (1.1), and \(\tilde{\mathbf{u}}_{\varepsilon}\) is the zero extension of \(\mathbf{u}_{\varepsilon}\). The extension \(\tilde{P}_{\varepsilon}\) is defined through the restriction operator as follows,_
\[\langle\nabla\tilde{P}_{\varepsilon},\varphi\rangle_{(0,T)\times\Omega}= \langle\nabla P_{\varepsilon},R_{\varepsilon}(\varphi)\rangle_{(0,T)\times \Omega_{\varepsilon}},\quad for\;all\;\varphi\in C_{c}^{\infty}((0,T)\times \Omega),\]
_where \(P_{\varepsilon}\) is defined in (1.10). Let \(\tilde{p}_{\varepsilon}=\partial_{t}\tilde{P}_{\varepsilon}\) be the extension of \(p_{\varepsilon}\). Then we can find \(\mathbf{u}\in L^{2}((0,T)\times\Omega)\) and_
\[p\in\begin{cases}W^{-1,2}(0,T;L^{2}(\Omega))&1<r\leq 2,\\ W^{-1,\frac{r}{r-1}}(0,T;L^{\frac{r}{r-1}}(\Omega))&r>2,\end{cases}\]
_which satisfy_
\[\varepsilon^{-2}\tilde{\mathbf{u}}_{\varepsilon}\to\mathbf{u}\;weakly\;in\;L ^{2}((0,T)\times\Omega),\]
\[\tilde{p}_{\varepsilon}\to p\;weakly\;in\begin{cases}W^{-1,2}(0,T;L^{2}(\Omega ))&1<r\leq 2,\\ W^{-1,\frac{r}{r-1}}(0,T;L^{\frac{r}{r-1}}(\Omega))&r>2.\end{cases}\]
_Moreover, the limit \((\mathbf{u},p)\) satisfies the Darcy's law:_
\[\frac{1}{2}\eta_{0}\mathbf{u}=A(\mathbf{f}-\nabla p)\quad\mbox{in }\mathcal{D}^{ \prime}((0,T)\times\Omega). \tag{1.11}\]
We give several remarks concerning our main results and main ideas of proof:
**Remark 1.6**.:
* _The permeability tensor \(A\) which appears in (1.11) is a constant positive definite matrix defined in (3.3)._
* _Mikelic considered the homogenization of the nonstationary Navier-Stokes equations in [20], namely the case \(r=2\), and derived Darcy's law. Observing the strong convergence of the nonlinear viscosity coefficient \(\eta_{r}(D\mathbf{u}_{\varepsilon})\) to the constant \(\eta_{0}\), we derive Darcy's law for arbitrary \(r>1\)._
* _The main difficulty compared to the steady case considered in [4] lies in dealing with the estimates of the pressure \(p_{\varepsilon}\), which lies in some negative order Sobolev space with respect to the time variable due to the presence of the time derivative term \(\partial_{t}\mathbf{u}_{\varepsilon}\). We shall follow the idea of Mikelic [20] by integrating the original equations with respect to the time variable; this allows us to define the extension of the pressure pointwise in \(t\)._
The rest of the paper is devoted to the proof of Theorem 1.5. In Section 2, we derive the uniform estimates of the velocity field and pressure. In Section 3, we employ the cell problem to modify test functions, and then pass to limit in the weak formulation of the new equations to derive the limit system--Darcy's law.
## 2 Uniform estimates
In this section, we derive the uniform estimates of the solutions. The estimates of the velocity follow from the energy inequality by using the Poincare inequality and the Korn inequality (see Lemmas 1.2 and 1.4). Concerning the pressure \(p_{\varepsilon}\), we will not deduce the estimates of \(p_{\varepsilon}\) directly. Instead, we will consider a proper extension of its time integral \(P_{\varepsilon}\) given in (1.10). Such an extension is defined by a dual formula using the restriction operator pointwise in \(t\). The estimates of the extension of \(P_{\varepsilon}\) follow from the estimates of \(U_{\varepsilon}\) and the estimates of the restriction operator.
### Estimates of velocity field
Based on the energy inequality (1.3), we can derive the following estimates of velocity field \(\mathbf{u}_{\varepsilon}\):
**Proposition 2.1**.: _Let \(\mathbf{u}_{\varepsilon}\) be a weak solution of (1.1) in the sense of Definition 1.1. There holds_
\[\begin{split}\|\nabla\mathbf{u}_{\varepsilon}\|_{L^{2}((0,T) \times\Omega_{\varepsilon})}&\leq C\varepsilon,\qquad\|\mathbf{ u}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\leq C \varepsilon^{2},\quad\|\mathbf{u}_{\varepsilon}\|_{L^{\infty}(0,T;L^{2}( \Omega_{\varepsilon}))}\leq C,\\ \|\nabla\mathbf{u}_{\varepsilon}\|_{L^{r}((0,T)\times\Omega_{ \varepsilon})}&\leq C\varepsilon^{\frac{2}{r}},\qquad\|\mathbf{ u}_{\varepsilon}\|_{L^{r}((0,T)\times\Omega_{\varepsilon})}\leq C \varepsilon^{\frac{2}{r}+1},\quad\text{if }r>2.\end{split} \tag{2.1}\]
Proof.: We first rewrite the energy inequality (1.3) as
\[\begin{split}&\frac{\varepsilon^{2}}{2}\int_{\Omega_{ \varepsilon}}\mathbf{u}_{\varepsilon}^{2}\,\mathrm{d}x+\int_{0}^{t}\int_{ \Omega_{\varepsilon}}\eta_{\infty}|D\mathbf{u}_{\varepsilon}|^{2}+(\eta_{0}- \eta_{\infty})(1+\lambda|D\mathbf{u}_{\varepsilon}|^{2})^{\frac{r}{2}-1}|D \mathbf{u}_{\varepsilon}|^{2}\,\mathrm{d}x\mathrm{d}t\\ &\quad\leq\int_{0}^{t}\int_{\Omega_{\varepsilon}}\mathbf{f}\cdot \mathbf{u}_{\varepsilon}\,\mathrm{d}x\mathrm{d}t+\frac{\varepsilon^{2}}{2} \int_{\Omega_{\varepsilon}}\mathbf{u}_{0}^{2}\,\mathrm{d}x,\quad\text{for a.a. }0<t\leq T.\end{split} \tag{2.2}\]
Applying the Poincare inequality in porous media and the Korn inequality (see Lemma 1.2 and 1.4) gives
\[\int_{0}^{T}\int_{\Omega_{\varepsilon}}\mathbf{f}\cdot\mathbf{u}_{\varepsilon }\,\mathrm{d}x\mathrm{d}t\leq C\varepsilon\int_{0}^{T}\|\mathbf{f}\|_{L^{2}( \Omega)}\|D\mathbf{u}_{\varepsilon}\|_{L^{2}(\Omega_{\varepsilon})}\mathrm{d}t. \tag{2.3}\]
Then from (2.2) and (2.3), and the assumptions that \(\mathbf{f}\) and \(\mathbf{u}_{0}\) are independent of \(\varepsilon\) and are in \(L^{2}((0,T)\times\Omega)\), we deduce
\[\|D\mathbf{u}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}^{2}\leq C \varepsilon\|D\mathbf{u}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{ \varepsilon})}\|\mathbf{f}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}+C \varepsilon^{2}.\]
Applying Young's inequality to absorb \(\|D\mathbf{u}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\) into the left-hand side yields \(\|D\mathbf{u}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\leq C\varepsilon\). Again by the Poincare inequality (1.5) and the Korn inequality (1.8), we obtain the \(L^{2}\) estimates in \((2.1)_{1}\).
If \(r>2\), using the \(L^{2}\) estimates in \((2.1)_{1}\), we deduce from (2.2) that
\[\|D\mathbf{u}_{\varepsilon}\|_{L^{r}((0,T)\times\Omega_{\varepsilon})}^{r} \leq C\varepsilon^{2},\]
and the \(L^{r}\) estimates in \((2.1)_{2}\) follow from the Poincare inequality and the Korn inequality.
By the uniform estimates of velocity in Proposition 2.1, we have the following uniform estimates:
**Corollary 2.2**.: _Let \(U_{\varepsilon}\), \(G_{\varepsilon}\) and \(H_{\varepsilon}\) be defined as in (1.9). Then_
\[\|U_{\varepsilon}\|_{W^{1,2}(0,T;W^{1,2}_{0}(\Omega_{\varepsilon}) )} \leq C\varepsilon,\quad\|U_{\varepsilon}\|_{W^{1,2}(0,T;L^{2}(\Omega _{\varepsilon}))}\leq C\varepsilon^{2}, \tag{2.4}\] \[\|U_{\varepsilon}\|_{W^{1,r}(0,T;W^{1,r}_{0}(\Omega_{\varepsilon} ))} \leq C\varepsilon^{\frac{2}{r}},\quad\|U_{\varepsilon}\|_{W^{1,r}(0,T;L^{r}( \Omega_{\varepsilon}))}\leq C\varepsilon^{\frac{2}{r}+1},\quad\text{if }r>2,\] \[\|G_{\varepsilon}\|_{W^{1,1}(0,T;L^{\frac{3}{2}}(\Omega_{ \varepsilon}))} \leq C\varepsilon^{2},\] \[\|H_{\varepsilon}\|_{W^{1,2}(0,T;L^{2}(\Omega_{\varepsilon}))} \leq C\varepsilon,\quad\text{if }1<r\leq 2,\] \[\|H_{\varepsilon}\|_{W^{1,\frac{r}{r-1}}(0,T;L^{\frac{r}{r-1}}( \Omega_{\varepsilon}))} \leq C\varepsilon,\quad\text{if }r>2.\]
Proof.: The estimates for \(U_{\varepsilon}\) in \((2.4)_{1}\) and \((2.4)_{2}\) follow immediately from its definition and the uniform estimates of \(\mathbf{u}_{\varepsilon}\) in (2.1).
Using Sobolev embedding and Holder's inequality gives
\[\|\mathbf{u}_{\varepsilon}\cdot\nabla\mathbf{u}_{\varepsilon}\|_{L^{\frac{3} {2}}(\Omega_{\varepsilon})}\leq\|\mathbf{u}_{\varepsilon}\|_{L^{6}(\Omega_{ \varepsilon})}\|\nabla\mathbf{u}_{\varepsilon}\|_{L^{2}(\Omega_{\varepsilon}) }\leq C\|\nabla\mathbf{u}_{\varepsilon}\|_{L^{2}(\Omega_{\varepsilon})}^{2}.\]
Thus
\[\|\mathbf{u}_{\varepsilon}\cdot\nabla\mathbf{u}_{\varepsilon}\|_{L^{1}(0,T;L^ {\frac{3}{2}}(\Omega_{\varepsilon}))}\leq C\|\nabla\mathbf{u}_{\varepsilon} \|_{L^{2}((0,T)\times\Omega_{\varepsilon})}^{2}\leq C\varepsilon^{2},\]
which gives the estimates of \(G_{\varepsilon}\) in (2.4).
We turn to the estimates of \(H_{\varepsilon}\). If \(1<r\leq 2\), there holds
\[|(1+\lambda|D\mathbf{u}_{\varepsilon}|^{2})^{\frac{r}{2}-1}D\mathbf{u}_{ \varepsilon}|\leq|D\mathbf{u}_{\varepsilon}|.\]
Therefore
\[\|H_{\varepsilon}\|_{W^{1,2}(0,T;L^{2}(\Omega_{\varepsilon}))}\leq C\|(1+ \lambda|D\mathbf{u}_{\varepsilon}|^{2})^{\frac{r}{2}-1}D\mathbf{u}_{ \varepsilon}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\leq C\|D\mathbf{u}_{ \varepsilon}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\leq C\varepsilon.\]
If \(r>2\), we have
\[|(1+\lambda|D\mathbf{u}_{\varepsilon}|^{2})^{\frac{r}{2}-1}D\mathbf{u}_{ \varepsilon}|\leq C|D\mathbf{u}_{\varepsilon}|+C|D\mathbf{u}_{\varepsilon}|^{ r-1}.\]
Therefore
\[\|H_{\varepsilon}\|_{W^{1,\frac{r}{r-1}}(0,T;L^{\frac{r}{r-1}}( \Omega_{\varepsilon}))} \leq C\|(1+\lambda|D\mathbf{u}_{\varepsilon}|^{2})^{\frac{r}{2}-1 }D\mathbf{u}_{\varepsilon}\|_{L^{\frac{r}{r-1}}((0,T)\times\Omega_{ \varepsilon})}\] \[\leq C\|D\mathbf{u}_{\varepsilon}\|_{L^{\frac{r}{r-1}}((0,T) \times\Omega_{\varepsilon})}+C\|D\mathbf{u}_{\varepsilon}\|_{L^{r}((0,T) \times\Omega_{\varepsilon})}^{r-1}\] \[\leq C\|D\mathbf{u}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{ \varepsilon})}+C\|D\mathbf{u}_{\varepsilon}\|_{L^{r}((0,T)\times\Omega_{ \varepsilon})}^{r-1}\] \[\leq C\varepsilon+C\varepsilon^{\frac{2(r-1)}{r}}\leq C\varepsilon,\]
where we use the fact \(\frac{r}{r-1}<2\) and \(\frac{2(r-1)}{r}>1\) if \(r>2\).
### Estimates of pressure: extension
Next we define the extension of the pressure and derive the corresponding estimates based on the uniform estimates in (2.4).
**Proposition 2.3**.: _Let \(\tilde{P_{\varepsilon}}\) be the extension of \(P_{\varepsilon}\) defined by the restriction operator as follows_
\[\langle\nabla\tilde{P_{\varepsilon}},\varphi\rangle_{(0,T)\times\Omega}= \langle\nabla P_{\varepsilon},R_{\varepsilon}(\varphi)\rangle_{(0,T)\times \Omega_{\varepsilon}},\quad\forall\varphi\in C_{c}^{\infty}((0,T)\times \Omega), \tag{2.5}\]
_where \(P_{\varepsilon}\) is defined in (1.10) and the restriction operator \(R_{\varepsilon}\) is given in (1.6) satisfying the estimates (1.7). Then we have the following estimates:_
\[\big{|}\langle\nabla\tilde{P_{\varepsilon}},\varphi\rangle_{(0,T)\times\Omega} \big{|}\leq C\begin{cases}\|\varphi\|_{L^{2}((0,T)\times\Omega)}+\varepsilon\| \nabla\varphi\|_{L^{2}((0,T)\times\Omega)}&1<r\leq 2,\\ \|\varphi\|_{L^{r}((0,T)\times\Omega)}+\varepsilon\|\nabla\varphi\|_{L^{r}( (0,T)\times\Omega)}&r>2.\end{cases} \tag{2.6}\]
Proof.: By (1.10), we have
\[\langle\nabla\tilde{P_{\varepsilon}},\varphi\rangle_{(0,T)\times \Omega} =\langle\nabla P_{\varepsilon},R_{\varepsilon}(\varphi)\rangle_{(0,T) \times\Omega_{\varepsilon}}\] \[=\langle F-\varepsilon^{2}(\mathbf{u}_{\varepsilon}-\mathbf{u}_ {0})+\frac{\eta_{\infty}}{2}\Delta U_{\varepsilon}-G_{\varepsilon}+(\eta_{0}- \eta_{\infty})\text{div}\,H_{\varepsilon},R_{\varepsilon}(\varphi)\rangle_{(0, T)\times\Omega_{\varepsilon}}.\]
Using the fact that \(F\) and \(\mathbf{u}_{0}\) are both in \(L^{2}((0,T)\times\Omega;\mathbb{R}^{3})\) together with \((2.1)_{1}\) implies
\[\big{|}\langle F-\varepsilon^{2}(\mathbf{u}_{\varepsilon}-\mathbf{u}_{0}),R_{ \varepsilon}(\varphi)\rangle_{(0,T)\times\Omega_{\varepsilon}}\big{|}\leq C \|R_{\varepsilon}(\varphi)\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}.\]
By the estimates in (2.4) and Sobolev embedding inequality, we have:
\[\big{|}\langle\Delta U_{\varepsilon},R_{\varepsilon}(\varphi) \rangle_{(0,T)\times\Omega_{\varepsilon}}\big{|} \leq\|\nabla U_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{ \varepsilon})}\|\nabla R_{\varepsilon}(\varphi)\|_{L^{2}((0,T)\times\Omega_{ \varepsilon})}\leq C\varepsilon\|\nabla R_{\varepsilon}(\varphi)\|_{L^{2}((0,T )\times\Omega_{\varepsilon})},\] \[\big{|}\langle G_{\varepsilon},R_{\varepsilon}(\varphi)\rangle_{( 0,T)\times\Omega_{\varepsilon}}\big{|} \leq\|G_{\varepsilon}\|_{L^{2}(0,T;L^{\frac{3}{2}}(\Omega_{\varepsilon}))}\|R_{\varepsilon}(\varphi)\|_{L^{2}(0,T;L^{3}(\Omega_{ \varepsilon}))}\leq C\varepsilon^{2}\|\nabla R_{\varepsilon}(\varphi)\|_{L^{2 }((0,T)\times\Omega_{\varepsilon})},\] \[\big{|}\langle\text{div}\,H_{\varepsilon},R_{\varepsilon}(\varphi) \rangle_{(0,T)\times\Omega_{\varepsilon}}\big{|} \leq\|H_{\varepsilon}\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\| \nabla R_{\varepsilon}(\varphi)\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\] \[\leq C\varepsilon\|\nabla R_{\varepsilon}(\varphi)\|_{L^{2}((0,T )\times\Omega_{\varepsilon})}\quad\text{if }1<r\leq 2,\] \[\big{|}\langle\text{div}\,H_{\varepsilon},R_{\varepsilon}(\varphi) \rangle_{(0,T)\times\Omega_{\varepsilon}}\big{|} \leq\|H_{\varepsilon}\|_{L^{\frac{r}{r-1}}((0,T)\times\Omega_{ \varepsilon})}\|\nabla R_{\varepsilon}(\varphi)\|_{L^{r}((0,T)\times\Omega_{ \varepsilon})}\] \[\leq C\varepsilon\|\nabla R_{\varepsilon}(\varphi)\|_{L^{r}((0,T )\times\Omega_{\varepsilon})}\quad\text{if }r>2.\]
Hence, by the estimates of restriction operator in (1.7), we have for \(1<r\leq 2\),
\[\big{|}\langle\nabla\tilde{P_{\varepsilon}},\varphi\rangle_{(0,T) \times\Omega}\big{|} \leq C\big{(}\|R_{\varepsilon}(\varphi)\|_{L^{2}((0,T)\times \Omega_{\varepsilon})}+\varepsilon\|\nabla R_{\varepsilon}(\varphi)\|_{L^{2}((0,T)\times\Omega_{\varepsilon})}\big{)}\] \[\leq C\big{(}\|\varphi\|_{L^{2}((0,T)\times\Omega)}+\varepsilon\| \nabla\varphi\|_{L^{2}((0,T)\times\Omega)}\big{)}.\]
For \(r>2\),
\[\big{|}\langle\nabla\tilde{P_{\varepsilon}},\varphi\rangle_{(0,T) \times\Omega}\big{|} \leq C\big{(}\|R_{\varepsilon}(\varphi)\|_{L^{r}((0,T)\times \Omega_{\varepsilon})}+\varepsilon\|\nabla R_{\varepsilon}(\varphi)\|_{L^{r}((0,T)\times\Omega_{\varepsilon})}\big{)}\] \[\leq C\big{(}\|\varphi\|_{L^{r}((0,T)\times\Omega)}+\varepsilon\| \nabla\varphi\|_{L^{r}((0,T)\times\Omega)}\big{)}.\]
The proof is thus completed.
## 3 Homogenization process
This section is devoted to the limit passage and the derivation of the limit equations. We first introduce the cell problem, which is used to modify test functions. Then, by the estimates in Proposition 2.1 and Corollary 2.2, we can pass to the limit term by term to get the limit system.
### Cell problem
To obtain the limit system, a natural way is to pass \(\varepsilon\to 0\) in the weak formulation of (1.1); a proper surgery on \(C_{c}^{\infty}(\Omega)\) test functions needs to be done so that the test functions vanish on the holes and thus become admissible test functions for the original equations in \(\Omega_{\varepsilon}\). To address this issue, Tartar [24] considered the Stokes equations where the size of the holes is proportional to the mutual distance of the holes. Then near each single hole in \(\varepsilon Q_{k}\) in the perforated domain \(\Omega_{\varepsilon}\), after a scaling of size \(\varepsilon^{-1}\), there arises typically the following problem, named the cell problem:
Let \((w^{i},\pi^{i})(i=1,2,3)\) be a \(Q_{0}\)-periodic solution of the following cell problem
\[\begin{cases}-\Delta w^{i}+\nabla\pi^{i}=e^{i}&\text{in }Q_{0}\setminus T,\\ \operatorname{div}w^{i}=0&\text{in }Q_{0}\setminus T,\\ w^{i}=0&\text{on }T.\end{cases} \tag{3.1}\]
Here \(\{e^{i}\}_{i=1,2,3}\) is the standard basis of \(\mathbb{R}^{3}\). The cell problem (3.1) admits a unique weak solution \((w^{i},\pi^{i})\in W^{1,2}(Q_{0}\setminus T;\mathbb{R}^{3})\times L_{0}^{2}(Q _{0}\setminus T)\) with \((w^{i},\pi^{i})\) being \(Q_{0}\)-periodic. Moreover, under the assumption that \(T\) is of class \(C^{2,\mu}\), one has
\[\|w^{i}\|_{W^{1,\infty}(Q_{0}\setminus T)}+\|\pi^{i}\|_{L^{\infty}(Q_{0} \setminus T)}\leq C. \tag{3.2}\]
The permeability tensor \(A\) is defined as
\[A_{i,j}=\int_{Q_{0}}w^{i}_{j}(y)\,\mathrm{d}y,\qquad A=(A_{i,j})_{1\leq i,j \leq 3}, \tag{3.3}\]
where \(w^{i}_{j}\) denotes the \(j\)-th component of vector \(w^{i}\). It is shown in [24] that \(A\) is symmetric and positive definite.
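For the reader's convenience, we recall the standard computation behind this fact. Testing the equation for \(w^{j}\) in (3.1) with \(w^{i}\) and using \(\operatorname{div}w^{i}=0\) to eliminate the pressure gives

\[A_{i,j}=\int_{Q_{0}}e^{j}\cdot w^{i}\,\mathrm{d}y=\int_{Q_{0}\setminus T}\nabla w^{j}:\nabla w^{i}\,\mathrm{d}y,\]

which is symmetric in \(i\) and \(j\). Moreover, for any \(\xi\in\mathbb{R}^{3}\),

\[\xi\cdot A\xi=\int_{Q_{0}\setminus T}\Big{|}\nabla\Big{(}\sum_{i}\xi_{i}w^{i}\Big{)}\Big{|}^{2}\,\mathrm{d}y\geq 0,\]

and equality forces \(\sum_{i}\xi_{i}w^{i}\equiv 0\) (it is constant and vanishes on \(T\)); then (3.1) gives \(\nabla\big{(}\sum_{i}\xi_{i}\pi^{i}\big{)}=\xi\), which is impossible for a periodic pressure unless \(\xi=0\).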
Next we set
\[w^{i,\varepsilon}(x)=w^{i}(\frac{x}{\varepsilon}),\qquad\pi^{i,\varepsilon}(x )=\pi^{i}(\frac{x}{\varepsilon}).\]
Then \((w^{i,\varepsilon}(x),\pi^{i,\varepsilon}(x))\) satisfies the following equations
\[\begin{cases}-\varepsilon\nabla\pi^{i,\varepsilon}+\varepsilon^{2}\Delta w^{ i,\varepsilon}+e^{i}=0&\text{in }\varepsilon Q_{0}\setminus\varepsilon T,\\ \operatorname{div}w^{i,\varepsilon}=0&\text{in }\varepsilon Q_{0}\setminus \varepsilon T,\\ w^{i,\varepsilon}=0&\text{on }\varepsilon T,\\ (w^{i,\varepsilon},\pi^{i,\varepsilon})\text{ is }\varepsilon Q_{0}-\text{ periodic}.\end{cases} \tag{3.4}\]
Moreover, it follows from (3.2) that
\[\|w^{i,\varepsilon}\|_{L^{\infty}(\Omega_{\varepsilon})}\leq C,\quad\|\nabla w ^{i,\varepsilon}\|_{L^{\infty}(\Omega_{\varepsilon})}\leq C\varepsilon^{-1}, \quad\|\pi^{i,\varepsilon}\|_{L^{\infty}(\Omega_{\varepsilon})}\leq C. \tag{3.5}\]
Since \(w^{i}\) is \(Q_{0}\)-periodic, using (3.5) we obtain
\[w^{i,\varepsilon}\to\bar{w}^{i}:=\int_{Q_{0}}w^{i}(y)\,\mathrm{d}y\quad\text{ weakly in }L^{r}(\Omega), \tag{3.6}\]
for each \(1<r<\infty\).
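The convergence (3.6) is an instance of the classical mean-value property of periodic functions: for any \(Q_{0}\)-periodic \(g\in L^{\infty}(\mathbb{R}^{3})\),

\[g\Big{(}\frac{x}{\varepsilon}\Big{)}\to\int_{Q_{0}}g(y)\,\mathrm{d}y\quad\text{weakly-* in }L^{\infty}(\Omega)\ \text{as }\varepsilon\to 0,\]

which, applied to \(g=w^{i}\) (bounded by (3.5)), gives (3.6) for every \(1<r<\infty\) on the bounded domain \(\Omega\).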
### Passing to the limit
Our main theorem actually follows from the following key proposition:
**Proposition 3.1**.: _Let \((U_{\varepsilon},P_{\varepsilon})\) be the solutions of the equation (1.10). Let \(\tilde{P}_{\varepsilon}\) be the extension of \(P_{\varepsilon}\) defined in (2.5) and \(\tilde{U}_{\varepsilon}\) be the zero extension of \(U_{\varepsilon}\). Then we can find \(U\in L^{2}((0,T)\times\Omega)\) and_
\[P\in\begin{cases}L^{2}((0,T)\times\Omega)&1<r\leq 2,\\ L^{\frac{r}{r-1}}((0,T)\times\Omega)&r>2,\end{cases} \tag{3.7}\]
_such that_
\[\varepsilon^{-2}\tilde{U}_{\varepsilon}\to U\ weakly\ in\ L^{2}((0,T) \times\Omega), \tag{3.8}\]
\[\tilde{P}_{\varepsilon}\to P\ weakly\ in\begin{cases}L^{2}((0,T)\times\Omega)&1<r \leq 2,\\ L^{\frac{r}{r-1}}((0,T)\times\Omega)&r>2.\end{cases} \tag{3.9}\]
_Moreover, the limit \((U,P)\) satisfies the Darcy's law:_
\[\frac{1}{2}\eta_{0}U=A(F-\nabla P)\quad\mathrm{in}\,\mathcal{D}^{\prime}((0,T )\times\Omega). \tag{3.10}\]
_Here the permeability tensor \(A\) is a constant positive definite matrix determined in (3.3)._
Proof.: The convergence in (3.8) follows directly from the uniform estimates in \((2.4)_{1}\).
From (2.6), we can obtain for \(1<r\leq 2\),
\[\|\tilde{P}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega)}\leq C\|\nabla\tilde{P} _{\varepsilon}\|_{L^{2}(0,T;W^{-1,2}(\Omega))}\leq C,\]
and for \(r>2\),
\[\|\tilde{P}_{\varepsilon}\|_{L^{\frac{r}{r-1}}((0,T)\times\Omega)}\leq C\| \nabla\tilde{P}_{\varepsilon}\|_{L^{\frac{r}{r-1}}(0,T;W^{-1,\frac{r}{r-1}}( \Omega))}\leq C.\]
Thus we can find
\[P\in\begin{cases}L^{2}((0,T)\times\Omega)&1<r\leq 2,\\ L^{\frac{r}{r-1}}((0,T)\times\Omega)&r>2,\end{cases}\]
such that (3.9) holds.
Next we will use the cell problem to construct test functions. Clearly \(w^{i,\varepsilon}\) defined in (3.4) vanishes on the holes in \(\Omega_{\varepsilon}\). Given any scalar function \(\phi\in C^{\infty}_{c}((0,T)\times\Omega)\), taking \(\phi w^{i,\varepsilon}\) as a test function to (1.10) implies
\[\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}\,\mathrm{div} \,(\phi w^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t-\int_{0}^{T}\int_{\Omega} \big{(}\frac{\eta_{\infty}}{2}\nabla\tilde{U}_{\varepsilon}+(\eta_{0}-\eta_{ \infty})H_{\varepsilon}\big{)}:\nabla(\phi w^{i,\varepsilon})\,\mathrm{d}x \mathrm{d}t\] \[=\int_{0}^{T}\int_{\Omega}\big{(}-F+\varepsilon^{2}(\tilde{\mathbf{ u}}_{\varepsilon}-\mathbf{u}_{0})+G_{\varepsilon}\big{)}\cdot\phi w^{i, \varepsilon}\,\mathrm{d}x\mathrm{d}t. \tag{3.11}\]
Then we will pass \(\varepsilon\to 0\) term by term where the limits are taken up to subsequences. It follows from (3.6) that
\[\lim_{\varepsilon\to 0}\int_{0}^{T}\int_{\Omega}F\cdot w^{i,\varepsilon}\phi\, \mathrm{d}x\mathrm{d}t=\int_{0}^{T}\int_{\Omega}F\cdot\tilde{w}^{i}\phi\, \mathrm{d}x\mathrm{d}t. \tag{3.12}\]
The estimates of \(\mathbf{u}_{\varepsilon}\) in (2.1) ensure
\[\big{|}\int_{0}^{T}\int_{\Omega}\varepsilon^{2}(\tilde{\mathbf{u}}_{ \varepsilon}-\mathbf{u}_{0})\cdot\phi w^{i,\varepsilon}\,\mathrm{d}x\mathrm{d }t\big{|}\leq\varepsilon^{2}\|\tilde{\mathbf{u}}_{\varepsilon}-\mathbf{u}_{0} \|_{L^{2}((0,T)\times\Omega)}\|\phi w^{i,\varepsilon}\|_{L^{2}((0,T)\times \Omega)}\leq C\varepsilon^{2}\to 0. \tag{3.13}\]
For the term related to the pressure, using the divergence free condition \(\operatorname{div}w^{i,\varepsilon}=0\) implies
\[\begin{split}&\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon} \operatorname{div}\left(\phi w^{i,\varepsilon}\right)\mathrm{d}x\mathrm{d}t= \int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}w^{i,\varepsilon}\cdot\nabla \phi\,\mathrm{d}x\mathrm{d}t\\ &=\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}(w^{i, \varepsilon}-\bar{w}^{i})\cdot\nabla\phi\,\mathrm{d}x\mathrm{d}t+\int_{0}^{T} \int_{\Omega}\tilde{P}_{\varepsilon}\bar{w}^{i}\cdot\nabla\phi\,\mathrm{d}x \mathrm{d}t.\end{split} \tag{3.14}\]
By the fact that
\[\tilde{P}_{\varepsilon}\to P\text{ weakly in }\begin{cases}L^{2}((0,T)\times \Omega)&1<r\leq 2,\\ L^{\frac{r}{r-1}}((0,T)\times\Omega)&r>2,\end{cases}\]
we have
\[\lim_{\varepsilon\to 0}\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}\bar{w}^{i} \cdot\nabla\phi\,\mathrm{d}x\mathrm{d}t=\int_{0}^{T}\int_{\Omega}P\bar{w}^{i} \cdot\nabla\phi\,\mathrm{d}x\mathrm{d}t=\int_{0}^{T}\int_{\Omega}P\operatorname {div}\left(\bar{w}^{i}\phi\right)\mathrm{d}x\mathrm{d}t. \tag{3.15}\]
For each fixed \(t\in(0,T)\), by the divergence free condition \(\operatorname{div}\left(w^{i,\varepsilon}-\bar{w}^{i}\right)=0\), we have \((w^{i,\varepsilon}-\bar{w}^{i})\cdot\nabla\phi\in L_{0}^{r+2}(\Omega)\). By employing the classical Bogovskii operator \(\mathcal{B}\) in domain \(\Omega\), we can find \(\psi_{\varepsilon}=\mathcal{B}\big{(}(w^{i,\varepsilon}-\bar{w}^{i})\cdot \nabla\phi\big{)}\) such that
\[\operatorname{div}\psi_{\varepsilon}=(w^{i,\varepsilon}-\bar{w}^{i})\cdot \nabla\phi,\quad\text{for each }t\in(0,T). \tag{3.16}\]
Moreover, we have the following estimate:
\[\|\psi_{\varepsilon}\|_{L^{\infty}(0,T;W^{1,r+2}_{0}(\Omega))}\leq C\|(w^{i, \varepsilon}-\bar{w}^{i})\cdot\nabla\phi\|_{L^{\infty}(0,T;L^{r+2}(\Omega))} \leq C. \tag{3.17}\]
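In (3.16)-(3.17) we have used the standard properties of the Bogovskii operator (recalled here for convenience): for any \(f\in L^{q}_{0}(\Omega)\), \(1<q<\infty\), one has \(\mathcal{B}(f)\in W^{1,q}_{0}(\Omega;\mathbb{R}^{3})\) with

\[\operatorname{div}\mathcal{B}(f)=f,\qquad\|\nabla\mathcal{B}(f)\|_{L^{q}(\Omega)}\leq C(q,\Omega)\|f\|_{L^{q}(\Omega)},\]

and \(\mathcal{B}\) is linear, so it commutes with \(\partial_{t}\) when applied at each fixed time, which is used in the next estimate.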
By the fact that \(\partial_{t}\psi_{\varepsilon}=\partial_{t}\mathcal{B}\big{(}(w^{i,\varepsilon }-\bar{w}^{i})\cdot\nabla\phi\big{)}=\mathcal{B}\big{(}(w^{i,\varepsilon}- \bar{w}^{i})\cdot\partial_{t}\nabla\phi\big{)}\) we obtain
\[\|\partial_{t}\psi_{\varepsilon}\|_{L^{\infty}(0,T;W^{1,r+2}_{0}(\Omega))} \leq C\|(w^{i,\varepsilon}-\bar{w}^{i})\cdot\partial_{t}\nabla\phi\|_{L^{ \infty}(0,T;L^{r+2}(\Omega))}\leq C.\]
By the compact Sobolev embedding, we have, up to a subsequence, that
\[\psi_{\varepsilon}\to\psi\quad\text{strongly in }L^{r+2}((0,T)\times\Omega) \tag{3.18}\]
for some \(\psi\in W^{1,r+2}(0,T;W^{1,r+2}_{0}(\Omega))\). Recall that \(w^{i,\varepsilon}\to\bar{w}^{i}\) weakly in \(L^{r+2}((0,T)\times\Omega)\). Then
\[(w^{i,\varepsilon}-\bar{w}^{i})\cdot\nabla\phi\to 0\text{ weakly in }L^{r+2}((0,T)\times\Omega). \tag{3.19}\]
By (3.16), (3.18) and (3.19), we can deduce that \(\operatorname{div}\psi=0\). Then using (2.6) implies
\[\big{|}\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}(w^{i, \varepsilon}-\bar{w}^{i})\cdot\nabla\phi\,\mathrm{d}x\mathrm{d}t\big{|} =\big{|}\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}\operatorname{ div}\psi_{\varepsilon}\,\mathrm{d}x\mathrm{d}t\big{|}\] \[=\big{|}\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon} \operatorname{div}\left(\psi_{\varepsilon}-\psi\right)\mathrm{d}x\mathrm{d} t\big{|}\] \[=\big{|}\langle\nabla\tilde{P}_{\varepsilon},\psi_{\varepsilon}- \psi\rangle_{(0,T)\times\Omega}\big{|}\] \[\leq C\big{(}\|\psi_{\varepsilon}-\psi\|_{L^{r+2}((0,T)\times \Omega)}+\varepsilon\|\nabla(\psi_{\varepsilon}-\psi)\|_{L^{r+2}((0,T)\times \Omega)}\big{)}.\]
Then, together with (3.17) and (3.18) we finally deduce
\[\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}(w^{i,\varepsilon}-\bar{w}^{ i})\cdot\nabla\phi\,\mathrm{d}x\mathrm{d}t\to 0, \tag{3.20}\]
and consequently, by (3.15) and (3.20), passing \(\varepsilon\to 0\) in (3.14) gives
\[\lim_{\varepsilon\to 0}\int_{0}^{T}\int_{\Omega}\tilde{P}_{\varepsilon}\, \mathrm{div}\left(\phi w^{i,\varepsilon}\right)\mathrm{d}x\mathrm{d}t=\int_{0} ^{T}\int_{\Omega}P\,\mathrm{div}\left(\phi\bar{w}^{i}\right)\mathrm{d}x \mathrm{d}t. \tag{3.21}\]
By the estimate of \(G_{\varepsilon}\) in (2.4) and the uniform bound of \(w^{i,\varepsilon}\) in (3.5), we have

\[\big{|}\int_{0}^{T}\int_{\Omega}G_{\varepsilon}\cdot\phi w^{i,\varepsilon}\,\mathrm{d}x\mathrm{d}t\big{|}\leq C\|G_{\varepsilon}\|_{L^{1}(0,T;L^{\frac{3}{2}}(\Omega))}\|\phi w^{i,\varepsilon}\|_{L^{\infty}(0,T;L^{3}(\Omega))}\leq C\varepsilon^{2}\to 0. \tag{3.22}\]

We now treat the viscous term. Direct computation gives

\[\int_{0}^{T}\int_{\Omega}\nabla\tilde{U}_{\varepsilon}:\nabla(\phi w^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t=\int_{0}^{T}\int_{\Omega}\nabla\tilde{U}_{\varepsilon}:(\nabla\phi\otimes w^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t+\int_{0}^{T}\int_{\Omega}\phi\,\nabla\tilde{U}_{\varepsilon}:\nabla w^{i,\varepsilon}\,\mathrm{d}x\mathrm{d}t. \tag{3.23}\]

By \((2.4)_{1}\) and (3.5), the first term on the right-hand side tends to zero:

\[\big{|}\int_{0}^{T}\int_{\Omega}\nabla\tilde{U}_{\varepsilon}:(\nabla\phi\otimes w^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t\big{|}\leq C\|\nabla\tilde{U}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega)}\leq C\varepsilon\to 0. \tag{3.24}\]

For the second term, integrating by parts twice (all integrals may be taken over \(\Omega_{\varepsilon}\), where \(w^{i,\varepsilon}\) is smooth and \(\tilde{U}_{\varepsilon}\) has zero trace) and using the cell problem (3.4) in the form \(-\varepsilon^{2}\Delta w^{i,\varepsilon}=e^{i}-\varepsilon\nabla\pi^{i,\varepsilon}\), we obtain

\[\int_{0}^{T}\int_{\Omega}\phi\,\nabla\tilde{U}_{\varepsilon}:\nabla w^{i,\varepsilon}\,\mathrm{d}x\mathrm{d}t=-\int_{0}^{T}\int_{\Omega}(\tilde{U}_{\varepsilon}\otimes\nabla\phi):\nabla w^{i,\varepsilon}\,\mathrm{d}x\mathrm{d}t+\varepsilon^{-2}\int_{0}^{T}\int_{\Omega}\phi\,\tilde{U}_{\varepsilon}\cdot\big{(}e^{i}-\varepsilon\nabla\pi^{i,\varepsilon}\big{)}\,\mathrm{d}x\mathrm{d}t. \tag{3.25}\]

By \((2.4)_{1}\) and (3.5), the first term on the right-hand side of (3.25) is bounded by \(C\|\tilde{U}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega)}\|\nabla w^{i,\varepsilon}\|_{L^{\infty}(\Omega)}\leq C\varepsilon^{2}\cdot\varepsilon^{-1}=C\varepsilon\to 0\). Since \(\mathrm{div}\,\tilde{U}_{\varepsilon}=0\), integrating by parts and using (3.5) gives

\[\big{|}\varepsilon^{-1}\int_{0}^{T}\int_{\Omega}\phi\,\tilde{U}_{\varepsilon}\cdot\nabla\pi^{i,\varepsilon}\,\mathrm{d}x\mathrm{d}t\big{|}=\big{|}\varepsilon^{-1}\int_{0}^{T}\int_{\Omega}\pi^{i,\varepsilon}\,\tilde{U}_{\varepsilon}\cdot\nabla\phi\,\mathrm{d}x\mathrm{d}t\big{|}\leq C\varepsilon^{-1}\|\tilde{U}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega)}\leq C\varepsilon\to 0,\]

while the weak convergence (3.8) gives

\[\lim_{\varepsilon\to 0}\varepsilon^{-2}\int_{0}^{T}\int_{\Omega}\phi\,\tilde{U}_{\varepsilon}\cdot e^{i}\,\mathrm{d}x\mathrm{d}t=\int_{0}^{T}\int_{\Omega}\phi\,U_{i}\,\mathrm{d}x\mathrm{d}t.\]

Collecting (3.23)-(3.25), we conclude

\[\lim_{\varepsilon\to 0}\int_{0}^{T}\int_{\Omega}\nabla\tilde{U}_{\varepsilon}:\nabla(\phi w^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t=\int_{0}^{T}\int_{\Omega}\phi\,U_{i}\,\mathrm{d}x\mathrm{d}t. \tag{3.26}\]
For the last term related to the nonlinear stress tensor \(H_{\varepsilon}\), due to the smallness of \(D\tilde{\mathbf{u}}_{\varepsilon}\), we shall show its contribution in the limit is nothing but a Newtonian stress tensor. Introduce the decomposition
\[H_{\varepsilon} =\int_{0}^{t}(1+\lambda|D\mathbf{u}_{\varepsilon}|^{2})^{\frac{r} {2}-1}D\mathbf{u}_{\varepsilon}\,\mathrm{d}s=\int_{0}^{t}\big{(}(1+\lambda|D \mathbf{u}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{)}D\mathbf{u}_{ \varepsilon}\,\mathrm{d}s+\int_{0}^{t}D\mathbf{u}_{\varepsilon}\,\mathrm{d}s\] \[=\int_{0}^{t}\big{(}(1+\lambda|D\mathbf{u}_{\varepsilon}|^{2})^{ \frac{r}{2}-1}-1\big{)}D\mathbf{u}_{\varepsilon}\,\mathrm{d}s+DU_{\varepsilon}.\]
Then
\[\int_{0}^{T}\int_{\Omega}H_{\varepsilon}:\nabla(\phi w^{i, \varepsilon})\,\mathrm{d}x\mathrm{d}t\] \[=\int_{0}^{T}\int_{\Omega}D\tilde{U}_{\varepsilon}:\nabla(\phi w ^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t+\int_{0}^{T}\int_{\Omega}\left( \int_{0}^{t}\big{(}(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{ r}{2}-1}-1\big{)}D\tilde{\mathbf{u}}_{\varepsilon}\mathrm{d}s\right):\nabla(\phi w ^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t. \tag{3.27}\]
Using the divergence free condition \(\operatorname{div}\tilde{U}_{\varepsilon}=0\) and (3.26) implies
\[\lim_{\varepsilon\to 0}\int_{0}^{T}\int_{\Omega}D\tilde{U}_{\varepsilon}: \nabla(\phi w^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t=\lim_{\varepsilon \to 0}\frac{1}{2}\int_{0}^{T}\int_{\Omega}\nabla\tilde{U}_{\varepsilon}: \nabla(\phi w^{i,\varepsilon})\,\mathrm{d}x\mathrm{d}t=\frac{1}{2}\int_{0}^{T }\int_{\Omega}\phi U_{i}\,\mathrm{d}x\mathrm{d}t. \tag{3.28}\]
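The first equality above relies on the elementary identity (integration by parts twice; all integrals may equivalently be taken over \(\Omega_{\varepsilon}\), where \(w^{i,\varepsilon}\) is smooth and \(\tilde{U}_{\varepsilon}\) has zero trace)

\[\int_{\Omega}\nabla^{T}\tilde{U}_{\varepsilon}:\nabla(\phi w^{i,\varepsilon})\,\mathrm{d}x=\int_{\Omega}\operatorname{div}\tilde{U}_{\varepsilon}\,\operatorname{div}(\phi w^{i,\varepsilon})\,\mathrm{d}x=0,\]

so that \(D\tilde{U}_{\varepsilon}=\frac{1}{2}(\nabla\tilde{U}_{\varepsilon}+\nabla^{T}\tilde{U}_{\varepsilon})\) contributes exactly one half of the full gradient term.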
For the other term on the right-hand side of (3.27), based on different values of \(r\), we use different inequalities to show that its limit is actually zero.
For \(1<r<2\), by inequality \(0\leq(1+s)^{\alpha}-s^{\alpha}\leq 1\,(0\leq\alpha\leq 1,\ s\geq 0)\), we have
\[\big{|}(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1 \big{|}=\big{|}(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2 }-1}\big{(}1-(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{1-\frac{r}{2 }}\big{)}\big{|}\leq C|D\tilde{\mathbf{u}}_{\varepsilon}|^{2-r}.\]
Then, for \(1<r<2\), there holds
\[\big{|}\int_{0}^{T}\int_{\Omega}\left(\int_{0}^{t}\big{(}(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{)}D\tilde{\mathbf{u}}_{\varepsilon}\,\mathrm{d}s\right):\nabla w^{i,\varepsilon}\phi\,\mathrm{d}x\mathrm{d}t\,\big{|}\] \[\leq C\|\nabla w^{i,\varepsilon}\|_{L^{\infty}(\Omega)}\int_{0}^{T}\mathrm{d}t\int_{0}^{t}\mathrm{d}s\int_{\Omega}\big{|}\,(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\,\big{|}\;|D\tilde{\mathbf{u}}_{\varepsilon}|\,\mathrm{d}x\] \[\leq C\varepsilon^{-1}\int_{0}^{T}\mathrm{d}t\int_{0}^{t}\mathrm{d}s\int_{\Omega}|D\tilde{\mathbf{u}}_{\varepsilon}|^{3-r}\,\mathrm{d}x\] \[\leq C\varepsilon^{-1}\|D\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{3-r}((0,T)\times\Omega)}^{3-r}\leq C\varepsilon^{-1}\|D\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega)}^{3-r}\leq C\varepsilon^{2-r}\to 0.\]
For \(2<r\leq 4\), again by inequality \(0\leq(1+s)^{\alpha}-s^{\alpha}\leq 1\,(0\leq\alpha\leq 1,\ s\geq 0)\), we have
\[\big{|}(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1 \big{|}\leq C|D\tilde{\mathbf{u}}_{\varepsilon}|^{r-2}.\]
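Explicitly, taking \(\alpha=\frac{r}{2}-1\in(0,1]\) and \(s=\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2}\) in this inequality gives

\[(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\leq(\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}=\lambda^{\frac{r}{2}-1}|D\tilde{\mathbf{u}}_{\varepsilon}|^{r-2}.\]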
Then using the estimate \(\|\nabla\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{r}((0,T)\times\Omega)}\leq C \varepsilon^{\frac{2}{r}}\) in \(\eqref{eq:2}_{2}\) gives
\[\big{|}\int_{0}^{T}\int_{\Omega}\left(\int_{0}^{t}\big{(}(1+ \lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{)}D \tilde{\mathbf{u}}_{\varepsilon}\,\mathrm{d}s\right):\nabla w^{i,\varepsilon} \phi\,\mathrm{d}x\mathrm{d}t\,\big{|}\] \[\leq C\varepsilon^{-1}\|D\tilde{\mathbf{u}}_{\varepsilon}\|_{L^ {r-1}((0,T)\times\Omega)}^{r-1}\leq C\varepsilon^{-1}\varepsilon^{\frac{2(r-1)} {r}}=C\varepsilon^{\frac{r-2}{r}}\to 0.\]
For \(r\geq 4\),
\[\big{|}(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{|} \leq\lambda(\frac{r}{2}-1)(1+\lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{ \frac{r}{2}-2}|D\tilde{\mathbf{u}}_{\varepsilon}|^{2}\leq C(|D\tilde{\mathbf{u }}_{\varepsilon}|^{2}+|D\tilde{\mathbf{u}}_{\varepsilon}|^{r-2}).\]
Using the estimate \(\|\nabla\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega)}^{2}+\|\nabla\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{r}((0,T)\times\Omega)}^{r}\leq C\varepsilon^{2}\) and Hölder's inequality, we obtain
\[\|\nabla\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{q}((0,T)\times\Omega)}^{q}\leq C \varepsilon^{2},\quad\forall\,q\in[2,r].\]
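For completeness, the last display is the standard interpolation computation: given \(q\in[2,r]\), choose \(\theta\in[0,1]\) with \(\frac{1}{q}=\frac{\theta}{2}+\frac{1-\theta}{r}\); then

\[\|\nabla\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{q}((0,T)\times\Omega)}^{q}\leq\|\nabla\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{2}((0,T)\times\Omega)}^{\theta q}\,\|\nabla\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{r}((0,T)\times\Omega)}^{(1-\theta)q}\leq C\varepsilon^{\theta q}\,\varepsilon^{\frac{2(1-\theta)q}{r}}=C\varepsilon^{q(\theta+\frac{2(1-\theta)}{r})}=C\varepsilon^{2},\]

since \(\theta+\frac{2(1-\theta)}{r}=\frac{2}{q}\) by the choice of \(\theta\).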
Thus,
\[\big{|}\int_{0}^{T}\int_{\Omega}\left(\int_{0}^{t}\big{(}(1+ \lambda|D\tilde{\mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{)}D \tilde{\mathbf{u}}_{\varepsilon}\,\mathrm{d}s\right):\nabla w^{i,\varepsilon }\phi\,\mathrm{d}x\mathrm{d}t\,\big{|}\] \[\leq C\|\nabla w^{i,\varepsilon}\|_{L^{\infty}(\Omega)}\int_{0}^ {T}\mathrm{d}t\int_{0}^{t}\mathrm{d}s\int_{\Omega}|\;(1+\lambda|D\tilde{ \mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\;|\;|D\tilde{\mathbf{u}}_{ \varepsilon}|\,\mathrm{d}x\] \[\leq C\varepsilon^{-1}\int_{0}^{T}\mathrm{d}t\int_{0}^{t} \mathrm{d}s\int_{\Omega}\big{(}|D\tilde{\mathbf{u}}_{\varepsilon}|^{r-1}+|D \tilde{\mathbf{u}}_{\varepsilon}|^{3}\big{)}\mathrm{d}x\] \[\leq C\varepsilon^{-1}(\|D\tilde{\mathbf{u}}_{\varepsilon}\|_{L^ {r-1}((0,T)\times\Omega)}^{r-1}+\|D\tilde{\mathbf{u}}_{\varepsilon}\|_{L^{3}( (0,T)\times\Omega)}^{3})\leq C\varepsilon\to 0.\]
To sum up, for \(1<r<\infty\),
\[\int_{0}^{T}\int_{\Omega}\left(\int_{0}^{t}\big{(}(1+\lambda|D\tilde{\mathbf{ u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{)}D\tilde{\mathbf{u}}_{ \varepsilon}\,\mathrm{d}s\right):\nabla w^{i,\varepsilon}\phi\,\mathrm{d}x \mathrm{d}t\to 0. \tag{3.29}\]
The same arguments show that
\[\left|\int_{0}^{T}\int_{\Omega}\left(\int_{0}^{t}\big{(}(1+\lambda|D\tilde{ \mathbf{u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{)}D\tilde{\mathbf{u}}_{ \varepsilon}\,\mathrm{d}s\right):(w^{i,\varepsilon}\otimes\nabla\phi)\, \mathrm{d}x\mathrm{d}t\right|\leq C\varepsilon\to 0. \tag{3.30}\]
Thus by (3.29) and (3.30), we have
\[\int_{0}^{T}\int_{\Omega}\left(\int_{0}^{t}\big{(}(1+\lambda|D\tilde{\mathbf{ u}}_{\varepsilon}|^{2})^{\frac{r}{2}-1}-1\big{)}D\tilde{\mathbf{u}}_{ \varepsilon}\mathrm{d}s\right):\nabla(\phi w^{i,\varepsilon})\,\mathrm{d}x \mathrm{d}t\to 0. \tag{3.31}\]
Then using (3.28) and (3.31), and passing \(\varepsilon\to 0\) in (3.27), we obtain
\[\lim_{\varepsilon\to 0}\int_{0}^{T}\int_{\Omega}H_{\varepsilon}:\nabla(\phi w^{i, \varepsilon})\,\mathrm{d}x\mathrm{d}t=\frac{1}{2}\int_{0}^{T}\int_{\Omega} \phi U_{i}\,\mathrm{d}x\mathrm{d}t. \tag{3.32}\]
Finally using (3.12), (3.13), (3.21), (3.22), (3.26), (3.32) and passing \(\varepsilon\to 0\) in (3.11) gives
\[\int_{0}^{T}\int_{\Omega}\frac{1}{2}\eta_{0}\phi U_{i}\,\mathrm{d}x\mathrm{d}t =\int_{0}^{T}\int_{\Omega}F\cdot\phi\bar{w}^{i}\,\mathrm{d}x\mathrm{d}t+\int_ {0}^{T}\int_{\Omega}P\,\mathrm{div}\,(\phi\bar{w}^{i})\,\mathrm{d}x\mathrm{d}t.\]
This is nothing but Darcy's law (3.10) in the sense of distributions, with the permeability tensor \(A=(\bar{w}^{i}_{j})\) determined in (3.3). Thus, the proof of Proposition 3.1 is complete.
### End of the proof
From Proposition 3.1, we can deduce the limit equation in \(\mathbf{u}\). Set \(\tilde{p}_{\varepsilon}=\partial_{t}\tilde{P}_{\varepsilon}\) and \(p=\partial_{t}P\) in the weak sense. By (3.7) and (3.9) we have
\[p\in\begin{cases}W^{-1,2}(0,T;L^{2}(\Omega))&1<r\leq 2,\\ W^{-1,\frac{r}{r-1}}(0,T;L^{\frac{r}{r-1}}(\Omega))&r>2,\end{cases}\]
and
\[\tilde{p}_{\varepsilon}=\partial_{t}\tilde{P}_{\varepsilon}\to p\text{ weakly in }\begin{cases}W^{-1,2}(0,T;L^{2}(\Omega))&1<r\leq 2,\\ W^{-1,\frac{r}{r-1}}(0,T;L^{\frac{r}{r-1}}(\Omega))&r>2.\end{cases}\]
Thus, differentiating (3.10) with respect to the time variable in the sense of distributions gives the limit equation (1.11). This completes the proof of Theorem 1.5.
## Acknowledgements
Both authors are partially supported by the NSF of China under Grant 12171235.
|
2303.12565 | Iterating Semi-proper Forcing using Virtual Models | By a virtual model, we mean a model of set theory which is elementary in its
transitive closure. Virtual models are first used by Neeman
\cite{neeman2014forcing} to iterate forcing. That paper is concerned with
proper forcing. The method was then adjusted by Veličković to the case of
semi-proper forcing and this was drafted in \cite{velickovic2021iteration}. We
here straighten the details and further elaborate on Veličković's method.
The first section collects facts about virtual models, the second section
describes the iteration, and the third one illustrates the method in the case
of getting saturation of $\mathsf{NS}_{\omega_1}$ (loosely relying on
\cite{schindler2016nsomega1}). | Obrad Kasum, Boban Veličković | 2023-03-22T13:46:25Z | http://arxiv.org/abs/2303.12565v1 | # Iterating Semi-proper Forcing using Virtual Models
###### Abstract
By a virtual model, we mean a model of set theory which is elementary in its transitive closure. Virtual models are first used by Neeman [14] to iterate forcing. That paper is concerned with proper forcing. The method was then adjusted by Veličković to the case of semi-proper forcing and this was drafted in [15]. We here straighten the details and further elaborate on Veličković's method. The first section collects facts about virtual models, the second section describes the iteration, and the third one illustrates the method in the case of getting saturation of \(\mathsf{NS}_{\omega_{1}}\) (loosely relying on [13]).
###### Contents
* 1 Virtual Models
* 1.1 Admissible Structures
* 1.2 Virtual Models: Definition
* 1.3 Elementary Rank-initial Segments
* 1.4 Reduction of a Virtual Model
* 1.5 \(\alpha\)-isomorphism
* 1.6 Comparing Virtual Models
* 1.7 Forcing Extensions of Virtual Models
* 1.8 Semi-proper Forcing
* 1.9 Iterated Forcing Extensions of Virtual Models
* 2 Semi-proper Iteration
* 2.1 Setup for the Iteration
* 2.2 Recursive Step of the Definition
* 2.3 Basic Properties
* 2.4 Statement of Transfer Theorem
* 2.5 Some Lemmas for Transfer Theorem
* 2.6 Proof of Transfer Theorem
* 2.6.1 Successor Case
* 2.6.2 Limit Case
* 2.7 Semi-properness and Chain Condition
* 3 Saturating \(\mathsf{NS}_{\omega_{1}}\)
* 3.1 Careful Collapse
* 3.2 Sealed Predense Collections
* 3.3 Sealing Iteration
## 1 Virtual Models
### Admissible Structures
* **Summary.** We consider a basic notion of an admissible structure. It will depend on parameter \(\mathcal{L}\), where \(\mathcal{L}\) is a recursively enumerable first order language containing \(\in\). The language \(\mathcal{L}\) will most often be kept implicit.
* **Definition.** An _admissible structure_\(\mathbb{A}:=(A,\in,\dots)\) in language \(\mathcal{L}\) is a transitive structure which satisfies \(\mathsf{ZFC}^{-}\) in the extended language.
* **Remark.** We will often say "\(A\) is admissible" when we really mean "\(\mathbb{A}\) is admissible".
* **Example.** If \(A\) is a transitive model of \(\mathsf{ZFC}^{-}\), then \((A,\in)\) is admissible.
* **Example.** If \(A\) is admissible and if \(U\subseteq A\) is definable with parameters over \((A,\in)\), then \((A,\in,U)\) is admissible.
* **Example.** If \(\theta>\omega\) is regular, then every structure \((H_{\theta},\in,\dots)\) is admissible.
### Virtual Models: Definition
* **Summary.** We now introduce a generalization of an admissible structure which we call a virtual model. This notion will play a crucial role in the main construction.
* **Definition.** A _virtual model (in language \(\mathcal{L}\))_ is a structure \((M,\in,\dots)\) such that there exists an admissible \(\mathbb{A}\) with \[(M,\in,\dots)\prec\mathbb{A}.\]
* **Example.** Every admissible structure is also a virtual model.
* **Remark.** If \(M\prec\mathbb{A}\), where \(\mathbb{A}\) is admissible, then the structure on \(M\) is uniquely determined by structure \(\mathbb{A}\).
* **Goal.** We will now show that, given a virtual model \(M\), there exists a unique minimal structure \(\widehat{M}\) witnessing the fact that \(M\) is a virtual model.
* **Lemma.** Let \(A\) be admissible and let \(B\subseteq A\) be transitive. Then \(B\prec_{0}A\).
* **Lemma.** Let \(A\) be admissible and let \(M\prec_{0}A\) be such that \(\mathsf{trcl}(M)=A\). Then \(M\prec A\).
* **Remark.** We treat function symbols as relation symbols. In particular, if \(A\) is admissible and \(M\subseteq A\) contains all the constants, then \(M\) inherits a structure from \(A\). In that case, the inherited structure is the default structure on \(M\).
* **Lemma.** Let \(A\) be admissible and let \(M\prec A\). Then \(\mathsf{trcl}(M)\) is admissible and \[M\prec\mathsf{trcl}(M)\prec A.\]
* **Proposition.** Let \(M\) be a virtual model. Then there exists a unique structure \(\widehat{M}\) on \(\mathsf{trcl}(M)\) satisfying \(M\prec\widehat{M}\). Furthermore, for every admissible \(A\), it holds that \(M\prec A\) if and only if \(\widehat{M}\prec A\).
Proof.: Existence follows from the previous lemma. To verify uniqueness, consider a relation \(R\) of structure \(M\) and note that
\[R^{\widehat{M}}=\bigcup_{\xi<\mathsf{Ord}\cap\widehat{M}}(R^{\widehat{M}}\cap (\widehat{M}\restriction\xi))=\bigcup_{\xi<\mathsf{Ord}\cap M}(R^{\widehat{M}} \cap(\widehat{M}\restriction\xi))=\bigcup_{\xi<\mathsf{Ord}\cap M}(R\cap V_{ \xi})^{M}\]
which depends only on \(M\).
### Elementary Rank-initial Segments
1.3.1 **Definition.** Let \(A\) be admissible. Then \(\mathscr{E}_{A}:=\{\alpha<\mathsf{Ord}^{A}:A\upharpoonright\alpha\prec A\}\).
2. **Remark.** Set \(\mathscr{E}_{A}\) is closed in \(\mathsf{Ord}^{A}\). If \(A=V_{\kappa}\) where \(\kappa\) is inaccessible, then \(\mathscr{E}_{A}\) is club in \(\kappa\).
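A sketch of the second claim: for closedness, if \(\alpha\) is a limit of points of \(\mathscr{E}_{A}\), then \(A\upharpoonright\alpha\) is the union of the elementary chain \((A\upharpoonright\beta:\beta\in\mathscr{E}_{A}\cap\alpha)\) and hence \(A\upharpoonright\alpha\prec A\). For unboundedness, given \(\beta<\kappa\), recursively choose \(\beta=\alpha_{0}<\alpha_{1}<\dots<\kappa\) so that every formula with parameters in \(V_{\alpha_{n}}\) which has a witness in \(V_{\kappa}\) has one in \(V_{\alpha_{n+1}}\) (this uses \(|V_{\alpha_{n}}|<\kappa\) and the regularity of \(\kappa\)); by the Tarski--Vaught test, \(\sup_{n}\alpha_{n}\in\mathscr{E}_{A}\).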
3.3 **Proposition.** Let \(A,B\) be admissible and let \(\alpha\in\mathscr{E}_{A}\cap\mathscr{E}_{B}\). Suppose that \(A\upharpoonright\alpha=B\upharpoonright\alpha\). Then \[\mathscr{E}_{A}\cap[0,\alpha]=\mathscr{E}_{B}\cap[0,\alpha]\in A\cap B.\]
4. **Definition.** Let \(M\) be a virtual model. Then \(\mathscr{E}_{M}:=\{\alpha\in\mathsf{Ord}\cap M:M\cap V_{\alpha}\prec M\}\).
5. **Remark.** Every admissible structure is also a virtual model. In that case, the two definitions coincide.
6. **Proposition.** Let \(M\) be a virtual model. Then \(\mathscr{E}_{M}=\mathscr{E}_{\widehat{M}}\cap M\).
7. **Corollary.** Let \(M\) be a virtual model and let \(\gamma\) be a limit point of \(\mathscr{E}_{M}\). Then \(\gamma\in\mathscr{E}_{\widehat{M}}\).
Proof.: Follows from the fact that \(\mathscr{E}_{M}\subseteq\mathscr{E}_{\widehat{M}}\) and the fact that \(\mathscr{E}_{\widehat{M}}\) is closed.
1.3.8 **Lemma.** Let \(M\) be a virtual model and let \(\alpha\in\mathscr{E}_{\widehat{M}}\). Suppose that \(M\cap[\alpha,\infty)\neq\emptyset\) and set \(\beta:=\min(M\cap[\alpha,\infty))\). Then \(\beta\in\mathscr{E}_{M}\).
Proof.:
1. \({}^{\circ}\) We apply the Tarski--Vaught test. Suppose \(a\in M\cap V_{\beta}\) and \(M\models\exists y\phi(a,y)\). We want to find \(b\in M\cap V_{\beta}\) such that \(M\models\phi[a,b]\).
2. \({}^{\circ}\) By elementarity, \(\widehat{M}\models\exists y\phi(a,y)\). We also have \(a\in M\cap V_{\beta}=M\cap V_{\alpha}\subseteq\widehat{M}\upharpoonright\alpha\) (for \(a\in M\) we have \(\mathsf{rank}(a)\in M\), and \(M\cap[\alpha,\beta)=\emptyset\), so \(M\cap V_{\beta}=M\cap V_{\alpha}\)).
3. \({}^{\circ}\) Since \(\alpha\in\mathscr{E}_{\widehat{M}}\), there exists \(y\in\widehat{M}\upharpoonright\alpha\) such that \(\widehat{M}\models\phi[a,y]\).
4. \({}^{\circ}\) Since \(\alpha\leq\beta\), we have \[\widehat{M}\models(\exists y\in V_{\beta})\phi(a,y).\] By elementarity, \[M\models(\exists y\in V_{\beta})\phi(a,y).\]
5. \({}^{\circ}\) Thus, there is \(b\in M\cap V_{\beta}\) such that \(M\models\phi[a,b]\).
1.3.9 **Proposition.** Let \(M\) be a virtual model. Suppose that \(\gamma\) is a limit point of \(\mathscr{E}_{\widehat{M}}\) and a limit point of \(M\cap\mathsf{Ord}\). Then \(\gamma\) is a limit point of \(\mathscr{E}_{M}\).
Proof.:
1. \({}^{\circ}\) Let \(\alpha<\gamma\) be arbitrary. We want to find \(\beta\in\mathscr{E}_{M}\cap\gamma\) such that \(\alpha<\beta\).
2. \({}^{\circ}\) There is \(\alpha^{\prime}\in\mathscr{E}_{\widehat{M}}\cap\gamma\) such that \(\alpha<\alpha^{\prime}\).
3. \({}^{\circ}\) There is \(\alpha^{\prime\prime}\in M\cap\gamma\) such that \(\alpha^{\prime}<\alpha^{\prime\prime}\).
4. \({}^{\circ}\) By [1.3.8], we have \(\beta:=\min(M\cap[\alpha^{\prime},\infty))\in\mathscr{E}_{M}\).
5. \({}^{\circ}\) By \(3^{\circ}\), we have \(\beta\in[\alpha^{\prime},\alpha^{\prime\prime}]\subseteq(\alpha,\gamma)\).
### Reduction of a Virtual Model
* **Summary.** We introduce a basic construction of reducing the amount of information captured by a virtual model.
* **Definition.** Let \(M\) be a virtual model and let \(X\) be an arbitrary set. We define \[\mathsf{Hull}(M,X):=\{f(x):f:d\to r,\,f\in M,\,x\in X^{<\omega}\cap d\}.\]
* **Proposition.** Let \(M\) be a virtual model, let \(\alpha:=\sup(\mathsf{Ord}\cap M)\), and let \(X\) be an arbitrary set. Then the following holds. 1. \(M\prec\mathsf{Hull}(M,X)\prec\widehat{M}\) 2. \(\mathsf{Hull}(\widehat{M},X)=\widehat{M}\) 3. \(X\cap\widehat{M}\subseteq\mathsf{Hull}(M,X)\) 4. \(\mathsf{Hull}(M,X)=\mathsf{Hull}(M,X\cap\widehat{M})\) 5. For every virtual model \(N\) satisfying \(M\prec N\) and \(X\cap\widehat{M}\subseteq N\), we have \(\mathsf{Hull}(M,X)\prec N\). 6. \(|\mathsf{Hull}(M,X)|\leq|M|+|X|\)
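* **Example.** For a single point \(x\), the definition specializes to the familiar one-point hull:
\[\mathsf{Hull}(M,\{x\})=M\cup\{f(x):f\in M\text{ a function with }x\in\mathsf{dom}(f)\},\]
since tuples can be absorbed into the function: e.g. if \(f\in M\) and \((x,x)\in\mathsf{dom}(f)\), then \(u\mapsto f(u,u)\) is definable from \(f\) and hence belongs to \(M\), and similarly for mixed tuples with coordinates in \(M\).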
* **Definition.**_The \(\alpha\)-reduction \(M\downarrow\alpha\) of virtual model \(M\) is defined as follows: let \(\pi:\mathsf{Hull}(M,V_{\alpha})\to N\) be the transitive collapse and set \(M\downarrow\alpha:=\pi[M]\).
* **Remark.** We do not require \(V_{\alpha}\subseteq\widehat{M}\) or even \(\delta:=\sup(\mathsf{Ord}\cap M)\geq\alpha\). What we do is simply collapsing \(M\) while "freezing" its part below rank \(\alpha\). In particular, if \(\delta\leq\alpha\), this means that \(M\subseteq V_{\alpha}\) and \(\mathsf{Hull}(M,V_{\alpha})=\widehat{M}\), i.e. \(M\downarrow\alpha=M\).
* **Exercise.** Let \(M\) be a virtual model and let \(\alpha\) be an ordinal. Then \(\widehat{M\downarrow\alpha}\) is the transitive collapse of \(\mathsf{Hull}(M,V_{\alpha})\).
* **Definition.** A virtual model \(M\) is said to be \(\alpha\)_-generated_ if \(\widehat{M}=\mathsf{Hull}(M,V_{\alpha})\).
* **Lemma.** Let \(N\) be a virtual model and let \(\pi:N\to\overline{N}\) be the transitive collapse. Then for every \(\Sigma_{0}\) formula \(\phi(\overline{x})\), we have \[(\forall\overline{x}\in N^{<\omega})(\phi(\overline{x})\iff\phi(\pi(\overline {x}))).\] _Proof._ \[\phi(\overline{x}) \iff \widehat{N}\models\phi(\overline{x})\] (1) \[\iff N\models\phi(\overline{x})\] (2) \[\iff \overline{N}\models\phi(\pi(\overline{x}))\] (3) \[\iff \phi(\pi(\overline{x})),\] (4) where: 1. follows by \(\Sigma_{0}\)-absoluteness; 2. follows by elementarity; 3. follows since \(\pi\) is an isomorphism; 4. follows by \(\Sigma_{0}\)-absoluteness.
* **Proposition.** Let \(M\) be a virtual model. Then \(M\downarrow\alpha\) is \(\alpha\)-generated.
* **Proposition.** Let \(M\) be a virtual model and let \(\beta\leq\alpha\). Then \((M\downarrow\alpha)\downarrow\beta=M\downarrow\beta\).
Proof.: This is a straightforward computation; use [1.4.8] to show that the collapses agree with the computations of the hulls.
### \(\alpha\)-isomorphism
1.5.1 **Summary.** The relation introduced in this part is meant to capture the idea that two virtual models carry the same information up to rank \(\alpha\).
2. **Definition.** Let \(M,N\) be virtual models. We define \(M\cong_{\alpha}N\) to hold if there is an isomorphism \(f:\mathsf{Hull}(M,V_{\alpha})\cong\mathsf{Hull}(N,V_{\alpha})\) satisfying \(f[M]=N\).
3. **Proposition.** Let \(M\) be a virtual model. Then \(M\cong_{\alpha}M\downarrow\alpha\).
4. **Proposition.** Let \(M,N\) be \(\alpha\)-generated virtual models. If \(M\cong_{\alpha}N\), then \(M=N\).
5. **Proposition.** Let \(M,N\) be virtual models. Then \(M\cong_{\alpha}N\) if and only if \(M\downarrow\alpha=N\downarrow\alpha\).
6. **Corollary.**\(\cong_{\alpha}\) is an equivalence relation between virtual models.
### Comparing Virtual Models
1.6.1 **Summary.**
2. **Definition.** Let \(M,N\) be countable virtual models. Then we define \(M\lhd_{\alpha}N\) to hold if there exists \(M^{\prime}\in N\) such that \(M^{\prime}\) with the inherited structure is a virtual model and \(M\cong_{\alpha}M^{\prime}\).
3. **Definition.** A virtual model \(M\) is said to be \(\xi\)_-strong_ if \(V_{\xi}\subseteq\widehat{M}\).
4. **Lemma.** Let \(N\) be an \((\omega+1)\)-strong virtual model and let \(M\in N\) be a countable virtual model. Then \(|M|^{N}=\omega\).
Proof.:
1. Let \(\pi:M\to\overline{M}\) be the transitive collapse of \((M,\in)\). We have \(\overline{M}\in H_{\omega_{1}}\), so there is a surjection \(f:\omega\to\overline{M}\).
2. We have \(f\in H_{\omega_{1}}\subseteq\widehat{N}\).
3. Thus, \(\widehat{N}\models(\overline{M}\) is countable).
4. By absoluteness of the transitive collapse, \(\widehat{N}\models(\overline{M},\in)\cong(M,\in)\).
5. Thus, \(\widehat{N}\models|M|=|\overline{M}|=\omega\) and consequently \(N\models|M|=\omega\).
1.6.5 **Proposition.** Suppose that \(\alpha>\omega\). Then relation \(\lhd_{\alpha}\) is a partial order between countable \((\omega+1)\)-strong virtual models.
Proof.:
2. Suppose that \(M,N,P\) are countable \((\omega+1)\)-strong virtual models satisfying \(M\lhd_{\alpha}N\lhd_{\alpha}P\).
3. There exist a virtual model \(M^{\prime}\in N\), an isomorphism \[f:\mathsf{Hull}(M,V_{\alpha})\cong\mathsf{Hull}(M^{\prime},V_{\alpha})\] with \(f[M]=M^{\prime}\), a virtual model \(N^{\prime}\in P\), and an isomorphism \[g:\mathsf{Hull}(N,V_{\alpha})\cong\mathsf{Hull}(N^{\prime},V_{\alpha})\] with \(g[N]=N^{\prime}\).
4. By [1.6.4], we have \(M^{\prime}\subseteq N\) and \(g(M^{\prime})=g[M^{\prime}]\).
5. For a function \(h\in M^{\prime}\) and \(x\in\mathsf{dom}(h)\cap V_{\alpha}^{<\omega}\), we have \[g(h(x))=g(h)(g(x))=g(h)(x).\] We conclude \[g[\mathsf{Hull}(M^{\prime},V_{\alpha})]=\mathsf{Hull}(g[M^{\prime}],V_{\alpha})=\mathsf{Hull}(g(M^{\prime}),V_{\alpha}).\]
* Thus, \(g(M^{\prime})\in P\) and \[g\circ f:\mathsf{Hull}(M,V_{\alpha})\cong\mathsf{Hull}(g(M^{\prime}),V_{\alpha}).\]
* We also verify \[(g\circ f)[M]=g[f[M]]=g[M^{\prime}]=g(M^{\prime}),\] which yields the conclusion \(M\lhd_{\alpha}P\).
**Proposition**.: Suppose that \(M,M^{\prime},N,N^{\prime}\) are countable virtual models such that \(M\cong_{\alpha}M^{\prime}\) and \(N\cong_{\alpha}N^{\prime}\). Then \(M\lhd_{\alpha}N\) if and only if \(M^{\prime}\lhd_{\alpha}N^{\prime}\).
**Proposition**.: Suppose that \(M,N\) are countable virtual models and \(M\in N\). Then \(M\lhd_{\alpha}N\).
**Proposition**.: Let \(\alpha\) be an ordinal and let \(M,N\) be countable virtual models. Suppose that \(M,\alpha\in N\), that \(M\) is \(\alpha\)-generated, and \(M\lhd_{\alpha}N\). Then \(M\in N\).
Proof.:
* There is a virtual model \(M^{\prime}\in N\) such that \(M\cong_{\alpha}M^{\prime}\).
* Since \(M\) is \(\alpha\)-generated, we have \(M=M^{\prime}\downarrow\alpha\in N\).
**Proposition**.: Let \(\alpha\) be a beth-fixed-point and let \(M,N\) be countable \(\alpha\)-strong virtual models such that \(M\lhd_{\alpha}N\). Let \(\theta\in M\cap V_{\alpha}\) be regular and uncountable. Then \(M\cap H_{\theta}\in N\cap H_{\theta}\).
Proof.: Since \(N\) is \(\alpha\)-strong and \(\theta<\alpha\), we have that \(H_{\theta}=H_{\theta}^{N}\in N\). Let \(M^{\prime}\in N\) be such that \(M\cong_{\alpha}M^{\prime}\). We have \(M\cap H_{\theta}=M^{\prime}\cap H_{\theta}\in N\). On the other hand, since \(M\cap H_{\theta}\in[H_{\theta}]^{\omega}\), we conclude \(M\cap H_{\theta}\in H_{\theta}\).
### Forcing Extensions of Virtual Models
* **Definition**.: Let \(A\) be admissible, let \(\mathbb{P}\in A\) be a poset, and let \(M\prec A\) with \(\mathbb{P}\in M\). For \(G\rightsquigarrow A^{\mathbb{P}}\), we define \[M[G]:=\{\tau^{G}:\tau\in M^{\mathbb{P}}=A^{\mathbb{P}}\cap M\},\quad M^{G}:=M[G ]\cap A.\]
* **Proposition**.: Let \(A\) be admissible, let \(\mathbb{P}\in A\) be a poset, let \(M\prec A\) with \(\mathbb{P}\in M\), and let \(G\rightsquigarrow A^{\mathbb{P}}\). Then the following holds.
* \(M\prec M^{G}\prec A\)
**Proposition**.: Let \(M\) be a virtual model and let \(\alpha\) be an ordinal such that \(M\) is \(\alpha\)-strong and \(\widehat{M}\upharpoonright\alpha\models\mathsf{ZFC}^{-}\), let \(\mathbb{P}\in M\cap V_{\alpha}\), and let \(G\rightsquigarrow V^{\mathbb{P}}\). Then \((\widehat{M}\upharpoonright\alpha)[G]\subseteq\widehat{M[G]}\).
**Lemma**.: Let \(M\) be a virtual model, let \(\alpha\in\widehat{M}\) satisfy \(\widehat{M}\upharpoonright\alpha\models\mathsf{ZFC}^{-}\), let \(\mathbb{P}\in M\cap V_{\alpha}\), and let \(G\in V\) satisfy \(G\rightsquigarrow\widehat{M}^{\mathbb{P}}\). Then \(\mathsf{Hull}(M[G],V_{\alpha})=\mathsf{Hull}(M,V_{\alpha})[G]\).
Proof.:
* We first consider inclusion \((\supseteq)\).
* Let \(\dot{y}\in\mathsf{Hull}(M,V_{\alpha})\) be a \(\mathbb{P}\)-name and let us verify \(\dot{y}^{G}\in\mathsf{Hull}(M[G],V_{\alpha})\).
* We have \(\dot{y}=f(x)\) for a function \(f\in M\) and an \(x\in V_{\alpha}\cap\mathsf{dom}(f)\). We may choose \(f\) so that \(\mathsf{im}(f)\subseteq\widehat{M}^{\mathbb{P}}\).
* Let \(g\in M[G]\) be the function satisfying \(\mathsf{dom}(g)=\mathsf{dom}(f)\) and \(g(u):=f(u)^{G}\). We have \(g\in M[G]\) and \(x\in\mathsf{dom}(g)\cap V_{\alpha}\).
* Hence, \(\dot{y}^{G}=f(x)^{G}=g(x)\in\mathsf{Hull}(M[G],V_{\alpha})\).
* We consider now inclusion \((\subseteq)\)
* Let us consider \(x_{0}\in\mathsf{Hull}(M[G],V_{\alpha})\) and let us show \(x_{0}\in\mathsf{Hull}(M,V_{\alpha})[G]\).
* There exist a function \(f\in M[G]\) and \(x\in V_{\alpha}\cap\mathsf{dom}(f)\) such that \(x_{0}=f(x)\).
* There exist \(\mathbb{P}\)-names \(\dot{f},\dot{d}\) such that \[\widehat{M}^{\mathbb{P}}\models(\dot{f}\text{ is a function on }\dot{d})\] and \(\dot{f}^{G}=f\).
* Note that \[x\in\widehat{M[G]}\upharpoonright\alpha=\widehat{M}[G]\upharpoonright\alpha=( \widehat{M}\upharpoonright\alpha)[G],\] so there exists a \(\mathbb{P}\)-name \(\dot{x}\in\widehat{M}\upharpoonright\alpha\) such that \(\dot{x}^{G}=x\).
* Let \(\xi\in M\) satisfy \(\xi>\alpha\) and let \(e:=\{\dot{y}\in V_{\xi}^{M}:\dot{y}\text{ is a $\mathbb{P}$-name}\}\in M\).
* There exists a function \(g\) such that \(\mathsf{dom}(g)=e\) and \[(\forall\dot{y}\in e)(\forall p\in\mathbb{P})(p\Vdash^{\widehat{M}}_{\mathbb{P}}\dot{y}\in\dot{d}\implies p\Vdash^{\widehat{M}}_{\mathbb{P}}\dot{f}(\dot{y})=g(\dot{y})).\] By elementarity, function \(g\) can be chosen in \(M\).
* We have \(\dot{z}:=g(\dot{x})\in\mathsf{Hull}(M,V_{\alpha})\) and \(\dot{z}^{G}\in\mathsf{Hull}(M,V_{\alpha})[G]\).
* Let \(p\in G\) be such that \(p\Vdash^{\widehat{M}}_{\mathbb{P}}\dot{x}\in\dot{d}\). Since \(\dot{x}\in e\), we have \(p\Vdash^{\widehat{M}}_{\mathbb{P}}\dot{f}(\dot{x})=\dot{z}\) and consequently \(\dot{f}^{G}(\dot{x}^{G})=\dot{z}^{G}\).
* Hence, \(x_{0}=f(x)=\dot{f}^{G}(\dot{x}^{G})=\dot{z}^{G}\in\mathsf{Hull}(M,V_{\alpha})[G]\).
**Proposition**.: Let \(M,N\) be virtual models and let \(\alpha\in\widehat{M}\cap\widehat{N}\). Suppose that
\[\widehat{M}\upharpoonright\alpha=\widehat{N}\upharpoonright\alpha\models \mathsf{ZFC}^{-}\]
and that \(M\cong_{\alpha}N\). Let \(\mathbb{P}\in M\cap V_{\alpha}\) and let \(G\rightsquigarrow\widehat{M}^{\mathbb{P}}\). Then
\[\mathbb{P}\in N\cap V_{\alpha},\quad G\rightsquigarrow\widehat{N}^{\mathbb{P}},\quad M[G]\cong_{\alpha}N[G],\quad M^{G}\cong_{\alpha}N^{G}.\]
Furthermore, the isomorphism witnessing \(M[G]\cong_{\alpha}N[G]\) extends the isomorphism witnessing \(M^{G}\cong_{\alpha}N^{G}\), which in turn extends the isomorphism witnessing \(M\cong_{\alpha}N\).
Proof.:
* \(M\cong_{\alpha}N\) implies \(M\cap V_{\alpha}=N\cap V_{\alpha}\), so \(\mathbb{P}\in N\cap V_{\alpha}\). The fact that \(G\rightsquigarrow\widehat{N}^{\mathbb{P}}\) is obvious.
* We verify below that \(M[G]\cong_{\alpha}N[G]\) and that the isomorphism witnessing this fact extends the isomorphism witnessing \(M\cong_{\alpha}N\). Since \(M^{G}\) and \(N^{G}\) are computed as the appropriate grounds, we will immediately have \(M^{G}\cong_{\alpha}N^{G}\), via the restricted isomorphism.
* Let \(F:\mathsf{Hull}(M,V_{\alpha})\cong\mathsf{Hull}(N,V_{\alpha})\) be such that \(F[M]=N\).
* By [1.7.4], every element of \(\mathsf{Hull}(M[G],V_{\alpha})\) is of the form \(\dot{x}^{G}\) for some \(\dot{x}\in\mathsf{Hull}(M,V_{\alpha})^{\mathbb{P}}\). Also, \(\dot{y}^{G}\in\mathsf{Hull}(N[G],V_{\alpha})\) for every \(\dot{y}\in\mathsf{Hull}(N,V_{\alpha})^{\mathbb{P}}\).
* Hence, we can define \(F[G]:\mathsf{Hull}(M[G],V_{\alpha})\to\mathsf{Hull}(N[G],V_{\alpha})\) by setting \(F[G](\dot{x}^{G}):=F(\dot{x})^{G}\) for \(\dot{x}\in\mathsf{Hull}(M,V_{\alpha})^{\mathbb{P}}\). We want to show that \(F[G]\) is an isomorphism, that \((F[G])[M[G]]=N[G]\), and that \(F[G]\upharpoonright\mathsf{Hull}(M,V_{\alpha})=F\).
* **Claim.** For \(*\in\{\in,=\}\) and \(\dot{x}_{0},\dot{x}_{1}\in\mathsf{Hull}(M,V_{\alpha})^{\mathbb{P}}\), we have \[\dot{x}_{0}^{G}*\dot{x}_{1}^{G}\iff F(\dot{x}_{0})^{G}*F(\dot{x}_{1})^{G}.\]
Proof.: \[\dot{x}_{0}^{G}\ast\dot{x}_{1}^{G} \Longleftrightarrow (\exists p\in G)\mathsf{Hull}(M,V_{\alpha})\models(p\Vdash\dot{x}_{0} \ast\dot{x}_{1})\] (5) \[\Longleftrightarrow (\exists p\in G)\mathsf{Hull}(N,V_{\alpha})\models(p\Vdash F( \dot{x}_{0})\ast F(\dot{x}_{1}))\] (6) \[\Longleftrightarrow F(\dot{x}_{0})^{G}\ast F(\dot{x}_{1})^{G},\] (7) where (6) follows from \(F\upharpoonright V_{\alpha}=\mathsf{id}\) and \(\mathbb{P}\in V_{\alpha}\).
* Hence, \(F[G]\) is a correctly defined injection agreeing with \(\in\).
* Analogously, we have an injection \[F^{-1}[G]:\mathsf{Hull}(N[G],V_{\alpha})\to\mathsf{Hull}(M[G],V_{\alpha})\] agreeing with \(\in\).
* Clearly, \(F[G]\circ F^{-1}[G]=\mathsf{id}\) and \(F^{-1}[G]\circ F[G]=\mathsf{id}\), leading to the conclusion that \[F[G]:\mathsf{Hull}(M[G],V_{\alpha})\cong\mathsf{Hull}(N[G],V_{\alpha}).\]
* For \(x\in\mathsf{Hull}(M,V_{\alpha})\), we have \[F[G](x)=F[G](\dot{x}^{G})=F(\dot{x})^{G}=((F(x))^{\cdot})^{G}=F(x).\]
* Hence, it remains to establish \((F[G])[M[G]]=N[G]\), i.e. \((F[G])[M[G]]\subseteq N[G]\) and \((F^{-1}[G])[N[G]]\subseteq M[G]\).
* We consider here only the first conjunct. Let \(y\in M[G]\) be arbitrary.
* By definition of \(M[G]\), we have \(y=\dot{x}^{G}\) for some \(\dot{x}\in M^{\mathbb{P}}\subseteq\mathsf{Hull}(M,V_{\alpha})^{\mathbb{P}}\).
* By definition of \(F[G]\), we have \(F[G](y)=F(\dot{x})^{G}\).
* Since \(\dot{x}\in M\) and \(F[M]=N\), we have \(F(\dot{x})\in N^{\mathbb{P}}\) and consequently \(F[G](y)=F(\dot{x})^{G}\in N[G]\).
**Corollary**.: Let \(M,N\) be countable virtual models and let \(\alpha\in\widehat{M}\cap\widehat{N}\). Suppose that
\[\widehat{M}\upharpoonright\alpha=\widehat{N}\upharpoonright\alpha\models \mathsf{ZFC}^{-}\]
and that \(M\lhd_{\alpha}N\). Let \(\mathbb{P}\in M\cap V_{\alpha}\) and let \(G\in V\) satisfy \(G\rightsquigarrow\widehat{M}^{\mathbb{P}}\). Then \(M[G]\) and \(N[G]\) are countable and \(M[G]\lhd_{\alpha}N[G]\).
Proof.:
* \(M,N\) surject onto \(M[G],N[G]\), respectively, so \(M[G],N[G]\) are countable.
* Let \(M^{\prime}\in N\) be a virtual model in \(V\) satisfying \(M\cong_{\alpha}M^{\prime}\).
* By [1.7.5], we have \(M[G]\cong_{\alpha}M^{\prime}[G]\in N[G]\).
* Thus, \(M[G]\lhd_{\alpha}N[G]\).
**Lemma**.: Let \(M\) be a virtual model and let \(\alpha\in\widehat{M}\) be such that \(\widehat{M}\upharpoonright\alpha\models\mathsf{ZFC}^{-}\). Suppose that \(M\) is \(\alpha\)-generated. Let \(\mathbb{P}\in M\cap V_{\alpha}\) and let \(G\in V\) satisfy \(G\rightsquigarrow\widehat{M}^{\mathbb{P}}\). Then \(M[G]\) and \(M^{G}\) are \(\alpha\)-generated.
Proof.: To see that \(M[G]\) is \(\alpha\)-generated, compute as follows:
\[\widehat{M[G]}=\widehat{M}[G]=\mathsf{Hull}(M,V_{\alpha})[G]=\mathsf{Hull}(M[G ],V_{\alpha}).\]
To see that \(M^{G}\) is \(\alpha\)-generated, recall that \(M\prec M^{G}\prec\widehat{M}\); hence \(\widehat{M^{G}}=\widehat{M}=\mathsf{Hull}(M,V_{\alpha})\subseteq\mathsf{Hull}(M^{G},V_{\alpha})\subseteq\widehat{M^{G}}\).
**Proposition**.: Let \(M\) be a virtual model and let \(\alpha\in\widehat{M}\) be such that \(\widehat{M}\upharpoonright\alpha\models\mathsf{ZFC}^{-}\), let \(\mathbb{P}\in M\cap V_{\alpha}\), and let \(G\in V\) satisfy \(G\rightsquigarrow\widehat{M}^{\mathbb{P}}\). Then \((M\downarrow\alpha)[G]=M[G]\downarrow\alpha\) and \((M\downarrow\alpha)^{G}=M^{G}\downarrow\alpha\).
Proof.: Since \(M\downarrow\alpha\cong_{\alpha}M\), we have \((M\downarrow\alpha)[G]\cong_{\alpha}M[G]\). Since \(M\downarrow\alpha\) is \(\alpha\)-generated, so is \((M\downarrow\alpha)[G]\) and we conclude
\[(M\downarrow\alpha)[G]=M[G]\downarrow\alpha.\]
The other part is completely analogous.
### Semi-proper Forcing
1. **Definition.** Let \(\mathbb{P}\) be a poset, let \(\theta>2^{2^{|\mathsf{trcl}(\mathbb{P})|}}\) be regular, and let \(M\prec(H_{\theta},\in,\mathbb{P})\) be countable. a. For \(G\rightsquigarrow V^{\mathbb{P}}\), we say that \(G\) is _semi-\(M^{\mathbb{P}}\)-generic_ if \(M[G]\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\). b. For \(p\in\mathbb{P}\), we say that \(p\) is _semi-\(M^{\mathbb{P}}\)-generic_ if \(p\Vdash M[\dot{G}]\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\).
2. **Lemma.** Let \(\mathbb{P}\) be a poset, let \(\theta>2^{2^{|\mathsf{trcl}(\mathbb{P})|}}\) be regular, and let \(M\prec(H_{\theta},\in,\mathbb{P})\) be countable. If \(p\in\mathbb{P}\) is semi-\(M^{\mathbb{P}}\)-generic, then \(p\Vdash\omega_{1}=\omega_{1}^{V}\).
Proof.:
1. Let \(g\rightsquigarrow V^{\mathbb{P}}\) with \(p\in g\).
2. Since \(M[g]\prec H_{\theta}^{V[g]}\), the ordinal \(\omega_{1}^{V[g]}\) belongs to \(M[g]\), and it is the least limit ordinal \(\xi\in M[g]\) with the property that \(\sup(M[g]\cap\xi)<\xi\).
3. Since \(M[g]\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\) and the latter set is an ordinal, every limit ordinal \(\xi\in M[g]\cap\omega_{1}^{V}\) satisfies \(\xi\subseteq M[g]\), i.e. \(\sup(M[g]\cap\xi)=\xi\); on the other hand, \(\sup(M[g]\cap\omega_{1}^{V})<\omega_{1}^{V}\). Hence the least such \(\xi\) is exactly \(\xi=\omega_{1}^{V}\), i.e. \(\omega_{1}^{V[g]}=\omega_{1}^{V}\).
**Definition.** Let \(\mathbb{P}\) be a poset and let \(\theta>2^{2^{|\mathsf{trcl}(\mathbb{P})|}}\) be regular. Poset \(\mathbb{P}\) is _semi-proper_ if for every countable \(M\prec(H_{\theta},\in,\mathbb{P})\) and every \(p\in\mathbb{P}\cap M\) there exists \(q\leq p\) which is semi-\(M^{\mathbb{P}}\)-generic.
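**Example.** Every proper poset is semi-proper: an \((M,\mathbb{P})\)-generic condition \(q\) forces \(M[\dot{G}]\cap\mathsf{Ord}^{V}=M\cap\mathsf{Ord}^{V}\), hence in particular \(M[\dot{G}]\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\). Consequently, all ccc and all \(\sigma\)-closed posets are semi-proper.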
1. **Proposition.** Suppose that \(\mathbb{P}\) is semi-proper. Then:
a. \(\mathbb{P}\Vdash\omega_{1}=\omega_{1}^{V}\);
b. for every stationary \(S\subseteq\omega_{1}\), it holds that \(\mathbb{P}\Vdash(S\) is stationary).
Proof.:
1. Let \(\theta\) be sufficiently large regular.
2. By [1.8.2], it suffices for a to show that \[D:=\{p\in\mathbb{P}:(\exists M\prec(H_{\theta},\in,\mathbb{P}))(|M|=\omega \wedge p\text{ is semi-}M^{\mathbb{P}}\text{-generic})\}\] is dense in \(\mathbb{P}\).
3. Let \(p_{0}\in\mathbb{P}\) be arbitrary and let \(M\prec(H_{\theta},\in,\mathbb{P},p_{0})\) be countable.
4. Since \(\mathbb{P}\) is semi-proper, there is \(p\leq p_{0}\) which is semi-\(M^{\mathbb{P}}\)-generic. Clearly, \(p\in D\).
5. For b, suppose that \(S\) is stationary in \(\omega_{1}\), that \(\Vdash\dot{f}:\omega_{1}\to\omega_{1}\), and let \(p\in\mathbb{P}\) be arbitrary. We want to find \(q\leq p\) and \(\xi\in S\) such that \(q\Vdash\dot{f}[\xi]\subseteq\xi\).
6. The set of all countable \(M\prec(H_{\theta},\in,\mathbb{P},p,\dot{f})\) is club in \([H_{\theta}]^{\omega}\).
7. Since \(S\subseteq\omega_{1}\) is stationary in \(\omega_{1}\), there exists a countable \(M\prec(H_{\theta},\in,\mathbb{P},p,\dot{f})\) satisfying \(\xi:=M\cap\omega_{1}\in S\).
8. There exists \(q\leq p\) which is semi-\(M^{\mathbb{P}}\)-generic.
9. For every \(g\rightsquigarrow V^{\mathbb{P}}\) with \(q\in g\), we have \(\xi=M[g]\cap\omega_{1}\) and \(M[g]\models\dot{f}^{g}:\omega_{1}\to\omega_{1}\). We conclude \(\dot{f}^{g}[\xi]\subseteq\xi\).
**Definition.** Let \(\alpha\) be a beth-fixed point, let \(M\) be a countable \(\alpha\)-strong virtual model, and let \(\mathbb{P}\in V_{\alpha}\cap M\) be a poset.
1. For \(G\rightsquigarrow V^{\mathbb{P}}\), we say that \(G\) is _semi-\(M^{\mathbb{P}}\)-generic_ if \(M[G]\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\).
2. For \(p\in\mathbb{P}\), we say that \(p\) is _semi-\(M^{\mathbb{P}}\)-generic_ if \(p\Vdash M[\dot{G}]\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\).
**Proposition.** Let \(\alpha\) be a beth-fixed point, let \(M\) be a countable \(\alpha\)-strong virtual model, let \(\mathbb{P}\in V_{\alpha}\cap M\) be a poset, and let \(\theta>2^{2^{|\mathsf{trcl}(\mathbb{P})|}}\) be a regular cardinal satisfying \(\theta<\alpha\). For every \(G\rightsquigarrow V^{\mathbb{P}}\), we have that \(G\) is semi-\(M^{\mathbb{P}}\)-generic in the sense of [1.8.5] if and only if it is semi-\((M\cap H_{\theta})^{\mathbb{P}}\)-generic in the sense of [1.8.1].
**Proposition.** Let \(\gamma\) satisfy \(V_{\gamma}\models\mathsf{ZFC}^{-}\), let \(M,N\) be countable \(\gamma\)-strong virtual models, let \(\mathbb{P}\in M\cap V_{\gamma}\), and let \(G\rightsquigarrow V^{\mathbb{P}}\). Suppose that \(M\cong_{\gamma}N\). Then \(G\) is semi-\(M^{\mathbb{P}}\)-generic if and only if it is semi-\(N^{\mathbb{P}}\)-generic.
Proof.: We have \(M[G]\cong_{\gamma}N[G]\) and consequently
\[M[G]\cap\omega_{1}^{V}=N[G]\cap\omega_{1}^{V}.\]
**Corollary.** Let \(\gamma\) satisfy \(V_{\gamma}\models\mathsf{ZFC}^{-}\), let \(M\) be a countable \(\gamma\)-strong virtual model, let \(\mathbb{P}\in M\cap V_{\gamma}\), and let \(p\in\mathbb{P}\). Then \(p\) is semi-\(M^{\mathbb{P}}\)-generic if and only if it is semi-\((M\downarrow\gamma)^{\mathbb{P}}\)-generic.
### Iterated Forcing Extensions of Virtual Models
2. **Definition.** A _forcing iteration (with support \(E\))_ is a family \(\vec{\mathbb{P}}=(\mathbb{P}_{\alpha}:\alpha\in E)\) where: 1. [label=0.9.0] 2. \(E\subseteq\mathsf{Ord}\) and \(E\neq\emptyset\); 3. \(\mathbb{P}_{\alpha}\) is a poset for all \(\alpha\); 4. \(\mathbb{P}_{\alpha}\) is a complete sub-poset of \(\mathbb{P}_{\beta}\) for all \(\alpha\leq\beta\) in \(E\).
3. **Proposition.** Let \(\vec{\mathbb{P}}\) be an iteration with support \(E\) and let \(\mathbb{Q}:=\cup\{\mathbb{P}_{\alpha}:\alpha\in E\}\). Then \(\vec{\mathbb{P}}^{\frown}\mathbb{Q}\) is an iteration with support \(E\cup\{\sup\{\xi+1:\xi\in E\}\}\).
4. **Definition.** Let \(\vec{\mathbb{P}}\) be a forcing iteration with support \(E\), let \(\delta\in E\), let \(\alpha\in E\cap\delta\), and let \(G_{\delta}\rightsquigarrow V^{\mathbb{P}_{\delta}}\). Then the _\(\alpha\)-restriction of the generic \(G_{\delta}\)_ is defined as \(G_{\alpha}:=G_{\delta}\cap\mathbb{P}_{\alpha}\).
5. **Remark.** It holds that \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\).
6. **Definition.** Let \(A\) be admissible and let \(\vec{\mathbb{P}}\) be a forcing iteration with support \(E\). Iteration \(\vec{\mathbb{P}}\) is said to be _point-wise definable over \(A\)_ if \(\vec{\mathbb{P}}\subseteq A\) and for every \(\alpha\in E\), constant \(\mathbb{P}_{\alpha}\) is definable over \(A\) from the parameter \(\alpha\).
7. **Definition.** Let \(A\) be admissible and let \(\vec{\mathbb{P}}\) be an iteration with support \(E\). Suppose that \(\vec{\mathbb{P}}\) is point-wise definable over \(A\). Let \(M\prec A\), let \(\delta\in E\), and let \(G_{\delta}\rightsquigarrow V^{\mathbb{P}_{\delta}}\). Elementary submodels \(M_{<\alpha}^{G_{\delta}}\) and \(M_{\alpha}^{G_{\delta}}\) of \(A\) and elementary submodels \(M_{\alpha}[G_{\delta}]\) of \(A[G_{\alpha}]\) (inside \(V[G_{\alpha}]\)) are defined by recursion on \(\alpha\in E\cap[0,\delta]\), as follows. 1. [label=0.9.0] 2. \(M_{<\alpha}^{G_{\delta}}:=M\cup\bigcup_{\xi\in E\cap\alpha}M_{\xi}^{G_{\delta}}\) 3. if \(\alpha\in M_{<\alpha}^{G_{\delta}}\), we define \(M_{\alpha}[G_{\delta}]:=M_{<\alpha}^{G_{\delta}}[G_{\alpha}]\) and \(M_{\alpha}^{G_{\delta}}:=(M_{<\alpha}^{G_{\delta}})^{G_{\alpha}}=M_{\alpha}[G_ {\delta}]\cap A\); 4. if \(\alpha\not\in M_{<\alpha}^{G_{\delta}}\), we define \(M_{\alpha}^{G_{\delta}}:=M_{<\alpha}^{G_{\delta}}\) and leave \(M_{\alpha}[G_{\delta}]\) undefined.
8. **Proposition.** The previous definition is correct and for all \(\alpha,\beta\in E\), we have: 1. [label=0.9.0] 2. \(M_{<\alpha}^{G_{\delta}}\prec M_{\alpha}^{G_{\delta}}\prec A\) 3. \(M_{\alpha}^{G_{\delta}}\prec M_{<\beta}^{G_{\delta}}\) whenever \(\alpha<\beta\); 4. \(M_{\alpha}[G_{\delta}]\prec A[G_{\alpha}]\) whenever \(\alpha\in M_{<\alpha}^{G_{\delta}}\).
**Proposition.** Let \(A\) be admissible, let \(\vec{\mathbb{P}}\) be an iteration with support \(E\) that is point-wise definable over \(A\), let \(M\prec A\), let \(\delta_{1},\delta_{2}\in E\) with \(\delta_{1}\leq\delta_{2}\), and let \(G_{\delta_{2}}\rightsquigarrow V^{\mathbb{P}_{\delta_{2}}}\). Then for all \(\alpha\in E\cap[0,\delta_{1}]\), it holds that:
1. \(M_{<\alpha}^{G_{\delta_{1}}}=M_{<\alpha}^{G_{\delta_{2}}}\);
2. \(M_{\alpha}^{G_{\delta_{1}}}=M_{\alpha}^{G_{\delta_{2}}}\);
3. \(M_{\alpha}[G_{\delta_{1}}]=M_{\alpha}[G_{\delta_{2}}]\) whenever either of them is defined.
**Proposition**.: Let \(A\) be admissible, let \(\vec{\mathbb{P}}\) be an iteration with support \(E\) that is point-wise definable over \(A\), let \(M\prec A\), let \(\delta\in E\), let \(G_{\delta}\rightsquigarrow V^{\mathbb{P}_{\delta}}\), and let \(\alpha\in E\cap[0,\delta]\). Suppose that \(\alpha\in M\). Then
\[M_{\alpha}^{G_{\delta}}=M^{G_{\alpha}},\quad M_{\alpha}[G_{\delta}]=M[G_{ \alpha}].\]
Proof.:
1. It suffices to prove \(M_{\alpha}[G_{\delta}]=M[G_{\alpha}]\).
2. We have \(M\subseteq M_{<\alpha}^{G_{\delta}}\), so we conclude \[M[G_{\alpha}]\subseteq M_{<\alpha}^{G_{\delta}}[G_{\alpha}]=M_{\alpha}[G_{ \delta}].\]
3. We consider now the other inclusion. Let us first verify that \[M_{<\alpha}^{G_{\delta}}\subseteq M[G_{\alpha}].\]
4. This is done by inductively showing \[M_{\xi}^{G_{\delta}}\subseteq M[G_{\alpha}]\] for all \(\xi\in E\cap\alpha\).
5. Assume that the statement is true for all \(\eta\in E\cap\xi\) and let us verify it for \(\xi\).
6. By the assumption, \[M_{<\xi}^{G_{\delta}}=\bigcup_{\eta\in E\cap\xi}M_{\eta}^{G_{\delta}}\subseteq M [G_{\alpha}].\]
7. If \(\xi\not\in M_{<\xi}^{G_{\delta}}\), then \[M_{\xi}^{G_{\delta}}=M_{<\xi}^{G_{\delta}}\subseteq M[G_{\alpha}].\] Hence, let us assume \(\xi\in M_{<\xi}^{G_{\delta}}\)
8. Let \(x\in M_{\xi}^{G_{\delta}}\) be arbitrary. Then there exists \(\dot{x}\in M_{<\xi}^{G_{\delta}}\cap V^{\mathbb{P}_{\xi}}\) such that \(x=\dot{x}^{G_{\xi}}\).
9. By \(6^{\circ}\), we have \(\dot{x},\xi\in M[G_{\alpha}]\).
10. Since also \(G_{\alpha}\in M[G_{\alpha}]\), we conclude \(G_{\xi}\in M[G_{\alpha}]\).
11. Hence, \(x=\dot{x}^{G_{\xi}}\in M[G_{\alpha}]\).
12. This concludes the induction. Let us now go back to the main point, i.e. \(M_{\alpha}[G_{\delta}]\subseteq M[G_{\alpha}]\).
13. Let \(x\in M_{\alpha}[G_{\delta}]\) be arbitrary. Then there exists \(\dot{x}\in M_{<\alpha}^{G_{\delta}}\cap V^{\mathbb{P}_{\alpha}}\) such that \(x=\dot{x}^{G_{\alpha}}\).
14. By what we have just proved, we have \(\dot{x}\in M[G_{\alpha}]\).
15. Since \(G_{\alpha}\in M[G_{\alpha}]\) as well, we conclude \(x=\dot{x}^{G_{\alpha}}\in M[G_{\alpha}]\).
**Notation**.: Let \(A\) be admissible, let \(\vec{\mathbb{P}}\) be an iteration with support \(E\) that is point-wise definable over \(A\), let \(M\prec A\), let \(\delta\in E\), let \(G_{\delta}\rightsquigarrow V^{\mathbb{P}_{\delta}}\), and let \(\alpha\in E\cap[0,\delta]\). We shall henceforth write
\[M^{G_{\alpha}}:=M_{\alpha}^{G_{\delta}},\quad M[G_{\alpha}]:=M_{\alpha}[G_{ \delta}].\]
By the previous two propositions, this notation introduces no ambiguity.
**Proposition**.: Let \(A\) be admissible, let \(\vec{\mathbb{P}}\) be an iteration with support \(E\) which is point-wise definable over \(A\), let \(M\prec A\), let \(\alpha,\delta\in E\) with \(\alpha\leq\delta\), and let \(G_{\delta}\rightsquigarrow V^{\mathbb{P}_{\delta}}\). Suppose that \(\delta\in M^{G_{\alpha}}\). Then
\[M[G_{\delta}]=M^{G_{\alpha}}[G_{\delta}]\text{ and }M^{G_{\delta}}=(M^{G_{ \alpha}})^{G_{\delta}}.\]
Proof.: This is in fact obvious. Namely, the second equality simply states that extending \(M\) to \(M^{G_{\delta}}\) according to the iteration \((\mathbb{P}_{\xi}:\xi\in E\cap[0,\delta])\) is the same as extending \(M\) to \(M^{G_{\alpha}}\) according to \((\mathbb{P}_{\xi}:\xi\in E\cap[0,\alpha])\) and then extending \(M^{G_{\alpha}}\) to \((M^{G_{\alpha}})^{G_{\delta}}\) according to \((\mathbb{P}_{\xi}:\xi\in E\cap(\alpha,\delta])\). An additional note is only required in the case \(\alpha=\delta\in M^{G_{\delta}}\), where we apply e of [1.7.2]. For the first equality, we now have
\[M^{G_{\alpha}}[G_{\delta}]=(M^{G_{\alpha}})^{G_{\delta}}[G_{\delta}]=M^{G_{\delta}}[G_{\delta}]=(M^{G_{\delta}}_{<\delta})^{G_{\delta}}[G_{\delta}]=M^{G_{\delta}}_{<\delta}[G_{\delta}]=M[G_{\delta}],\]
where the first equality follows from [1.7.2], the second follows from what we have just proved, the third by definition, the fourth again by [1.7.2], and the fifth by definition.
**Proposition**.: Let \(\gamma\) be such that \(V_{\gamma}\models\mathsf{ZFC}^{-}\), let \(A\) be admissible, let \(M\prec A\) be a \(\gamma\)-strong virtual model, let \(\vec{\mathbb{P}}\in V_{\gamma}\) be an iteration with support \(E\) which is point-wise definable over \(A\), let \(\alpha\in E\), and let \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\). Suppose that \(M[G_{\alpha}]\) is defined. Then \(M[G_{\alpha}]\) is \(\gamma\)-strong in \(V[G_{\alpha}]\).
Proof.: This follows immediately from [1.7.3].
**Proposition**.: Let \(\gamma\) be such that \(V_{\gamma}\models\mathsf{ZFC}^{-}\), let \(A\) be admissible, let \(M,N\prec A\) be \(\gamma\)-strong virtual models, let \(\vec{\mathbb{P}}\in V_{\gamma}\) be an iteration with support \(E\) which is point-wise definable over \(A\), let \(\alpha\in E\), and let \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\). Suppose that \(M\cong_{\gamma}N\). Then \(M^{G_{\alpha}}\cong_{\gamma}N^{G_{\alpha}}\) and the isomorphism witnessing this fact extends the isomorphism witnessing \(M\cong_{\gamma}N\).
Proof.: This follows by induction on \(\alpha\). To handle the successor step, we simply apply [1.7.5]. Since the isomorphisms extend each other, the limit step is handled by simply taking the union of the previous isomorphisms.
**Proposition**.: Let \(\gamma\) be such that \(V_{\gamma}\models\mathsf{ZFC}^{-}\), let \(A\) be admissible, let \(M,N\prec A\) be \(\gamma\)-strong virtual models, let \(\vec{\mathbb{P}}\in V_{\gamma}\) be an iteration with support \(E\) which is point-wise definable over \(A\), let \(\alpha\in E\), and let \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\). Suppose that \(M[G_{\alpha}]\) is defined. Then \(N[G_{\alpha}]\) is also defined and \(M[G_{\alpha}]\cong_{\gamma}N[G_{\alpha}]\).
Proof.: This follows from the previous proposition and [1.7.5].
## 2 Semi-proper Iteration
### Setup for the Iteration
* **Definition**.: A _blueprint for a semi-proper iteration_ is a pair \((\mathbb{V},\mathbf{Q})\), where \(\mathbb{V}=(V_{\kappa},\in,U)\), satisfying
* \(\kappa\) is inaccessible,
* \(U\subseteq V_{\kappa}\),
* \(\mathbf{Q}:\kappa\times V_{\kappa}\to V_{\kappa}\) is definable without parameters over \(\mathbb{V}\),
* for all \(\alpha<\kappa\) and for all \(\mathbb{P}\in V_{\kappa}\), if \(\mathbb{P}\) is a poset, then \(\mathbf{Q}(\alpha,\mathbb{P})\in V^{\mathbb{P}}\) and \(\Vdash_{\mathbb{P}}\)"\(\mathbf{Q}(\alpha,\mathbb{P})\) is semi-proper".
* **Declaration**.: We fix a blueprint \((\mathbb{V},\mathbf{Q})\) for a semi-proper iteration, where \(\mathbb{V}=(V_{\kappa},\in,U)\). We will assume for the rest of the notes that the default language for virtual models is \(\{\in,\dot{U}\}\), where \(\dot{U}\) is a unary predicate symbol (interpreted in \(\mathbb{V}\) by \(U\)).
* For \(\alpha<\kappa\), we denote \(\mathbb{V}_{\alpha}:=(V_{\alpha},\in,U\cap V_{\alpha})\).
* We denote \(\mathscr{E}:=\mathscr{E}_{\mathbb{V}}=\{\alpha<\kappa:\mathbb{V}_{\alpha} \prec\mathbb{V}\}\).
* For \(\alpha\in\mathscr{E}\), we say that a virtual model \(M\) is _\(\alpha\)-correct_ if \(\mathbb{V}_{\alpha}\prec\widehat{M}\).
* For \(\alpha\in\mathscr{E}\), we define the set \[\mathcal{C}_{\alpha}:=\{M:M\text{ is a countable, $\alpha$-correct, and $\alpha$-generated virtual model}\}.\]
* For \(S\subseteq\mathsf{Ord}\), we define \[\mathcal{C}_{S}:=\bigcup_{\alpha\in S\cap\mathscr{E}}\mathcal{C}_{\alpha}.\]
* **Remark**.: Traditionally, forcing iterations are constructed as follows: \[\mathbb{P}_{0} := \{1\}\] \[\mathbb{P}_{\alpha+1} := \mathbb{P}_{\alpha}*\dot{\mathbb{Q}}_{\alpha}\] \[\mathbb{P}_{\lambda} \subseteq \{p:(\forall\alpha<\lambda)(p\upharpoonright\alpha\in\mathbb{P}_{\alpha})\}\] (\(\lambda\) limit). In the limit step, threads are required to satisfy some sort of "support condition". In our iteration, we will double the steps: after adding a poset \(\dot{\mathbb{Q}}_{\alpha}\), we will have an additional step of adding a "scaffolding". At the limit stages, the threading will now be controlled by that scaffolding.
* For convenience, the iteration will be indexed by \(\mathscr{E}^{*}\) (where \(\mathscr{E}^{*}\) consists of the ordinals \(\alpha\) and \(\alpha+1\) for \(\alpha\in\mathscr{E}\)) and the initial poset \(\mathbb{P}_{\min\mathscr{E}}\) will not be equal to \(\{1\}\). Nevertheless, poset \(\mathbb{P}_{\min\mathscr{E}}\) will still be trivial.
* Stages \(\alpha\in\mathscr{E}^{+}\) correspond to adding a poset.
* Stages \(\alpha\) where \(\alpha\) is a successor point of \(\mathscr{E}\) correspond to the operation of adding a scaffolding (see the schematic after this remark).
* Limit points of \(\mathscr{E}\) are limit stages of the iteration.
* The canonical name for a \(\mathbb{P}_{\alpha}\)-generic will be denoted by \(\dot{G}_{\alpha}\) (for \(\alpha\in\mathscr{E}^{*}\)). Accordingly, a \(\mathbb{P}_{\alpha}\)-generic will be denoted by \(G_{\alpha}\), and for \(\beta\in\mathscr{E}^{*}\cap\alpha\) we will write \(G_{\beta}:=G_{\alpha}\cap\mathbb{P}_{\beta}\).
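* Schematically, for \(\gamma\in\mathscr{E}\) with successor \(\delta\) inside \(\mathscr{E}\), the two-step pattern matching the definitions of Subsection 2.2 is
\[\mathbb{P}_{\gamma}\ \longrightarrow\ \mathbb{P}_{\gamma+1}\ (\text{forcing with }\dot{\mathbb{Q}}_{\gamma})\ \longrightarrow\ \mathbb{P}_{\delta}\ (\text{scaffolding: adding models from }\mathcal{C}_{\leq\delta}),\]
and at limit points of \(\mathscr{E}\) the threads are controlled by the models already placed on the scaffolding.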
* **Remark**.: The iteration \((\mathbb{P}_{\alpha}:\alpha\in\mathscr{E}^{*})\) is defined by recursion. The recursive step of the definition is described in the following subsection.
### Recursive Step of the Definition
* Let \(\delta\in\mathscr{E}\) and let \(\vec{\mathbb{P}}=(\mathbb{P}_{\alpha}:\alpha\in\mathscr{E}^{*}\cap\delta)\) be a forcing iteration.
* Suppose that \(\vec{\mathbb{P}}\) is point-wise definable over \(\mathbb{V}\).
* Suppose that for every \(\alpha\in\mathscr{E}\cap\delta\) and every \(p\in\mathbb{P}_{\alpha}\), it holds that \(p=(w_{p},\mathcal{M}_{p})\) for some finite partial function \(w_{p}:\mathscr{E}\cap\alpha\to V_{\kappa}\) and some finite \(\mathcal{M}_{p}\subseteq\mathcal{C}_{\leq\alpha}\).
* Let \((\dot{\mathbb{Q}}_{\alpha}:\alpha\in\mathscr{E}\cap\delta)\) be a sequence of names satisfying \(\dot{\mathbb{Q}}_{\alpha}\in V^{\mathbb{P}_{\alpha}}\) and \[\Vdash_{\mathbb{P}_{\alpha}}\dot{\mathbb{Q}}_{\alpha}\text{ is a semi-proper poset}\] for every \(\alpha\).
* **Definition.** Suppose that \(\delta=\min\mathscr{E}\). Then _poset_\(\mathbb{P}_{\delta}\) consists of all pairs \(p=(w_{p},\mathcal{M}_{p})\) where \(w_{p}=\emptyset\) and \(\mathcal{M}_{p}\) is a finite subset of \(\mathcal{C}_{\delta}\). _The order \(q\leq p\) in \(\mathbb{P}_{\delta}\)_ holds if and only if \(\mathcal{M}_{q}\supseteq\mathcal{M}_{p}\).
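* **Remark.** Note that \(\mathbb{P}_{\delta}\) is indeed trivial as a forcing notion: any two conditions \(p,q\in\mathbb{P}_{\delta}\) are compatible, as witnessed by \((\emptyset,\mathcal{M}_{p}\cup\mathcal{M}_{q})\), so every antichain is a singleton.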
* Let \(M\in\mathcal{C}_{\geq\delta}\), let \(\alpha\in\mathscr{E}\cap\delta\), and let \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\). We say that \(M^{G_{\alpha}}\)_is active at \(\delta\)_ in either of the following cases:
* there exists a predecessor \(\gamma\) of ordinal \(\delta\) inside \(\mathscr{E}\) and \(\gamma\in M^{G_{\alpha}}\);
* ordinal \(\delta\) is a limit point of \(\mathscr{E}\) and \(\sup(M^{G_{\alpha}}\cap\mathscr{E}\cap\delta)=\delta\).
* **Definition.** Let \(\mathcal{M}\subseteq\mathcal{C}_{[0,\kappa]}\), let \(\alpha\in\mathscr{E}\cap\delta\), and let \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\). We define \[\mathcal{M}^{\delta}[G_{\alpha}]:=\{M\downarrow\delta:M\in\mathcal{M},\,M^{G_{\alpha}}\text{ is active at }\delta\}.\]
* Let \(\mathcal{M}\subseteq\mathcal{C}_{\delta}\), let \(\alpha<\delta\), and let \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\). The set \(\mathcal{M}\) is a _weak \(\lhd_{\delta}\)-chain w.r.t. \(G_{\alpha}\)_ if for every \(M,N\in\mathcal{M}\), it holds that:
* \(\omega_{1}\cap M=\omega_{1}\cap N\implies M=N\);
* \(\omega_{1}\cap M<\omega_{1}\cap N\implies M\lhd_{\delta}N^{G_{\alpha}}\).
* **Remark.** If in the second point the model \(N^{G_{\alpha}}\) is replaced by \(N\), we get the usual notion of a \(\lhd_{\delta}\)-chain.
* **Remark.** Suppose that \(\alpha\leq\beta<\delta\), that \(G_{\beta}\rightsquigarrow V^{\mathbb{P}_{\beta}}\), and that \(\mathcal{M}\) is a weak \(\lhd_{\delta}\)-chain w.r.t. \(G_{\alpha}\). Then \(\mathcal{M}\) is a weak \(\lhd_{\delta}\)-chain w.r.t. \(G_{\beta}\).
* **Notation.** Let \(p=(w_{p},\mathcal{M}_{p})\) where \(w_{p}\) is a partial function on \(\mathscr{E}\) and \(\mathcal{M}_{p}\subseteq\mathcal{C}_{[0,\kappa]}\) and let \(\alpha\in\mathscr{E}\). Then \[\mathcal{M}_{p}\downarrow\alpha :=\{M\downarrow\alpha:M\in\mathcal{M}_{p}\},\] \[p\upharpoonright\alpha :=(w_{p}\upharpoonright\alpha,\mathcal{M}_{p}\downarrow\alpha),\] \[p\upharpoonright(\alpha+1) :=(w_{p}\upharpoonright(\alpha+1),\mathcal{M}_{p}\downarrow\alpha).\]
* **Definition.** Suppose that \(\delta\) has a predecessor \(\gamma\) inside \(\mathscr{E}\). Then _the poset_ \(\mathbb{P}_{\delta}\) consists of all pairs \(p=(w_{p},\mathcal{M}_{p})\) satisfying:
* object \(w_{p}\) is a finite partial function \(\mathscr{E}\cap\delta\to V_{\kappa}\);
* object \(\mathcal{M}_{p}\) is a finite subset of \(\mathcal{C}_{\leq\delta}\);
* \(p\upharpoonright\alpha\in\mathbb{P}_{\alpha}\) for every \(\alpha\in\mathscr{E}^{*}\cap\delta\);
* \(p\upharpoonright(\gamma+1)\Vdash_{\mathbb{P}_{\gamma+1}}\) (\(\mathcal{M}_{p}^{\delta}[\dot{G}_{\gamma}]\) is a weak \(\lhd_{\delta}\)-chain w.r.t. \(\dot{G}_{\gamma+1}\));
* if \(\gamma\in\mathsf{dom}(w_{p})\), then \[p\upharpoonright\gamma\Vdash_{\mathbb{P}_{\gamma}}(\forall M\in\mathcal{M}_{p}^{\delta}[\dot{G}_{\gamma}])(w_{p}(\gamma)\text{ is semi-}M[\dot{G}_{\gamma}]^{\dot{\mathbb{Q}}_{\gamma}}\text{-generic}).\] _The order \(q\leq p\) in \(\mathbb{P}_{\delta}\)_ holds if and only if:
* \(q\upharpoonright\alpha\leq_{\mathbb{P}_{\alpha}}p\upharpoonright\alpha\) for every \(\alpha\in\mathscr{E}^{*}\cap\delta\);
* \(\mathcal{M}_{q}\cap\mathcal{C}_{\delta}\supseteq\mathcal{M}_{p}\cap\mathcal{C}_{\delta}\).
* **Definition.** Suppose that \(\delta\) is a limit point of \(\mathscr{E}\). Then _the poset_ \(\mathbb{P}_{\delta}\) consists of all pairs \(p=(w_{p},\mathcal{M}_{p})\) satisfying: 1. object \(w_{p}\) is a finite partial function \(\mathscr{E}\cap\delta\to V_{\kappa}\); 2. object \(\mathcal{M}_{p}\) is a finite subset of \(\mathcal{C}_{\leq\delta}\); 3. \(p\upharpoonright\alpha\in\mathbb{P}_{\alpha}\) for every \(\alpha\in\mathscr{E}^{*}\cap\delta\); 4. there exists \(\delta_{0}<\delta\) such that for every \(\alpha\in\mathscr{E}\cap(\delta_{0},\delta)\) we have that \[p\upharpoonright(\alpha+1)\Vdash_{\mathbb{P}_{\alpha+1}}(\mathcal{M}_{p}^{\delta}[\dot{G}_{\alpha}]\text{ is a weak }\lhd_{\delta}\text{-chain w.r.t. }\dot{G}_{\alpha+1}).\] _The order_ \(q\leq p\) _in_ \(\mathbb{P}_{\delta}\) holds if and only if: 1. \(q\upharpoonright\alpha\leq_{\mathbb{P}_{\alpha}}p\upharpoonright\alpha\) for every \(\alpha\in\mathscr{E}^{*}\cap\delta\); 2. \(\mathcal{M}_{q}\cap\mathcal{C}_{\delta}\supseteq\mathcal{M}_{p}\cap\mathcal{C}_{\delta}\).
* **Definition.**\(\dot{\mathbb{Q}}_{\delta}:=\mathbf{Q}(\delta,\mathbb{P}_{\delta})\)
* **Definition.**_Poset \(\mathbb{P}_{\delta+1}\)_ consists of all pairs \((w_{p},\mathcal{M}_{p})\) satisfying: 1. object \(w_{p}\) is a finite partial function of the type \(\mathscr{E}\cap(\delta+1)\to V_{\kappa}\); 2. object \(\mathcal{M}_{p}\) is a finite subset of \(\mathcal{C}_{\leq\delta}\); 3. \(p\upharpoonright\delta\in\mathbb{P}_{\delta}\) 4. if \(\delta\in\operatorname{\mathsf{dom}}(w_{p})\), then \(w_{p}(\delta)\) is a canonical \(\mathbb{P}_{\delta}\)-name for an element of \(\dot{\mathbb{Q}}_{\delta}\). _The order_ \(q\leq p\) _in_\(\mathbb{P}_{\delta+1}\) holds if and only if: 1. \(q\upharpoonright\delta\leq_{\mathbb{P}_{\delta}}p\upharpoonright\delta\) 2. if \(\delta\in\operatorname{\mathsf{dom}}(w_{p})\), then \(\delta\in\operatorname{\mathsf{dom}}(w_{q})\) and \(q\upharpoonright\delta\Vdash_{\mathbb{P}_{\delta}}w_{q}(\delta)\leq w_{p}(\delta)\).
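* **Remark.** Modulo the standard identification of names, \(\mathbb{P}_{\delta+1}\) is a dense suborder of the two-step iteration \(\mathbb{P}_{\delta}*\dot{\mathbb{Q}}_{\delta}\): the map \(p\mapsto(p\upharpoonright\delta,w_{p}(\delta))\) (reading \(w_{p}(\delta)\) as the name for the largest element of \(\dot{\mathbb{Q}}_{\delta}\) when \(\delta\not\in\mathsf{dom}(w_{p})\)) is order preserving, and its image is dense since every \(\mathbb{P}_{\delta}\)-name for an element of \(\dot{\mathbb{Q}}_{\delta}\) is forced to be equal to a canonical one.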
### Basic Properties
* **Definition.**_The semi-proper iteration given by the blueprint \((\mathbb{V},\mathbf{Q})\)_ is obtained by recursively iterating the construction of Subsection 2.2.
* **Definition.** Let \(\mathbb{P}\) and \(\mathbb{Q}\) be posets and let \(\pi:\mathbb{Q}\to\mathbb{P}\). Suppose that \(\mathbb{P}\) is a suborder of \(\mathbb{Q}\). Then \(\pi\) is said to be a _restriction of conditions_ if 1. \((\forall p\in\mathbb{P})(\pi(p)=p)\), 2. \((\forall p,q\in\mathbb{Q})(p\leq q\implies\pi(p)\leq\pi(q))\), 3. \((\forall p\in\mathbb{P})(\forall q\in\mathbb{Q})(p\leq\pi(q)\implies p\parallel q)\), i.e. \(p\) and \(q\) are compatible in \(\mathbb{Q}\).
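* **Remark.** These three conditions are exactly what is needed in the verifications below: if \(A\subseteq\mathbb{P}\) is a maximal antichain and \(q\in\mathbb{Q}\), then picking \(a\in A\) and \(p\in\mathbb{P}\) with \(p\leq_{\mathbb{P}}a,\pi(q)\), condition 3 gives \(p\parallel q\), and any witness to this compatibility also witnesses \(a\parallel q\). Hence \(A\) remains predense in \(\mathbb{Q}\), so \(\mathbb{P}\) is a complete subposet of \(\mathbb{Q}\).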
* **Proposition.**\((\mathbb{P}_{\alpha}:\alpha\in\mathscr{E}^{*})\) is a correctly defined forcing iteration. For elements \(\alpha\leq\delta\) of \(\mathscr{E}^{*}\), the mapping \(\mathbb{P}_{\delta}\to\mathbb{P}_{\alpha}:p\mapsto p\upharpoonright\alpha\) is a restriction of conditions.
Proof.:
* Let \(\delta\in\mathscr{E}\) and let us assume recursively that \((\mathbb{P}_{\alpha}:\alpha\in\mathscr{E}^{*}\cap\delta)\) has been defined correctly and that it is an iteration.
* We can now define the poset \(\mathbb{P}_{\delta}\). We need to verify \(\mathbb{P}_{\alpha}\) is a complete subposet of \(\mathbb{P}_{\delta}\) for every \(\alpha\in\mathscr{E}^{*}\cap\delta\).
* It is immediate from the definitions that \(\mathbb{P}_{\alpha}\) is a suborder of \(\mathbb{P}_{\delta}\). It suffices for the conclusion to verify that the mapping \(\mathbb{P}_{\delta}\to\mathbb{P}_{\alpha}:p\mapsto p\upharpoonright\alpha\) is a restriction of conditions, which is also obvious.
* We can now define the poset \(\mathbb{P}_{\delta+1}\).
* It is clear that \(\mathbb{P}_{\delta}\) is a suborder of \(\mathbb{P}_{\delta+1}\) and it is a matter of routine to verify that the mapping \(\mathbb{P}_{\delta+1}\to\mathbb{P}_{\delta}:p\mapsto p\upharpoonright\delta\) is a restriction of conditions. Hence, \(\mathbb{P}_{\delta}\) is a complete subposet of \(\mathbb{P}_{\delta+1}\).
**Definition**.: \(\mathbb{P}_{\kappa}:=\bigcup_{\alpha\in\mathscr{E}^{*}}\mathbb{P}_{\alpha}\)
**Proposition**.: \((\mathbb{P}_{\alpha}:\alpha\in\mathscr{E}^{*}\cup\{\kappa\})\) is a forcing iteration. The mapping \(\mathbb{P}_{\kappa}\to\mathbb{P}_{\alpha}:p\mapsto p\upharpoonright\alpha\) is a restriction of conditions for all \(\alpha\in\mathscr{E}^{*}\).
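**Remark**.: In other words, \(\mathbb{P}_{\kappa}\) is the direct limit of the iteration: every condition already belongs to some \(\mathbb{P}_{\alpha}\) with \(\alpha\in\mathscr{E}^{*}\), and the order is computed in any such \(\mathbb{P}_{\alpha}\).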
**Proposition**.: Let \(\delta\in\mathscr{E}\).
* Let \(p=(w_{p},\mathcal{M}_{p})\). Then \(p\in\mathbb{P}_{\delta}\) if and only if the following holds:
* \(w_{p}\) is a finite partial function from \(\mathscr{E}\cap\delta\) into \(V_{\kappa}\);
* for all \(\gamma\in(\min\mathscr{E},\delta]\), there exists \(\beta\in\mathscr{E}\cap\gamma\) such that for all \(\alpha\in\mathscr{E}\cap[\beta,\gamma)\), \[p\upharpoonright(\alpha+1)\Vdash_{\mathbb{P}_{\alpha+1}}(\mathcal{M}_{p}^{\gamma}[\dot{G}_{\alpha}]\text{ is a weak }\lhd_{\gamma}\text{-chain w.r.t. }\dot{G}_{\alpha+1});\]
* for all \(\beta\in\mathscr{E}\cap\delta\) and for \(\gamma\) the successor of \(\beta\) inside \(\mathscr{E}\), \[\beta\in\text{dom}(w_{p})\implies p\upharpoonright\beta\Vdash_{\mathbb{P}_{ \beta}}(\forall M\in\mathcal{M}_{p}^{\gamma}[\dot{G}_{\beta}])(w_{p}(\beta) \text{ is semi-}M[\dot{G}_{\beta}]^{\mathbb{Q}_{\beta}}\text{-generic}).\]
* Let \(p,q\in\mathbb{P}_{\delta}\). Then \(p\leq_{\mathbb{P}_{\delta}}q\) if and only if:
* for all \(\alpha\in\text{dom}(w_{q})\), \[p\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}w_{p}(\alpha)\leq_{\mathbb{Q}_ {\alpha}}w_{q}(\alpha);\]
* for all \(N\in\mathcal{M}_{q}\) and for \(\gamma:=\sup\{\beta:V_{\beta}\subseteq\widehat{N}\}\), there exists \(M\in\mathcal{M}_{p}\) such that \(N=M\downarrow\gamma\).
**Proposition**.: Iteration \((\mathbb{P}_{\alpha}:\alpha\in\mathscr{E}^{*})\) is point-wise definable over \(\mathbb{V}\).
**Proposition**.: The cardinality of \(\mathbb{P}_{\alpha}\) is strictly less than \(\beta\) for \(\alpha\in\mathscr{E}^{*}\) and \(\beta=\min(\mathscr{E}-(\alpha+1))\).
**Proposition**.: \(\mathbb{P}_{\delta+1}\) is forcing equivalent to \(\mathbb{P}_{\delta}*\dot{\mathbb{Q}}_{\delta}\) for every \(\delta\in\mathscr{E}\).
**Remark**.: The previous equivalence is canonical.
**Remark**.: Suppose that \(\beta<\gamma\) are elements of \(\mathscr{E}^{*}\). Recall that we have a canonical name \(\mathbb{P}_{\gamma}/\mathbb{P}_{\beta}\in V^{\mathbb{P}_{\beta}}\) that satisfies
\[\Vdash_{\mathbb{P}_{\beta}}\mathbb{P}_{\gamma}/\mathbb{P}_{\beta}=\{q\in \mathbb{P}_{\gamma}:q\upharpoonright\beta\in\dot{G}_{\beta}\}.\]
This is a name for a poset and the following is verified.
* \(\mathbb{P}_{\beta}*(\mathbb{P}_{\gamma}/\mathbb{P}_{\beta})\) is forcing equivalent to \(\mathbb{P}_{\gamma}\).
* For every dense subset \(D\) of the poset \(\mathbb{P}_{\gamma}\), the set \(\{q\in D:q\upharpoonright\beta\in\dot{G}_{\mathbb{P}_{\beta}}\}\) is dense in the poset \(\mathbb{P}_{\gamma}/\mathbb{P}_{\beta}\) inside the universe \(V^{\mathbb{P}_{\beta}}\).
* If \(G_{\beta}\rightsquigarrow V^{\mathbb{P}_{\beta}}\) and \(H\rightsquigarrow V[G_{\beta}]^{(\mathbb{P}_{\gamma}/\mathbb{P}_{\beta})^{G_{ \beta}}}\), then \(H\) is also a filter in \(\mathbb{P}_{\gamma}\). If we want to think of it as such, we denote it by \(G_{\beta}\cdot H\). Note that \(G_{\beta}\cdot H\) corresponds to \(G_{\beta}*H\) under the equivalence \(\mathbb{P}_{\gamma}\simeq\mathbb{P}_{\beta}*(\mathbb{P}_{\gamma}/\mathbb{P}_{ \beta})\). In particular, \(G_{\beta}\cdot H\rightsquigarrow V^{\mathbb{P}_{\gamma}}\) and \(G_{\beta}\subseteq G_{\beta}\cdot H\).
**Definition**.: Let \(\alpha\leq\beta\) be elements of \(\mathscr{E}^{*}\), let \(p\in\mathbb{P}_{\alpha}\), and let \(q\in\mathbb{P}_{\beta}\). Suppose that \(p\leq q\upharpoonright\alpha\). Then we define \(pq=(w,\mathcal{M})\) as follows:
\[w:=w_{p}\cup(w_{q}\upharpoonright(\alpha,\beta]),\quad\mathcal{M}:=\mathcal{M}_{ p}\cup\mathcal{M}_{q}.\]
**Proposition**.: Let \(\alpha\leq\beta\) be elements of \(\mathscr{E}^{*}\), let \(p\in\mathbb{P}_{\alpha}\), and let \(q\in\mathbb{P}_{\beta}\). Suppose that \(p\leq q\upharpoonright\alpha\). Then \(pq\in\mathbb{P}_{\beta}\), it satisfies \(pq\leq p,q\) and \(pq\upharpoonright\alpha=p\), and
\[(\forall r\in\mathbb{P}_{\beta})(r\leq p,q\iff r\leq pq).\]
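* **Remark.** The operations \(\upharpoonright\), \(\downarrow\), and the amalgamation \(pq\) are pure finite bookkeeping on pairs \((w,\mathcal{M})\). The following sketch is ours and purely illustrative: virtual models are reduced to opaque tokens tagged with the stage at which they are generated, all forcing-theoretic content (names, semi-genericity, the classes \(\mathcal{C}\)) is ignored, and the identifiers `Model`, `Condition`, and `amalgamate` are invented. Under these simplifications it records the bookkeeping and checks \(pq\upharpoonright\alpha=p\) on one toy instance.

```python
from dataclasses import dataclass

# Toy stand-ins only: a "model" is a token remembering the stage at which
# it is generated; project implements M ↓ a from the Notation above.
@dataclass(frozen=True)
class Model:
    name: str
    stage: int

    def project(self, a: int) -> "Model":
        return Model(self.name, min(self.stage, a))

@dataclass(frozen=True)
class Condition:
    w: tuple           # finite partial function, encoded as ((stage, value), ...)
    models: frozenset  # finite set of Model tokens

    def restrict(self, a: int) -> "Condition":
        # p ↾ a := (w_p ↾ a, M_p ↓ a)
        return Condition(
            tuple((s, v) for (s, v) in self.w if s < a),
            frozenset(m.project(a) for m in self.models),
        )

def amalgamate(p: "Condition", q: "Condition", a: int, b: int) -> "Condition":
    # pq := (w_p ∪ (w_q ↾ (a, b]), M_p ∪ M_q), assuming p ≤ q ↾ a
    w = p.w + tuple((s, v) for (s, v) in q.w if a < s <= b)
    return Condition(w, p.models | q.models)

# One toy instance with a = 3, b = 6 (stages play the role of ordinals):
M, N = Model("M", 7), Model("N", 3)
p = Condition(((0, "u0"), (2, "u2")), frozenset({N, M.project(3)}))
q = Condition(((0, "u0"), (5, "u5")), frozenset({M}))
r = amalgamate(p, q, a=3, b=6)
assert r.restrict(3) == p  # the identity pq ↾ α = p from the proposition above
```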
### Statement of Transfer Theorem
* **Definition.** Let
* \(M\) : virtual model,
* \((\mathbb{S}_{\alpha}:\alpha\in E)\) : forcing iteration point-wise definable over \(\widehat{M}\),
* \(\gamma\in E\),
* \(p\in\mathbb{S}_{\gamma}\),
Suppose that there exists a both-fixed point \(\alpha\) such that \(\vec{\mathbb{S}}\in V_{\alpha}\subseteq\widehat{M}\).
Then we say that \(p\)_is locally \(M^{\mathbb{S}_{\gamma}}\)-generic_ if
\[p\Vdash_{\mathbb{S}_{\gamma}}^{V}M^{\hat{G}_{\gamma}}=M^{\hat{G}_{\gamma}}_{< \gamma}.\]
* **Remark.** If \(E=\{0\}\), then \(p\) is locally \(M^{\mathbb{S}_{0}}\)-generic if and only if \(p\) is \(M^{\mathbb{S}_{0}}\)-generic1. In general, "\(M^{\mathbb{S}_{\gamma}}\)-generic" implies "locally \(M^{\mathbb{S}_{\gamma}}\)-generic", but we will use this notion here to propagate semi-genericity through our iteration. Footnote 1: in the sense of Remark 1.2-XI.f. of [11]
* **Transfer Theorem.** Let \(\gamma\in\mathscr{E}\). Suppose that for all \(\alpha\in\mathscr{E}\cap\gamma\), for all \(M\in\mathcal{C}_{>\alpha}\), for all \(p\in\mathbb{P}_{\alpha}\) satisfying \(M\downarrow\alpha\in\mathcal{M}_{p}\), \(p\Vdash_{\mathbb{P}_{\alpha}}M^{\hat{G}_{\alpha}}\cap\omega_{1}^{V}=M\cap \omega_{1}^{V}\). Then for all \(M\in\mathcal{C}_{>\gamma}\), every condition \(p\in\mathbb{P}_{\gamma}\) satisfying \(M\downarrow\gamma\in\mathcal{M}_{p}\) is locally \(M^{\mathbb{P}_{\gamma}}\)-generic.
Proof.: The case \(\gamma=\min\mathscr{E}\) is obvious since the poset \(\mathbb{P}_{\gamma}\) is trivial. The case where \(\gamma\) is a successor point of \(\mathscr{E}\) is proved in Subsubsection 2.6.1 and the case where \(\gamma\) is a limit point of \(\mathscr{E}\) is proved in Subsubsection 2.6.2.
* **Local Genericity Criterion.** Let
* \((\mathbb{S}_{\alpha}:\alpha\in E)\) : forcing iteration,
* \(M\) : virtual model,
* \(\gamma\in E\),
* \(p\in\mathbb{S}_{\gamma}\),
Suppose that
* there exists a both-fixed point \(\alpha\) such that \(\vec{\mathbb{S}}\in V_{\alpha}\subseteq\widehat{M}\),
* for all \(q\leq_{\mathbb{S}_{\gamma}}p\), for all dense open subsets \(D\) of \(\mathbb{S}_{\gamma}\) satisfying \(q\Vdash_{\mathbb{S}_{\gamma}}D\in M^{\hat{G}_{\gamma}}_{<\gamma}\), there exist \(r\in D\) and \(s\leq_{\mathbb{S}_{\gamma}}q,r\) such that \[s\Vdash_{\mathbb{S}_{\gamma}}\gamma\not\in M^{\hat{G}_{\gamma}}_{<\gamma} \lor r\in M^{\hat{G}_{\gamma}}_{<\gamma}.\]
Then \(p\) is locally \(M^{\mathbb{S}_{\gamma}}\)-generic.
Proof.:
* Let \(p_{0}\leq p\) be arbitrary and let us show that there exists \(p_{1}\leq p_{0}\) such that \[p_{1}\Vdash M^{\hat{G}_{\gamma}}=M^{\hat{G}_{\gamma}}_{<\gamma}.\]
* If \(p_{0}\Vdash\gamma\not\in M^{\hat{G}_{\gamma}}_{<\gamma}\), then the conclusion follows by definition. Let us assume that \(p_{0}\not\Vdash\gamma\not\in M^{\hat{G}_{\gamma}}_{<\gamma}\).
* Then there exists \(p_{1}\leq p_{0}\) such that \(p_{1}\Vdash\gamma\in M^{\hat{G}_{\gamma}}_{<\gamma}\). We claim that \(p_{1}\) is as required.
* Let \(\dot{\tau}\in V^{\mathbb{S}_{\gamma}}\) be such that \(p_{1}\Vdash\dot{\tau}\in(M^{\hat{G}_{\gamma}}_{<\gamma})^{\mathbb{S}_{\gamma}}\). We want to show that \(p_{1}\Vdash\dot{\tau}^{\hat{G}_{\gamma}}\in M^{\hat{G}_{\gamma}}_{<\gamma}\).
* Let \(p_{2}\leq p_{1}\) be arbitrary. We want to find \(s\leq p_{2}\) such that \(s\Vdash\dot{\tau}^{\hat{G}_{\gamma}}\in M^{\hat{G}_{\gamma}}_{<\gamma}\).
* Note that \(p_{2}\Vdash\dot{\tau}\in V^{\mathbb{S}_{\gamma}}\). This means that there exists \(p_{3}\leq p_{2}\) and some \(\sigma\in V^{\mathbb{S}_{\gamma}}\) such that \(p_{3}\Vdash\dot{\tau}=\check{\sigma}\).
* Let \(D\) be the set of all conditions in \(\mathbb{S}_{\gamma}\) that decide the value of \(\sigma\). Set \(D\) is dense open in \(\mathbb{S}_{\gamma}\). For \(r\in D\), let \(x_{r}\) be the value of \(\sigma\) as decided by \(r\). We have \[p_{3}\Vdash D,(x_{r}:r\in D)\in M^{G_{\gamma}}_{<\gamma}.\]
* There exist \(r\in D\) and \(s\leq p_{3},r\) such that \[s\Vdash r\in M^{G_{\gamma}}_{<\gamma}.\]
* Then \[s\Vdash\dot{\tau}^{G_{\gamma}}=\check{\sigma}^{G_{\gamma}}=x_{s}=x_{r}\in M^{ G_{\gamma}}_{<\gamma},\] the last fact being due to \(s\Vdash(x_{r})_{r\in D},r\in M^{G_{\gamma}}_{<\gamma}\).
**Remark**.: In the case \(E=\{0\}\), the above criterion reduces to the usual genericity criterion (see for example Lemma 1.2-IV of [10]).
### Some Lemmas for Transfer Theorem
**Lemma**.: Let \(\beta\in\mathscr{E}\), let \(N\in\mathcal{C}_{\geq\beta}\), and let \(p\in\mathbb{P}_{\beta}\cap N\). Then there exists \(q\leq_{\mathbb{P}_{\beta}}p\) such that \(\mathsf{dom}(w_{q})=\mathsf{dom}(w_{p})\) and \(\mathcal{M}_{q}=\mathcal{M}_{p}\cup\{N\downarrow\beta\}\).
Proof.:
* Let \(\mathcal{M}_{q}:=\mathcal{M}_{p}\cup\{N\downarrow\beta\}\). We need to define \(w_{q}\) in such a way as to ensure \(q:=(w_{q},\mathcal{M}_{q})\in\mathbb{P}_{\beta}\) and \(q\leq p\).
* Note that for all \(P\in\mathcal{M}_{p}\), we have \(P\cap\omega_{1}^{V}<N\cap\omega_{1}^{V}\) and \(P\lhd_{\beta}N\). Hence, we will be done with the argument if we construct \(w_{q}\) satisfying: 1. \(\mathsf{dom}(w_{q})=\mathsf{dom}(w_{p})=:d\) 2. \(q\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}w_{q}(\alpha)\leq w_{p}(\alpha)\) for all \(\alpha\in d\); 3. \(q\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}\Big{(}\alpha\in N^{\dot{G}_{\alpha}}\implies(w_{q}(\alpha)\text{ is semi-}N[\dot{G}_{\alpha}]^{\mathbb{Q}_{\alpha}}\text{-generic})\Big{)}\) for all \(\alpha\in d\). (Cf. Proposition [2.3.6].)
* \(w_{q}(\alpha)\) is defined by recursion on \(\alpha\in d\). Suppose that \(w_{q}\upharpoonright\alpha\) has been defined and let us define \(w_{q}(\alpha)\).
* Let \(G_{\alpha}\rightsquigarrow V^{\mathbb{P}_{\alpha}}\) be an arbitrary generic containing \(q\upharpoonright\alpha\).
* Suppose that \(\alpha\in N^{G_{\alpha}}\). Then \(N[G_{\alpha}]\) is defined.
* Since \(\mathbb{Q}_{\alpha}:=\dot{\mathbb{Q}}_{\alpha}^{G_{\alpha}}\in N[G_{\alpha}]\) is semi-proper and \(w_{p}(\alpha)^{G_{\alpha}}\in N[G_{\alpha}]\), there exists \(w\leq_{\mathbb{Q}_{\alpha}}w_{p}(\alpha)^{G_{\alpha}}\) which is semi-\(N[G_{\alpha}]^{\mathbb{Q}_{\alpha}}\)-generic.
* Since \(G_{\alpha}\) was arbitrary containing \(q\upharpoonright\alpha\), we can find a name \(\dot{w}\in V^{\mathbb{P}_{\alpha}}\) such that \[q\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}\dot{w}\leq_{\mathbb{Q}_{\alpha}}w_{p}(\alpha)\wedge\Big{(}\alpha\in N^{\dot{G}_{\alpha}}\implies(\dot{w}\text{ is semi-}N[\dot{G}_{\alpha}]^{\mathbb{Q}_{\alpha}}\text{-generic})\Big{)}\,.\]
* We can now set \(w_{q}(\alpha):=\dot{w}\).
**Lemma**.: Let
* \(\mu,\nu\in\mathscr{E}\) : \(\nu\) is the successor of \(\mu\) in \(\mathscr{E}\),
* \(p\in\mathbb{P}_{\nu}\),
* \(G_{\mu}\rightsquigarrow V^{\mathbb{P}_{\mu}}\) : \(p\upharpoonright\mu\in G_{\mu}\),
* \(M\in\mathcal{M}_{p}^{\nu}[G_{\mu}]\),
* \(\mathbb{Q}_{\mu}:=\hat{\mathbb{Q}}_{\mu}^{G_{\mu}}\),
* \(u\in\mathbb{Q}_{\mu}\cap M[G_{\mu}]\).
Then there exists \(v\in\mathbb{Q}_{\mu}\) such that \(v\leq u\) and that for all \(N\in\mathcal{M}_{p}^{\nu}[G_{\mu}]\) satisfying \(N\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\), it holds that \(v\) is semi-\(N[G_{\mu}]^{\mathbb{Q}_{\mu}}\)-generic.
Proof.:
* \(\{M_{i}:i<n\}:=\mathcal{M}_{p}^{\nu}[G_{\mu}]\) : for all \(i<j<n\), \(M_{i}\cap\omega_{1}^{V}<M_{j}\cap\omega_{1}^{V}\),
* \(k<n\) : \(M_{k}=M\),
* \(\lambda:=|\mathbb{Q}_{\mu}|\).
* For all \(i<n\), we have \(\mathbb{Q}_{\mu}\in M_{i}[G_{\mu}]\) and \(\mathbb{Q}_{\mu}\) is semi-proper (this follows from activity).
* We may assume w.l.o.g. that the underlying set of \(\mathbb{Q}_{\mu}\) is \(\lambda\).
* Consider \(i<n\). We let \(N_{i}:=M_{i}[G_{\mu}]\cap H((2^{\lambda})^{+})\prec H((2^{\lambda})^{+})\). Note that \(w\in\mathbb{Q}_{\mu}\) is semi-\(M_{i}[G_{\mu}]^{\mathbb{Q}_{\mu}}\)-generic if and only if it is semi-\(N_{i}^{\mathbb{Q}_{\mu}}\)-generic.
* Hence, it suffices to find \(v\in\mathbb{Q}_{\mu}\) such that \(v\leq u\) and which is semi-\(N_{i}^{\mathbb{Q}_{\mu}}\)-generic for all \(i\in[k,n)\).
* Since \(M_{i}\triangleleft_{\nu}M_{j}[G_{\mu}]\) for all \(i<j<n\), Proposition [1.6.9] implies that \(N_{i}\in N_{j}\) for all \(i<j<n\).
* By recursion on \(i\in[k,n)\), we construct a sequence \((u_{i}:k\leq i<n)\) of \(\mathbb{Q}_{\mu}\)-conditions satisfying \(u_{k}:=u\in N_{k}\) and satisfying for all \(i\in(k,n)\) that \(u_{i}\in N_{i}\), \(u_{i}\leq_{\mathbb{Q}_{\mu}}u_{i-1}\), and \(u_{i}\) is semi-\(N_{i-1}^{\mathbb{Q}_{\mu}}\)-generic.
* There exists \(v\leq u_{n-1}\) such that \(v\) is semi-\(N_{n-1}^{\mathbb{Q}_{\mu}}\)-generic.
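* **Remark.** A word on the recursion step, under the assumption that the models \(N_{i}\) are countable: since \(N_{i-1}\in N_{i}\), countability gives \(N_{i-1}\subseteq N_{i}\), so \(u_{i-1},N_{i-1}\in N_{i}\) (recall \(u_{i-1}\in N_{i-1}\)); semi-properness of \(\mathbb{Q}_{\mu}\) makes the statement "there exists a semi-\(N_{i-1}^{\mathbb{Q}_{\mu}}\)-generic \(v\leq_{\mathbb{Q}_{\mu}}u_{i-1}\)" true in \(H((2^{\lambda})^{+})\), and elementarity of \(N_{i}\prec H((2^{\lambda})^{+})\) then provides such a \(u_{i}\) inside \(N_{i}\).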
**Lemma**.: Let
* \(\mu,\nu\in\mathscr{E}\) : \(\nu\) is the successor of \(\mu\) in \(\mathscr{E}\),
* \(p\in\mathbb{P}_{\nu}\),
* \(M\in\mathcal{M}_{p}\) : \(p\upharpoonright\mu\Vdash_{\mathbb{P}_{\mu}}(M^{\hat{G}_{\mu}}\) is active at \(\nu)\),
* \(\hat{u}\in V^{\mathbb{P}_{\mu}}\) : \(p\upharpoonright\mu\Vdash_{\mathbb{P}_{\mu}}\hat{u}\in\hat{\mathbb{Q}}_{\mu} \cap M[\hat{G}_{\mu}]\).
Then there exists a canonical \(\mathbb{P}_{\mu}\)-name \(\hat{v}\) for an element of \(\hat{\mathbb{Q}}_{\mu}\) such that \(p\upharpoonright\mu\) forces that "\(\hat{v}\leq\hat{u}\) and that for all \(N\in\mathcal{M}_{p}^{\nu}[\hat{G}_{\mu}]\) satisfying \(N\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\), it holds that \(\hat{v}\) is semi-\(N[\hat{G}_{\mu}]^{\hat{\mathbb{Q}}_{\mu}}\)-generic".
Proof.: This is immediate from the previous lemma.
**Lemma**.: Let \(\gamma\in\mathscr{E}\), let \(p\in\mathbb{P}_{\gamma}\), and let \(\beta\in\mathscr{E}\cap\gamma\). Then there exists \(p_{\beta}\leq_{\mathbb{P}_{\gamma}}p\) such that \(\beta\in\mathsf{dom}(w_{p_{\beta}})\).
Proof.:
* We may assume w.l.o.g. that \((\beta,\gamma)\cap\mathscr{E}=\emptyset\).
* Let \(G_{\beta}\leadsto V^{\mathbb{P}_{\beta}}\) be arbitrary containing \(p\upharpoonright\beta\), let \(M\in\mathcal{M}_{p}^{\gamma}[G_{\beta}]\) be such that \(M\cap\omega_{1}^{V}\) is minimal, and let \(\mathbb{Q}_{\beta}:=\hat{\mathbb{Q}}_{\beta}^{G_{\beta}}\).
* We may apply Lemma [2.5.2] to
* \(\underline{\mu}:=\beta\), \(\underline{\nu}:=\gamma\),
* \(\underline{u}:=1_{\mathbb{Q}_{\beta}}\), which yields \[(\exists v\in\mathbb{Q}_{\beta})(\forall N\in\mathcal{M}_{p}^{\gamma}[G_{\beta}])(v\text{ is semi-}N[G_{\beta}]^{\mathbb{Q}_{\beta}}\text{-generic}).\]
4. Since \(G_{\beta}\) was arbitrary, we can find a canonical \(\mathbb{P}_{\beta}\)-name \(\hat{v}\) for an element of \(\hat{\mathbb{Q}}_{\beta}\) such that \[p\upharpoonright\beta\Vdash_{\mathbb{P}_{\beta}}^{V}(\forall N\in\mathcal{M}_{p}^{\gamma}[\dot{G}_{\beta}])(\hat{v}\text{ is semi-}N[\dot{G}_{\beta}]^{\mathbb{Q}_{\beta}}\text{-generic}).\]
5. Let \(w_{p_{\beta}}:=w_{p}\cup\{(\beta,\hat{v})\}\), \(\mathcal{M}_{p_{\beta}}:=\mathcal{M}_{p}\), and \(p_{\beta}:=(w_{p_{\beta}},\mathcal{M}_{p_{\beta}})\). We see that \(p_{\beta}\) is as required.
**Lemma**.: Let \(\beta\in\mathscr{E}-\{\min\mathscr{E}\}\) and let \(\xi<\beta\). Then \(\Vdash_{\mathbb{P}_{\beta}}|\xi|\leq\omega_{1}\).
Proof.:
1. It suffices to consider the case when \(\beta\) has a predecessor \(\alpha\) inside \(\mathscr{E}\).
2. Let \(\hat{\mathcal{M}}\) be a \(\mathbb{P}_{\beta}\)-name satisfying \[\Vdash_{\mathbb{P}_{\beta}}\hat{\mathcal{M}}=\bigcup_{p\in\dot{G}_{\beta}}\mathcal{M}_{p}^{\beta}[\dot{G}_{\alpha}].\]
3. We want to show that: a. \(\Vdash_{\mathbb{P}_{\beta}}\xi\subseteq\bigcup\hat{\mathcal{M}}\) b. \(\Vdash_{\mathbb{P}_{\beta}}|\hat{\mathcal{M}}|\leq\omega_{1}\). This suffices for the conclusion.
4. Let us first verify \(3^{\circ}\)a. Let \(\eta<\xi\) and let \(D:=\{p\in\mathbb{P}_{\beta}:(\exists M\in\mathcal{M}_{p})(\eta,\alpha\in M)\}\). We are done if we show that \(D\) is dense in \(\mathbb{P}_{\beta}\).
5. Let \(p_{0}\in\mathbb{P}_{\beta}\) be arbitrary. Then there exists \(N\prec\mathbb{V}\) such that \(\alpha,\beta,\eta,p_{0}\in N\).
6. By Lemma [2.5.1], there exists \(p\leq_{\mathbb{P}_{\beta}}p_{0}\) such that \(N\downarrow\beta\in\mathcal{M}_{p}\).
7. Since \(\eta,\alpha\in N\downarrow\beta\), we conclude \(p\in D\).
8. Let us now verify \(3^{\circ}\)b. Let \(G_{\beta}\leadsto V^{\mathbb{P}_{\beta}}\) be arbitrary and let us work inside \(V[G_{\beta}]\).
9. Let \(\mathcal{M}:=\hat{\mathcal{M}}^{G_{\beta}}\subseteq\mathcal{C}_{\beta}\). It suffices to show that the mapping \[\mathcal{M}\to\omega_{1}^{V}:M\mapsto M\cap\omega_{1}^{V}\] is an injection.
10. Let \(M,N\in\mathcal{M}\) be arbitrary satisfying \(M\cap\omega_{1}^{V}=N\cap\omega_{1}^{V}\). We want to show that \(M=N\).
11. We have that \(\alpha\in M\cap N\) and there exist \(p,q\in G_{\beta}\) such that \(M\in\mathcal{M}_{p}^{\beta}[G_{\alpha}]\) and \(N\in\mathcal{M}_{q}^{\beta}[G_{\alpha}]\).
12. Since \(p,q\in G_{\beta}\), there exists \(r\in G_{\beta}\) such that \(r\leq_{\mathbb{P}_{\beta}}p,q\). We have that \(M,N\in\mathcal{M}_{r}^{\beta}[G_{\alpha}]\).
13. Since \(r\in G_{\beta}\), we have that \(\mathcal{M}_{r}^{\beta}[G_{\alpha}]\) is a weak \(\lhd_{\beta}\)-chain w.r.t. \(G_{\alpha+1}\). In particular, the fact that \(M\cap\omega_{1}^{V}=N\cap\omega_{1}^{V}\) implies that \(M=N\).
**Lemma**.: Let \(M\) be a virtual model, let \(\delta\) be an inaccessible of \(M\), let \(\mathbb{P}\in M\cap V_{\delta}\) be a poset, and let \(G\leadsto\widehat{M}^{\mathbb{P}}\). Then
\[\sup(M[G]\cap\delta)=\sup(M\cap\delta).\]
Proof.:
1. Let \(\tau\in M^{\mathbb{P}}\) be such that \(\tau_{G}<\delta\). We want to show that there exists some \(\beta\in M\cap\delta\) such that \(\tau_{G}<\beta\).
* We may assume w.l.o.g. that \(\Vdash_{\mathbb{P}}^{V}\tau<\delta\).
* There exists a maximal antichain \(A\in M\) of \(\mathbb{P}\) such that for all \(p\in A\), there exists \(\alpha_{p}<\delta\) such that \(p\Vdash_{\mathbb{P}}^{V}\tau=\alpha_{p}\).
* Note that \((\alpha_{p}:p\in A)\in M\). Since \(\delta\) is inaccessible in \(M\), we conclude that \(\beta:=\sup_{p\in A}\alpha_{p}\in M\cap\delta\).
* It is now clear that \(\beta\) is as required.
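**Remark**.: This lemma will be applied iteratively in the proof of Lemma [2.5.7] below: along initial segments of the iteration whose posets have rank below an \(\eta\) inaccessible in \(M[G_{\beta}]\), the value of \(\sup(M^{G_{\xi}}\cap\eta)\) does not change.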
**Lemma**.: Let
* \(\gamma\) : limit point of \(\mathscr{E}\),
* \(M\in\mathcal{C}_{[0,\kappa]}\),
* \(p\in\mathbb{P}_{\gamma}\) : \(M\downarrow\gamma\in\mathcal{M}_{p}\).
Suppose that for all \(\alpha\in\mathscr{E}\cap\gamma\), condition \(p\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}M^{\hat{G}_{\alpha}}\cap \omega_{1}^{V}=M\cap\omega_{1}^{V}\).
Then there exist ordinals \(\beta<\gamma^{*}\leq\gamma\) and a condition \(q\leq_{\mathbb{P}_{\gamma}}p\) such that \(\mathcal{M}_{q}\cap\mathcal{C}_{\gamma}=\mathcal{M}_{p}\cap\mathcal{C}_{\gamma}\) and
\[(\forall\alpha\in\mathscr{E}\cap(\beta,\gamma))(q\upharpoonright\alpha\Vdash_{ \mathbb{P}_{\alpha}}\sup(M^{\hat{G}_{\alpha}}\cap\gamma)=\gamma^{*}).\]
Proof.:
* If \(M\cap\mathsf{Ord}\subseteq\gamma\), then we may set \(\gamma^{*}:=\sup(M\cap\mathsf{Ord})\), \(\beta:=0\), and \(q:=p\). Hence, let us assume that there exists \(\xi\in M\) with \(\xi\geq\gamma\).
* Let \(\eta\) be the least ordinal \(\eta\geq\gamma\) satisfying \[(\exists\beta\in\mathscr{E}^{*}\cap\gamma)(\exists r\leq_{\mathbb{P}_{\beta}}p \upharpoonright\beta)(r\Vdash_{\mathbb{P}_{\beta}}\eta\in M^{\hat{G}_{\beta}}).\]
* Let \(\beta_{0}\in\mathscr{E}^{*}\cap\gamma\) be the least ordinal satisfying \[(\exists r\leq_{\mathbb{P}_{\beta_{0}}}p\upharpoonright\beta_{0})(r\Vdash_{ \mathbb{P}_{\beta_{0}}}\eta\in M^{\hat{G}_{\beta_{0}}}).\] Let \(r\) be a witness for the last formula.
* By Lemma 1.3.8, we have that \(\eta\in\mathscr{E}\).
* **Claim**.: \(r\Vdash_{\mathbb{P}_{\beta_{0}}}\beta_{0}\in M^{\hat{G}_{\beta_{0}}}_{<\beta_ {0}}\)__
Proof.:
* Assume otherwise. Then there exists \(r^{\prime}\leq_{\mathbb{P}_{\beta_{0}}}r\) such that \(r^{\prime}\Vdash_{\mathbb{P}_{\beta_{0}}}\beta_{0}\not\in M^{\hat{G}_{\beta_{0}}}_{<\beta_{0}}\).
* This means that \(r^{\prime}\Vdash\eta\in M^{\hat{G}_{\beta_{0}}}=M^{\hat{G}_{\beta_{0}}}_{<\beta _{0}}\).
* Hence, there are \(r^{\prime\prime}\leq_{\mathbb{P}_{\beta_{0}}}r^{\prime}\) and \(\beta_{1}\in\mathscr{E}^{*}\cap\beta_{0}\) such that \(r^{\prime\prime}\Vdash\eta\in M^{\hat{G}_{\beta_{1}}}\). This contradicts the minimality of \(\beta_{0}\).
Hence, \(r\Vdash_{\mathbb{P}_{\beta_{0}}}(M[\hat{G}_{\beta_{0}}]\) is defined).
* Let \(s\leq_{\mathbb{P}_{\beta_{0}}}r\) and let \(\lambda\in\mathsf{Ord}\) be such that \(s\Vdash_{\mathbb{P}_{\beta_{0}}}\mathsf{cof}^{M[\hat{G}_{\beta_{0}}]}(\eta)=\lambda\).
* **Claim**.: There exists \(\beta\in\mathscr{E}\cap(\beta_{0},\gamma)\) and \(t\leq_{\mathbb{P}_{\beta}}s\) such that \[t\Vdash_{\mathbb{P}_{\beta}}\beta\in M^{\hat{G}_{\beta}}_{<\beta}\wedge \mathsf{cof}^{M[\hat{G}_{\beta}]}(\eta)\in\{\omega,\omega_{1},\eta\}.\]
Proof.:
* If \(\lambda=\eta\), it suffices to take \(\beta:=\min(\mathscr{E}-\beta_{0})\) and \(t:=s\). Hence, let us assume that \(\lambda<\eta\).
* Note that \(s\Vdash_{\mathbb{P}_{\beta_{0}}}\lambda\in M^{G_{\beta_{0}}}\). By the choice of \(\eta\), we then have that \(\lambda<\gamma\).
* Let \(G_{\beta_{0}}\leadsto V^{\mathbb{P}_{\beta_{0}}}\) be an arbitrary generic containing \(s\) and let us work in \(V[G_{\beta_{0}}]\).
* Since \(\eta\in\mathscr{E}\), the fact that \(\mathscr{E}\cap(\lambda,\eta)\neq\emptyset\) can be expressed as saying that there exists \(\beta\in(\lambda,\eta)\) such that \(\widehat{M}\upharpoonright\beta\prec\widehat{M}\upharpoonright\eta\). Since this is a first order fact of \(\widehat{M}\) with parameters from \(M^{G_{\beta_{0}}}\prec\widehat{M}\), there exists such an ordinal \(\beta\) in \(M^{G_{\beta_{0}}}\).
* By the choice of \(\eta\), we have \(\beta\in\mathscr{E}\cap(\lambda,\gamma)\).
* Let \(G_{\beta}\leadsto V^{\mathbb{P}_{\beta}}\) extend \(G_{\beta_{0}}\) and let us work inside \(V[G_{\beta}]\).
* We have \(\beta\in M^{G_{\beta_{0}}}\subseteq M^{G_{\beta}}_{<\beta}\).
* By Lemma 2.5.5, we have \(|\lambda|\leq\omega_{1}\) and consequently \(\mathsf{cof}^{M[G_{\beta}]}(\eta)\leq\omega_{1}\).
* Condition \(t\) is now obtained by Forcing Theorem.
* Up to strengthening \(t\), we may assume that there exists \(\gamma^{*}\) such that \(t\Vdash_{\mathbb{P}_{\beta}}\gamma^{*}=\sup(M[\dot{G}_{\beta}]\cap\gamma)\). Let \(q:=tp\in\mathbb{P}_{\gamma}\) and let us show that \(\beta\), \(\gamma^{*}\), and \(q\) are as required. It is immediate that \(\mathcal{M}_{q}\cap\mathcal{C}_{\gamma}=\mathcal{M}_{p}\cap\mathcal{C}_{\gamma}\).
* Since \(t\Vdash_{\mathbb{P}_{\beta}}\beta\in M[\dot{G}_{\beta}]\cap\gamma\wedge \gamma^{*}=\sup(M[\dot{G}_{\beta}]\cap\gamma)\), we conclude \(\beta<\gamma^{*}\leq\gamma\).
* **Claim.** For all \(\alpha\in\mathscr{E}\cap(\beta,\gamma)\), we have that \(q\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}\sup(M^{\dot{G}_{\alpha}} \cap\gamma)=\gamma^{*}\).
Proof.:
* Let \(G_{\alpha}\leadsto V^{\mathbb{P}_{\alpha}}\) be an arbitrary generic containing \(q\upharpoonright\alpha\) and let us work inside \(V[G_{\alpha}]\). We will distinguish between two cases, depending on whether \(\mathsf{cof}^{M[G_{\beta}]}(\eta)\leq\omega_{1}^{V[G_{\beta}]}\) or \(\mathsf{cof}^{M[G_{\beta}]}(\eta)=\eta\).
* _Suppose first that \(\mathsf{cof}^{M[G_{\beta}]}(\eta)\leq\omega_{1}^{V[G_{\beta}]}\)._ Then there exists \(f\in M[G_{\beta}]\) such that \[f:\omega_{1}^{V[G_{\beta}]}\to\eta\] and \[M[G_{\beta}]\models(f\text{ is cofinal in }\eta).\]
* By the assumption of the lemma, we have that \[M[G_{\beta}]\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}.\] In particular, \(\omega_{1}^{V}\not\subseteq M[G_{\beta}]\), which means that \(\omega_{1}^{V}\) is not countable in \(V[G_{\beta}]\).
* Hence, \(\omega_{1}^{V}=\omega_{1}^{V[G_{\beta}]}\) and \(f:\omega_{1}^{V}\to\eta\) is such that \[\sup(f[M\cap\omega_{1}^{V}])=\sup(M^{G_{\beta}}\cap\eta).\]
* This further implies that \[\sup(f[M\cap\omega_{1}^{V}])=\sup(M^{G_{\beta}}\cap\gamma)=\gamma^{*}.\]
* Observe that \(M[G_{\beta}]\prec\widehat{M}[G_{\beta}]\), that \(\widehat{M}[G_{\alpha}]\) is a generic extension of \(\widehat{M}[G_{\beta}]\), and that \(M[G_{\alpha}]\prec\widehat{M}[G_{\alpha}]\). This implies that \[M[G_{\alpha}]\models(f:\omega_{1}^{V}\to\eta\text{ cofinally}).\]
* We can now compute as follows: \[\sup(M^{G_{\alpha}}\cap\gamma)=\sup(M^{G_{\alpha}}\cap\eta)=\sup(f[M^{G_{ \alpha}}\cap\omega_{1}^{V}])=\sup(f[M\cap\omega_{1}^{V}])=\gamma^{*},\] where the first equality follows by the choice of \(\eta\), the second one from the previous point, the third one from the hypothesis that \[M^{G_{\alpha}}\cap\omega_{1}^{V}=M\cap\omega_{1}^{V},\] and the fourth one from \(5^{\prime}\).
* _Suppose now that_ \(\mathsf{cof}^{M[G_{\beta}]}(\eta)=\eta\)_._
* Since \(M[G_{\beta}]\cap\eta=M^{G_{\beta}}\cap\eta=M^{G_{\beta}}\cap\gamma\), we see that \(\eta\) is a strong limit in \(M[G_{\beta}]\). Hence, \[M[G_{\beta}]\models(\eta\text{ is inaccessible}).\]
* Since the posets \((\mathbb{P}_{\xi}:\xi\in\mathscr{E}^{*}\cap\gamma)\) are of rank \(<\gamma\leq\eta\), the iterative application of Lemma [2.5.6] yields that \[\sup(M^{G_{\xi}}\cap\eta)=\sup(M^{G_{\beta}}\cap\eta),\] for all \(\xi\in\mathscr{E}^{*}\cap[\beta,\alpha]\).
* In particular, we have that \(\sup(M^{G_{\alpha}}\cap\eta)=\gamma^{*}\), which further implies \[\sup(M^{G_{\alpha}}\cap\gamma)=\gamma^{*}.\]
* This completes the proof of the lemma.
**Lemma**.: Let
* \(M\) : virtual model,
* \(\gamma\in\mathscr{E}_{M}\) : limit point of \(\mathscr{E}_{\widehat{M}}\),
* \(\gamma^{*}:=\sup(M\cap\gamma)\),
* \(N\) : virtual model satisfying \(N\lhd_{\gamma^{*}}M\).
Then there exists a unique \(\gamma\)-generated model \(N^{+}\in M\) such that \(N\cong_{\gamma^{*}}N^{+}\). Moreover, if \(\widehat{M}\upharpoonright\gamma^{*}\prec\widehat{N}\), then \(\widehat{M}\upharpoonright\gamma\prec\widehat{N^{+}}\).
Proof.:
* To establish existence of such an \(N^{+}\), note that by definition there exists \(P\in M\) such that \(N\cong_{\gamma^{*}}P\). Then \(N^{+}:=P\downarrow\gamma\in M\) is as required.
* Let us verify uniqueness. Let \(N^{\prime}\in M\) be a \(\gamma\)-generated virtual model such that \(N\cong_{\gamma^{*}}N^{\prime}\). It suffices to show \(N^{\prime}\cong_{\gamma}N^{+}\).
* Let \(\alpha\in\mathscr{E}_{M}\cap\gamma\) be arbitrary. Then \(\alpha<\gamma^{*}\) and consequently \(N^{\prime}\cong_{\alpha}N\cong_{\alpha}N^{+}\).
* Hence, \[M\models(\forall\alpha<\gamma)(V_{\alpha}\prec(V_{\gamma},\in,U\cap V_{ \gamma})\implies N^{\prime}\cong_{\alpha}N^{+}).\]
* By elementarity, the same statement is true in \(\widehat{M}\). This means that for all \(\alpha\in\mathscr{E}_{\widehat{M}}\cap\gamma\), we have that \(N^{\prime}\cong_{\alpha}N^{+}\).
* For \(\alpha\in\mathscr{E}_{\widehat{M}}\cap\gamma\), let \(f_{\alpha}:\mathsf{Hull}(N^{\prime},V_{\alpha})\cong\mathsf{Hull}(N^{+},V_{\alpha})\) be the unique isomorphism witnessing \(N^{\prime}\cong_{\alpha}N^{+}\). For \(\alpha_{0}<\alpha_{1}\), we have \(f_{\alpha_{0}}\subseteq f_{\alpha_{1}}\).
* It is now easily seen that \[f_{\gamma}:=\bigcup_{\alpha\in\mathscr{E}_{\widehat{M}}\cap\gamma}f_{\alpha}:\mathsf{Hull}(N^{\prime},V_{\gamma})\to\mathsf{Hull}(N^{+},V_{\gamma})\] witnesses \(\mathsf{Hull}(N^{\prime},V_{\gamma})\cong_{\gamma}\mathsf{Hull}(N^{+},V_{\gamma})\).
* Let us establish the moreover part. Let \(\alpha\in\mathscr{E}_{M}\cap\gamma\) be arbitrary. We have that \(\alpha<\gamma^{*}\).
* This implies that \(\widehat{M}\upharpoonright\alpha\prec\widehat{N}\) and consequently \(\widehat{M}\upharpoonright\alpha=\mathsf{Hull}(N,V_{\gamma^{*}})\cap V_{ \alpha}\prec\mathsf{Hull}(N,V_{\gamma^{*}})\).
* Since \(N\cong_{\gamma^{*}}N^{+}\), we conclude that \[\widehat{M}\upharpoonright\alpha=\mathsf{Hull}(N^{+},V_{\gamma^{*}})\cap V_{ \alpha}\prec\mathsf{Hull}(N^{+},V_{\gamma^{*}})\prec\widehat{N^{+}}\] and consequently \(M\models V_{\alpha}\prec\widehat{N^{+}}\).
* Hence, \[M\models(\forall\alpha<\gamma)(V_{\alpha}\prec(V_{\gamma},\in,U\cap V_{ \gamma})\implies V_{\alpha}\prec\widehat{N^{+}}).\]
* Since \(M\prec\widehat{M}\), we conclude that for all \(\alpha\in\mathscr{E}_{\widehat{M}}\cap\gamma\), we have \(\widehat{M}\upharpoonright\alpha\prec\widehat{N^{+}}\). This suffices for the conclusion.
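* **Remark.** This lemma is used in the limit case of the Transfer Theorem (Subsubsection 2.6.2): there, each model \(N\in\mathcal{M}\) with \(N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\) satisfies \(N\lhd_{\gamma^{*}}M^{G_{\beta}}\) for \(\gamma^{*}=\sup(M^{G_{\beta}}\cap\gamma)\), and the lemma lifts it to the unique \(\gamma\)-generated \(N^{+}\in M^{G_{\beta}}\) with \(N^{+}\downarrow\gamma^{*}=N\).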
### Proof of Transfer Theorem
#### 2.6.1 Successor Case
* Let \(\gamma\in\mathscr{E}\) be the successor of some \(\beta\) inside \(\mathscr{E}\). We are assuming that \[(\forall N\in\mathcal{C}_{>\beta})(\forall q\in\mathbb{P}_{\beta})(N\downarrow \beta\in\mathcal{M}_{q}\implies(q\text{ is semi-}N^{\mathbb{P}_{\beta}}\text{- generic})).\] (8) Let \(M\in\mathcal{C}_{>\gamma}\) and let \(p\in\mathbb{P}_{\gamma}\) satisfying \(M\downarrow\gamma\in\mathcal{M}_{p}\). We want to show that \(p\) is locally \(M^{\mathbb{P}_{\gamma}}\)-generic.
* We will use Local Genericity Criterion [2.4.4]. Let \(D\) be a dense open subset of \(\mathbb{P}_{\gamma}\) and let \(q\leq_{\mathbb{P}_{\gamma}}p\) be arbitrary satisfying \[q\Vdash_{\mathbb{P}_{\gamma}}D\in M^{\hat{G}_{\gamma}}_{<\gamma}.\] We want to find \(r\in D\) and \(s\leq_{\mathbb{P}_{\gamma}}q,r\) such that \[s\Vdash_{\mathbb{P}_{\gamma}}\gamma\not\in M^{\hat{G}_{\gamma}}_{<\gamma}\lor r\in M^{\hat{G}_{\gamma}}_{<\gamma}.\]
* Up to strengthening \(q\), we may assume that \(q\in D\).
* If \(q\Vdash_{\mathbb{P}_{\gamma}}\gamma\not\in M^{\hat{G}_{\gamma}}_{<\gamma}\), we can take \(s:=r:=q\). Hence, let us assume that \(q\not\Vdash_{\mathbb{P}_{\gamma}}\gamma\not\in M^{\hat{G}_{\gamma}}_{<\gamma}\).
* Up to strengthening \(q\), we may assume that \(q\Vdash_{\mathbb{P}_{\gamma}}\gamma\in M^{\hat{G}_{\gamma}}_{<\gamma}\). Since \(\beta\) is definable from \(\gamma\) over \(\widehat{M}\), we have that \(q\Vdash_{\mathbb{P}_{\gamma}}\beta\in M^{\hat{G}_{\gamma}}_{<\gamma}\). This implies that \(q\Vdash_{\mathbb{P}_{\beta}}\beta\in M^{\hat{G}_{\beta}}\) and \(q\Vdash_{\mathbb{P}_{\beta}}M\downarrow\gamma\in\mathcal{M}^{\gamma}_{q}[\hat {G}_{\beta}]\).
* By Lemma [2.5.4], we may strengthen \(q\) to ensure \(\beta\in\mathsf{dom}(w_{q})\).
* **Claim.**\(q\restriction(\beta+1)\Vdash_{\mathbb{P}_{\beta+1}}M^{\hat{G}_{\beta+1}}\cap \omega_{1}^{V}=M\cap\omega_{1}^{V}\)
Proof.:
* By (8), we have that \[q\restriction\beta\Vdash_{\mathbb{P}_{\beta}}M^{\hat{G}_{\beta}}\cap\omega_{1} ^{V}=M\cap\omega_{1}^{V}.\]
* Let \(G_{\beta+1}\) be a generic containing \(q\restriction(\beta+1)\) and let us verify that \(M^{G_{\beta+1}}\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\). Let \(\mathbb{Q}_{\beta}:=\dot{\mathbb{Q}}_{\beta}^{G_{\beta}}\).
* Since \(\beta\in\mathsf{dom}(w_{q})\) and \(M\downarrow\gamma\in\mathcal{M}^{\gamma}_{q}[G_{\beta}]\), we have that \(w_{q}(\beta)^{G_{\beta}}\) is semi-\((M\downarrow\gamma)[G_{\beta}]^{\mathbb{Q}_{\beta}}\)-generic in \(V[G_{\beta}]\).
* Hence, \[(M\downarrow\gamma)[G_{\beta+1}]\cap\omega_{1}^{V[G_{\beta}]}=(M\downarrow \gamma)[G_{\beta}]\cap\omega_{1}^{V[G_{\beta}]}.\]
* This shows that \[M^{G_{\beta+1}}\cap\omega_{1}^{V}=M^{G_{\beta}}\cap\omega_{1}^{V}=M\cap\omega _{1}^{V}.\]
* We may assume that there exists \(\mathcal{M}\in[\mathcal{C}_{\gamma}]^{<\omega}\) such that \[q\restriction\beta\Vdash_{\mathbb{P}_{\beta}}\mathcal{M}^{\gamma}_{q}[\hat{G} _{\beta}]=\mathcal{M}.\] Namely, there exists \(\bar{q}\leq_{\mathbb{P}_{\beta}}q\restriction\beta\) such that \(\bar{q}\) decides \(\mathcal{M}^{\gamma}_{q}[\hat{G}_{\beta}]\). Since \(\mathcal{M}_{\bar{q}q}\cap\mathcal{C}_{\gamma}=\mathcal{M}_{q}\cap\mathcal{C}_ {\gamma}\), we may replace \(q\) by \(\bar{q}q\).
* Note that \(M\downarrow\gamma\in\mathcal{M}\) and \[q\restriction(\beta+1)\Vdash_{\mathbb{P}_{\beta+1}}\mathcal{M}\text{ is a weak }\lhd_{\gamma}\text{-chain w.r.t. }\dot{G}_{\beta+1}.\]
* **Claim.** There exist \(r\in D\) and \(s_{0}\in\mathbb{P}_{\beta+1}\) such that
* \(s_{0}\leq q\restriction(\beta+1),r\restriction(\beta+1)\);
* \(s_{0}\Vdash_{\mathbb{P}_{\beta+1}}r\in M^{\hat{G}_{\beta+1}}\).
Proof.:
* Let \(\mathcal{N}:=\{N\in\mathcal{M}:N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\}\) and let \(G_{\beta+1}\rightsquigarrow V^{\mathbb{P}_{\beta+1}}\) arbitrary containing \(q\restriction(\beta+1)\).
* For all \(N\in\mathcal{N}\), we have \[N\lhd_{\gamma}(M\downarrow\gamma)^{G_{\beta+1}}\cong_{\gamma}M^{G_{\beta+1}},\] which yields \(P\in M^{G_{\beta+1}}\) such that \(N=P\downarrow\gamma\); since \(\gamma\in M^{G_{\beta+1}}\), we conclude \(N\in M^{G_{\beta+1}}\). Hence, \(\mathcal{N}\in M^{G_{\beta+1}}\).
* Let \(D^{*}:=\{r\in D:r\restriction(\beta+1)\in G_{\beta+1},\ \beta\in\mathsf{dom}(w_{r}),\ \mathcal{N}\subseteq \mathcal{M}_{r}\}\in M^{G_{\beta+1}}\).
* Note that \(q\in D^{*}\). By elementarity, \(M^{G_{\beta+1}}\models D^{*}\neq\emptyset\). Hence, there exists \(r\in D\cap M^{G_{\beta+1}}\).
* By Forcing Theorem, there exists \(s_{0}\in G_{\beta+1}\) such that
* \(s_{0}\leq q\restriction(\beta+1),r\restriction(\beta+1)\),
* \(s_{0}\Vdash_{\mathbb{P}_{\beta+1}}r\in M^{\hat{G}_{\beta+1}}\). Conditions \(r,s_{0}\) are as required.
**Claim.** There exists \(s\in\mathbb{P}_{\gamma}\) such that \(s\leq_{\mathbb{P}_{\gamma}}q,r,s_{0}\).
Proof.:
* Let \(w_{s}:=w_{s_{0}}\), \(\mathcal{M}_{s}:=\mathcal{M}_{s_{0}}\cup\mathcal{M}_{q}\cup\mathcal{M}_{r}\), \(s:=(w_{s},\mathcal{M}_{s})\). We will be done once we show that \(s\in\mathbb{P}_{\gamma}\).
* We have that \(s\restriction(\beta+1)=s_{0}\in\mathbb{P}_{\beta+1}\), so it remains to verify that \[s\restriction(\beta+1) \Vdash_{\mathbb{P}_{\beta+1}} (\mathcal{M}_{s}^{\gamma}[\hat{G}_{\beta}]\text{ is a weak }\lhd_{\gamma}\text{-chain w.r.t. }\dot{G}_{\beta+1}),\] (9) \[s\restriction\beta \Vdash_{\mathbb{P}_{\beta}} (\forall N\in\mathcal{M}_{s}^{\gamma}[\hat{G}_{\beta}])(w_{s}( \beta)\text{ is semi-}N[\hat{G}_{\beta}]^{\mathbb{Q}_{\beta}}\text{- generic}).\] (10)
* Let \(G_{\beta+1}\rightsquigarrow V^{\mathbb{P}_{\beta+1}}\) be arbitrary containing \(s\restriction(\beta+1)\). Let us first verify that \(\mathcal{M}_{s}^{\gamma}[G_{\beta}]\) is a weak \(\lhd_{\gamma}\)-chain w.r.t. \(G_{\beta+1}\).
* Let \(\mathcal{M}^{*}:=\{N\in\mathcal{M}:N\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\}\). Since \(\{N\in\mathcal{M}_{q}^{\gamma}[G_{\beta}]:N\cap\omega_{1}^{V}<M\cap\omega_{1}^{ V}\}\subseteq\mathcal{M}_{r}\), we have that \[\mathcal{M}_{s}^{\gamma}[G_{\beta}]=\mathcal{M}_{q}^{\gamma}[G_{\beta}]\cup \mathcal{M}_{r}^{\gamma}[G_{\beta}]=\mathcal{M}_{r}^{\gamma}[G_{\beta}]\cup \mathcal{M}^{*}.\]
* Both \(\mathcal{M}_{r}^{\gamma}[G_{\beta}]\) and \(\mathcal{M}^{*}\) are weak \(\lhd_{\gamma}\)-chains w.r.t. \(G_{\beta+1}\) and for all \(N\in\mathcal{M}_{r}^{\gamma}[G_{\beta}]\) and all \(P\in\mathcal{M}^{*}\), we have that \[N\cap\omega_{1}^{V}<M^{G_{\beta+1}}\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\leq P \cap\omega_{1}^{V}\] ("\(<\)" follows from \(r\in M^{G_{\beta+1}}\) and "\(=\)" from Claim 7\({}^{\circ}\)). Hence, it suffices to verify that for all \(N\in\mathcal{M}_{r}^{\gamma}[G_{\beta}]\) and all \(P\in\mathcal{M}^{*}\), we have that \(N\lhd_{\gamma}P^{G_{\beta+1}}\).
* Since \(r\in M^{G_{\beta+1}}\), we have \(N\lhd_{\gamma}M^{G_{\beta+1}}\).
* Since \(M\downarrow\gamma,P\in\mathcal{M}\) and \(M\cap\omega_{1}^{V}\leq P\cap\omega_{1}^{V}\), we have \(M\downarrow\gamma=P\) or \(M\downarrow\gamma\lhd_{\gamma}P^{G_{\beta+1}}\).
* Hence, \[N\lhd_{\gamma}(M\downarrow\gamma)^{G_{\beta+1}}\subseteq P^{G_{\beta+1}},\] as required.
* Let us now verify that for every \(N\in\mathcal{M}_{s}^{\gamma}[G_{\beta}]\), we have that \(w_{s}(\beta)^{G_{\beta}}\) is semi-\(N[G_{\beta}]^{\mathbb{Q}_{\beta}}\)-generic.
* Either \(N\in\mathcal{M}_{q}^{\gamma}[G_{\beta}]\) or \(N\in\mathcal{M}_{r}^{\gamma}[G_{\beta}]\).
\(11^{\prime}\) If \(N\in\mathcal{M}_{q}^{\gamma}[G_{\beta}]\), the conclusion follows from \[w_{s}(\beta)^{G_{\beta}}=w_{s_{0}}(\beta)^{G_{\beta}}\leq w_{q}(\beta)^{G_{\beta }};\] if \(N\in\mathcal{M}_{r}^{\gamma}[G_{\beta}]\), the conclusion follows from \[w_{s}(\beta)^{G_{\beta}}=w_{s_{0}}(\beta)^{G_{\beta}}\leq w_{r}(\beta)^{G_{ \beta}}.\] \(12^{\prime}\) This concludes the proof of Claim \(11^{\circ}\).
\(12^{\circ}\) It is clear that \(r\) and \(s\) are as required in \(2^{\circ}\).
#### 2.6.2 Limit Case
1. We are assuming that \(\gamma\) is a limit point of \(\mathscr{E}\) and that \[(\forall\alpha\in\mathscr{E}\cap\gamma)(\forall N\in\mathcal{C}_{>\alpha})( \forall q\in\mathbb{P}_{\alpha})(N\downarrow\alpha\in\mathcal{M}_{q}\implies q \Vdash_{\mathbb{P}_{\alpha}}N^{\dot{G}_{\alpha}}\cap\omega_{1}^{V}=N\cap \omega_{1}^{V}).\] Let us fix some \(M\in\mathcal{C}_{>\gamma}\) and some \(p\in\mathbb{P}_{\gamma}\) such that \(M\downarrow\gamma\in\mathcal{M}_{p}\). We want to show that \(p\) is locally \(M^{\mathbb{P}_{\gamma}}\)-generic.
2. We use Local Genericity Criterion [2.4.4]. Let \(q\leq_{\mathbb{P}_{\gamma}}p\) be an arbitrary condition and let \(D\) be an arbitrary dense open subset of \(\mathbb{P}_{\gamma}\) such that \[q\Vdash_{\mathbb{P}_{\gamma}}D\in M_{<\gamma}^{\dot{G}_{\gamma}}.\] We want to find \(r\in D\) and \(s\leq_{\mathbb{P}_{\gamma}}q,r\) such that \[s\Vdash_{\mathbb{P}_{\gamma}}\gamma\not\in M_{<\gamma}^{\dot{G}_{\gamma}}\lor r\in M_{<\gamma}^{\dot{G}_{\gamma}}.\]
3. By Lemma [2.5.7], we may assume, up to strengthening \(q\), that there exist \(\beta\in\mathscr{E}\cap\gamma\) and \(\gamma^{*}\in(\beta,\gamma]\) such that \[(\forall\xi\in\mathscr{E}\cap[\beta,\gamma))(q\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}\sup(M^{\dot{G}_{\xi}}\cap\gamma)=\gamma^{*}).\]
4. If \(q\Vdash_{\mathbb{P}_{\gamma}}\gamma\not\in M_{<\gamma}^{\dot{G}_{\gamma}}\), we can set \(r=s\) to be an arbitrary element of \(D\) which is \(\leq q\). Hence, we will assume that \(q\not\Vdash_{\mathbb{P}_{\gamma}}\gamma\not\in M_{<\gamma}^{\dot{G}_{\gamma}}\). Up to strengthening \(q\), we may assume that \(q\Vdash_{\mathbb{P}_{\gamma}}\gamma\in M_{<\gamma}^{\dot{G}_{\gamma}}\). Up to further strengthening \(q\) and increasing \(\beta\), we may assume that \(q\Vdash_{\mathbb{P}_{\gamma}}\gamma\in M_{<\beta}^{\dot{G}_{\gamma}}\), or equivalently \[q\restriction\beta\Vdash_{\mathbb{P}_{\beta}}\gamma\in M_{<\beta}^{\dot{G}_{\beta}}.\]
5. Up to strengthening \(q\) and increasing \(\beta\), we may also assume that \[q\restriction\beta\Vdash_{\mathbb{P}_{\beta}}D\in M_{<\beta}^{\dot{G}_{\beta}}.\]
6. **Claim.** We may assume that for all \(N\in\mathcal{M}_{q}\), either \[q\restriction\beta\Vdash_{\mathbb{P}_{\beta}}\sup(N^{\dot{G}_{\beta}}\cap \gamma^{*})=\gamma^{*}\] or \[(\forall\xi\in\mathscr{E}\cap[\beta,\gamma^{*}))(q\restriction\xi\Vdash_{ \mathbb{P}_{\xi}}\sup(N^{\dot{G}_{\xi}}\cap\gamma^{*})\leq\beta).\]
Proof.:
1. Let \(\mathcal{M}_{q}=\{N_{i}:i<k\}\), let \(\eta_{-1}:=\beta\), and let \(t_{-1}:=q\restriction\gamma^{*}\).
* Let \(i<k\). Suppose recursively that an ordinal \(\eta_{i-1}\in\mathscr{E}\cap[\beta,\gamma^{*})\) and a condition \(t_{i-1}\leq_{\mathbb{P}_{\gamma^{*}}}q\upharpoonright\gamma^{*}\) satisfying \[\mathcal{M}_{t_{i-1}}\cap\mathcal{C}_{\gamma^{*}}=\mathcal{M}_{q\upharpoonright\gamma^{*}}\cap\mathcal{C}_{\gamma^{*}}\] have been defined and that for all \(j<i\), either \[t_{i-1}\upharpoonright\eta_{i-1}\Vdash_{\mathbb{P}_{\eta_{i-1}}}\sup(N_{j}^{\dot{G}_{\eta_{i-1}}}\cap\gamma^{*})=\gamma^{*}\] or \[(\forall\xi\in\mathscr{E}\cap[\eta_{i-1},\gamma^{*}))(t_{i-1}\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}\sup(N_{j}^{\dot{G}_{\xi}}\cap\gamma^{*})\leq\eta_{i-1}).\]
* We want to define an ordinal \(\eta_{i}\in\mathscr{E}\cap[\eta_{i-1},\gamma^{*})\) and a condition \(t_{i}\leq_{\mathbb{P}_{\gamma^{*}}}t_{i-1}\) satisfying \[\mathcal{M}_{t_{i}}\cap\mathcal{C}_{\gamma^{*}}=\mathcal{M}_{q\upharpoonright\gamma^{*}}\cap\mathcal{C}_{\gamma^{*}}\] such that for all \(j\leq i\), either \[t_{i}\upharpoonright\eta_{i}\Vdash_{\mathbb{P}_{\eta_{i}}}\sup(N_{j}^{\dot{G}_{\eta_{i}}}\cap\gamma^{*})=\gamma^{*}\] or \[(\forall\xi\in\mathscr{E}\cap[\eta_{i},\gamma^{*}))(t_{i}\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}\sup(N_{j}^{\dot{G}_{\xi}}\cap\gamma^{*})\leq\eta_{i}).\]
* If \(N_{i}\in\mathcal{C}_{<\gamma^{*}}\), then let \(\eta_{i}\) be the minimum of the set \[\mathscr{E}\cap[\max\{\eta_{i-1},\sup(N_{i}\cap\mathsf{Ord})\},\gamma^{*})\] and let \(t_{i}:=t_{i-1}\).
* Otherwise, we have \(N_{i}\in\mathcal{C}_{\geq\gamma^{*}}\). Then we can apply Lemma [2.5.7] to
* \(\underline{p}:=t_{i-1}\), and obtain ordinals \(\mu<\nu\leq\gamma^{*}\) and a condition \(t_{i}\leq_{\mathbb{P}_{\gamma^{*}}}t_{i-1}\) satisfying \[\mathcal{M}_{t_{i}}\cap\mathcal{C}_{\gamma^{*}}=\mathcal{M}_{t_{i-1}}\cap\mathcal{C}_{\gamma^{*}},\] \[(\forall\xi\in\mathscr{E}\cap[\mu,\gamma^{*}))(t_{i}\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}\sup(N_{i}^{\dot{G}_{\xi}}\cap\gamma^{*})=\nu).\]
* If \(\nu=\gamma^{*}\), then set \[\eta_{i}:=\max\{\eta_{i-1},\min(\mathscr{E}\cap[\mu,\gamma^{*}))\}.\] If \(\nu<\gamma^{*}\), then set \[\eta_{i}:=\max\{\eta_{i-1},\min(\mathscr{E}\cap[\mu,\gamma^{*})),\nu\}.\]
* By Lemma [1.3.9], we have \(\nu\in\mathscr{E}\) and consequently \(\eta_{i}\in\mathscr{E}\).
* This concludes the recursion. Note that \(\mathcal{M}_{t_{k-1}}-\mathcal{M}_{q}\subseteq\mathcal{C}_{<\gamma^{*}}\), so there exists \(\eta_{k}\in\mathscr{E}\cap[\eta_{k-1},\gamma^{*})\) such that for all \(N\in\mathcal{M}_{t_{k-1}}-\mathcal{M}_{q}\) and for all \(\xi\in\mathscr{E}\cap[\eta_{k},\gamma^{*})\), \[\Vdash_{\mathbb{P}_{\xi}}\sup(N^{\dot{G}_{\xi}}\cap\gamma^{*})\leq\sup(N\cap\mathsf{Ord})\leq\eta_{k}.\]
* The claim now follows by replacing \(q\) with \(t_{k-1}q\) and by replacing \(\beta\) with \(\eta_{k}\).
* Up to increasing \(\beta\), we may assume that \(\mathsf{dom}(w_{q})\cap\gamma^{*}\subseteq\beta\).
* For all \(\xi\in\mathscr{E}\cap[\beta,\gamma^{*})\), condition \(q\upharpoonright\xi\) decides the value of \(\mathcal{M}_{q}^{\gamma^{*}}[\hat{G}_{\xi}]\). Up to increasing \(\beta\), we may assume that all these values are equal to some fixed \(\mathcal{M}\in[\mathcal{C}_{\gamma^{*}}]^{<\omega}\). Hence, we can ensure the following:
* \((\forall\xi\in\mathscr{E}\cap[\beta,\gamma^{*}))(q\upharpoonright\xi\Vdash_{ \mathbb{P}_{\xi}}\mathcal{M}_{q}^{\gamma^{*}}[\hat{G}_{\xi}]=\mathcal{M})\),
* \(q\upharpoonright\beta\Vdash_{\mathbb{P}_{\beta}}(\mathcal{M}\) is a weak \(\lhd_{\gamma^{*}}\)-chain w.r.t. \(\dot{G}_{\beta})\),
* \(M\downarrow\gamma^{*}\in\mathcal{M}\).
* By \(3^{\circ}\) and Lemma 1.3.9, we have that \[q\upharpoonright\beta\Vdash_{\mathbb{P}_{\beta}}\sup(\mathscr{E}\cap M^{G_{ \beta}}\cap\gamma)=\gamma^{*}>\beta.\] This means that we may strengthen \(q\) and increase \(\beta\) as to ensure \[q\upharpoonright\beta\Vdash_{\mathbb{P}_{\beta}}\beta\in M^{G_{\beta}}_{< \beta}.\]
* **Claim.** There exist \(r\in D\) and \(s_{0}\in\mathbb{P}_{\beta}\) such that 1. \((\forall N\in\mathcal{M}:N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V})(\exists N^{+}\in\mathcal{M}_{r}\cap\mathcal{C}_{\gamma})(N=N^{+}\downarrow\gamma^{*})\), 2. \(s_{0}\leq q\upharpoonright\beta,r\upharpoonright\beta\), 3. \(s_{0}\Vdash_{\mathbb{P}_{\beta}}r\in M^{G_{\beta}}\).
Proof.:
* Let \(G_{\beta}\) be a \(V^{\mathbb{P}_{\beta}}\)-generic containing \(q\upharpoonright\beta\) and let us work inside \(V[G_{\beta}]\).
* For \(N\in\mathcal{M}\) with \(N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\), we may apply Lemma 2.5.8 to 1. \(\underline{M}:=M^{G_{\beta}}\), 2. \(\underline{\gamma}:=\gamma\), 3. \(\underline{N}:=N\), and obtain the unique \(N^{+}\in M^{G_{\beta}}\cap\mathcal{C}_{\gamma}\) satisfying \(N^{+}\downarrow\gamma^{*}=N\). Let \[\mathcal{N}:=\{N^{+}:N\in\mathcal{M},\,N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\}\in M^{G_{\beta}}.\]
* Let \(D^{*}:=\{r\in D:r\upharpoonright\beta\in G_{\beta},\,\,\mathcal{N}\subseteq \mathcal{M}_{r}\}\). We want to find \(r\in D^{*}\cap M^{G_{\beta}}\).
* Since \(\beta\in M^{G_{\beta}}_{<\beta}\), model \(M[G_{\beta}]\) is defined and \(D^{*}\in M[G_{\beta}]\). By elementarity, it suffices to show that \(D^{*}\neq\emptyset\).
* **Subclaim.**\(\mathcal{N}\) is a weak \(\lhd_{\gamma}\)-chain w.r.t. \(G_{\beta}\).
Proof.:
* Let \(N,P\in\mathcal{M}\) be arbitrary satisfying \(N\cap\omega_{1}^{V}\leq P\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\). We want to show that
* if \(N\cap\omega_{1}^{V}=P\cap\omega_{1}^{V}\), then \(N^{+}=P^{+}\),
* if \(N\cap\omega_{1}^{V}<P\cap\omega_{1}^{V}\), then \(N^{+}\lhd_{\gamma}(P^{+})^{G_{\beta}}\). The first case is obvious, since we have \(N=P\). Let us consider the case when \(N\cap\omega_{1}^{V}<P\cap\omega_{1}^{V}\).
* We have that \(N\lhd_{\gamma^{*}}P^{G_{\beta}}\). Since \(P\cong_{\gamma^{*}}P^{+}\), we have that \(N\lhd_{\gamma^{*}}(P^{+})^{G_{\beta}}\).
* Hence, there exists \(N^{\prime}\in(P^{+})^{G_{\beta}}\) such that \(N\cong_{\gamma^{*}}N^{\prime}\).
* Since \(P^{+}\in(M\downarrow\gamma)^{G_{\beta}}\), we conclude that \[(P^{+})^{G_{\beta}}\subseteq(M\downarrow\gamma)^{G_{\beta}}.\]
* This means that \(N^{\prime}\in(M\downarrow\gamma)^{G_{\beta}}\). Then there exists \(N^{\prime\prime}\in M^{G_{\beta}}\) such that \(N^{\prime}\cong_{\gamma}N^{\prime\prime}\).
* Hence, \(N^{\prime}\downarrow\gamma=N^{\prime\prime}\downarrow\gamma\in M^{G_{\beta}}\), while \((N^{\prime}\downarrow\gamma)\downarrow\gamma^{*}=N\). We conclude that \[N^{+}=N^{\prime}\downarrow\gamma\cong_{\gamma}N^{\prime}\in(P^{+})^{G_{\beta}},\] which means \(N^{+}\lhd_{\gamma}(P^{+})^{G_{\beta}}\).
* There exists \(t\in G_{\beta}\) such that 1. \(t\leq q\upharpoonright\beta\),
* \(t\Vdash_{\mathbb{P}_{\beta}}^{V}(\mathcal{N}\) is a weak \(\lhd_{\gamma}\)-chain w.r.t. \(\dot{G}_{\beta})\),
* \(t\Vdash_{\mathbb{P}_{\beta}}^{V}(\forall P\in\hat{\mathcal{N}})(P^{\dot{G}_{\beta}}\) is active at \(\gamma^{*})\).
* Let \(w_{u}:=w_{t}\), let \(\mathcal{M}_{u}:=\mathcal{M}_{t}\cup\mathcal{N}\), and let \(u:=(w_{u},\mathcal{M}_{u})\). The previous point implies that \(u\in\mathbb{P}_{\gamma}\). We also have that \(u\upharpoonright\beta=t\in G_{\beta}\), which means that \(u\in(\mathbb{P}_{\gamma}/\mathbb{P}_{\beta})^{G_{\beta}}\).
* Since \(\{v\in D:v\upharpoonright\beta\in G_{\beta}\}\) is dense in \((\mathbb{P}_{\gamma}/\mathbb{P}_{\beta})^{G_{\beta}}\), we conclude that there exists \(v\in D\) such that \(v\upharpoonright\beta\in G_{\beta}\) and \(v\leq_{\mathbb{P}_{\gamma}}u\); in particular, \(\mathcal{N}\subseteq\mathcal{M}_{v}\) and consequently \(v\in D^{*}\).
* Hence, there exists \(r\in D^{*}\cap M[G_{\beta}]\). This means that
* \(r\in D\),
* \(r\upharpoonright\beta\in G_{\beta}\),
* \(\mathcal{N}\subseteq\mathcal{M}_{r}\).
* By Forcing Theorem, there exists \(s_{0}\in G_{\beta}\) such that
* \(s_{0}\leq q\upharpoonright\beta,r\upharpoonright\beta\),
* \(s_{0}\Vdash_{\mathbb{P}_{\beta}}r\in M^{\dot{G}_{\beta}}\).
* **Claim.** There exists \(s\in\mathbb{P}_{\gamma}\) such that \(s\leq_{\mathbb{P}_{\gamma}}q,r,s_{0}\).
Proof.:
* Let \(t:=s_{0}q\in\mathbb{P}_{\gamma}\). Note that \[\mathsf{dom}(w_{t})\cap[\beta,\gamma^{*})=\mathsf{dom}(w_{q})\cap[\beta, \gamma^{*})=\emptyset.\]
* Let \(\mathsf{dom}(w_{s}):=\mathsf{dom}(w_{t})\cup\mathsf{dom}(w_{r})\). For \(\xi\in\mathsf{dom}(w_{s})-[\beta,\gamma^{*})\), let \(w_{s}(\xi):=w_{t}(\xi)\).
* For \(\xi\in\mathsf{dom}(w_{s})\cap[\beta,\gamma^{*})=\mathsf{dom}(w_{r})-\beta\), let us denote by \(\eta\) its successor in \(\mathscr{E}\).
* Since \(s_{0}\Vdash_{\mathbb{P}_{\beta}}r\in M^{\dot{G}_{\beta}}\) and \(t\upharpoonright\xi\leq s_{0}\), we have that \(t\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}(M^{\dot{G}_{\xi}}\) is active at \(\eta)\).
* Hence, we may apply Lemma 2.5.3 to
* \(\underline{\mu}:=\xi\), \(\underline{\nu}:=\eta\),
* \(\underline{\dot{u}}:=w_{r}(\xi)\) and obtain a canonical \(\mathbb{P}_{\xi}\)-name \(w_{s}(\xi)\) for an element of \(\hat{\mathbb{Q}}_{\xi}\) such that \[t\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}w_{s}(\xi)\leq w_{r}(\xi),\] \[t\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}(\forall N\in\mathcal{M}_{t}^{\eta}[\dot{G}_{\xi}])(N\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\implies(w_{s}(\xi)\text{ is semi-}N[\dot{G}_{\xi}]^{\hat{\mathbb{Q}}_{\xi}}\text{-generic})).\]
* Let \(\mathcal{M}_{s}:=\mathcal{M}_{t}\cup\mathcal{M}_{r}\) and let \(s:=(w_{s},\mathcal{M}_{s})\). We will be done if we show \(s\in\mathbb{P}_{\gamma}\). We show by induction on \(\eta\in\mathscr{E}\cap[\beta,\gamma]\) that \(s\upharpoonright\eta\in\mathbb{P}_{\eta}\).
* **Case.**\(\eta=\beta\) Proof.: This case is obvious since \(s\upharpoonright\beta=s_{0}\).
* **Case.** Let us assume that \(\eta\in(\beta,\gamma^{*}]\) and that \(\eta\) has a predecessor \(\xi\) inside \(\mathscr{E}\). Proof. \(1^{\prime\prime}\) We need to verify that \[\xi\in\mathsf{dom}(w_{s})\implies s\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}( \forall N\in\mathcal{M}_{s}^{\eta}[\dot{G}_{\xi}])(w_{s}(\xi)\text{ is semi-}N[\dot{G}_{\xi}]^{ \mathbb{\hat{Q}}_{\xi}}\text{-generic})\] and that \[s\upharpoonright(\xi+1)\Vdash_{\mathbb{P}_{\xi+1}}(\mathcal{M}_{s}^{\eta}[ \dot{G}_{\xi}]\text{ is a weak }\lhd_{\eta}\text{-chain w.r.t. }\dot{G}_{\xi+1}).\]
* Let \(G_{\xi}\rightsquigarrow V^{\mathbb{P}_{\xi}}\) be an arbitrary generic containing \(s\upharpoonright\xi\) and let \(\mathbb{Q}_{\xi}:=\dot{\mathbb{Q}}_{\xi}^{G_{\xi}}\). Note that \[\mathcal{M}_{s}^{\eta}[G_{\xi}]=\mathcal{M}_{t}^{\eta}[G_{\xi}]\cup\mathcal{M} _{r}^{\eta}[G_{\xi}].\]
* Since \(\mathcal{M}_{t}^{\eta}[G_{\xi}]\subseteq\mathcal{M}_{t}\cap\mathcal{C}_{\geq \eta}=\mathcal{M}_{q}\), we have that \(\mathcal{M}_{t}^{\eta}[G_{\xi}]=\mathcal{M}_{q}^{\eta}[G_{\xi}]\) and consequently \[\mathcal{M}_{s}^{\eta}[G_{\xi}]=\mathcal{M}_{q}^{\eta}[G_{\xi}]\cup\mathcal{M }_{r}^{\eta}[G_{\xi}].\]
* **Subclaim.**\(\mathcal{M}_{q}^{\eta}[G_{\xi}]\subseteq\mathcal{M}\downarrow\eta\) _Proof._
* Let \(N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]\) be arbitrary. Then there exists \(P\in\mathcal{M}_{q}\cap\mathcal{C}_{\geq\eta}\) such that \(\xi\in P^{G_{\xi}}\) and \(N=P\downarrow\eta\).
* We now have \(\sup(P^{G_{\xi}}\cap\gamma)>\xi\geq\beta\) and it follows from Claim 6\({}^{\circ}\) that \[\sup(P^{G_{\xi}}\cap\gamma)=\gamma^{*}.\]
* This means that \(P\downarrow\gamma^{*}\in\mathcal{M}_{q}^{\gamma^{*}}[G_{\xi}]=\mathcal{M}\), while \(N=(P\downarrow\gamma^{*})\downarrow\eta\).
* **Subclaim.**\(\{N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]:N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V} \}\subseteq\mathcal{M}_{r}^{\eta}[G_{\xi}]\) _Proof._
* Let \(N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]\) be such that \(N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\).
* By Subclaim 4\({}^{\prime\prime}\), there exists \(P\in\mathcal{M}\) such that \(N=P\downarrow\eta\).
* We have \(P\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\), so Claim 10\({}^{\circ}\)a implies that there exists \(Q\in\mathcal{M}_{r}\cap\mathcal{C}_{\gamma}\) such that \(P=Q\downarrow\gamma^{*}\).
* Note that \[\xi\in N^{G_{\xi}}\cong_{\eta}P^{G_{\xi}}\cong_{\gamma^{*}}Q^{G_{\xi}},\] so \(Q\downarrow\eta\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\), while \(N=Q\downarrow\eta\).
* If \(\xi\in\mathsf{dom}(w_{s})\), then for all \(N\in\mathcal{M}_{s}^{\eta}[G_{\xi}]\), we have that \(w_{s}(\xi)^{G_{\xi}}\) is semi-\(N[G_{\xi}]^{\mathbb{Q}_{\xi}}\)-generic. _Proof._
* Suppose first that \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\). Since \(\xi\in[\beta,\gamma^{*})\), we have \(\xi\in\mathsf{dom}(w_{r})-\beta\) and consequently that \(w_{r}(\xi)^{G_{\xi}}\) is semi-\(N[G_{\xi}]^{\mathbb{Q}_{\xi}}\)-generic. The conclusion now follows from the fact that \(w_{s}(\xi)^{G_{\xi}}\leq w_{r}(\xi)^{G_{\xi}}\).
* Suppose next that \(N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]\) and that \(N\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\). Then \(w_{s}(\xi)^{G_{\xi}}\) is semi-\(N[G_{\xi}]^{\mathbb{Q}_{\xi}}\)-generic by the choice of \(w_{s}(\xi)\).
* Suppose finally that \(N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]\) and that \(N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\). By Subclaim 5\({}^{\prime\prime}\), we have that \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\), which means that \(w_{r}(\xi)^{G_{\xi}}\) is semi-\(N[G_{\xi}]^{\mathbb{Q}_{\xi}}\)-generic. The desired conclusion now follows from the fact that \(w_{s}(\xi)^{G_{\xi}}\leq w_{r}(\xi)^{G_{\xi}}\).
* Let \(G_{\xi+1}\rightsquigarrow V^{\mathbb{P}_{\xi+1}}\) be an arbitrary generic extending \(G_{\xi}\) and containing \(s\upharpoonright(\xi+1)\). Then \(\mathcal{M}_{s}^{\eta}[G_{\xi}]\) is a weak \(\lhd_{\eta}\)-chain w.r.t. \(G_{\xi+1}\). _Proof._
* Let \(\mathcal{M}^{*}:=\{P\in\mathcal{M}_{q}^{\eta}[G_{\xi}]:P\cap\omega_{1}^{V} \geq M\cap\omega_{1}^{V}\}\). By Subclaim 5\({}^{\prime\prime}\), set \(\mathcal{M}_{s}^{\eta}[G_{\xi}]\) is the union of sets \(\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and \(\mathcal{M}^{*}\).
* Each of sets \(\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and \(\mathcal{M}^{*}\) is a weak \(\lhd_{\eta}\)-chain w.r.t. \(G_{\xi+1}\). Furthermore, for every \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and every \(P\in\mathcal{M}^{*}\), we have \[N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\leq P\cap\omega_{1}^{V}.\]
* Hence, it remains to verify that for every \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and every \(P\in\mathcal{M}^{*}\), we have \(N\lhd_{\eta}P^{G_{\xi+1}}\).
* Since \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and \(r\in M^{G_{\xi}}\), we have \(N\lhd_{\eta}M^{G_{\xi}}\).
* By Subclaim 4\({}^{\prime\prime}\), there exists \(Q\in\mathcal{M}\) such that \(P=Q\downarrow\eta\). Since \(Q\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\), we conclude that \(M\downarrow\gamma^{*}=Q\) or \(M\downarrow\gamma^{*}\lhd_{\gamma^{*}}Q^{G_{\xi}}\).
* Hence, there exists \(R\in Q^{G_{\xi}}\cup\{Q\}\) such that \(M\cong_{\gamma^{*}}R\).
* We conclude \(N\lhd_{\eta}R^{G_{\xi}}\subseteq Q^{G_{\xi}}\).
* Thus, \[N\lhd_{\eta}Q^{G_{\xi}}\cong_{\eta}P^{G_{\xi}}\subseteq P^{G_{\xi+1}},\] as required.
* This concludes the proof of Case \(8^{\prime}\)
* **Case.** Let us assume that \(\eta\in(\beta,\gamma^{*}]\) and that \(\eta\) is a limit point of \(\mathscr{E}\). _Proof._
* We need to show that there exists \(\beta_{0}<\eta\) such that \[(\forall\xi\in\mathscr{E}\cap(\beta_{0},\eta))(s\restriction(\xi+1)\Vdash_{\mathbb{P}_{\xi+1}}(\mathcal{M}_{s}^{\eta}[\dot{G}_{\xi}]\text{ is a weak }\lhd_{\eta}\text{-chain w.r.t. }\dot{G}_{\xi+1})).\] We will show that \(\beta_{0}:=\beta\) works.
* Let \(\xi\in\mathscr{E}\cap(\beta,\eta)\) and let \(G_{\xi}\rightsquigarrow V^{\mathbb{P}_{\xi}}\) satisfying \(s\restriction\xi\in G_{\xi}\).
* We have \(\mathcal{M}_{s}^{\eta}[G_{\xi}]=\mathcal{M}_{t}^{\eta}[G_{\xi}]\cup\mathcal{M}_{r}^{\eta}[G_{\xi}]=\mathcal{M}_{q}^{\eta}[G_{\xi}]\cup\mathcal{M}_{r}^{\eta}[G_{\xi}]\).
* **Subclaim.**\(\mathcal{M}_{q}^{\eta}[G_{\xi}]\subseteq\mathcal{M}\downarrow\eta\) _Proof._ Let \(N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]\) be arbitrary. Then there exists \(P\in\mathcal{M}_{q}\cap\mathcal{C}_{\geq\eta}\) such that \[\sup(P^{G_{\xi}}\cap\eta)=\eta\] and \(N=P\downarrow\eta\).
* We now have \(\sup(P^{G_{\xi}}\cap\gamma)>\beta\) and it follows from Claim \(6^{\circ}\) that \[\sup(P^{G_{\xi}}\cap\gamma)=\gamma^{*}.\]
* This means that \(P\downarrow\gamma^{*}\in\mathcal{M}_{q}^{\gamma^{*}}[G_{\xi}]=\mathcal{M}\), while \(N=(P\downarrow\gamma^{*})\downarrow\eta\).
* **Subclaim.**\(\{N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]:N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V} \}\subseteq\mathcal{M}_{r}^{\eta}[G_{\xi}]\)
* Let \(N\in\mathcal{M}_{q}^{\eta}[G_{\xi}]\) be such that \(N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\).
* By Subclaim \(4^{\prime\prime}\), there exists \(P\in\mathcal{M}\) such that \(N=P\downarrow\eta\).
* We have \(P\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\), so Claim \(10^{\circ}\)a implies that there exists \(Q\in\mathcal{M}_{r}\cap\mathcal{C}_{\gamma}\) such that \(P=Q\downarrow\gamma^{*}\).
* Since \[N^{G_{\xi}}\cong_{\eta}P^{G_{\xi}}\cong_{\gamma^{*}}Q^{G_{\xi}},\] we have that \[\sup(Q^{G_{\xi}}\cap\eta)=\sup(N^{G_{\xi}}\cap\eta)=\eta,\] i.e. \(Q\downarrow\eta\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\).
* The conclusion now follows from the fact that \(N=Q\downarrow\eta\).
* Let \(G_{\xi+1}\rightsquigarrow V^{\mathbb{P}_{\xi+1}}\) be an arbitrary generic extending \(G_{\xi}\) and containing \(s\restriction(\xi+1)\). Then \(\mathcal{M}_{s}^{\eta}[G_{\xi}]\) is a weak \(\lhd_{\eta}\)-chain w.r.t. \(G_{\xi+1}\).
* Let \(\mathcal{M}^{*}:=\{P\in\mathcal{M}_{q}^{\eta}[G_{\xi}]:P\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\}\). By Subclaim \(5^{\prime\prime}\), the set \(\mathcal{M}_{s}^{\eta}[G_{\xi}]\) is the union of the sets \(\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and \(\mathcal{M}^{*}\).
* Each of sets \(\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and \(\mathcal{M}^{*}\) is a weak \(\lhd_{\eta}\)-chain w.r.t. \(G_{\xi+1}\). Furthermore, for every \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and every \(P\in\mathcal{M}^{*}\), we have \[N\cap\omega_{1}^{V}<M\cap\omega_{1}^{V}\leq P\cap\omega_{1}^{V}.\]
* Hence, it remains to verify that for every \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and every \(P\in\mathcal{M}^{*}\), we have \(N\lhd_{\eta}P^{G_{\xi+1}}\).
* Since \(N\in\mathcal{M}_{r}^{\eta}[G_{\xi}]\) and \(r\in M^{G_{\xi}}\), we have \(N\lhd_{\eta}M^{G_{\xi}}\).
* By Subclaim 4\({}^{\prime\prime}\), there exists \(Q\in\mathcal{M}\) such that \(P=Q\downarrow\eta\). Since \(Q\cap\omega_{1}^{V}\geq M\cap\omega_{1}^{V}\), we conclude that \(M\downarrow\gamma^{*}=Q\) or \(M\downarrow\gamma^{*}\lhd_{\gamma^{*}}Q^{G_{\xi}}\).
* Hence, there exists \(R\in Q^{G_{\xi}}\cup\{Q\}\) such that \(M\cong_{\gamma^{*}}R\).
* We conclude \(N\lhd_{\eta}R^{G_{\xi}}\subseteq Q^{G_{\xi}}\).
* Thus, \[N\lhd_{\eta}Q^{G_{\xi}}\cong_{\eta}P^{G_{\xi}}\subseteq P^{G_{\xi+1}},\] as required.
* This concludes the proof of Case 9\({}^{\prime}\).
* Let us assume that \(\eta\in(\gamma^{*},\gamma]\). _Proof._
* Since \(s_{0}\Vdash_{\mathbb{P}_{\beta}}r\in M^{G_{\beta}}\cap\mathbb{P}_{\gamma}\), we have that \(\mathsf{dom}(w_{r})\subseteq\gamma^{*}\) and \(\mathcal{M}_{r}\subseteq\mathcal{C}_{<\gamma^{*}}\). Hence, \[w_{s\upharpoonright\eta}=(w_{s}\upharpoonright\gamma^{*})\cup(w_{t}\upharpoonright[\gamma^{*},\eta))=(w_{s}\upharpoonright\gamma^{*})\cup(w_{q}\upharpoonright[\gamma^{*},\eta)),\] \[\mathcal{M}_{s\upharpoonright\eta}=(\mathcal{M}_{s}\cap\mathcal{C}_{\leq\gamma^{*}})\cup(\mathcal{M}_{t}\downarrow\eta)=(\mathcal{M}_{s}\cap\mathcal{C}_{\leq\gamma^{*}})\cup(\mathcal{M}_{q}\downarrow\eta).\]
* Suppose first that \(\eta\) is the successor of some \(\xi\) inside \(\mathscr{E}\). We have to show that \[\xi\in\mathsf{dom}(w_{s})\implies s\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}(\forall N\in\mathcal{M}_{s}^{\xi}[\dot{G}_{\xi}])(w_{s}(\xi)\text{ is semi-}N[\dot{G}_{\xi}]^{\mathbb{Q}_{\xi}}\text{-generic})\] (11) and that \[s\upharpoonright(\xi+1)\Vdash_{\mathbb{P}_{\xi+1}}(\mathcal{M}_{s}^{\eta}[\dot{G}_{\xi}]\text{ is a weak }\lhd_{\eta}\text{-chain w.r.t. }\dot{G}_{\xi+1}).\] (12)
* By 1\({}^{\prime\prime}\), we have that \(\Vdash_{\mathbb{P}_{\xi}}\mathcal{M}_{s}^{\xi}[\dot{G}_{\xi}]=\mathcal{M}_{q}^{\xi}[\dot{G}_{\xi}]\) and \(w_{s}(\xi)=w_{q}(\xi)\). Properties (11) and (12) now follow from the fact that \(q\) is a condition and that \(s\upharpoonright(\xi+1)\leq_{\mathbb{P}_{\xi+1}}q\upharpoonright(\xi+1)\).
* Suppose now that \(\eta\) is a limit point of \(\mathscr{E}\). We have to show that there exists \(\bar{\eta}<\eta\) such that for all \(\xi\in\mathscr{E}\cap(\bar{\eta},\eta)\), \[s\upharpoonright(\xi+1)\Vdash_{\mathbb{P}_{\xi+1}}(\mathcal{M}_{s}^{\eta}[ \dot{G}_{\xi}]\text{ is a weak }\lhd_{\eta}\text{-chain w.r.t. }\dot{G}_{\xi+1}).\] (13)
* We claim that \(\bar{\eta}:=\gamma^{*}\) works. Namely, for all \(\xi\in\mathscr{E}\cap(\gamma^{*},\eta)\), we have that \[\Vdash_{\mathbb{P}_{\xi}}\mathcal{M}_{s}^{\eta}[\dot{G}_{\xi}]=\mathcal{M}_{q}^{\eta}[\dot{G}_{\xi}],\] \[s\upharpoonright(\xi+1)\leq_{\mathbb{P}_{\xi+1}}q\upharpoonright(\xi+1),\] so the conclusion again follows from the fact that \(q\) is a condition.
* This concludes the proof of Claim 11\({}^{\circ}\).
* It is now clear that \(s\) and \(r\) are as required in 2\({}^{\circ}\), concluding the argument.
### Semi-properness and Chain Condition
1. **Lemma.** Let \(\gamma\in\mathscr{E}\), let \(M\in\mathcal{C}_{>\gamma}\), and let \(p\in\mathbb{P}_{\gamma}\) satisfy \(M\downarrow\gamma\in\mathcal{M}_{p}\). Then \[p\Vdash_{\mathbb{P}_{\gamma}}M^{\dot{G}_{\gamma}}\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}.\]
Proof.:
1. This is shown by induction on \(\gamma\).
2. **Case.**\(\gamma=\min\mathscr{E}\) Proof.: By the Transfer Theorem, condition \(p\) is locally \(M^{\mathbb{P}_{\gamma}}\)-generic, which in this case translates to \(M^{\mathbb{P}_{\gamma}}\)-generic.
3. **Case.** Suppose that \(\gamma\) is a successor of some \(\beta\) inside \(\mathscr{E}\). Proof.: 1. Let \(G_{\gamma}\rightsquigarrow V^{\mathbb{P}_{\gamma}}\) be arbitrary containing \(p\) and let us verify that \[M^{G_{\gamma}}\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}.\] 2. By the IH and the Transfer Theorem, we have that \[M^{G_{\beta}}\cap\omega_{1}^{V}=M\cap\omega_{1}^{V},\quad M^{G_{\gamma}}=M^{G_{\beta+1}}.\] Hence, it suffices to verify that \[M^{G_{\beta+1}}\cap\omega_{1}^{V}=M^{G_{\beta}}\cap\omega_{1}^{V}.\] 3. If \(\beta\not\in M^{G_{\beta}}_{<\beta}\), then \[M^{G_{\beta}}_{<\beta}=M^{G_{\beta}}=M^{G_{\beta+1}}.\] Let us then assume that \(\beta\in M^{G_{\beta}}_{<\beta}\). 4. Since conditions \(q\in\mathbb{P}_{\gamma}\) such that \(\beta\in\mathsf{dom}(w_{q})\) are dense below \(p\), there exists \(q\in G_{\gamma}\) such that \(q\leq p\) and \(\beta\in\mathsf{dom}(w_{q})\). 5. We have \(M\downarrow\gamma\in\mathcal{M}_{q}^{\gamma}[G_{\beta}]\). Since \(\beta\in\mathsf{dom}(w_{q})\), we conclude that \(w_{q}(\beta)^{G_{\beta}}\) is semi-\((M\downarrow\gamma)[G_{\beta}]^{\mathbb{Q}_{\beta}}\)-generic. 6. Since \(q\in G_{\gamma}\), we conclude that \[(M\downarrow\gamma)[G_{\beta+1}]\cap\omega_{1}^{V[G_{\beta}]}=(M\downarrow\gamma)[G_{\beta}]\cap\omega_{1}^{V[G_{\beta}]}.\] This suffices for the conclusion.
4. **Case.**\(\gamma\) is a limit point of \(\mathscr{E}\). Proof.: By the IH and the Transfer Theorem, we have that \[p\Vdash_{\mathbb{P}_{\gamma}}M^{\dot{G}_{\gamma}}=M^{\dot{G}_{\gamma}}_{<\gamma}.\] Hence, \[p\Vdash_{\mathbb{P}_{\gamma}}M^{\dot{G}_{\gamma}}\cap\omega_{1}^{V}=\bigcup_{\alpha\in\mathscr{E}\cap\gamma}(M^{\dot{G}_{\alpha}}\cap\omega_{1}^{V})=M\cap\omega_{1}^{V}\] by another application of the IH.
5. This concludes the induction.
**Proposition**.: For all \(\gamma\in\mathscr{E}^{*}\), poset \(\mathbb{P}_{\gamma}\) is semi-proper.
Proof.:
1. Suppose first that \(\gamma\in\mathscr{E}\). It suffices to show that for every \(M\in\mathcal{C}_{>\gamma}\) and every \(p\in M\), there exists \(q\leq p\) which is semi-\(M^{\mathbb{P}_{\gamma}}\)-generic.
2. By Lemma [2.5.1], there exists \(q\leq p\) such that \(M\downarrow\gamma\in\mathcal{M}_{q}\). By the previous lemma, condition \(q\) is semi-\(M^{\mathbb{P}_{\gamma}}\)-generic.
3. Suppose now that \(\gamma\in\mathscr{E}^{+}\). Then \[\mathbb{P}_{\gamma}\simeq\mathbb{P}_{\gamma-1}*\dot{\mathbb{Q}}_{\gamma-1},\] which suffices to conclude semi-properness.
**Lemma**.: Suppose that
1. \(\gamma\in\mathscr{E}\),
2. \(\mathsf{cof}(\gamma)=\omega_{1}\),
3. \(p\in\mathbb{P}_{\gamma}\).
Then there exist \(q\) and \(\bar{\gamma}\) such that
1. \(q\leq_{\mathbb{P}_{\gamma}}p\),
2. \(\bar{\gamma}\in\mathscr{E}\cap\gamma\),
3. \(\mathsf{dom}(w_{q})\subseteq\bar{\gamma}\),
4. for all \(\alpha\in\mathscr{E}\cap[0,\gamma]\), for all \(N\in\mathcal{M}_{q}\), \(q\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}\sup(N^{G_{\alpha}}\cap \gamma)\leq\bar{\gamma}\).
Proof.:
1. We note that \(\gamma\) is necessarily a limit point of \(\mathscr{E}\).
2. Let \(M\in\mathcal{C}_{>\gamma}\) be such that \(\gamma,p\in M\) and let \(q\leq p\) be such that \(\mathsf{dom}(w_{q})=\mathsf{dom}(w_{p})\) and \(\mathcal{M}_{q}=\mathcal{M}_{p}\cup\{M\downarrow\gamma\}\) (cf. Lemma [2.5.1]).
3. Let \(\bar{\gamma}:=\sup(\gamma\cap M)<\gamma\). We have that \(\mathsf{cof}(\bar{\gamma})=\omega\), that \(\bar{\gamma}\) is a limit point of \(\mathscr{E}\), and that \(\mathsf{dom}(w_{q})=\mathsf{dom}(w_{p})\subseteq\bar{\gamma}\).
4. **Claim**.: \(q\Vdash_{\mathbb{P}_{\gamma}}\sup(M^{\dot{G}_{\gamma}}\cap\gamma)=\bar{\gamma}\)
Proof.:
1. Let \(G_{\gamma}\rightsquigarrow V^{\mathbb{P}_{\gamma}}\) be an arbitrary generic containing \(q\) and let us show that \[\sup(M^{G_{\gamma}}\cap\gamma)=\bar{\gamma}.\]
2. Let \(f\in M\) map \(\omega_{1}^{V}\) cofinally into \(\gamma\).
3. By Lemma [2.7.1], we have that \(M^{G_{\gamma}}\cap\omega_{1}^{V}=M\cap\omega_{1}^{V}\).
4. Hence, \[\sup(M^{G_{\gamma}}\cap\gamma)=\sup(f[M^{G_{\gamma}}\cap\omega_{1}^{V}])=\sup (f[M\cap\omega_{1}^{V}])=\sup(M\cap\gamma)=\bar{\gamma}.\]
5. Consequently, for all \(\alpha\in\mathscr{E}\cap[0,\gamma]\) and for all \(N\in\mathcal{M}_{q}\), we have \[q\upharpoonright\alpha\Vdash_{\mathbb{P}_{\alpha}}\sup(N^{\dot{G}_{\alpha}}\cap \gamma)\leq\bar{\gamma}.\]
**Theorem**.: \(\mathbb{P}_{\kappa}\) has \(\kappa\)-c.c.
Proof.:
1. Assume otherwise.
2. Let \(S:=\{\alpha\in\mathscr{E}:\mathsf{cof}(\alpha)=\omega_{1}\}\). Then there exists an anti-chain \((p_{\alpha}:\alpha\in S)\) in \(\mathbb{P}_{\kappa}\).
3. By the previous lemma, for all \(\alpha\in S\), there exist \(q_{\alpha}\) and \(\bar{\alpha}\) such that 1. \(q_{\alpha}\leq_{\mathbb{P}_{\alpha}}p_{\alpha}\upharpoonright\alpha\), 2. \(\bar{\alpha}\in\mathscr{E}\cap\alpha\), 3. \(\mathsf{dom}(w_{q_{\alpha}})\subseteq\bar{\alpha}\), 4. for all \(\xi\in\mathscr{E}\cap[0,\alpha]\), for all \(N\in\mathcal{M}_{q_{\alpha}}\), \(q_{\alpha}\upharpoonright\xi\Vdash_{\mathbb{P}_{\xi}}\sup(N^{\dot{G}_{\xi}}\cap\alpha)\leq\bar{\alpha}\).
4. By pressing down, there exist a stationary \(S_{1}\subseteq S\) and \(\gamma\in\mathscr{E}\) such that for all \(\alpha\in S_{1}\), we have \(\bar{\alpha}=\gamma\).
5. There exist \(\alpha,\beta\in S_{1}\) such that 1. \(\gamma<\alpha<\beta\), 2. \(q_{\alpha}\in\mathbb{P}_{\beta}\), 3. \(q_{\alpha}\upharpoonright\gamma=q_{\beta}\upharpoonright\gamma\).
6. Let \(q:=q_{\alpha}\upharpoonright\gamma=q_{\beta}\upharpoonright\gamma\) and let \[w_{r} := w_{q}\cup(w_{p_{\alpha}}\upharpoonright[\gamma,\beta))\cup(w_{p_{\beta}}\upharpoonright[\beta,\kappa)),\] \[\mathcal{M}_{r} := \mathcal{M}_{q}\cup\mathcal{M}_{p_{\alpha}}\cup\mathcal{M}_{p_{\beta}},\] \[r := (w_{r},\mathcal{M}_{r}).\] It is now easily seen that \(r\in\mathbb{P}_{\kappa}\) and \(r\leq p_{\alpha}\), \(p_{\beta}\), which contradicts \(p_{\alpha}\bot_{\mathbb{P}_{\kappa}}p_{\beta}\).
**Theorem**.: \(\mathbb{P}_{\kappa}\) is semi-proper.
Proof.:
1. Let \(\theta\) be large enough regular, let \(M\prec(H_{\theta},\in,\kappa,U)\) be countable, let \(p\in\mathbb{P}_{\kappa}\cap M\), and let us find \(q\leq_{\mathbb{P}_{\kappa}}p\) which is semi-\(M^{\mathbb{P}_{\kappa}}\)-generic.
2. Let \(\bar{\kappa}:=\sup(\kappa\cap M)\in\mathscr{E}\cap\kappa\), let \(N:=M\cap V_{\kappa}\in\mathcal{C}_{\bar{\kappa}}\), and let \(q\in\mathbb{P}_{\bar{\kappa}}\) be such that 1. \(q\leq p\), 2. \(\mathsf{dom}(w_{q})=\mathsf{dom}(w_{p})\), 3. \(\mathcal{M}_{q}=\mathcal{M}_{p}\cup\{N\}\) (cf. Lemma [2.5.1]; note that \(p\in\mathbb{P}_{\bar{\kappa}}\)). We want to show that \(q\) is semi-\(M^{\mathbb{P}_{\kappa}}\)-generic.
3. Let \(\tau\in M^{\mathbb{P}_{\kappa}}\) with \(\Vdash_{\mathbb{P}_{\kappa}}\tau<\omega_{1}^{V}\) be arbitrary. We want to show that \(q\Vdash_{\mathbb{P}_{\kappa}}\tau\in M\).
4. There exists \(A\in M\) such that 1. \(A\) is a maximal anti-chain of \(\mathbb{P}_{\kappa}\), 2. for all \(r\in A\), there exists a unique \(\alpha_{r}<\omega_{1}^{V}\) such that \(r\Vdash_{\mathbb{P}_{\kappa}}\tau=\alpha_{r}\).
5. By the previous theorem, there exists \(\gamma\in\mathscr{E}\cap M\) such that \((w_{q},\mathcal{M}_{p})\in\mathbb{P}_{\gamma}\) and \(A\subseteq\mathbb{P}_{\gamma}\).
6. We can now find \(\sigma\in N^{\mathbb{P}_{\gamma}}\) such that for all \(r\in A\), we have \(r\Vdash_{\mathbb{P}_{\gamma}}\sigma=\alpha_{r}\). Note that then \(\Vdash_{\mathbb{P}_{\kappa}}\tau=\sigma\) and consequently \(\Vdash_{\mathbb{P}_{\gamma}}\sigma<\omega_{1}^{V}\).
7. Let \(q_{\tau}:=(w_{q},\mathcal{M}_{p}\cup\{N\downarrow\gamma\})\). It is clear that \(q_{\tau}\in\mathbb{P}_{\gamma}\) and \(q\leq_{\mathbb{P}_{\kappa}}q_{\tau}\).
8. By Lemma [2.7.1], we have that \(q_{\tau}\) is semi-\(N^{\mathbb{P}_{\gamma}}\)-generic. In particular, \(q_{\tau}\Vdash_{\mathbb{P}_{\gamma}}\sigma\in N\).
9. Hence, \[q\leq_{\mathbb{P}_{\kappa}}q_{\tau}\Vdash_{\mathbb{P}_{\kappa}}\tau=\sigma\in N \subseteq M,\] as required.
## 3 Saturating \(\mathsf{NS}_{\omega_{1}}\)
### Careful Collapse
* **Notation.** For \(M\) a virtual model, we denote \(\delta_{M}:=\min(\mathsf{Ord}-M)\).
* **Definition.** Suppose that
    1. \(\theta>\omega\) is regular,
    2. \(U\) is a stationary subset of \([H_{\theta}]^{\omega}\).
_The collapsing poset_ \(\mathsf{Col}_{U}\) _guided by_ \(U\) is defined as follows.
    1. \(p\in\mathsf{Col}_{U}\) if and only if \(p=(\mathcal{M}_{p},d_{p})\), where
        1. for all \(M\in\mathcal{M}_{p}\), either \(M\in U\) or there does not exist \(N\in U\) satisfying \(M\subseteq N\) and \(\delta_{N}=\delta_{M}\),
        2. \(\mathcal{M}_{p}\) is a finite \(\in\)-chain,
        3. \(d_{p}:\mathcal{M}_{p}\to[H_{\theta}]^{<\omega}\),
        4. for all \(M,N\in\mathcal{M}_{p}\), if \(M\in N\), then \(d_{p}(M)\subseteq N\).
    2. \(p\leq q\) in \(\mathsf{Col}_{U}\) if and only if \(\mathcal{M}_{p}\supseteq\mathcal{M}_{q}\) and, for all \(M\in\mathcal{M}_{q}\), \(d_{p}(M)\supseteq d_{q}(M)\).
* **Lemma.** Suppose that 1. \(\theta>\omega\) is regular, 2. \(U\) is a stationary subset of \([H_{\theta}]^{\omega}\), 3. \(x\in H_{\theta}\), 4. \(D:=\{p\in\mathsf{Col}_{U}:\exists M\in\mathcal{M}_{p},x\in M\}\). Then \(D\) is dense in \(\mathsf{Col}_{U}\).
Proof.:
* Let \(p_{0}\in\mathsf{Col}_{U}\) be arbitrary and let us find \(p\in D\) satisfying \(p\leq p_{0}\).
* Since \(U\) is stationary, there exists \(M\in U\) such that \(x,p_{0}\in M\).
* Then \(p:=(\mathcal{M}_{p_{0}}\cup\{M\},d_{p_{0}}\cup\{(M,\emptyset)\})\) is as required.
**Lemma.** Suppose that 1. \(\theta>\omega\) is regular, 2. \(U\) is a stationary subset of \([H_{\theta}]^{\omega}\), 3. \(G\rightsquigarrow V^{\mathsf{Col}_{U}}\), 4. \(\mathcal{M}_{G}:=\cup\{\mathcal{M}_{p}:p\in G\}\). Then
* a. \(\mathcal{M}_{G}\) is an \(\in\)-chain, b. \(\mathsf{otp}(\mathcal{M}_{G},\in)\leq\omega_{1}^{V}\), c. \(\cup\mathcal{M}_{G}=H_{\theta}\).
Proof.:
* Part a. is obvious.
* Let \((M_{\xi}:\xi<\alpha)\) be the \(\in\)-increasing enumeration of \(\mathcal{M}_{G}\). The following mapping is strictly increasing: \[\alpha\to\omega_{1}^{V}:\xi\mapsto\delta_{M_{\xi}}.\] This shows part b.
* Lemma [3.1.3] immediately implies part c.
* **Definition**.: Suppose that 1. \(\theta>\omega\) is regular, 2. \(U\) is a stationary subset of \([H_{\theta}]^{\omega}\),
3. \(g\rightsquigarrow V^{\mathsf{Col}_{U}}\).
Then we define
* \(\mathcal{M}_{g}:=\cup\{\mathcal{M}_{p}:p\in g\}\),
* \(\alpha_{g}:=\mathsf{otp}(\mathcal{M}_{g},\in)\),
* for all \(\xi<\alpha_{g}\), \(\mathcal{M}_{g}(\xi)\) is the \(\xi^{\text{th}}\) element of \((\mathcal{M}_{g},\in)\).
* **Proposition**.: Suppose that 1. \(\theta>\omega\) is regular, 2. \(U\) is a stationary subset of \([H_{\theta}]^{\omega}\).
Then in \(V^{\mathsf{Col}_{U}}\), for all limit \(\eta<\alpha_{g}\), \(\mathcal{M}_{g}(\eta)=\bigcup_{\xi<\eta}\mathcal{M}_{g}(\xi)\).
Proof.:
* Assume otherwise. Then there exist \(p\), \(\alpha\), \(\eta\), \(x\), \(M\) such that 1. \(p\Vdash\alpha_{g}=\alpha\), 2. \(\eta\) is a limit ordinal strictly less than \(\alpha\), 3. \(x\in H_{\theta}\), 4. \(p\Vdash x\in\mathcal{M}_{g}(\eta)-\bigcup_{\xi<\eta}\mathcal{M}_{g}(\xi)\), 5. \(p\Vdash\mathcal{M}_{g}(\eta)=M\), 6. \(M\in\mathcal{M}_{p}\).
* There exist \(q\leq p\) and \(N\in\mathcal{M}_{q}\cap M\) such that \(x\in d_{q}(N)\).
* Let \(r\leq q\) and \(\xi<\eta\) be such that \(r\Vdash\mathcal{M}_{g}(\xi)=N\). We have \[r\Vdash x\in d_{r}(N)\subseteq\mathcal{M}_{g}(\xi+1),\] which is a contradiction.
* **Proposition**.: Suppose that 1. \(\theta>\omega\) is regular, 2. \(U\) is stationary in \([H_{\theta}]^{\omega}\). Then \(\mathsf{Col}_{U}\) is strongly semiproper.
Proof.:
* Let us consider arbitrary 1. \(\chi\gg\theta\), 2. \(M\prec(H_{\chi},\in,U)\) countable, 3. \(p\in\mathsf{Col}_{U}\cap M\). We want to find \(q\leq p\) which is strongly semigeneric for \((M,\mathsf{Col}_{U})\).
* **Case I.** There exists \(N\in U\) such that \(M\cap H_{\theta}\subseteq N\) and \(\delta_{N}=\delta_{M}\).
Proof.:
1. Let \(q:=(\mathcal{M}_{p}\cup\{N\},d_{p}\cup\{(N,\emptyset)\})\). We want to show that \(q\) is strongly semigeneric for \((M,\mathsf{Col}_{U})\).
2. Let \(r\leq q\) be arbitrary and let us find some \(r_{M}\in\mathsf{Col}_{U}\) such that for \(P:=\mathsf{Hull}(M,r_{M})\), 1. \(\delta_{P}=\delta_{M}\), 2. for all \(s\in\mathsf{Col}_{U}\cap P\), if \(s\leq r_{M}\), then \(s\parallel r\).
3. Let \(r_{M}:=(\mathcal{M}_{r}\cap N,d_{r}\upharpoonright N)\in N\). We have that \[\mathsf{Hull}(M,r_{M})\cap H_{\theta}=\mathsf{Hull}(M\cap H_{\theta},r_{M}) \subseteq N.\] 4. In particular, for \(P:=\mathsf{Hull}(M,r_{M})\), we have \(\delta_{P}=\delta_{M}\).
5. Property \(2^{\prime}\)b. is now routinely verified.
**Case II.** There does not exist \(N\in U\) such that \(M\cap H_{\theta}\subseteq N\) and \(\delta_{N}=\delta_{M}\).
Proof.:
1. In this case, we have that \(q:=(\mathcal{M}_{p}\cup\{M\cap H_{\theta}\},d_{p}\cup\{(M\cap H_{\theta},\emptyset)\})\in\mathsf{Col}_{U}\) and \(q\leq p\). We want to verify that \(q\) is strongly generic for \((M,\mathsf{Col}_{U})\).
2. Let \(r\leq q\) be arbitrary and let us find some \(r_{M}\in\mathsf{Col}_{U}\cap M\) such that for all \(s\in\mathsf{Col}_{U}\cap M\), if \(s\leq r_{M}\), then \(s\parallel r\).
3. It is routinely verified that \(r_{M}:=(\mathcal{M}_{r}\cap M,d_{r}\upharpoonright M)\) is as required.
4. Having verified the two cases, we reach the conclusion.
**Corollary.** Suppose that
1. \(\theta>\omega\) is regular,
2. \(U\) is stationary in \([H_{\theta}]^{\omega}\),
3. \(g\rightsquigarrow V^{\mathsf{Col}_{U}}\).
Then \(\mathsf{otp}(\mathcal{M}_{g},\in)=\alpha_{g}=\omega_{1}\).
Proof.: This follows from the facts that \(\alpha_{g}\leq\omega_{1}^{V}=\omega_{1}\) and \(H_{\theta}^{V}=\cup\mathcal{M}_{g}\).
**Corollary.** Suppose that
1. \(\theta>\omega\) is regular,
2. \(U\) is stationary in \([H_{\theta}]^{\omega}\).
Then in \(V^{\mathsf{Col}_{U}}\), \(|H_{\theta}^{V}|=\omega_{1}\).
**Lemma.** Suppose that
1. \(\theta>\omega\) is regular,
2. \(U\) is stationary in \([H_{\theta}]^{\omega}\),
3. \(\chi\gg H_{\theta}\),
4. \(M\prec(H_{\chi},\in,U)\) is countable,
5. \(p\in\mathsf{Col}_{U}\) is such that \(M\cap H_{\theta}\in\mathcal{M}_{p}\).
Then \(p\) is strongly generic for \((M,\mathsf{Col}_{U})\).
Proof.:
* Let \(q\leq p\) be arbitrary. We want to find \(q_{M}\in\mathsf{Col}_{U}\cap M\) such that for all \(r\in\mathsf{Col}_{U}\cap M\), if \(r\leq q_{M}\), then \(r\parallel q\).
* Let \(q_{M}:=(\mathcal{M}_{q}\cap M,d_{q}\upharpoonright M)\). It is easily seen that \(q_{M}\) is as required.
**Proposition**.: Suppose that
* \(\theta>\omega\) is regular,
* \(U\) is stationary in \([H_{\theta}]^{\omega}\).
Then in \(V^{\mathsf{Col}_{U}}\), \(U\) is stationary in \([H_{\theta}^{V}]^{\omega}\).
Proof.:
* Assume otherwise. Then there exist \(C\in V^{\mathsf{Col}_{U}}\) and \(p\in\mathsf{Col}_{U}\) such that
* \(\Vdash_{\mathsf{Col}_{U}}\)"\(C\) is a club in \([H_{\theta}^{V}]^{\omega}\)",
* \(p\Vdash_{\mathsf{Col}_{U}}C\cap U=\emptyset\).
* Let \(\chi\gg H_{\theta},C\). Then there exists countable \(M\prec(H_{\chi},\in,\theta,C,p)\) such that \(M\cap H_{\theta}\in U\).
* Let \(N:=M\cap H_{\theta}\in U\) and let \(q:=(\mathcal{M}_{p}\cup\{N\},d_{p}\cup\{(N,\emptyset)\})\leq p\) in \(\mathsf{Col}_{U}\). By Lemma [3.1.10], we have that \(q\) is generic for \((M,\mathsf{Col}_{U})\).
* Let \(g\rightsquigarrow V^{\mathsf{Col}_{U}}\) contain \(q\). We have that \(C\in M[g]\), so \(M[g]\cap H_{\theta}^{V}\in C\).
* Since \(q\) is generic for \((M,\mathsf{Col}_{U})\) in \(V\), we have that \(M[g]\cap H_{\theta}^{V}=N\).
* Thus, \(N\in C\cap U\), which is a contradiction.
### Sealed Predense Collections
**Definition**.: Let \(D\subseteq\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\). We say that \(D\)_is sealed_ if there exists a mapping \(f:\omega_{1}\to D\) and a club \(C\) in \(\omega_{1}\) such that \(C\subseteq\nabla f\).
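Recall that \(\nabla f\) here denotes the diagonal union of \(f\); explicitly, \[\nabla f=\{\delta<\omega_{1}:(\exists\gamma<\delta)\ \delta\in f(\gamma)\},\] which is the form in which it is used in the proofs below.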
**Proposition**.: If \(D\subseteq\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\) is sealed, then \(D\) is predense in \(\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\).
**Proposition**.: The following are equivalent.
* \(\mathsf{NS}_{\omega_{1}}\) is saturated.
* Every predense subset of \(\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\) is sealed.
* Suppose that \(D\subseteq\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\) is sealed and that \(\mathbb{P}\) is a stationary set preserving poset. Then \(V^{\mathbb{P}}\models\)"\(D\) is sealed in \(\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\)".
* Suppose that
* \(D\) is predense in \(\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\),
* \(\theta\geq\omega_{2}\) is regular,
* \(M\prec H_{\theta}\) is countable.
Then we say that \(M\)_captures_\(D\) iff there exists \(S\in D\cap M\) such that \(\delta_{M}\in S\).
**Lemma**.: Suppose that
* \(D\) is predense in \(\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\),
* \(\theta\geq\omega_{2}\) is regular,
* \(U:=\{M\prec H_{\theta}:|M|=\omega,\text{``$M$ captures $D$''}\}\).
Then \(U\) is projectively stationary.
Proof.:
1. Let \(S\subseteq\omega_{1}\) be stationary and let \(F:H_{\theta}^{<\omega}\to H_{\theta}\). There exists \(T\in D\) such that \(T\cap S\) is stationary.
2. There exists \(M\prec H_{\theta}\) such that \(T\in M\), \(F[M^{<\omega}]\subseteq M\), and \(\delta_{M}\in T\cap S\).
3. Hence, \(M\in U\), \(\delta_{M}\in S\), and \(M\) is closed under \(F\), as required.
**Proposition** [3.2.7].: Suppose that
1. \(D\) is a predense set in \(\mathbb{P}_{\mathtt{NS}_{\omega_{1}}}\),
2. \(U:=\{M\prec H_{\omega_{2}}:|M|=\omega,\ M\text{ captures }D\}\),
3. \(g\) is \(V\)-generic for \(\mathsf{Col}_{U}\),
4. \(U^{*}:=\{\xi<\omega_{1}:\mathcal{M}_{g}(\xi)\in U\}\),
5. \(h\) is generic over \(V[g]\) such that \(\omega_{1}^{V[g][h]}=\omega_{1}^{V}\),
6. in \(V[g][h]\), \(U^{*}\) contains a club.
Then in \(V[g][h]\), \(D\) is sealed.
Proof.:
1. For all \(\xi<\omega_{1}\), we enumerate \(D\cap\mathcal{M}_{g}(\xi)\) as \((S_{n}^{\xi}:n<\omega)\).
2. We define \(f:\omega_{1}\to D\) as follows: for all \(\alpha<\omega_{1}\), for the unique \(\xi<\omega_{1}\) and \(n<\omega\) such that \(\alpha=\omega\xi+n\), we let \(f(\alpha):=S_{n}^{\xi}\).
3. Let \(C\) be a club contained in \(U^{*}\). By Proposition [3.1.6], the set \(E:=\{\delta(\mathcal{M}_{g}(\alpha)):\alpha\in C,\omega\alpha=\alpha\}\) is a club.
4. We will be done if we show that \(E\subseteq\nabla f\). To that end, let \(\delta\in E\) be arbitrary and let us find \(\gamma<\delta\) such that \(\delta\in f(\gamma)\).
5. There exists \(\alpha\in C\) such that \(\omega\alpha=\alpha\) and \(\delta=\delta(\mathcal{M}_{g}(\alpha))\).
6. Since \(\alpha\in U^{*}\), we have that \(\mathcal{M}_{g}(\alpha)\in U\), so there exists \(S\in D\cap\mathcal{M}_{g}(\alpha)\) such that \(\delta\in S\).
7. Again by Proposition [3.1.6], there exist \(\xi<\alpha\) and \(n<\omega\) such that \(S=S_{n}^{\xi}\).
8. Let \(\gamma:=\omega\xi+n<\alpha\). Then \(\delta\in S_{n}^{\xi}=f(\gamma)\), as required.
### Sealing Iteration
1. **Definition.** Let \(\delta>\omega\) be a cardinal and let \(A\subseteq V_{\delta}\). Then \(\kappa<\delta\) is \(A\)_-reflecting below \(\delta\)_ if for all multiplicatively closed \(\lambda\in(\kappa,\delta)\), there exists a strong \((\kappa,\lambda)\)-extender \(E\) satisfying \(i_{E}(A\cap V_{\kappa})\cap V_{\lambda}=A\cap V_{\lambda}\).
2. **Definition.** Let \(\delta>\omega\) be a cardinal. A _Woodin diamond for \(\delta\)_ is a function \(\mathbf{U}:\delta\to V_{\delta}\) such that 1. for all \(\xi<\delta\), \(\mathbf{U}(\xi)\subseteq V_{\xi}\), 2. for all \(A\subseteq V_{\delta}\), the set \[\{\kappa<\delta:A\cap V_{\kappa}=\mathbf{U}(\kappa)\text{ and }\kappa\text{ is }A\text{-reflecting below }\delta\}\] is stationary in \(\delta\).
3. **Proposition.** Suppose that there exists a Woodin diamond for \(\delta\). Then \(\delta\) is a Woodin cardinal.
4. **Proposition.** Suppose that \(\delta\) is a Woodin cardinal. Then in \(V^{\mathsf{Col}(\delta,\delta)}\), there exists a Woodin diamond for \(\delta\).
Proof.: Cf. [16, Lemma 1.3].
5. **Definition.** Let \(\mathbf{U}\) be a Woodin diamond for \(\delta\). _The blueprint for the \(\mathsf{NS}_{\omega_{1}}\)-saturating iteration given by \(\mathbf{U}\)_ is the pair \(((V_{\delta},\in,\mathbf{U}),\mathbf{Q})\) where \(\mathbf{Q}:\delta\times V_{\delta}\to V_{\delta}\) is defined over \((V_{\delta},\in,\mathbf{U})\) as follows: for all \(\kappa<\delta\) and for all \(\mathbb{P}\in V_{\delta}\), if it is the case that 1. \(\kappa\) is inaccessible, 2. \(\mathbb{P}\subseteq V_{\kappa}\) is a poset, 3. \(\mathbf{U}(\kappa)=\mathbb{P}\oplus(\dot{S}_{i}:i<\kappa)\), 4. \(V^{\mathbb{P}}\models\)"\(\{\dot{S}_{i}:i<\kappa\}\) is predense in \(\mathbb{P}_{\mathsf{NS}_{\omega_{1}}}\)", 5. \(\dot{U}\) is the canonical name such that \[V^{\mathbb{P}}\models\dot{U}=\{M\prec H_{\omega_{2}}:|M|=\omega,\ M\text{ captures }\{\dot{S}_{i}:i<\kappa\}\},\] then \(\mathbf{Q}(\kappa,\mathbb{P}):=\mathsf{Col}_{\dot{U}}\); otherwise, \(\mathbf{Q}(\kappa,\mathbb{P})\) is trivial.
* Let \(G^{*}\) be \(V\)-generic for \(j(\mathbb{P}_{\kappa})\) extending \(G_{\kappa+1}\) and let \(k:V[G_{\kappa}]\prec\mathcal{M}[G^{*}]\) be such that \(k\upharpoonright V=j\) and \(k(G_{\kappa})=G^{*}\).
* **Claim.** In \(\mathcal{M}[G^{*}]\), \(\omega_{1}-U^{*}\) is not stationary.
Proof.:
* Let us assume otherwise. Then there exists \(i_{0}<j(\kappa)\) such that the set \[k((\dot{S}_{i}:i<\kappa))(i_{0})\cap(\omega_{1}-U^{*})\] is stationary in \(\omega_{1}\).
* In \(\mathcal{M}\), let \((\dot{T}_{i}:i<j(\kappa))=k((\dot{S}_{i}:i<\kappa))\) and let \((M_{\xi}:\xi<\omega_{1})\) be a continuous \(\in\)-chain of countable elementary submodels of \[(\mathcal{M}\upharpoonright j(\lambda),\in,\kappa,\lambda,\dot{T}_{i_{0}}).\]
* Back in \(\mathcal{M}[G^{*}]\), the set \[\{\delta(M_{\xi}[G^{*}]):\xi<\omega_{1}\}\] is a club in \(\omega_{1}\), so there exists \(\xi_{0}<\omega_{1}\) such that \[\alpha:=\delta(M_{\xi_{0}}[G^{*}])\in T_{i_{0}}\cap(\omega_{1}-U^{*}).\]
* Since \(\alpha\not\in U^{*}\), we have that \(\mathcal{M}_{g}(\alpha)\not\in U\). In other words, inside \(\mathcal{M}[G_{\kappa}]\), there does not exist \(N\in U\) such that \(N\supseteq\mathcal{M}_{g}(\alpha)\) and \(\delta_{N}=\delta(\mathcal{M}_{g}(\alpha))\).
* Since \(\mathcal{M}_{g}(\alpha)\) is a countable subset of \(H_{\omega_{2}}\) inside \(\mathcal{M}[G_{\kappa}]\) and since \(\omega_{2}^{\mathcal{M}[G_{\kappa}]}=\kappa\), we have that \(k(\mathcal{M}_{g}(\alpha))=\mathcal{M}_{g}(\alpha)\).
* Consequently, in \(\mathcal{M}[G^{*}]\), there does not exist \(N\in k(U)\) such that \(N\supseteq\mathcal{M}_{g}(\alpha)\) and \(\delta_{N}=\delta(\mathcal{M}_{g}(\alpha))\).
* Since \(g\in M_{\xi_{0}}[G^{*}]\), we get that \[M_{\xi_{0}}[G^{*}]\cap H_{\omega_{2}}^{V[G_{\kappa}]}=\bigcup_{\xi<\alpha} \mathcal{M}_{g}(\xi)=\mathcal{M}_{g}(\alpha).\] Hence, \(M_{\xi_{0}}[G^{*}]\supseteq\mathcal{M}_{g}(\alpha)\) and \(\delta(M_{\xi_{0}}[G^{*}])=\delta(\mathcal{M}_{g}(\alpha))\).
* On the other hand, \(M_{\xi_{0}}[G^{*}]\) captures \(k(\{S_{i}:i<\kappa\})\), as witnessed by \(T_{i_{0}}\). This means that \(M_{\xi_{0}}[G^{*}]\in k(U)\).
* The previous three points together yield a contradiction.
* Thus, in \(\mathcal{M}[G^{*}]\), \(U^{*}\) contains a club.
* By Proposition 3.2.7 applied in \(\mathcal{M}[G_{\kappa}]\), we have that \(\{S_{i}:i<\kappa\}\) is sealed in \(\mathcal{M}[G^{*}]\).
* This means that \(k(\{S_{i}:i<\kappa\})\) is sealed in \(\mathcal{M}[G^{*}]\), which by elementarity of \(k\) yields that \(\{S_{i}:i<\kappa\}\) is sealed in \(V[G_{\kappa}]\), as required.
**Corollary**.: Suppose that \(\delta\) is a Woodin cardinal. Then there exists a poset \(\mathbb{P}\) such that
* \(\mathbb{P}\) is semi-proper,
* \(\mathbb{P}\) is \(\delta\)-c.c.,
* \(\omega_{2}^{V^{\mathbb{P}}}=\delta\),
* in \(V^{\mathbb{P}}\), \(\mathsf{NS}_{\omega_{1}}\) is saturated. |
2303.09623 | Wasmizer: Curating WebAssembly-driven Projects on GitHub | WebAssembly has attracted great attention as a portable compilation target
for programming languages. To facilitate in-depth studies about this
technology, we have deployed Wasmizer, a tool that regularly mines GitHub
projects and makes an up-to-date dataset of WebAssembly sources and their
binaries publicly available. Presently, we have collected 2 540 C and C++
projects that are highly-related to WebAssembly, and built a dataset of 8 915
binaries that are linked to their source projects. To demonstrate an
application of this dataset, we have investigated the presence of eight
WebAssembly compilation smells in the wild. | Alexander Nicholson, Quentin Stiévenart, Arash Mazidi, Mohammad Ghafari | 2023-03-16T19:55:47Z | http://arxiv.org/abs/2303.09623v1 | # Wasmizer: Curating WebAssembly-driven Projects on GitHub
###### Abstract
WebAssembly has attracted great attention as a portable compilation target for programming languages. To facilitate in-depth studies about this technology, we have deployed Wasmizer, a tool that regularly mines GitHub projects and makes an up-to-date dataset of WebAssembly sources and their binaries publicly available. Presently, we have collected 2 540 C and C++ projects that are highly related to WebAssembly, and built a dataset of 8 915 binaries that are linked to their source projects. To demonstrate an application of this dataset, we have investigated the presence of eight WebAssembly compilation smells in the wild.
WebAssembly, dataset, compilation smells
## I Introduction
WebAssembly is a standard for portable binary code that aims to bring a safer, faster, and more portable format than JavaScript to the web. It allows for programs written in high-level languages such as C, C++, and Rust to be cross compiled to WebAssembly and run in a web environment. WebAssembly is not limited to the web though, and it can be used in a number of host environments, e.g., it is possible to port C applications using the WebAssembly System Interface and run them as regular desktop applications.1
Footnote 1: [https://wasi.dev/](https://wasi.dev/)
There are several tools for analyzing WebAssembly programs [1, 2, 3, 4, 5, 6]. However, tool builders need to evaluate their tools on benchmark programs, which are currently lacking for WebAssembly. Two datasets of WebAssembly binaries exist [7, 8], but they only include the binary files without the source files of the programs. Linking a WebAssembly binary to the source code that produced it is of high value for developers of such tools, enabling them to understand the inner workings of these programs, or for example to instrument programs during their compilation. To fill this gap, we present a methodology for identifying open-source C and C++ projects that target WebAssembly for compilation, and we build a novel dataset of WebAssembly binary files that are associated with their respective source projects. We deploy Wasmizer, a tool that automates this process and regularly updates our dataset.
To demonstrate an application of this dataset, we investigate _compilation smells_ in WebAssembly, i.e., indications of potentially different behaviour from source to target programs that may yield bugs [9]. In fact, previous work has shown that the compilers and standard library implementations available for WebAssembly are not yet as mature as those used for native compilation, and consequently, certain code patterns may yield different behaviour on these platforms [9, 10]. These behavioural differences may result in unexpected bugs when porting programs to WebAssembly, and more importantly, they may introduce security risks, for example enabling an easier exploitation of binaries suffering from buffer overflows [10]. While previous work has shown these differences exist for programs compiled from C and C++, we take the next step and investigate how prevalent they are in real-world projects.
The contributions of this work are centred around the following two research questions:
* **RQ\({}_{1}\): Can we mine WebAssembly-driven projects on GitHub and generate WebAssembly binaries automatically?** We present heuristics that help to identify real-world C and C++ projects that are related to WebAssembly. We collect 2 540 projects that target WebAssembly as a compilation target. We build these projects that contain a makefile and generate 8 915 binaries from 572 repositories, forming a novel dataset of WebAssembly binaries linked to their originating projects. We develop Wasmizer, an open-source tool that automates this process.2 Footnote 2: [https://github.com/arash-mazidi/WASMIZER](https://github.com/arash-mazidi/WASMIZER)
* **RQ\({}_{2}\): How prevalent are compilation smells in open-source projects that target WebAssembly?** We present the code patterns of 16 compilation smells and heuristics that help to identify them in C and C++ programs. We use the Clang static analyser to develop checkers that can detect eight of these smells. We evaluate their presence in 1 605 projects, uncovering that 386 projects (i.e. 24%) suffer from at least one compilation smell. This analysis shows a use case of our dataset that requires access to the source code of WebAssembly applications.
We share our dataset, tools, and code analysis scripts to allow for replication of this work.3 Importantly, Wasmizer is deployed to regularly mine WebAssembly-driven projects on GitHub, compile them, and curate an up-to-date dataset of WebAssembly sources and binaries. We also share this novel and evolving dataset to facilitate further studies in this domain.4
The remainder of this paper is organised as follows. In Section II, we provide some background on WebAssembly. We present the datasets of source projects and associated binaries in Section III. In Section IV, we study compilation smells as a use case for this dataset. We discuss threats to validity of this work in Section V. We put this work in context with the related research on WebAssembly in Section VI, and conclude this paper in Section VII.
## II Background on WebAssembly
We first provide background knowledge on the compilation of programs to WebAssembly, required for Section III, and we explain the inner workings of WebAssembly programs, required to understand the compilation smells discussed in Section IV. WebAssembly is a low-level language initially aimed at bringing near-native performance for the web. It uses a stack-based execution model that runs within a virtual machine (VM) similar to the JVM. WebAssembly binaries, called modules, can be expressed in a binary or textual format. The intent is to use WebAssembly as a compile target from another source language such as C, C++, Rust, C#, Go or AssemblyScript using a compiler such as Emscripten [11]. WebAssembly modules can import and export definitions for functions and variables. These modules can be imported into JavaScript code or into other host environments and then instantiated. Each module runs within its own isolated context with its own memory and execution stack.
### _Compiling from C and C++ to WebAssembly_
Multiple compilers nowadays have backends for compiling to WebAssembly. For C and C++ specifically, there are two main compiler backends: Emscripten [11], the original WebAssembly compiler, and the Cheerp compiler [12], a competitor with a strong focus on binary size and speed. Porting an existing C or C++ application to WebAssembly can be as simple as wrapping its build scripts with Emscripten's wrappers for cmake and make, as illustrated below. However, the use of libraries and graphical toolkits can render porting applications more challenging.
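As a minimal illustration of this workflow (our own sketch; the file and function names are hypothetical), a single C source file can expose a function to the host and be compiled with Emscripten:

```
#include <emscripten.h>

/* EMSCRIPTEN_KEEPALIVE marks the function as an export so that the
   compiler does not remove it as dead code, even though nothing in
   this file references it. */
EMSCRIPTEN_KEEPALIVE
int add(int a, int b) {
  return a + b;
}

/* Compiling with, e.g., `emcc add.c -o add.js` produces add.wasm
   together with JavaScript glue code that instantiates the module
   and exposes the export to the host environment. */
```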
### _Memory Model_
The memory of a WebAssembly program is a linear array of bytes, called linear memory. When a module is instantiated, the memory is created with an initial size, however this can grow dynamically as needed. Addresses in linear memory are represented as an offset from the start of linear memory. Due to this, many common memory protection techniques such as data execution prevention and stack smashing protection are no longer needed. While this is good for the security of the host (the browser), it then falls on the developer to ensure that the code accessing this memory is bug-free, as anomalies such as buffer overflows will not be checked as long as they remain within the bounds of the linear memory. By isolating the memory used by each module, many independent instances can exist with their own memory within the same process.
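The following sketch (our own example) makes the consequence concrete: the out-of-bounds write stays within the bounds of linear memory when compiled to WebAssembly, so no trap occurs, whereas a native build may abort thanks to stack protections; whether the neighbouring variable is actually clobbered depends on the compiler's memory layout.

```
#include <stdio.h>
#include <string.h>

int main(void) {
  char buf[8];
  char neighbour[8] = "intact";
  /* Undefined behaviour: writes 16 bytes into an 8-byte buffer.
     In WebAssembly the write remains inside linear memory, so the
     runtime does not trap and adjacent data may be silently
     overwritten. */
  memset(buf, 'A', 16);
  printf("%s\n", neighbour);
  return 0;
}
```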
### _Control flow_
Unlike native code, WebAssembly only allows for structured control flow, which provides a few advantages. Firstly, it means that it is not possible to jump to arbitrary addresses, which eliminates the possibility for attacks that manipulate the control flow of the program. Instructions are placed in functions which are organised into blocks, where branches may only jump to the end of other blocks inside the current function. This control flow also allows for WebAssembly code to be validated, compiled, or transformed with only a single pass over the program.
## III Dataset Construction
We collect real-world WebAssembly projects that compile to WebAssembly. We explain our methodology and the obtained results in this section.
### _Methodology_
We use GitHub as our data source for identifying WebAssembly projects.
#### Iii-A1 Initial project selection
We query the GitHub Search API to find C and C++ projects that include a few WebAssembly-related keywords.5
Footnote 5: [https://docs.github.com/en/rest/search](https://docs.github.com/en/rest/search)
#### Iii-A2 NLP Filtering
We apply Natural Language Processing (NLP) tools to remove projects for which WebAssembly is not an important word in the repositories' READMEs and description.
#### Iii-A3 Heuristic Development
We study a number of known projects that compile to WebAssembly to familiarise ourselves with how WebAssembly compilers work, and to find possible ways to use WebAssembly in C/C++ projects. We develop initial heuristics based on our observations to find specific indicators of projects that target WebAssembly. We apply these heuristics to the projects that pass the NLP filter, and manually inspect a subset of excluded projects to expand our heuristics that eliminate projects that likely do not target WebAssembly for compilation.
#### Iii-A4 Building Dataset
We obtain a first dataset of real-world projects that likely compile to WebAssembly. We clone each repository locally. The projects are written in C or C++, for which there is no standard build mechanism to rely on in order to build all projects. Through a manual inspection, we notice that many projects rely either on makefiles, or on cmake. Therefore, for each project:
* For each CMakeLists.txt file, indicating the use of the cmake build system, we run Emscripten's cmake wrapper (emcmake).
* For each Makefile, either resulting from the previous step, or standalone, we run Emscripten's make wrapper (emmake).
We rely on Emscripten version 3.1.26, the latest at the time of writing. We leave trying out other compilers and build script mechanisms for future work.
After applying this compilation step to all projects, we look for files that are named with either a .wasm or a .wat extension, indicating that they are WebAssembly files. We try to convert all .wat files found into their binary version (.wasm) relying on the wat2wasm tool (version 1.0.31, the latest at the time of writing) from the WebAssembly Binary Toolkit,6 configured with the --enable-all flag to enable all available WebAssembly extensions. We then store all .wasm files in a new directory, under a name composed of the SHA-256 sum of their content. This is to ensure that duplicate files are only present once in the dataset.
Footnote 6: [https://github.com/WebAssembly/wabt](https://github.com/WebAssembly/wabt)
### _Results_
#### Iv-B1 Initial Projects
We use the GitHub search API to search for projects that include keywords listed in Table I. We search in the repository's title, description, README, or topics (keywords that can be added to a project). We exclude forks as well as any project that is not updated since the official release of WebAssembly specification (December 2019). We search projects refining by date periods that span over less than 1000 projects in order to overcome GitHub's limit of 1000 projects per query. To account for GitHub's query rate limit, we wait for 20 seconds between two requests. We gather a total of 6 095 projects that list C or C++ as their source language.
#### Iv-B2 NLP Filtering
Projects may contain words related to WebAssembly without the aim of targeting it for compilation. In order to eliminate unrelated projects, we use NLP tools to estimate the importance of WebAssembly-related keywords in each project's README and description. If an NLP tool detects a word as important to the text, it indicates that the project may be directly related to the term rather than a passing word. We implement the NLP step by applying the TopicRank algorithm, an enhanced variation of the PageRank algorithm provided by a Python NLP library named PyTextRank [13]. The algorithm decides the ranking of words by a "voting" or "recommendation" system. A lemma graph is produced with vertices representing each word in the README or description of the project selected. These vertices contain edges that represent the voting system, where more edges mean a higher rating. Using PageRank, ranks are calculated for each vertex and returned.
While we aim to filter our dataset, we still want to be inclusive of possible relevant projects and ensure the NLP algorithm implementation is able to reliably detect keywords by comparing the results to those produced by other popular keyword extraction libraries. We therefore run TopicRank against other popular NLP tools, namely Yake [14] and spaCy [15]. We note that TopicRank produces similar results to the alternative tools, but also it is the most inclusive one as it detects more keywords, with the instances of the words in Table I being more prevalent in our results. Using the TopicRank algorithm, we detect 4 022 projects that emphasise the keywords chosen to create our filtered dataset.
To ensure that we are not filtering out relevant projects in this step, we manually analyse 10% of the excluded repositories. We look for indications of compilation to WebAssembly. Most projects have one of three traits.
1. They are either small test projects with very few files where there would be no instances of a README or description. These projects may compile to WebAssembly but are of lower quality and likely to be toy projects, so it makes sense to exclude them.
2. There would be instances of users trying to port C libraries to run with WebAssembly. Most of these projects are abandoned or unfinished, denoting low quality.
3. The projects do not compile to WebAssembly at all.
Based on this manual analysis, we determine that the NLP step indeed removes lower-quality and unrelated projects from our dataset while keeping the relevant projects.
#### Iv-B3 Heuristics for Compilation to WebAssembly
Our dataset, after NLP filtering, contains repositories that indicate they are C or C++ projects related to WebAssembly. However, we cannot guarantee that all the projects will compile to WebAssembly. We develop heuristics to filter our dataset more strictly into projects that will compile to WebAssembly.
We first study projects that are mentioned as top WebAssembly projects to learn about compiler calls and WebAssembly-related headers. These projects are listed on LibHunt,8 Awesome Open Source,9 and on the Emscripten Wiki page.10 We develop initial heuristics based on what we learn from these projects and look for them within source code from our NLP-filtered projects. Through this process, we identify 1 499 projects that would compile into WebAssembly. To verify our heuristics, we manually analyze 141 projects undetected by our initial heuristics, and we create refined heuristics that more strongly indicate projects that compile to WebAssembly.
Footnote 8: [https://www.libhunt.com/topic/webassembly](https://www.libhunt.com/topic/webassembly)
Footnote 9: [https://awesomeopensource.com/projects/webassembly](https://awesomeopensource.com/projects/webassembly)
Footnote 10: [https://github.com/emscripten-core/emscripten/wiki/Porting-Examples-and-Demos](https://github.com/emscripten-core/emscripten/wiki/Porting-Examples-and-Demos)
Of the 141 repositories analysed, 63 do not aim to compile to WebAssembly or have not finished implementing this functionality. In the case where the projects do not aim for compilation, some instead mention unimplemented WebAssembly-related libraries: for example, the 2log.io project mentions the QPA WASM plugin from Qt but does not implement it.11 In the case where the functionality has not been implemented yet, most of the projects aim to compile to WebAssembly but list it as a "todo" in their READMEs or descriptions. The intention to compile to WebAssembly indicates that we may eventually be able to analyse these projects; however, we cannot judge whether their C source code is complete, so we do not add them to our dataset.
Another challenge is that multiple repositories create their own custom WebAssembly module or require the user to manually set environment variables that are necessary for compiler calls. In our manual analysis, we could not detect common compilation indications from their code.
Based on the projects that do target WebAssembly for compilation, we create three general heuristics that we discuss below.
_Reference to a WebAssembly compiler in the build script of the project._ While not all repositories contain build scripts, the presence of compilation calls to WebAssembly within a project's build scripts indicates that it does target WebAssembly for compilation. We look for the presence of calls to three WebAssembly compilers for C/C++, being emcc (Emscripten) [11], -target cheerp-wasm (Cheerp) [12] and --target=wasm32 (LLVM)12 [16].
Footnote 12: While Emscripten also uses LLVM, this refers to projects compiled with LLVM without Emscripten.
_Inclusion of WebAssembly-specific libraries._ We look for mentions of headers for the Emscripten APIs, as these indicate that the source code is interacting with web APIs. We focus specifically on projects that contain #include directives for the emscripten.h or html5.h headers, as these have distinct names that are easy to detect, and also provide the most commonly used core functionality.
_Use of the JavaScript WebAssembly API._ We look for instantiation of the WebAssembly class within JavaScript code in the project. While this on its own does not indicate compilation to WebAssembly, we found that the inclusion of this code in a C/C++ project was a good indication that it compiled to WebAssembly.
#### Iv-B4 Dataset
We collect 2 540 C/C++ projects that are related to WebAssembly i.e., include evidence that they compile to WebAssembly. In this dataset, 63% of the projects have at least one star, 27% have at least one open issue, and 41% have been forked at least once. The average size of a repository in our dataset is 14MB. On average, the projects are over three years old, and of the 85% of projects that are more than a year old, 57% were still being updated a year later. 74% of repositories list C++ as a source language, while 73% list C as a source language. Additionally, 56% and 49% of projects contain HTML and JavaScript respectively.
We clone and compile each WebAssembly-related project. This results in 8 915 WebAssembly binaries (.wasm files), extracted from 572 repositories.13 None of these files overlap with the dataset of Hilbig et al. [8] built in 2020, indicating that all files in our dataset are new benchmark programs.
Footnote 13: During the conversion from textual representation to the binary representation, 1 384 .wat files could not be converted due to features unsupported by wat2wasm or syntax errors. We do not count these files and include them in a separate folder of our dataset.
We notice that many of the .wasm files are already present in the repositories _before_ the compilation. In total, the compilation step produces 1 096 new binary files, for 124 repositories. These binary files can be linked to their source code. The link between C and C++ source projects and a WebAssembly binary has not been part of any previous dataset so far [7, 8].
The distribution of the size of the binary files is represented in the left-hand side of Figure 1. We notice a bimodal distribution, with many files of a few bytes, and many files of around 1MB.
We find that some files are present in multiple repositories: in total, 68 files (\(<\)1%) are in more than one repository, most often in two repositories, with three files being present in seven repositories. Looking at how many WebAssembly files there are per repository, however, we notice that around half (284) of the repositories contain one file, with the other half containing more files. This distribution is represented in the right-hand side of Figure 1. The most extreme case is one repository containing 11 599 files (many of which are duplicates of each other). All but 12 repositories contain fewer than 100 WebAssembly binaries.
### _Wasmizer_
We develop a tool named Wasmizer that automates the entire process from project selection on GitHub to project compilation and WebAssembly binary generation. Figure 2 illustrates Wasmizer's pipeline. Wasmizer allows for many configurable options. It is possible to configure search parameters such as the number of stars, number of forks, minimal project size, and last push date. The filtering step, which looks for symptoms of WebAssembly compilation, is also parametric. The compilation pipeline is dynamic too: it is possible to change the compilation commands so that Wasmizer can curate programs written in other languages such as Rust or Go.
Wasmizer is open source,14 and it is deployed to regularly collect WebAssembly-driven projects on GitHub and curate an up-to-date dataset of WebAssembly sources and binaries. In addition to the earlier dataset that we presented,15 we also share the "evolving" dataset that Wasmizer is curating as of March 10th, 2023.16
## IV WebAssembly Compilation Smells in the Wild
We present a case study of our dataset by detecting the presence of compilation smells using custom checkers. We rely on existing checkers and develop new checkers for Clang to analyse our project dataset for occurrences of eight compilation smells. Unlike previous work [9], which has detected these smells on synthetic code examples, we detect the smells in real source code. In addition, we investigate more compilation configuration flags and present the code patterns that indicate these smells.
### _Compilation Smells_
Compilation smells are indicators of execution differences that can arise between a C program compiled to native x86 code and the same program compiled to WebAssembly. We rely on the same dataset as the original paper [9], namely the Juliet test suite [17], a set of test C and C++ programs that have been designed to exercise and evaluate static application security testing (SAST) tools. We extend the previous work by repeating the experiments with multiple sets of compiler configuration flags. This is done to ensure that the smells we identify are exhibited in various compiler configurations. Moreover, we perform a manual analysis step to extract the pattern of each smell, which was lacking in previous work. We illustrate our pipeline to identify the smells in Figure 3.
The Juliet test suite is composed of test programs, each of which has two variants: the _bad_ variant, which contains a security issue, and the _good_ variant, which does not contain the issue. Starting from the dataset of deterministic programs in the Juliet test suite, we compile each program to x86 and to WebAssembly. We record their execution outcome (success or crash), as well as their standard outputs. If any difference is encountered, we flag the program as behaving differently when executed in WebAssembly.
Unlike previous work, we repeat this process for a number of different compiler configurations. In particular, we investigate the following tweaks to the default compiler configurations:17
Fig. 1: Distribution of binary WebAssembly files.
Fig. 2: The pipeline of Wasmizer.
* The optimisation level: -O0 (no optimisation), -O1 (only some optimisations), -O2 (moderate level of optimisations), and -Os (optimisations for code size).
* Security protections: default, nonsecure (disable a set of security protections enabled by default), and secure (enable a set of security protections). The flags used for each of these settings are detailed in Table III.
Moreover, unlike previous research, we target 32-bit native code instead of 64-bit. This is to ensure better compatibility with WebAssembly, which itself is a 32-bit architecture and therefore has different number semantics [10].
We consider 12 configurations in total:
1. default,-O0,good,
2. default,-O1,good,
3. default,-O2,good,
4. default,-Os,good,
5. secure,-O2,good,
6. nonsecure,-O2,good,
7. default,-O0,bad,
8. default,-O1,bad,
9. default,-O2,bad,
10. default,-Os,bad,
11. secure,-O2,bad, and
12. nonsecure,-O2,bad.
Each test case is used to produce 24 binaries (12 WebAssembly binaries and 12 native binaries, each one using a different set of compiler flags).
In order to understand where these differences arise from, we manually go through each program behaving differently across multiple configurations. The structure of the Juliet test suite is such that there are many variants of the same programs, including control- and data-flow variants, e.g., by wrapping the code of the program in a specific branch that will always be taken at execution time. This allows us to reduce the set of programs to investigate manually. The manual process includes looking at the program, hypothesizing the reason for the different behaviour, and trying to confirm that reason through other examples.
We refine the previous study [9], which merely described the various patterns at a high level. In this work, we manually minimize each program exhibiting a difference18: we remove portions of the code until no code can be removed without exhibiting the difference. This process results in a minimal program for each difference, which we call a _compilation smell_.
Footnote 18: We could adopt an automated approach, for example by relying on automatic program reduction techniques such as delta debugging. However, the size of the dataset and the programs within it enable us to reduce the programs manually.
We find a total of 16 root causes for the differences, i.e., smells. We describe these smells, linking them to a CWE when relevant. We also provide insights on how each smell can be detected by automated program analysis tools. We distinguish between _structural_ checks that can be performed simply by walking the AST of the program and _semantic_ checks that require a deeper analysis of the program semantics. Each of these smells may have consequences when porting C and C++ programs to WebAssembly: a smell is an indication of potentially different behaviour, which itself may result in bugs (or increased security risks) in the program [10].
_Double Free (CWE 415)._ The following excerpt allocates a memory region but frees it twice. In its native version, this program yields a double free error (free(): double free detected in tcache 2), but it runs successfully in WebAssembly, potentially continuing the program's execution while the heap memory has been corrupted. Instead, a malloced memory region should only be freed once.
```
char *data = (char *)malloc(100 * sizeof(char));
free(data);
free(data);
```
Semantic check: track allocated and freed buffers to detect when free is called on a buffer already freed. Already implemented in Clang as the unix.Malloc checker.
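A defensive idiom that avoids this smell (our own sketch) is to null the pointer after releasing it, since free(NULL) is defined to be a no-op:

```
char *data = (char *)malloc(100 * sizeof(char));
free(data);
data = NULL; /* a second free(data) is now a harmless no-op */
```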
_Error Without Action (CWE 390)._ A file is opened with the fopen function. The file is then closed, without checking that opening the file actually succeeded. This results in an error when executed in WebAssembly (RuntimeError: uninitialized element), but succeeds in its native version. The WebAssembly crash is unexpected, as the file should be created successfully after calling fopen. A closer inspection reveals that fopen indeed fails and sets errno to ENOTUNIQ (_Name not unique on network_), which is an unconventional error for fopen. Instead, the return value of fopen should be checked before using the file handle.
Fig. 3: The pipeline for identifying compilation smells.
```
FILE *f = fopen("file.txt", "w+");
fclose(f);
```
Semantic check: find calls to fclose on a file pointer that has been opened with fopen, in a branch where it could be NULL.
Double fclose (CWE 675). A file is opened, and thereafter closed twice. In its native version, this program succeeds. In its WebAssembly version, it fails: the file is not opened successfully, resulting in the same error as we encountered in CWE 390 (ENOTUNIQ: _Name not unique on network_).
```
FILE *data = freopen("f.txt", "w+", stdin);
fclose(data);
fclose(data);
```
Semantic check: same as for "Error Without Action".
Use of Uninitialized Variable (CWE 457). Similarly, the following program results in undefined behaviour because it reads data from an uninitialized variable. The WebAssembly output is the empty string, while the native output is the name of the program. Instead, one should not use uninitialized variables.
```
char *data;
printf("%s\n", data);
```
Semantic check: find usages of variables that have not been initialized.
Access to Environment Variables. The following excerpt accesses the PATH environment variable.
```
printf("%s\n", getenv("PATH"));
```
Structural check: find calls to getenv.
In native code, this accesses a variable that is bound on Unix-like systems. However, in WebAssembly, the execution environment is cleared upon execution of a program. With the platform we are using to execute the test cases, it is required to explicitly bind these environment variables before executing this program. In case this is not performed, the output of this excerpt differs between native code and WebAssembly (where the empty string is printed). If environment variables are used in a project, they should be set up properly when instantiating the WebAssembly module.
Incorrect Check of Function Return Value, with fputs (CWE 253). The following example illustrates one difference that arises due to the use of musl as the standard C library when compiling to WebAssembly. Function fputs is called, and its result is checked against the constant 0 to see if printing the string has failed. However, the specification of fputs states that it returns EOF (-1) upon error, and a non-negative number upon success. The musl library returns 0 upon success, while glibc returns the number of bytes printed. As a result, the WebAssembly version enters the branch and prints fputs failed!. Instead, one should only check whether the return value of fputs is negative to detect failure.
```
if (fputs("string", stdout) == 0)
    printf("fputs failed!\n");
```
Semantic check: find return value of fputs that flows into a comparison against constant 0.
Improper Resource Shutdown (CWE 404). A file is opened with the open system call. This returns a file descriptor as an int. The fclose function is used to close the file. However, a file descriptor must instead be closed with the close system call, while fclose expects a pointer to a FILE data structure. In WebAssembly, this code executes successfully and the program runs to completion, while it crashes in its native version.
```
int f = open("file.txt", O_RDWR | O_CREAT, S_IREAD | S_IWRITE);
fclose((FILE *)f);
```
Semantic check: find calls to fclose on values of type int which have been assigned by open.
Wide Characters. The following program simply writes a wide string to the console. While in its native version, the string is indeed written to the console, the WebAssembly version does not print anything. It actually requires calling fwide to tell the console that wide characters will be printed. This is likely due to a difference in libc. Instead, one should always call fwide before wprintf.
```
wprintf(L"string\n");
```
Semantic check: find calls to wprintf that have not been preceded by a call to fwide.
**NB.** The code patterns that we present in the rest of this section require more advanced program analysis techniques.
Incorrect String Argument (CWE 688). The following program calls printf with "%s" as a format string, but incorrectly passes an int as argument. The native program fails with a segmentation fault, as it tries to read a string at an invalid memory location. However, the WebAssembly binary succeeds, reading an empty string from the (invalid) memory location 5. This is because the int argument is treated as a string pointer of which the first element is likely '\0' due to WebAssembly initializing its linear memory with zeroes. As a result, this is interpreted as passing the empty string to printf. Instead, calls to printf should ensure that the types of arguments match the format string.
```
printf("%s", 5);
```
Structural check: find values of an invalid type being passed as argument to format string functions. Already implemented in clang with -Wformat.
Freeing Invalid Memory (CWE 590). The following program allocates memory on the stack using alloca, and tries to free it with free. However, free should be used to free heap-allocated memory. As a result, this program crashes in its native version. However, it runs to completion in WebAssembly. This could come from a different implementation of free in musl. Instead, one should not free memory allocated on the stack.
```
char *data = (char *)alloca(100 * sizeof(char));
free(data);
```
Semantic check: find memory region allocated with alloca, flowing into a call to free.
Incorrect Number of Arguments (CWE 685). The following program calls printf with too few arguments, while the provided format string expects two arguments. In WebAssembly, it works and treats the second string as null, because the next value on the stack likely points to a 0 and is treated as the empty string. In native, it crashes as it cannot provide a value for the second string. Instead, one should provide the number of arguments in agreement with the format string.
```
printf("%s %s\n", "string");
```
Structural check: find calls to format string functions that provide too few arguments. Already supported in clang with -Wformat.
Freeing Pointer Not At Start of Allocated Region (CWE 761). The following program allocates heap memory with malloc, and then moves the allocated pointer further in the allocated region. It then tries to free the memory by passing this incremented pointer as argument, which is invalid: free should be called on the pointer to the initial allocated region. This crashes in native, but works in WebAssembly, due to the use of a different allocator in musl. Instead, one should free memory regions at their starting pointer.
```
char *data = (char *)malloc(100 * sizeof(char));
data += 5;
free(data);
```
Semantic check: this requires tracking allocated region and pointers to these region, detecting calls to free on a pointer that is not at the beginning of an allocated region.
Pointer Subtraction (CWE 469). This program performs invalid pointer manipulation by subtracting two different pointers: the slash variable points to the slash in the first string, but is mistakenly used to compute the index of the slash in string2. This prints a different value in WebAssembly and in its native version, due to difference in memory layouts, i.e., string1 and string2 do not have the same offsets in both binaries. Instead, one should not subtract pointers.
```
char *slash = strchr(string1, '/');
int index = slash - string2;
```
Structural check: find the subtract operation applied to two pointers.
Stack-Based Buffer Overflow (CWE 121). The following program contains a buffer overflow due to an incorrect allocation: alloca is used to allocate 10 bytes, while it should be used to allocate 10 integers (hence, sizeof(int) * 10 bytes). When a buffer is copied into the badly allocated memory region, this overflows. In its native version, this program crashes as the stack is detected as being smashed. The WebAssembly version runs the program successfully. Instead, one should ensure that all memory accesses are made within bounds.
```
int *data = alloca(10);
for (int i = 0; i < 10; i++) {
    data[i] = 0;
}
```
Detecting buffer overflows statically is an entire research problem on its own.
Buffer Overread (CWE 126). The following program illustrates a buffer overread: the data string is filled with 150 times the 'A' character. 99 of them are copied into the dest buffer, but no extra null-terminating character is added. Hence, when printing the dest buffer, printf will continue printing the string until it encounters the byte 0. In WebAssembly, since the memory is initialized with 0s, the likelihood of having a byte 0 right after the string is high, and we encounter this in practice: only 99 As are printed. In its native version however, the string contains random garbage after the 99 first bytes, and printf prints many more characters. Instead, one should ensure that all memory accesses are made within bounds.
```
char data[150] = "AAAAAAA...";
char dest[100];
strncpy(dest, data, 99);
printf("%s\n", dest);
```
Similarly, this kind of buffer overread is an entire research problem on its own.
Undefined Behaviour (CWE 758). The following program has undefined behaviour according to the C standard. It allocates a pointer, but does not initialize it. When dereferencing this pointer to print a string, what will be printed is therefore undefined. In native, the name of the program is being printed, while in WebAssembly, the empty string is printed. Instead, one should not dereference uninitialized pointers.
```
char *pointer = alloca(sizeof(char *));
printf("%s\n", *(char **)pointer);
```
Semantic check: find dereferenced pointers that have not been initialized.
### _Smell Detection_
We rely on the Clang Static Analyzer (CSA) to detect the first eight code smells (_a_ to _h_) in the projects in our dataset.19 It is the most suitable option as it fits the constraints imposed by the nature of the analysis. Precisely, CSA does not require the projects to be compilable and can instead analyze individual source files, while still allowing for advanced static analyses such as dataflow analysis.
Footnote 19: [https://clang-analyzer.llvm.org](https://clang-analyzer.llvm.org)
To account for unknown external factors such as input values and behaviour of libraries used, CSA uses symbolic execution and assigns unique symbols to unknown values. This allows for path-sensitive analysis as occurrences of symbols can be tracked through the execution of a program.
CSA can also be extended with custom checkers, which is required as the majority of the code patterns being searched for are not detected by existing analysis tools. The custom checkers rely on a callback mechanism to subscribe to certain events. For example, the check::PreCall callback will be called every time the analyser comes across a function call before it analyses it. The analyser also supports inter-procedural analysis. The checkers have access to the program state at each point in the path analysis, and this state can be manipulated by the checkers to read and store arbitrary information.
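As an illustration of this callback mechanism, the following is a minimal sketch of a custom checker that subscribes to check::PreCall to flag calls to getenv; the class name is illustrative and bug reporting is elided, so this is not one of the checkers shipped with the paper's artifact.

```
// Minimal sketch of a custom CSA checker subscribing to check::PreCall.
#include "clang/StaticAnalyzer/Core/Checker.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/CallEvent.h"
#include "clang/StaticAnalyzer/Core/PathSensitive/CheckerContext.h"

using namespace clang;
using namespace ento;

namespace {
class GetenvChecker : public Checker<check::PreCall> {
public:
  // Called by the analyzer before each function call on the current path.
  void checkPreCall(const CallEvent &Call, CheckerContext &C) const {
    const IdentifierInfo *ID = Call.getCalleeIdentifier();
    if (ID && ID->getName() == "getenv") {
      // A real checker would create a BugType and emit a bug report here;
      // reporting (and checker registration) is elided in this sketch.
    }
  }
};
} // namespace
```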
#### IV-B1 Selected Compilation Smells
We choose the first eight compilation smells (i.e., a to h) described in the previous section. This selection is based on how feasible it is to implement checkers for these smells in the Clang static analyzer, using the tools available (e.g., no pointer analysis).
#### IV-B2 Implementation of Custom Checkers
CSA includes checkers that can identify three smells. We implement five new custom checkers using the capabilities of the CSA. Table IV lists the checkers used in our experiments. The remaining smells (i.e., i to p) require deeper analyses such as analysis of pointers or memory allocations, which are difficult to determine statically. We leave detection of these smells for future work.
We develop test cases (19 in total) for each custom checker to detect and eliminate both false positives and false negatives. For example, to test the BadFPutsComparison checker, we implement a test case to make sure that our checker would still pick up cases where the result of calling fputs is compared to a variable which has a known value of 0. Similarly, for the other checkers we develop tests based on the smells to ensure that the tool detects the patterns as we expect.
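For illustration, a test case of the kind described for the BadFPutsComparison checker might look like the following sketch; the file contents are ours, not taken from the paper's test suite.

```
#include <stdio.h>

int main(void) {
  int zero = 0; /* constant 0 flowing through a variable */
  /* The checker should flag this comparison just as it flags a literal
     `== 0`, since `zero` has a known value of 0 at analysis time. */
  if (fputs("string", stdout) == zero)
    printf("fputs failed!\n");
  return 0;
}
```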
#### IV-B3 Running the Checkers
We run our checkers on 1 605 projects, including both the projects that are WebAssembly-related, and the projects that are identified to target WebAssembly as a compilation target. To do so, for each project, we move all C/C++ source files into a single directory using a Node.js script, and rename filenames to prevent conflicts. This is necessary as the Clang Static Analyzer is not able to handle files in nested subdirectories. We then run the Clang Static Analyzer via command line, passing it the name of each C or C++ source file in the project. This results in a .plist file containing the analysis results for each source file in the project. We use another Node.js script to post-process the results and merge the many .plist files into a single file in JSON format that contains all of the key information about each code smell detected in that project. This is a time-consuming process, as some projects have up to 50 000 source files.
### _Results_
We uncover the presence of two compilation smells, namely AccessEnv and ErrorWithoutAction. The checkers detect no instance of the other six compilation smells. Table V lists the number of repositories affected by the two compilation smells. We discover a total of 1 873 compilation smells and note that 386 projects (i.e., 24%) contain at least one instance of a compilation smell. Future work is necessary to understand whether these smells are indications of potential bugs or not. Nevertheless, the checkers that we use in this study are present in the artifact accompanying this paper, and developers can use them to detect and reduce the prevalence of compilation smells in practice.
None of the checkers already implemented in Clang find instances of those code patterns in our dataset. We theorise that this may be because the developers are already alerted to these code smells, such as through their IDE or during the build process, meaning that these errors are caught and rectified before they are committed to the repository.
## V Threats to Validity
We define several heuristics to ensure a high-quality dataset, but every manual process is subject to bias. We may have exhibited bias in the methods we chose to sample our dataset, from the queries and keywords selected for the GitHub Search API to the manual analysis of indicators of compilation to WebAssembly. One example is our selection of keywords for the NLP filtering: we define known keywords that we think are related to WebAssembly, but there might be more relevant keywords that we missed.
The tools we use in this study are limited too. For instance, the return limit of the GitHub API reduced the number of projects we could gather, capping at 1000 results per query, even though we know that more projects exist. To overcome this, we search over periods of time that span less than 1000 projects.
We take reasonable steps to create a dataset that is inclusive of a wide range of projects, but it may not be representative of all C/C++ projects that compile to WebAssembly. This could be verified through analysis of a larger dataset. We also focus on open-source projects on GitHub, but they may be quite different in nature to closed-source projects developed by large companies or those projects found via other sources.
For the extraction of WebAssembly binaries from our dataset, we focus only on projects that are compiled using either makefiles directly, or relying on CMake as a build system. This is the closest we can have to a standard build system in C and C++. Many projects however could not be compiled: some do not use such build systems, e.g., sometimes resorting to shell scripts to issue the compilation commands; others require libraries that are missing; or have improper configuration of their build system. We extract WebAssembly files based on their name only (.wasm or .wat), but there could be other files that use a different naming scheme.
We share the dataset of binaries with information about the projects from which each binary is built. It is important to check license compatibility of these projects to ensure that any legal restrictions or obligations are properly adhered to.
The code patterns that we could detect are limited by the capabilities of the Clang static analysis tool being used. There is further scope to develop analysis tools that can detect the remaining code smells that are not implemented in this work. Detection of these code smells requires modelling the memory and pointers used in the program, making them more difficult to detect using only static analysis. Even though we detect many instances of compilation smells at the level of a project, such smells may be in a file that will not be compiled to WebAssembly, even if part of the project is. Hence, further work is required to determine which specific files within a project will be compiled to WebAssembly. Finally, we do not investigate the context in which these issues arise nor the impacts that such unwanted code patterns have on these projects. This requires an intensive manual effort that we plan to conduct in future.
## VI Related Work
We build the first dataset of WebAssembly sources and their binaries. This dataset is large, and it remains up to date with the help of the deployed Wasmizer tool. We also investigate the prevalence of compilation smells in real-world WebAssembly programs.
Regarding the real-world usage of WebAssembly, Hilbig et al. [8] found that 64.2% of WebAssembly binaries are compiled from C or C++ in open access repositories. This is significant as these memory-unsafe languages are particularly vulnerable to security weaknesses when compiled to WebAssembly due to the lack of compiler protections. Additionally, many of these binaries have potential memory-related vulnerabilities, such as making use of the unmanaged stack, or using a custom memory allocator, which increase the risk of security weaknesses within the program [8]. The lack of memory protection measures implemented in WebAssembly compilers leads to security weaknesses. Almost 80% of binaries make use of the LLVM toolchain [18], so any security measures implemented in this toolchain would have a significant impact on the security of WebAssembly binaries.
When compiling C/C++ programs to WebAssembly, there have been observed differences in the behaviour of the resulting binaries when compared to x86 native code [10, 9]. While they may introduce hard-to-detect bugs, in many cases these differences may be fairly harmless, such as differing outputs from print statements. However, as WebAssembly lacks common memory protections, some behavioural differences related to memory allocation may lead to security weaknesses. Three main causes were identified as being responsible for the observed differences [9]. The first is a difference due to a different implementation of the C standard library used in the native executable and its WebAssembly counterpart. The second cause is the previously mentioned lacking memory protections of WebAssembly, where cases that would cause an exception to be thrown in native binaries instead continue to run in WebAssembly. Finally, differences in the execution environment also account for some behaviour differences.
All security-critical differences relate to the memory model and its protections (or lack thereof). In a C program compiled to a native binary, a stack smashing attack would be prevented by a stack canary. However, as these are not present in WebAssembly, it leaves the application vulnerable to such an attack [10, 9]. Similarly, a buffer underwrite that would usually be protected against via hardware protection or bounds checks remains undetected [9]. While the sandboxed environment of WebAssembly prevents such attacks from manipulating anything outside of the binary's memory, Lehmann et al. [19] demonstrated how these vulnerabilities can be exploited in order to execute an XSS attack in a browser, run an arbitrary shell command in a node.js server-side environment, and write arbitrary content to a file in the Wasmtime standalone runtime. While the work done by Stievenart et al. [10, 9] limits itself to only consider the Clang toolchain, McFadden et al. [20] establish that similar vulnerabilities are present in the Emscripten compiler toolchain. These studies have shown the existence of code patterns that exhibit different behaviour in a synthetic test suite; however, we aim to take the next step and investigate the prevalence of these code patterns in real-world projects.
## VII Conclusion
We identify 2 540 C and C++ projects on GitHub that are related to WebAssembly and likely target WebAssembly for their compilation. We compile these projects to build a dataset of 8 915 binaries that are linked to their corresponding source project. We develop Wasmizer, a tool that fully automates this process. Wasmizer is open source and is running on a dedicated machine to regularly mine GitHub projects and provide researchers with a novel and up-to-date dataset of WebAssembly binaries and their associated projects.
To present a use case of this dataset, we investigate the presence of eight WebAssembly _compilation smells_ in 1 605 projects. We find that 386 projects that aim to compile to WebAssembly exhibit at least one compilation smell. The most prevalent smells are calls to getenv() and calls to fclose() on the return value of fopen() without checking if the value is null. Our findings show that developers compiling native programs to WebAssembly should be aware of behavioural differences between their implementation and its expected results, as many repositories exhibit similar issues that may adversely affect development goals.
In future, we plan to locate the exact source files that generate WebAssembly binaries. This can for example be achieved by tracing compiler calls or by inspecting dependencies in the build system.
## Acknowledgment
The feedback we received from the MSR 2023 reviewers was helpful in improving Wasmizer and making it a more useful tool for the research community. We are grateful for their input and excited to publicly share Wasmizer and the evolving dataset that it curates.
|
2303.06510 | E2CoPre: Energy Efficient and Cooperative Collision Avoidance for UAV
Swarms with Trajectory Prediction | This paper presents a novel solution to address the challenges in achieving
energy efficiency and cooperation for collision avoidance in UAV swarms. The
proposed method combines Artificial Potential Field (APF) and Particle Swarm
Optimization (PSO) techniques. APF provides environmental awareness and
implicit coordination to UAVs, while PSO searches for collision-free and
energy-efficient trajectories for each UAV in a decentralized manner under the
implicit coordination. This decentralized approach is achieved by minimizing a
novel cost function that leverages the advantages of the active contour model
from image processing. Additionally, future trajectories are predicted by
approximating the minima of the novel cost function using calculus of
variation, which enables proactive actions and defines the initial conditions
for PSO. We propose a two-branch trajectory planning framework that ensures
UAVs only change altitudes when necessary for energy considerations. Extensive
experiments are conducted to evaluate the effectiveness and efficiency of our
method in various situations. | Shuangyao Huang, Haibo Zhang, Zhiyi Huang | 2023-03-11T22:33:00Z | http://arxiv.org/abs/2303.06510v2 | (E^{2}CoPre\): Energy Efficient and Cooperative Collision Avoidance for UAV Swarms with Trajectory Prediction
###### Abstract
This paper addresses the collision avoidance problem of UAV swarms in three-dimensional (3D) space. The key challenges are energy efficiency and cooperation of swarm members. We propose to combine Artificial Potential Field (APF) with Particle Swarm Optimization (PSO). APF provides environmental awareness and implicit coordination to UAVs. PSO searches for the optimal trajectories for each UAV in terms of safety and energy efficiency by minimizing a fitness function. The fitness function exploits the advantages of the Active Contour Model in image processing for trajectory planning. Lastly, vehicle-to-vehicle collisions are detected in advance based on trajectory prediction and are resolved by cooperatively adjusting the altitudes of UAVs. Simulation results demonstrate that our method can save up to 80% of energy compared to state-of-the-art schemes.
UAV, swarm, collision avoidance, safety, energy, cooperation, PSO, APF.
## 1 Introduction
A UAV swarm is a group of UAVs collaborating to produce enhanced capabilities and resilience compared to the sum of individuals. Despite the varieties of UAVs, multi-rotors are studied in this paper for their prevalence in research and industry. UAV swarms have been actively used in search and rescue [1], tracking and monitoring [2, 3], and post-disaster communication recovery [4] to provide low-cost and real-time connections. Collision avoidance is essential in UAV swarm applications. The biggest challenges in collision avoidance for UAV swarms are energy efficiency and cooperation. On the one hand, UAVs rely on a limited onboard power supply which can only support less than 40 \(min\) of flight time [5] with no payload, decreasing to less than 20 \(min\) with a 5 \(kg\) payload. Beyond payloads, frequent adjustment of velocity further increases the energy consumption of UAVs and reduces their flight time [6, 7, 8]. Therefore, the UAVs' trajectories in collision avoidance are required to be smooth and regular for energy efficiency. On the other hand, the cooperative actions of UAVs stand a higher chance of obtaining optimal solutions in cooperative missions like collision avoidance. Moreover, the collision of any UAV may cause a collision of the whole swarm and fail the mission. Therefore, cooperative collision avoidance for UAVs in a swarm is required.
Existing algorithms for swarm collision avoidance are either energy inefficient or difficult to achieve cooperation. For example, Velocity-based algorithms like Reciprocal Velocity Obstacle (RVO) [9, 10, 11] and Virtual force-based algorithms like Artificial Potential Fields (APF) [12, 13, 14] result in zig-zag trajectories that are energy inefficient to UAVs. Model Predictive Control (MPC)-based methods [15, 16, 17] rely on well-defined control models and parameters of UAVs, which are non-trivial. Heuristic-based algorithms like Particle Swarm Optimization (PSO) [18] and Genetic Algorithms (GA) [19] consider energy consumption by minimizing a cost function. However, these methods can only plan for individual UAVs and hinder cooperation, as the dimensionality of the search space increases exponentially with the number of UAVs. Moreover, the existing methods are all based on shifting horizon planning, where UAVs plan trajectories over a finite time window shifting in the time domain at each step. This receding horizon planning results in the short sight of UAVs and leads to sub-optimal trajectories from a long-term perspective. For example, only the collisions occurring in the immediate next step will be considered. It will help if the collisions occurring in future steps can be foreseen. Recently, Multi-Agent Reinforcement Learning (MARL) has been used to train policies for cooperative collision avoidance [20, 21, 22, 23]. Policies are trained to plan trajectories at each step considering long-term consequences to address the short-sight limitation of shifting horizon planning. However, MARL-based methods rely on well-defined simulation environments and manually designed reward signals, which are non-trivial. Moreover, MARL-based methods are subject to time and computational power-intensive training, high variances [24, 25, 26, 27], and low sample efficiency [28], which result in high failure rates in online execution. These deficiencies of MARL-based methods limit their application in safety-sensitive missions like collision avoidance.
In this paper, we present \(E^{2}CoPre\): Energy Efficient and Cooperative Collision Avoidance for UAV Swarms with Trajectory _Pre_diction. \(E^{2}CoPre\) addresses the limitations of existing methods by combining APF and PSO. APF provides environmental awareness and implicit coordination
to UAVs. Under the implicit coordination of APF, UAVs can search for their optimal trajectories using PSO independently in a decentralized manner, which avoids the curse of dimensionality. The fitness function of PSO ensures that the trajectories are energy-efficient. Moreover, a novel fitness function that exploits the advantages of the Active Contour Model in image processing is designed for trajectory planning. The advantages of \(E^{2}CoPre\) can be summarized in spatial and temporal domains. _Spatial Domain_: the advantage of \(E^{2}CoPre\) in spatial domain is trajectory-based PSO. Unlike conventional waypoint-based PSO [29, 30] where particles in PSO correspond to waypoints of a trajectory, the particles in trajectory-based PSO correspond to trajectories. Trajectory-based PSO generates smoother trajectories but has larger search space dimensionalities. To avoid the curse of dimensionality, a trajectory is expressed using only two variables in \(E^{2}CoPre\). Therefore, the dimensionality of the search space for trajectory-based PSO is two. _Temporal Domain_: the advantage of \(E^{2}CoPre\) in the temporal domain is trajectory prediction. Trajectory prediction addresses the short-sight limitation of UAVs by predicting their future trajectories at each step. The trajectory prediction is achieved by approximating the minima of the fitness function of trajectory-based PSO using the method of calculus of variation. The trajectory prediction module is implemented to detect collisions in advance and addresses the short-sight limitation of shifting horizon planning. The prediction results are also used to initialize the PSO search to boost convergence and avoid local optima traps.
Extensive experiments are conducted to evaluate \(E^{2}CoPre\) against state-of-the-art collision avoidance schemes [29, 30, 31]. \(E^{2}CoPre\) can save up to 80% of energy in the best case and 20% in the worst case compared to state-of-the-art schemes, showing the advantage of the novel fitness function of \(E^{2}CoPre\) on energy saving. Apart from energy efficiency, the distances between UAVs and obstacles and the distances between UAVs in \(E^{2}CoPre\) are smaller than those in other methods while ensuring safety. This shows that the PSO search initialized by trajectory prediction in \(E^{2}CoPre\) can find more efficient trajectories that are nearer to the obstacles yet collision-free. We also conduct simulations with packet loss in communication channels to show the robustness of \(E^{2}CoPre\) to channel quality. Results show that when the transmission rate of UAVs is higher than 1 \(p/s\) (packet per second), the performance of \(E^{2}CoPre\) does not vary much with channel quality. Parameter analysis is conducted to show the impact of essential parameters in \(E^{2}CoPre\) on its performance. Last but not least, ablation tests are conducted to validate the importance of the trajectory prediction module in \(E^{2}CoPre\). Results show that trajectory prediction reduces energy consumption and ensures safety.
The rest of this paper is organized as follows. Related works are first discussed in Section 2. Then, the system model and framework of \(E^{2}CoPre\) are introduced in Sections 3 and 4, respectively. The environment representation using APF is detailed in Section 5. Next, the fitness functions used in PSO searches are detailed in Section 6. Following the fitness functions, trajectory prediction is introduced in Section 7. Following trajectory prediction, trajectory planning using PSO is introduced in Section 8. Eventually, extensive experiments are conducted to validate \(E^{2}CoPre\) in Section 9. Conclusions are drawn in Section 10.
## 2 Related Work
Research in collision avoidance can generally be classified as velocity-based, virtual force-based, heuristic-based, model predictive control-based, and MARL-based.
### _Velocity-Based Methods_
Velocity-based methods avoid collisions by adjusting the velocities of UAVs. A representative algorithm is Velocity Obstacle (VO) [9, 10]. At each step, a UAV maintains a pool of velocities that will cause collisions with other UAVs or obstacles. This velocity pool is referred to as the Velocity Obstacle. Each UAV selects a velocity outside its Velocity Obstacle to avoid collisions at each time step. Reciprocal Velocity Obstacle (RVO) [11] assumes two UAVs contribute equally when avoiding mutual collisions. In RVO, two UAVs maintain a shared velocity pool containing the relative velocities that will cause collisions between them. This shared velocity pool is referred to as the Relative Velocity Obstacle. The two UAVs work together to adjust their relative velocity outside their Relative Velocity Obstacle. Velocity-based methods are straightforward to implement. However, they cause frequent and abrupt velocity adjustments, leading to energy-inefficient trajectories.
### _Virtual Force-Based Methods_
Virtual force-based methods are based on the concept of Artificial Potential Fields (APF) [14]. The environment is first modeled as a potential field. Intensities on the potential field are a differentiable function with maxima at obstacles and minima at target points. The UAVs are guided by virtual forces derived from the gradients on the potential fields. The virtual forces lead the UAVs to their destinations while pushing them away from the obstacles. However, when the environment is complex, local optima will appear on the potential field with zero gradients. The UAVs will not get out once trapped in local optima.
Various methods are proposed to address the limitation of local optima. One work [13] proposes a recursive excitation/relaxation factor that increases the intensity of a local minimum when a UAV gets trapped in it and decreases it when the UAV gets out. Another work [12] proposes adding a guiding force pointing towards target points; the guiding force is generated by sensors like GPS or a gyrometer. Virtual force-based methods can deal with collision avoidance for multiple UAVs better than velocity-based methods, since the influence of all other UAVs and obstacles on the ego UAV is modeled as the intensities and gradients of the potential field around it. Like velocity-based methods, virtual force-based methods generate zig-zag and energy-inefficient trajectories.
### _Heuristic-Based Methods_
Heuristic based-methods like swarm intelligence, genetic algorithms, and graph path-finding algorithms are popular in trajectory planning.
#### 2.3.1 Swarm Intelligence Algorithms
Swarm intelligence like Particle Swarm Optimization (PSO) [18] mimics the behaviors of a swarm of fish searching for food. In detail, members in the swarm explore the environment and exchange their findings, so each member knows about the most promising area the whole swarm has found. Members in the swarm then move toward the most promising area collectively and, at the same time, keep exploring the environment till the food is found. In practice, the most promising area is defined by the point in a search space that minimizes a fitness function. For collision avoidance for UAV swarms, proximity to obstacles and energy consumption can be included in the fitness function to ensure safety and energy efficiency. However, swarm intelligence methods suffer from the curse of dimensionality. For example, in trajectory planning missions, the size of the search space of waypoint-based PSO increases with the size of the environment. The size of the search space for trajectory-based PSO increases with the complexity of the environment and the number of parameters used to define trajectories. This limitation prevents the cooperation of swarm members.
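As a concrete reference, a minimal generic PSO minimization loop of the kind described above is sketched below; the inertia and acceleration coefficients, seed, and initialization range are illustrative choices, not values from any of the cited works.

```
#include <functional>
#include <limits>
#include <random>
#include <vector>

struct Particle {
  std::vector<double> x, v, best;  // position, velocity, personal best
  double bestVal;
};

// Returns the best point found for a dim-dimensional minimization problem.
std::vector<double> pso(
    int dim, int nParticles, int iters,
    const std::function<double(const std::vector<double> &)> &fit) {
  std::mt19937 rng(42);
  std::uniform_real_distribution<double> u(0.0, 1.0);
  const double w = 0.7, c1 = 1.5, c2 = 1.5;  // inertia, cognitive, social
  std::vector<Particle> swarm(nParticles);
  std::vector<double> gBest(dim, 0.0);
  double gBestVal = std::numeric_limits<double>::infinity();
  for (auto &p : swarm) {
    p.x.resize(dim);
    p.v.assign(dim, 0.0);
    for (double &xi : p.x) xi = 2.0 * u(rng) - 1.0;  // random start in [-1,1]
    p.best = p.x;
    p.bestVal = fit(p.x);
    if (p.bestVal < gBestVal) { gBestVal = p.bestVal; gBest = p.x; }
  }
  for (int t = 0; t < iters; ++t) {
    for (auto &p : swarm) {
      for (int d = 0; d < dim; ++d) {
        // Pull toward the personal and global bests, keeping some momentum.
        p.v[d] = w * p.v[d] + c1 * u(rng) * (p.best[d] - p.x[d]) +
                 c2 * u(rng) * (gBest[d] - p.x[d]);
        p.x[d] += p.v[d];
      }
      double val = fit(p.x);
      if (val < p.bestVal) { p.bestVal = val; p.best = p.x; }
      if (val < gBestVal) { gBestVal = val; gBest = p.x; }
    }
  }
  return gBest;
}
```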
#### 2.3.2 Genetic Algorithms
Genetic algorithms [19] mimic the evolution of chromosomes. Solutions are represented by chromosomes with properties called genes. The chromosomes evolve to produce offspring through crossover, recombination, and mutation at each step. Like swarm intelligence, a fitness function is needed to measure the quality of chromosomes. Better chromosomes have higher opportunities to generate offspring, and worse chromosomes will be terminated. This reproduction process is repeated, like natural selection, until the stopping condition is reached. Similar to swarm intelligence, genetic algorithms are limited by the curse of dimensionality. Moreover, genetic algorithms cannot guarantee optimality, as the solutions found are only better than other candidates.
#### 2.3.3 Graph Path Finding Algorithms
Graph path finding algorithms like A\({}^{\star}\)[32], D\({}^{\star}\)[33], and D\({}^{\star}\) Lite [34] incorporate heuristics into graph search algorithms like Dijkstra's. The environment is modeled as a weighted graph. A node in the weighted graph represents a waypoint, and an edge connecting two nodes represents the travel cost between them. A\({}^{\star}\) maintains a tree of nodes from the start node to the target node. At each step, the new node with the least-cost branch is included in the tree. The search stops when the target node is included. D\({}^{\star}\) allows UAVs to update their weighted graph in response to the dynamic environment at each step. D\({}^{\star}\) Lite further improves D\({}^{\star}\) by reversing the search direction and code-level optimizations. These methods are subject to a well-defined weighted graph, which is non-trivial to acquire in complex and dynamic environments. The limitations of the methods above make it difficult for UAVs in a swarm to cooperate, as the sizes of search spaces and weighted graphs grow exponentially with the number of UAVs.
### _Model Predictive Control-Based Methods_
Model predictive control (MPC) solves the problem of collision avoidance on the control level. MPC is a set of advanced control strategies. The basic idea of MPC is to predict the future behaviors of a system in a finite time window. Based on this prediction, an optimal control signal is determined to minimize a cost function while satisfying system constraints. The time window is shifted in the time horizon for the next step. The advantage of MPC is that it can work with realistic constraints on control level and directly output control commands. Generally, MPC can be classified as linear MPC and nonlinear MPC based on the linearity of the system model.
Linear MPC computes a linear system model's optimal control signal sequence. For collision avoidance for UAVs, the system is subjected to the kinetic constraints of UAVs and obstacle constraints of environments. The cost function to be minimized considers both safety and energy efficiency. For example, linear MPC is used as a trajectory predictor in previous works [16]. However, due to the non-linear nature of the UAV control system, linear MPC control is only valid over a small section of its trajectory. The output of linear MPC control on the UAV system is approximate. Moreover, the diversity and dynamic properties of the UAV system will be lost after linearization [15]. Non-linear MPC control can better model complex systems like UAVs [17]. However, both linear and non-linear MPC rely on a well-defined control model and parameters, which are non-trivial. Moreover, the shifting horizon control results in energy-inefficient trajectories for UAVs.
### _MARL-Based Methods_
Reinforcement learning (RL) studies the problem _What To Do_ - how to map situations to actions - to maximize a numerical reward [35]. The functions mapping situations to actions are called policies. The policy of an agent (UAV) is trained in a trial-and-error manner through repeated interactions with a simulation environment. Multi-Agent Reinforcement Learning (MARL) trains cooperative policies for agents with a Centralized Training with Decentralized Execution (CTDE) scheme [20, 21, 22, 23]. In detail, agents are trained in a centralized manner considering other agents' policies. In execution, agents select actions based on their policies in a decentralized way. CTDE enables autonomous and distributive decision-making of agents. However, MARL-based methods require time- and computational power-intensive training and are subject to well-defined simulation environments and reward signals. The simulation environment, reward signal, and even the randomness in training significantly impact the algorithms' performances. Moreover, the high variances [24, 25, 26, 27] and low sample efficiency [28] will lead to sub-optimal policies and high failure rates in online execution.
This paper addresses the limitations of existing methods. In detail, cooperation is achieved through the implicit coordination of APF, energy efficiency is ensured by the novel fitness function and trajectory-based PSO, and the short-sight limitation of UAVs is addressed by trajectory prediction.
## 3 System Model
The application scenario considered in this paper is illustrated in Fig. 1, where a UAV swarm flies along a pre-planned path depicted by the dashed red line. From a planning perspective, the swarm has one static path. The UAVs in the swarm fly on different trajectories that are parallel to the static path and are a certain distance from each other. We assume static obstacles such as buildings and towers have been avoided in the pre-planned path during offline planning, and will focus on avoiding collisions with dynamic obstacles such as birds or adversarial UAVs. We also assume the magnitudes of the UAVs' velocities do not change during collision avoidance for energy considerations, as altitude change consumes more energy than level flight. Moreover, the avoidance of dynamic obstacles is usually accomplished in a short time. Adjusting velocities in a short time requires significant accelerations and power consumption. Therefore, it is reasonable to avoid obstacles by adjusting trajectories rather than velocities in collision avoidance for dynamic obstacles.
We assume each UAV is equipped with a GPS receiver for self-positioning, a LiDAR sensor such as Leddar M16 [36] for detecting obstacles' positions and velocities, a ZigBee radio for wireless communication within the swarm, and a microcomputer such as Raspberry Pi [37] for decision making. To achieve environmental awareness, each UAV periodically broadcasts its location and velocity to the other members in the swarm. Since the LiDAR sensor cannot differentiate obstacles from neighboring swarm members, the relative position of the obstacle detected by LiDAR will be converted to its absolute position and compared with the GPS locations received from other swarm members to distinguish between obstacles and UAV swarm members.
As illustrated in Fig. 1, the static path is modeled as a sequence of discrete waypoints. Since dynamic obstacles may intrude between any two waypoints, we focus on avoiding UAV-to-UAV and UAV-to-Obstacle collisions between any two adjacent waypoints, and these two waypoints are the start and target points of collision avoidance, respectively. This assumption does not require the collision avoidance to finish between the start and target waypoints. If the collision avoidance is not finished when the swarm reaches its target waypoint, its target waypoint will be replaced by the next immediate waypoint on the static path. During collision avoidance, UAVs can fly in 3D space, however they must fly back to their original altitude after collision avoidance. Hence, flying upward or downward during collision avoidance makes no difference in energy consumption.
## 4 Framework
The system framework for \(E^{2}CoPre\) is illustrated in Fig. 2. The UAVs in the swarm keep exchanging information on their positions, velocities, and the obstacles they detect. Based on this information exchange, each UAV constructs an artificial potential field to represent the environment. Trajectory planning is performed based on the potential field and is divided into two branches: _level planning_ and _altitude planning_, where _altitude planning_ is used only when a potential UAV-to-UAV collision is detected through trajectory prediction. The two-branch structure ensures that UAVs try to keep their trajectories in the 2D space as much as possible, because altitude changes consume much more energy than level flight. To detect potential UAV-to-UAV collisions and address the short-sight limitation of the UAVs, a trajectory prediction module is added before level planning. The key components in Fig. 2 are summarized as follows.
* **Environment Field Construction:** The environment field establishes environment awareness and provides implicit coordination for collision avoidance. It is constructed by modelling the environment as a continuous and differentiable potential field with maxima at obstacles, as illustrated in Fig. 2. Each UAV can understand the environment by analyzing the intensities and gradients of the potential field within its vicinity. Implicit coordination is achieved by planning trajectories for UAVs along the contours of the environment field to automatically avoid collisions, since contours never intersect or go through peaks of the potential field. The details on constructing the environment field will be presented in Sec. 5.
* **Trajectory Prediction:** This module predicts the coarse-grained trajectory of each UAV for several future planning steps based on calculus of variation. The trajectory being predicted is denoted by \(S^{k},k>1\), where \(k\) denotes the number of planning steps constituting the trajectory. For example,
Fig. 1: Illustration of collision avoidance for UAV swarms.
Fig. 2: The system framework of \(E^{2}CoPre\).
in Fig. 2 the trajectories \(S^{3}\) for the next three steps are predicted, with each step represented by \(S\). The trajectory length \(|S|\) in each planning step is fixed. The purpose of predicting trajectories for several future steps is two-fold: (1) the predicted trajectories are used to detect potential UAV-to-UAV collisions. Although UAVs are planned to fly along different contours of the environmental field that never intersect, this cannot completely eliminate UAV-to-UAV collisions since the minimum distance between two adjacent contours can be small. (2) the predicted coarse-grained trajectories are used to initialize level planning to boost convergence, since level planning uses PSO to search for the best trajectory for the next step. Details on trajectory prediction will be presented in Sec. 7.
* **Level Planning:** Level planning plans the best trajectory \(S^{1}\) of each UAV for the next step in the 2D space by minimizing a fitness function \(f_{level}(\cdot)\) that considers both energy consumption and safety using PSO. The main objective of level planning is to avoid collisions with obstacles. Although it tries its best to separate UAVs by planning trajectories on different contours, it cannot guarantee that the trajectories won't have UAV-to-UAV collisions.
* **Altitude Planning:** Once potential UAV-to-UAV collisions are detected based on the predicted coarse-grained trajectories, altitude planning is used to resolve them by scheduling UAVs to different altitudes to minimize energy consumed by altitude changes based on another fitness function \(f_{alt}(\cdot)\). Level planning and altitude planning are presented in Sec. 8.
In the rest of this paper, we will present \(E^{2}CoPre\) in a top-down approach. Since both trajectory prediction and level planning use the fitness function \(f_{level}\), we will introduce the two fitness functions \(f_{level}\) and \(f_{alt}\) first before presenting trajectory prediction and planning.
## 5 Environment Field Construction
We model the environment field as a potential field shared by the UAV swarm to provide environmental awareness and implicit coordination for trajectory planning. The environment field is the addition of multiple repulsive fields, where each detected obstacle is modeled as a repulsive field and the UAV swarm as one entity is modeled as a repulsive field. The reason why the swarm instead of each UAV is modeled as a repulsive field is twofold: 1) it reflects the velocity of the swarm in the environment field; 2) it makes the environment field irregular around the swarm, thereby reducing the risk of UAV-to-UAV collisions. Unlike conventional methods [14], attractive fields are not required in \(E^{2}CoPre\) as the UAVs are aware of their targets - the next waypoints on their pre-planned paths. Fig. 3 gives an example of the environment field that contains a swarm with three UAVs and two obstacles.
A repulsive field is a two-dimensional continuous and differentiable function with its maximum at its center, decreasing toward its edges. The gradients of a repulsive field point away from its center like repulsive forces. The repulsive field for the swarm is constructed based on its conceptual center \(p^{*}\), which is defined as the position obtained by shifting the swarm's geometric center \(\bar{p}\) toward its target point \(p_{tar}\) by a short distance \(d_{shift}\).
\[\begin{split} p^{*}=&\bar{p}+\mathbf{pp_{tar}}\cdot d_{ shift},\\ \bar{p}=&\frac{1}{N}\sum_{i=1}^{N}p_{i}^{u},\end{split} \tag{1}\]
where \(p_{i}^{u}\) is the current position of the \(i^{th}\) UAV and \(N\) is the number of UAVs in the swarm. \(\mathbf{pp_{tar}}\) is a unit vector pointing from the geometric center \(\bar{p}\) to the target point \(p_{tar}\). In practice, \(d_{shift}=v_{s}\times 1s\), where \(v_{s}\) is the velocity of the swarm and is the average velocity of all UAVs. As shown in Fig. 3(a), the conceptual center is always \(v_{s}\times 1s\) in front of the swarm's geometric center to provide navigation information to UAVs. Note that the conceptual center of a swarm is only needed for modeling the environment field and does not need to avoid any collision.
The repulsive field of the swarm \(\Phi_{s}(q)\) is constructed as follows:
\[\Phi_{s}(q)=\left\{\begin{array}{cc}\frac{v_{s}}{|\mathbf{qp^{*}}|^{2}},&|\mathbf{qp^ {*}}|\leq R^{s}\\ 0,&|\mathbf{qp^{*}}|>R^{s}\end{array}\right. \tag{2}\]
where \(\Phi_{s}(q)\) is the field intensity at position \(q\), \(|\mathbf{qp^{*}}|\) is the distance from \(q\) to the conceptual center, and \(R^{s}\) is the influential range of \(\Phi_{s}(q)\).
On the other hand, the repulsive field for the \(j^{th}\) obstacle detected by the swarm is defined as follows.
\[\Phi_{j}^{o}(q)=\left\{\begin{array}{cc}\frac{\max\{v_{j}^{o},v_{s}\}}{d_{ safe}^{2}},&|\mathbf{qp_{j}^{o}}|\leq d_{safe}\\ \frac{\max\{v_{j}^{o},v_{s}\}}{|\mathbf{qp_{j}^{o}}|^{2}},&d_{safe}<|\mathbf{qp_{j}^{o} }|\leq R_{j}^{o}\\ 0,&|\mathbf{qp_{j}^{o}}|>R_{j}^{o}\end{array}\right. \tag{3}\]
where \(\Phi_{j}^{o}(q)\) is the intensity of position \(q\) in the potential field, \(R_{j}^{o}\) is the influential range of the potential field, and \(|\mathbf{qp_{j}^{o}}|\) is the distance from \(q\) to the position of the obstacle \(p_{j}^{o}\). The \(\max\) operator ensures the swarm field and the obstacle's field have comparable intensities when \(v_{j}^{o}<v_{s}\), especially when \(v_{j}^{o}=0\). The area within \(d_{safe}\) around the obstacle is called the _Protection Bubble_, which prevents UAVs from getting closer to the obstacle. The _Protection Bubble_ has the maximum intensity to expel UAVs. As shown in Fig. 3(b), the slope of the potential field increases as \(q\) gets close to the _Protection Bubble_. With such an APF model, each UAV
Fig. 3: Example of environment field.
detects higher field intensity and feels a stronger threat when the swarm gets closer to the obstacle.
Based on the potential fields for the swarm and obstacles, the environment field \(\Phi(q)\) is defined by adding all these fields together as follows:
\[\Phi(q)= \Phi_{s}(q)+\sum_{j=1}^{M}\Phi_{j}^{o}(q) \tag{4}\]
where \(M\) is the number of obstacles.
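To make the construction concrete, the following is a minimal sketch of evaluating Eqs. (1)-(4) at a query point \(q\); the 2D vector type, helper names, and parameter handling are illustrative assumptions rather than the exact implementation used in \(E^{2}CoPre\).

```
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2 { double x, y; };

static double dist(Vec2 a, Vec2 b) { return std::hypot(a.x - b.x, a.y - b.y); }

struct Obstacle { Vec2 pos; double speed, range; };  // p_j^o, v_j^o, R_j^o

// Conceptual center p* per Eq. (1): geometric center shifted toward the
// target by d_shift = v_s * 1s (assumes the swarm is not at the target).
Vec2 conceptualCenter(const std::vector<Vec2> &uavs, Vec2 target, double vS) {
  Vec2 c{0.0, 0.0};
  for (const Vec2 &p : uavs) { c.x += p.x; c.y += p.y; }
  c.x /= uavs.size(); c.y /= uavs.size();
  double d = dist(c, target);
  Vec2 u{(target.x - c.x) / d, (target.y - c.y) / d};  // unit vector pp_tar
  return Vec2{c.x + u.x * vS, c.y + u.y * vS};
}

// Swarm repulsive field, Eq. (2).
double swarmField(Vec2 q, Vec2 pStar, double vS, double Rs) {
  double d = dist(q, pStar);
  return (d > 0.0 && d <= Rs) ? vS / (d * d) : 0.0;
}

// Obstacle repulsive field, Eq. (3), with the Protection Bubble plateau.
double obstacleField(Vec2 q, const Obstacle &o, double vS, double dSafe) {
  double v = std::max(o.speed, vS);
  double d = dist(q, o.pos);
  if (d <= dSafe) return v / (dSafe * dSafe);  // inside the Protection Bubble
  if (d <= o.range) return v / (d * d);        // decaying influence region
  return 0.0;                                  // outside the influence range
}

// Environment field, Eq. (4): swarm field plus all obstacle fields.
double environmentField(Vec2 q, Vec2 pStar, double vS, double Rs,
                        double dSafe, const std::vector<Obstacle> &obs) {
  double phi = swarmField(q, pStar, vS, Rs);
  for (const Obstacle &o : obs) phi += obstacleField(q, o, vS, dSafe);
  return phi;
}
```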
Some properties of the environment field can be summarized as follows.
* UAVs closer to the conceptual center are in the front of the swarm and thus are in greater danger than others. This is because the selection of the conceptual center reflects the direction of the swarm's velocity. UAVs closer to the conceptual center will have higher field intensities.
* The obstacle's repulsive field has maxima within the _Protection Bubble_, in which no trajectory should be planned.
The coordination on collision avoidance is achieved by planning trajectories on contours of the environment field, which will be detailed in the following sections.
## 6 Fitness Functions
In this section, we introduce the two fitness functions \(f_{level}(\cdot)\) and \(f_{alt}(\cdot)\) to be used for trajectory prediction and planning. \(f_{level}(\cdot)\) is used for trajectory prediction and level planning in the 2D space to avoid collisions. \(f_{alt}(\cdot)\) is used in altitude planning to schedule UAVs to different altitudes in 3D space to avoid UAV-to-UAV collisions.
### _Level Fitness \(f_{level}\)_
For energy efficient collision avoidance, \(f_{level}(\cdot)\) consists of two parts: an energy efficiency part \(f_{eng}(\cdot)\) and a safety part \(f_{saf}(\cdot)\), as follows.
\[f_{level}(S^{k})=\lambda_{1}f_{eng}(S^{k})+\lambda_{2}f_{saf}(S^{k}), \tag{5}\]
where \(S^{k}\) is the trajectory to be optimized and \(k\) is the number of planning steps constituting the trajectory. \(f_{eng}(S^{k})\) is the fitness function to ensure energy efficiency, whereas \(f_{saf}(S^{k})\) is the fitness function to ensure safety. \(\lambda_{1}\) and \(\lambda_{2}\) are coefficients where \(\lambda_{1}+\lambda_{2}=1\).
#### 6.1.1 Energy Efficiency \(f_{eng}\)
Now we model the first term \(f_{eng}(S^{k})\) in Eq. (5) and show that the energy consumption of a UAV traveling along \(S^{k}\) is minimized when \(f_{eng}(S^{k})\) is minimized.
We consider quad-copters owing to its popularity in real-world applications, The energy consumption of a quad-copter flying along a trajectory \(S^{k}\) can be modeled as follow. Details on derivation can be found in our previous work [31].
\[\begin{split} E&=E_{n}+E_{len}+E_{comms}\\ E_{n}&=\int_{\eta_{s}}^{\eta_{e}}F_{n}(S^{k}(p))\cdot(v\sin\alpha+v_{i})\cdot\sin\beta\,dp\\ E_{len}&=(F_{drag}\sin\alpha\cos\beta+mg\cos\alpha\cos\beta)\\ &\quad\cdot(v\sin\alpha+v_{i})\cdot(\eta_{e}-\eta_{s})\\ E_{comms}&=\int_{\eta_{s}}^{\eta_{e}}P_{comms}(S^{k}(p))\,dp,\end{split}\tag{6}\]
where \(E_{n}\) is the energy consumed on performing turnings, \(E_{len}\) is the energy depending on trajectory length, and \(E_{comms}\) is the energy consumed in communication. \(p\) is an arc length parameter and \(\eta_{e}=\eta_{s}+k\cdot|S|\), where \(|S|\) is the length of one planning step. \([\alpha,\beta,\gamma]\) are the UAV body's pitch, roll, and yaw angles, \(F_{n}\) is the centripetal force required to perform turnings, \(F_{drag}\) is the air drag force, \(v\) is the UAV's ground speed, \(v_{i}\) is the induced velocity of the propellers, \(m\) is the total mass of the UAV including battery, load and sensors, and \(g\) is the gravitational acceleration. \(P_{comms}(S^{k}(p))\) is the energy consumed by the ZigBee module, and it is a function of \(S^{k}(p)\) because the amount of data sent and received depends on the length of the trajectory.
In one planning step, it is reasonable to assume that the pitch angle \(\alpha\), the UAV velocity \(v\), the air drag force \(F_{drag}\), and the induced velocity \(v_{i}\) do not change when the UAV is flying along the trajectory. Moreover, the trajectory length in one planning step is fixed. Therefore, \(E_{len}\) and \(E_{comms}\) are both constant in one planning step. Hence, minimizing \(E\) is equivalent to minimizing \(E_{n}\). To validate our assumption, we conducted experiments in which a hexacopter with a diagonal wheelbase of 1000 \(mm\) controlled by Pixhawk 2.0 is set to fly at a constant speed along two trajectories, as shown in Fig. 4a. Trajectories A and B are generated using a grid search-based genetic evolutionary algorithm [38] and a potential field-based algorithm [12], respectively. Trajectory A has three sharp turnings that are approximately \(90^{\circ}\). On the other hand, Trajectory B has just one smooth turning. Fig. 4b shows that the remaining servo voltage decreases over time. As can be seen from Fig. 4b, sharp turnings on Trajectory A at 1190s, 1280s and 1350s cause more radical drops in servo voltage than the smooth turning on Trajectory B at 580s. The overall energy consumption of Trajectory B is much smaller than that of Trajectory A because Trajectory B has fewer and smoother turnings. The full video is available at [39].
Let \(r(S^{k}(p))\) be the turning radius of the UAV. Then \(F_{n}(S^{k}(p))=\frac{mv^{2}}{r(S^{k}(p))}\). Hence,
\[\begin{split} E_{n}=& mv^{2}(v\sin\alpha+v_{i}) \sin\beta\int_{\eta_{s}}^{\eta_{e}}\frac{1}{r(S^{k}(p))}dp\\ =& mv^{2}(v\sin\alpha+v_{i})\sin\beta\int_{\eta_{s}} ^{\eta_{e}}|\kappa(S^{k}(p))|dp\\ =& e_{v}\int_{\eta_{s}}^{\eta_{e}}|[S^{k}(p)]^{ \prime\prime}|dp,\end{split} \tag{7}\]
where \(\kappa(S^{k}(p))\) and \([S^{k}(p)]^{\prime\prime}\) are the curvature and the second-order derivative of trajectory \(S^{k}(p)\) with respect to \(p\). \(e_{v}=mv^{2}(v\sin\alpha+v_{i})\sin\beta\) is a velocity-dependent coefficient and is constant within one small planning step.
For the sake of mathematical modeling, the energy-efficiency term of \(f_{level}(S^{k}(p))\) is designed as follows.
\[f_{eng}(S^{k}(p))=\int_{\eta_{s}}^{\eta_{e}}\frac{1}{2}|[S^{k}(p)]^{\prime\prime }|^{2}dp. \tag{8}\]
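To make Eq. (8) concrete, a minimal Python sketch for a trajectory discretized into equally spaced waypoints is given below; the function name, the central-difference stencil, and the Riemann-sum quadrature are our own illustrative choices, not part of any released implementation.

```python
import numpy as np

def f_eng(traj, ds=1.0):
    """Energy-efficiency fitness of Eq. (8): 0.5 * integral of |S''(p)|^2 dp.

    traj: (L, 2) array of equally spaced waypoints [x, y] along S^k(p).
    ds:   arc-length spacing between consecutive waypoints.
    """
    # Central-difference second derivative with respect to arc length p.
    d2 = (traj[2:] - 2.0 * traj[1:-1] + traj[:-2]) / ds**2
    # Riemann-sum approximation of the integral along the trajectory.
    return 0.5 * float(np.sum(d2**2)) * ds
```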
#### 6.1.2 Safety \(f_{saf}\)
The safety term \(f_{saf}(S^{k}(p))\) is used to avoid collisions. The _key idea_ is to plan trajectories for UAVs along different contours (i.e. different intensity levels) of the environment field. As illustrated in Fig. 5 (a): (1) contours are smooth, which ensures the energy efficiency of the planned trajectories, (2) contours never go through the peaks of the potential field, where each peak, except the one representing the UAV swarm, corresponds to the position of an obstacle, and (3) contours never intersect, which helps avoid UAV-to-UAV collisions.
A trajectory on a contour has the minimum variation in intensity. To find the trajectory with the minimum intensity variation for a UAV, we convert the environment field into a binary field using the following equation.
\[\begin{split}\Phi_{b}(q)=\begin{cases}1,&\Phi(q)\geq \Phi(p_{0}),\\ -1,&\Phi(q)<\Phi(p_{0}),\end{cases}\end{split} \tag{9}\]
where \(p_{0}\) is the current position of the UAV. Based on this binary field, the trajectory with the minimum variation in intensity in the environment field becomes an edge of the binary field with the maximum gradient magnitude. To find such an edge in the binary field, the safety term of the level fitness \(f_{level}(S^{k}(p))\) is defined as follows:
\[f_{saf}(S^{k}(p))=-\int_{\eta_{s}}^{\eta_{e}}\frac{1}{2}|\triangledown\Phi_{b} (S^{k}(p))|^{2}dp, \tag{10}\]
where \(\triangledown\Phi_{b}\) represents the gradients of binary field \(\Phi_{b}\).
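A sketch of Eqs. (9) and (10) on a discretized field follows; representing the environment field as a 2D grid and indexing waypoints by integer grid cells are simplifying assumptions made purely for illustration.

```python
import numpy as np

def f_saf(traj_idx, field, p0_idx):
    """Safety fitness of Eq. (10) on a gridded environment field.

    traj_idx: (L, 2) integer array of [row, col] cells along S^k(p).
    field:    2D array holding the environment field Phi.
    p0_idx:   (row, col) cell of the UAV's current position p_0.
    """
    # Eq. (9): threshold the field at the intensity of the current position.
    phi_b = np.where(field >= field[p0_idx], 1.0, -1.0)
    # Gradient of the binary field; large magnitudes mark the desired edge.
    g_row, g_col = np.gradient(phi_b)
    grad_sq = g_row**2 + g_col**2
    return -0.5 * float(np.sum(grad_sq[traj_idx[:, 0], traj_idx[:, 1]]))
```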
By substituting Eq. (8) and Eq. (10) into Eq. (5), the level fitness \(f_{level}(\cdot)\) has the following form:
\[f_{level}(S^{k}(p))=\int_{\eta_{s}}^{\eta_{e}}\frac{1}{2}\lambda_{1}|[S^{k}(p)]^{\prime\prime}|^{2}-\frac{1}{2}\lambda_{2}|\triangledown\Phi_{b}(S^{k}(p))|^{2}dp. \tag{11}\]
Note that \(f_{level}(S^{k}(p))\) given in Eq. (11) has the same form as the Active Contour Model [40], except that it does not include a term for minimizing trajectory length, as the trajectory length is fixed to \(k\cdot|S|\) in our method.
Although the trajectories on contours never go through the peaks of the environment field or intersect with each other, hard constraints on UAV-to-Obstacle and UAV-to-UAV distances are still needed to ensure safety in practice, since a UAV may get close to an obstacle or a neighboring UAV. In detail, the distance between any UAV and obstacle should be no smaller than a threshold distance \(d_{obs}\), and the distance between any two UAVs should be no smaller than a threshold distance \(d_{v2v}\), at all times. Based on the definition of the _Protection Bubble_, the hard constraint on the UAV-to-Obstacle distance can be easily ensured by setting \(d_{safe}\geq d_{obs}+|S|\). As illustrated in Fig. 3 (b), a UAV is entering the _Protection Bubble_ of an obstacle. At the next step, the UAV is attracted back to the edge of the _Protection Bubble_ at \(|\mathbf{qp_{o}}|=d_{safe}\), because the edge of the _Protection Bubble_ has the maximum gradients and thereby the minimum fitness value. The _Protection Bubble_ ensures that a UAV can approach the obstacle by no more than the length of one planning step. The hard constraint on \(d_{v2v}\) is ensured by adjusting the altitudes based on the altitude fitness introduced in the next subsection.
### _Altitude Fitness \(f_{alt}\)_
\(f_{alt}\) is used to adjust UAVs to different altitudes to avoid V2V collisions. Intuitively, \(f_{alt}\) should measure the distances between UAVs and the energy consumption of altitude changes. \(f_{alt}\) is designed as follows.
\[\begin{split} f_{alt}(\mathbf{S})=\sum_{i\in[1,N]}|\triangle Alt(S_{i})|,\\ \text{subject to } \min_{i,j\in[1,N],i\neq j}|S_{i},S_{j}|\geq d_{v2v},\end{split} \tag{12}\]
where \(d_{v2v}\) is the threshold distance for V2V collisions, and \(\mathbf{S}=[S_{1},S_{2},\cdots,S_{N}]\) are the trajectories of all UAVs. As \(f_{alt}\) is used in altitude planning, which only schedules the altitudes of UAVs independently of the lengths of their trajectories, we omit the notations \(p\) and \(k\). Minimizing \(f_{alt}\) ensures the UAVs stay at least \(d_{v2v}\) from each other. At the same time, the collective altitude change, and hence the total energy consumption of the swarm, is also minimized.
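The altitude fitness of Eq. (12) can be sketched as follows; turning the hard V2V constraint into a large additive penalty, so that infeasible PSO particles are never selected, is our own choice of constraint handling rather than something prescribed above.

```python
import numpy as np

def f_alt(alt_changes, trajs, d_v2v, penalty=1e6):
    """Altitude fitness of Eq. (12) with the V2V constraint as a penalty.

    alt_changes: (N,) altitude adjustments, one per UAV.
    trajs:       (N, L, 3) waypoints of the N UAVs after the adjustments.
    d_v2v:       minimum allowed UAV-to-UAV distance.
    """
    n = trajs.shape[0]
    d_min = np.inf
    for i in range(n):
        for j in range(i + 1, n):
            # Closest approach between UAV i and UAV j over all waypoints.
            d = np.linalg.norm(trajs[i] - trajs[j], axis=1).min()
            d_min = min(d_min, d)
    total = float(np.sum(np.abs(alt_changes)))
    return total if d_min >= d_v2v else total + penalty
```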
Fig. 4: Field experiment results. (a): The two trajectories planned by corresponding algorithms. (b): the servo voltage remaining along each trajectory shown against time. The radical drops in servo voltage and corresponding positions on trajectories are marked with circles and time labels.
Fig. 5: The contours of the environment field and the _Protection Bubble_ ensuring the constraints of \(d_{obs}\).
## 7 Trajectory Prediction
The objective of trajectory prediction is to find an optimal trajectory \([S^{k}]^{*}\) for \(k\) future steps such that \(f_{level}([S^{k}]^{*})\) is minimized on the current environment field. The prediction results are used to detect potential V2V collisions and to initialize PSO-_Level_ in level planning. Trajectory prediction should satisfy two requirements:
* **Prompt Response:** The trajectory prediction method must be computationally light for prompt response.
* **Long View:**\(k\) must be larger than 1.
The prediction results are meaningful only if the two requirements are satisfied, as UAVs need time to react to any predicted collisions and to initialize their level planning process. However, the predicted trajectories become inaccurate, and collisions are falsely detected, when \(k\) is set too large, because the environment field keeps evolving. A practical setting is to make the length of the predicted trajectory no larger than the sensing range of the UAVs: the distance within the sensing range is what the UAVs can actually see, so predicted trajectories within the sensing range are more trustworthy. These requirements prevent the usage of heuristic-based algorithms such as PSO, whose computational complexity increases with \(k\). Therefore, a computationally light mathematical optimization method is needed to approximate the minima of \(f_{level}(S^{k})\). \(f_{level}(S^{k})\) is an integration of a set of property functions, including derivatives and variations of intensity, over a trajectory \(S^{k}\). The standard method for minimizing such integrals of functions is the calculus of variations.
An integration of functions is minimized by solving the Euler-Lagrange (EL) equation in the calculus of variations. The EL equation for \(f_{level}(S^{k})\) is as follows.
\[\lambda_{1}[S^{k}(p)]^{\prime\prime\prime\prime}-\lambda_{2}\triangledown|\triangledown\Phi_{b}(S^{k}(p))|=0, \tag{13}\]
where \([S^{k}(p)]^{\prime\prime\prime\prime}\) is the fourth-order derivative of \(S^{k}(p)\) with respect to \(p\). Let \(E_{s}=-|\triangledown\Phi_{b}(S^{k}(p))|\); Eq. (13) becomes
\[\lambda_{1}[S^{k}(p)]^{\prime\prime\prime\prime}+\lambda_{2}\triangledown E_ {s}=0 \tag{14}\]
The solution of Eq. (14) can be found through iterative optimization of \(S^{k}(p)\). Let \(n\) be the iteration index; we first introduce a term \(\frac{\partial S^{k}_{n}(p)}{\partial n}\) on the right-hand side of Eq. (14). Eq. (14) becomes:
\[\lambda_{1}[S^{k}(p)]^{\prime\prime\prime\prime}+\lambda_{2}\triangledown E_ {s}=\frac{\partial S^{k}_{n}(p)}{\partial n}. \tag{15}\]
The right-hand side of Eq. (15) becomes 0 when the level fitness function Eq. (11) stabilizes at the minimum. Eq. (15) can thus be viewed as a gradient descent algorithm searching for the minimum of Eq. (11).
Since it is difficult to give a mathematical expression for \(S^{k}_{n}\), we express \(S^{k}_{n}\) as a sequence of equally-spaced discrete points \(S^{k}_{n}=[\mathbf{s}_{1},\mathbf{s}_{2},\cdots,\mathbf{s}_{L}]_{n}\), \(S^{k}_{n}(i)=\mathbf{s}_{i}=[x_{i},y_{i}]=[x(ih),y(ih)]\), where \(h\in\mathbb{R}^{+}\). \(L=k\cdot|S|\) is the length of \(S^{k}_{n}\). Therefore, we have
\[[S^{k}_{n}]^{\prime\prime\prime\prime}(i)=\frac{\mathbf{s}_{i+2}-4\mathbf{s}_{i+1}+6\mathbf{s}_{i}-4\mathbf{s}_{i-1}+\mathbf{s}_{i-2}}{h^{4}}. \tag{16}\]
Without loss of generality, we assume \(h=1\) for simplicity. Substituting Eq. (16) into Eq. (15), we obtain the matrix form
\[S^{k}_{n+1}=M^{-1}(S^{k}_{n}+\triangledown E_{s}), \tag{17}\]
where
\[\begin{split} S^{k}_{n}&=\begin{bmatrix}\mathbf{s}_{1},\mathbf{s}_{2},\mathbf{s}_{3},\cdots,\mathbf{s}_{L}\end{bmatrix}^{T}_{n},\\ M&=\begin{bmatrix}u_{1}&u_{3}&u_{2}&\cdots&u_{2}&u_{3}\\ u_{3}&u_{1}&u_{3}&u_{2}&\cdots&u_{2}\\ u_{2}&u_{3}&u_{1}&u_{3}&u_{2}&\cdots\\ &&&\ddots&&\\ u_{3}&u_{2}&\cdots&u_{2}&u_{3}&u_{1}\end{bmatrix},\\ u_{1}&=-6\lambda_{1}+1,\quad u_{2}=-\lambda_{1},\quad u_{3}=4\lambda_{1}\end{split}\]
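The iteration of Eq. (17) can be sketched in a few lines of Python. Building \(M\) as a circulant pentadiagonal matrix follows the display above, while the dense inverse, the fixed iteration budget, and the callable supplying \(\triangledown E_{s}\) at the waypoints are illustrative assumptions.

```python
import numpy as np

def predict_trajectory(s0, grad_es, lam1, n_iter=100):
    """Iterate Eq. (17): S_{n+1} = M^{-1} (S_n + grad E_s), with h = 1.

    s0:      (L, 2) initial discretized trajectory [s_1, ..., s_L].
    grad_es: callable mapping an (L, 2) trajectory to the (L, 2) array
             of grad E_s evaluated at each waypoint.
    lam1:    smoothness weight lambda_1.
    """
    L = s0.shape[0]
    u1, u2, u3 = -6.0 * lam1 + 1.0, -lam1, 4.0 * lam1
    # Circulant pentadiagonal matrix from the fourth-derivative stencil.
    M = np.zeros((L, L))
    for i in range(L):
        M[i, i] = u1
        M[i, (i - 1) % L] = M[i, (i + 1) % L] = u3
        M[i, (i - 2) % L] = M[i, (i + 2) % L] = u2
    M_inv = np.linalg.inv(M)  # L is small, so a dense inverse is affordable.
    s = s0.copy()
    for _ in range(n_iter):
        s = M_inv @ (s + grad_es(s))
    return s
```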
In theory, the optimization stops when Eq. (14) is satisfied; in practice, it stops when a maximum number of iterations has been reached. Let \(S^{*}\) be the prediction result; the collisions between UAV \(i\) and UAV \(j\) are detected via the minimum distance between their predicted trajectories
\[d_{i,j}=\min_{1\leq l\leq L}\mid S^{*}_{i}(l),S^{*}_{j}(l)\mid \tag{18}\]
where \(S^{*}_{i}\) and \(S^{*}_{j}\) are the prediction results for UAV \(i\) and \(j\), respectively. \(S^{*}_{i}(l)\) and \(S^{*}_{j}(l)\) represent the \(l^{th}\) waypoint in trajectory \(S^{*}_{i}\) and \(S^{*}_{j}\), respectively. Collisions occur between UAV \(i\) and \(j\) if \(d_{i,j}\) is smaller than a preset collision threshold. On the other hand, the parameters of \(S^{*}\) are used to initialize PSO-_Level_ in level planning, as detailed in Section 8.
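Eq. (18) reduces to a one-line distance check between predicted trajectories; the sketch below assumes both predictions contain the same number of waypoints \(L\).

```python
import numpy as np

def v2v_collision(pred_i, pred_j, d_threshold):
    """Eq. (18): closest waypoint-wise approach of two predicted trajectories."""
    d_ij = np.linalg.norm(pred_i - pred_j, axis=1).min()
    return d_ij < d_threshold
```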
UAVs in a swarm can predict their trajectories independently and simultaneously using Eq. (17). As shown in Eq. (17), trajectory prediction reduces to a linear matrix operation. Therefore, the computational overhead of this module is negligible.
## 8 Trajectory Planning
In this section we introduce the two branches of trajectory planning: level planning and altitude planning.
### _Level Planning_
If the results of trajectory prediction show no collision between the trajectories of UAVs, the system enters level planning. Level planning plans the optimal trajectories for actual flight in the immediate next step. We use PSO to search for the optimal trajectories by minimizing \(f_{level}(\cdot)\) as the computational complexity is acceptable for one planning step, and the high quality of trajectories is desired for actual flight. The level fitness \(f_{level}(\cdot)\) is already detailed in Section 6. We focus on PSO-_Level_ in this section.
PSO-_Level_ is trajectory-based PSO. Each particle in the search space represents a trajectory. Since one planning step is small, we use an arc to represent a trajectory as it is natural for turnings of multi-rotor copters and matches their aerodynamics. As the trajectory length in one planning step is fixed, a trajectory can be expressed using two parameters, curvature \(\kappa\) and slope \(\omega\), as illustrated in Fig. 6. Therefore, the curse of dimensionality of PSO is avoided as the search space of PSO-_Level_ is only two-dimensional. In Fig. 6, the UAV and obstacle are depicted with red asterisk and blue dots, respectively. The dashed blue curve is a contour of the environment field. The solid blue curve \(S\) is the trajectory
being planned. The position and velocity of the UAV are \(P_{0}\) and \(\mathbf{v}_{0}\), respectively. \(O_{t}\) and \(r\) are the center and radius of the arc \(S\). \(\triangle\theta\) and \(\omega\) are the center angle and slope of the arc \(S\). As illustrated, an arc \(S\) is uniquely located by its \([\omega,\kappa]\) and initial point \([x_{0},y_{0}]\). Hence, a particle can be expressed as \(\mathbf{\xi}=[\omega,\kappa]_{[x_{0},y_{0}]}\). When \(\kappa=0\), the trajectory is a straight line along the UAV's current velocity, which means the UAV makes no turning. Any point \(P_{i}=[x_{i}(\omega,\kappa),y_{i}(\omega,\kappa)]\) on the arc can be expressed by
\[\begin{split} x_{i}(\omega,\kappa)&=\frac{\cos( \theta_{i})}{\kappa}+\frac{\cos(\bar{\omega})}{\kappa}+x_{0},\\ y_{i}(\omega,\kappa)&=\frac{\sin(\theta_{i})}{ \kappa}+\frac{\sin(\bar{\omega})}{\kappa}+y_{0},\\ \theta_{i}&\in[\theta_{0}-\triangle\theta,\theta_{ 0}].\end{split} \tag{19}\]
where \(\theta_{i}\) is the slope of the vector from the turning center \(O_{t}\) to a point on the arc, and \(\Delta\theta\) is the range of \(\theta_{i}\).
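Eq. (19) can be sampled directly, as in the sketch below; there, `omega_bar` is read as the bearing \(\bar{\omega}\) from the UAV to the turning center \(O_{t}\), and all argument names are our own.

```python
import numpy as np

def arc_points(omega_bar, kappa, p0, theta0, d_theta, n_pts=20):
    """Waypoints on the arc of Eq. (19), for nonzero curvature kappa.

    p0:      (x0, y0), the UAV's current position.
    theta0:  bearing from the turning center O_t back to p0.
    d_theta: angular extent of the arc (Delta theta in Eq. 19).
    """
    theta = np.linspace(theta0 - d_theta, theta0, n_pts)
    x = np.cos(theta) / kappa + np.cos(omega_bar) / kappa + p0[0]
    y = np.sin(theta) / kappa + np.sin(omega_bar) / kappa + p0[1]
    return np.stack([x, y], axis=1)
```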
Based on the representation of trajectories in Eq. (19), the particles can be updated as follows:
\[\begin{split}\mathbf{v}_{i}^{t+1}&=\mu_{0}\mathbf{v}_{i}^ {t}+\mu_{1}(\mathbf{p}_{i}^{t}-\mathbf{\xi}_{i}^{t})+\mu_{2}(\mathbf{p}_{g}^{t}-\mathbf{\xi}_{ i}^{t})\\ \mathbf{\xi}_{i}^{t+1}&=\mathbf{\xi}_{i}^{t}+\mathbf{v}_{i}^{t+1 }\end{split} \tag{20}\]
where \(\mathbf{v}_{i}^{t}\) and \(\mathbf{\xi}_{i}^{t}\) are the velocity and position of particle \(i\) at time \(t\), \(\mathbf{p}_{i}^{t}\) is the personal best experience of particle \(i\) at time \(t\) and \(\mathbf{p}_{g}^{t}\) is the global best experience of the swarm at time \(t\). \(\mu_{0}\) is an inertia weight, \(\mu_{1}\) and \(\mu_{2}\) are acceleration coefficients uniformly distributed in \([0,1]\) independently.
How to initialize the positions and velocities of the particles is a non-trivial problem in PSO. We propose to initialize the PSO using the parameters of the predicted trajectories from trajectory prediction. The first planning step of the predicted trajectory \([S^{k}]^{*}\) already approximates the minimum of \(f_{level}([S^{1}]^{*})\) for the immediate next step. However, the trajectory prediction module predicts long-term trajectories and may be locally sub-optimal. Therefore, we use the results of trajectory prediction as the initial positions of the particles in PSO-_Level_ to boost its convergence. The initial positions of particles are set as follows.
\[\mathbf{\xi}_{i}^{0}=[f_{\omega}([S^{1}]^{*}),f_{\kappa}([S^{1}]^{*})]_{[x_{0},y_{ 0}]}+\mathcal{N}^{2}(0,1), \tag{21}\]
where \([S^{1}]^{*}\) is the first planning step on the predicted trajectory \([S^{k}]^{*}\), \(f_{\omega}\) and \(f_{\kappa}\) are functions that get the slope and curvature of a trajectory. \(\mathcal{N}^{2}(0,1)\) is 2D Gaussian noise to encourage exploration.
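Putting Eqs. (20) and (21) together, a compact PSO-_Level_ loop might look as follows; the particle count, iteration budget, and inertia weight \(\mu_{0}\) are illustrative hyperparameters not specified above.

```python
import numpy as np

def pso_level(fitness, xi_pred, n_particles=30, n_iter=50, mu0=0.7):
    """PSO-Level over particles xi = [omega, kappa] (Eqs. 20 and 21).

    fitness: callable mapping a particle [omega, kappa] to f_level.
    xi_pred: [omega, kappa] of the first predicted planning step [S^1]*.
    """
    # Eq. (21): start near the predicted step, plus Gaussian exploration.
    xi = np.asarray(xi_pred) + np.random.randn(n_particles, 2)
    v = np.zeros((n_particles, 2))
    p_best = xi.copy()
    p_best_f = np.array([fitness(x) for x in xi])
    g_best = p_best[np.argmin(p_best_f)].copy()
    for _ in range(n_iter):
        mu1 = np.random.rand(n_particles, 1)
        mu2 = np.random.rand(n_particles, 1)
        # Eq. (20): velocity and position updates.
        v = mu0 * v + mu1 * (p_best - xi) + mu2 * (g_best - xi)
        xi = xi + v
        f = np.array([fitness(x) for x in xi])
        better = f < p_best_f
        p_best[better], p_best_f[better] = xi[better], f[better]
        g_best = p_best[np.argmin(p_best_f)].copy()
    return g_best
```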
### _Altitude Planning_
If the results of trajectory prediction show collisions between the trajectories of UAVs, the UAVs involved resolve their collisions in altitude planning. Altitude planning schedules the UAVs to different altitudes to ensure the distances between UAVs are larger than a threshold \(d_{v2v}\), while minimizing energy consumption. Altitude planning is achieved with PSO-_Alt_ by minimizing the altitude fitness function \(f_{alt}(\cdot)\). The altitude fitness \(f_{alt}(\cdot)\) is already introduced in Section 6. We focus on PSO-_Alt_ in this section.
Let \(W\) be the number of UAVs involved in the potential collisions; the search space of PSO-_Alt_ is then \(W\)-dimensional. Each dimension corresponds to the altitude adjustment of one UAV. A particle in PSO-_Alt_ corresponds to the altitude adjustments of all UAVs, \(\mathbf{\bar{\xi}}^{t}=[\triangle Alt_{1},\triangle Alt_{2},\cdots,\triangle Alt_{W}]^{t}\). If \(\triangle Alt_{i}>0\), UAV-\(i\) flies upward by a distance \(|\triangle Alt_{i}|\); if \(\triangle Alt_{i}<0\), UAV-\(i\) flies downward by a distance \(|\triangle Alt_{i}|\); otherwise, UAV-\(i\) does not change altitude. The update of PSO-_Alt_ follows the same rules as PSO-_Level_, defined by Eq. (20).
However, the most significant difference between PSO-_Alt_ and PSO-_Level_ is that PSO-_Level_ is a distributed search while PSO-_Alt_ is a centralized search. In PSO-_Level_, each UAV independently searches for its optimal trajectories. The coordination of UAVs is implicitly provided by sharing an environmental field. In PSO-_Alt_, all the UAVs search for their optimal altitude changes in a centralized way. This difference can also be explained by the input to \(f_{level}(S)\) and \(f_{alt}(\mathbf{S})\). The input to \(f_{level}(S)\) is one trajectory. In contrast, the input to \(f_{alt}(\mathbf{S})\) is the collective trajectories of all UAVs.
For autonomous decision-making, PSO-_Alt_ needs to be decentralized. We let each UAV perform the same PSO search. At the end of the search, the UAVs exchange their PSO-_Alt_ results and the corresponding fitness values, so each UAV obtains the results of all others. The result with the smallest fitness value is adopted by all UAVs.
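The agreement step itself is tiny: once the `(solution, fitness)` pairs have been exchanged, every UAV applies the same deterministic rule, so all UAVs adopt an identical result. A minimal sketch, with names of our own choosing:

```python
def consensus_alt(exchanged):
    """Adopt the PSO-Alt solution with the smallest fitness value.

    exchanged: list of (alt_changes, fitness) pairs, one per UAV.
    """
    best_solution, _ = min(exchanged, key=lambda pair: pair[1])
    return best_solution
```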
## 9 Performance Evaluation
### _Simulation Setup_
Our simulation set-up is based on the application scenario illustrated in Fig. 1. Obstacles intrude between any two waypoints on the static path of a swarm. The two waypoints are the start and target positions of the swarm during avoidance. As illustrated in the previous sections, the trajectories of the UAVs only depend on the relative positions of the UAVs and the obstacles. Therefore, the simulation environment is orientation invariant. For simplicity, the simulation setup is designed as follows. The simulation environment is a \(300\times 300\) square. The UAV swarm is always spawned at the left side of the square and flies toward the right side. Without loss of generality, we test two specific scenarios: _Obstacle in Front_ where the obstacles fly directly toward the UAV swarm and _Obstacle on Side_ where the obstacles approach the UAV swarm from the left or right side. The two scenarios are illustrated in Fig. 7.
Fig. 6: Parameterization of a trajectory.
Essential parameters are defined as follows. The initial distance between the swarm and the obstacle is 200 \(m\). Because the maximum ground speed of industry-level UAVs is between 10 \(m/s\) and 17 \(m/s\) [41], and the maximum ground speed of consumer-level UAVs is between 5 \(m/s\) and 14 \(m/s\) [42], we set the swarm speed \(v_{s}=10\) \(m/s\) in all simulations. The weight of the UAV is set to 1 \(Kg\). In the simulations, we are interested in the relation between \(E^{2}CoPre\)'s performance and the obstacle's velocity rather than extreme tests on the obstacle's velocity. For the sake of simulation, we assume the obstacles are adversarial UAVs that have the same velocity range as the swarm. The velocity of the obstacle \(v_{obs}\) varies from 1 to 10 \(m/s\). The Lidar sensor's sensing range is set to 100 \(m\). An obstacle is detected by a UAV when the distance between them is smaller than \(100\) \(m\), and an obstacle detected by any UAV becomes known to the whole swarm through information exchange. Each run of the simulation starts when the swarm and the obstacles are spawned and finishes when all UAVs arrive at their target positions. During each run, collision avoidance starts when the distance between the obstacle and any UAV is smaller than the avoidance distance of \(50\) \(m\). The length of one planning step is set to \(|S|=v_{s}\times 1s\). We predict \(10\) planning steps ahead in trajectory prediction, i.e. \(100\) \(m\), which equals the UAVs' sensing range. The radius of the _Protection Bubble_ of an obstacle is set to \(d_{safe}=20\) \(m\). The threshold distance for V2O collisions is set to \(d_{obs}=d_{safe}-|S|=10\) \(m\), as the _Protection Bubble_ guarantees that UAVs approach obstacles by no more than one planning step. Lastly, the threshold distance for V2V collisions is set to \(d_{v2v}=5\) \(m\). For swarm formation, the swarm members are equally spaced on a circle with radius \(\tau=20\) \(m\). The swarm size \(N\) varies from 2 to 10. The resolution of the simulations is 1 \(s\).
We compare \(E^{2}CoPre\) with the following three schemes:
* \(FFPSO\)[29]: A new term repelling particles from one another is introduced to the velocity update equation in PSO. The fitness function of a particle is simply the negative of the distance between the particle and its destination.
* \(PPSO\)[30]: A smoothing field in APF is introduced to smooth UAVs' trajectories. Then, PSO is adopted to find the optimal trajectories on APF. The fitness value of particles is just the field intensity.
* \(E^{2}Coop\)[31]: A shared environment field provides environmental awareness and implicit coordination. UAVs plan for their optimal trajectories using trajectory-based PSO in a 2D space. V2V collisions are avoided using virtual forces.
Due to their time- and computation-intensive training phases and their high failure rates in online execution, MARL-based methods are not suitable for safety-critical missions like collision avoidance. Moreover, the working mechanism of MARL-based methods is fundamentally different from that of heuristic-based methods like \(E^{2}CoPre\). Therefore, MARL-based methods are not compared in this paper.
We want to see how \(E^{2}CoPre\) performs against the three benchmarks regarding energy efficiency and safety. Firstly, we show the energy consumption of four algorithms in the two scenarios. Secondly, we show the minimum distances between UAVs and obstacles (V2O) and the minimum distances between UAVs (V2V) of all four algorithms to compare safety. Thirdly, we test the energy consumption and safety of the four algorithms with interference in communication channels. Next, we conduct parameter analysis to test the impacts of the key parameters of \(E^{2}CoPre\). Last, we conduct ablation tests to show the importance of trajectory prediction in \(E^{2}CoPre\). The codes for the simulation are available at [43].
### _Simulation Results_
#### 9.2.1 Energy Consumption
The energy consumption of UAVs is calculated following Eq. (6) and can be decomposed into three components: turning-dependent energy \(E_{n}\), length-dependent energy \(E_{len}\) and communication-dependent energy \(E_{comms}\). An energy term \(mg\cdot\triangle Alt(S(p))\) for the energy consumption in altitude change is added to \(E_{len}\).
\[E=E_{n}+E_{len}+E_{comms}\] \[E_{n}=\oint F_{n}(S(p))(v\sin\alpha+v_{i})\cdot\sin\beta dp\] \[E_{len}=(F_{drag}\sin\alpha\cos\beta+mg\cos\alpha\cos\beta)\] \[\cdot(v\sin\alpha+v_{i})\cdot(\eta_{e}-\eta_{s})+\oint mg\cdot \triangle Alt(S(p))dp\] \[E_{comms}=\oint P_{comms}(S(p))dp,\]
where \(\oint dp\) denotes the integration over the entire trajectory; the notation \(k\) is therefore omitted. \(m\) is the mass of the UAV and \(g\) is the gravitational acceleration. \(\triangle Alt(p)\) is the altitude change along the trajectory.
Fig. 8: The energy consumption under different obstacle velocities in the two scenarios.
Fig. 7: Experiment scenarios. (a): _Obstacle in Front_. (b): _Obstacle on Side_.
For simplicity, the turning-dependent and length-dependent components can be seen as linear functions of the average curvature and length of a trajectory \(S\), as the swarm's velocity is constant during collision avoidance and the changes of the motion angles and the induced velocity are small. The average curvature of a trajectory can be represented by its second-order derivatives. On the other hand, it is reasonable to represent the energy consumed by the sleep-active cycle of ZigBee with an average transmit power, so the communication-dependent energy also becomes a linear function of the length of a trajectory. Let \(\bar{P}_{n}\), \(\bar{P}_{len}\) and \(\bar{P}_{comms}\) be the weights of the turning-dependent, length-dependent and communication-dependent components, respectively; the energy consumption of a trajectory becomes:
\[E=\oint\bar{P}_{n}\cdot|S^{\prime\prime}(p)|+\bar{P}_{len}\cdot|S(p)|+\bar{P}_ {comms}\cdot|S(p)|dp, \tag{22}\]
where \(\bar{P}_{n}\) represents the average power per unit curvature, and \(\bar{P}_{len}\) and \(\bar{P}_{comms}\) represent the average power per unit length. Because the energy consumed in wireless communication is usually much smaller than that consumed in generating thrust, we have \(\bar{P}_{comms}\ll\bar{P}_{n}\) and \(\bar{P}_{comms}\ll\bar{P}_{len}\). Therefore, the three coefficients are set to \(\bar{P}_{n}=\bar{P}_{len}=1\) and \(\bar{P}_{comms}=0.01\).
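Under these simplifications, Eq. (22) reduces to the short sketch below; reading the length-dependent and communication-dependent integrals as weights multiplying the total trajectory length is our interpretation of the text above.

```python
import numpy as np

P_N, P_LEN, P_COMMS = 1.0, 1.0, 0.01  # coefficients used in the simulations

def trajectory_energy(traj, ds=1.0):
    """Discretized Eq. (22) for an equally spaced (L, d) waypoint array."""
    # Turning term: integral of the second-derivative magnitude.
    d2 = (traj[2:] - 2.0 * traj[1:-1] + traj[:-2]) / ds**2
    turning = P_N * float(np.sum(np.linalg.norm(d2, axis=1))) * ds
    length = ds * (traj.shape[0] - 1)
    return turning + (P_LEN + P_COMMS) * length
```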
The results on energy consumption are shown in Fig. 8. The energy consumption of \(FFPSO\) is re-scaled by 0.5 for demonstration. For all four schemes, energy consumption decreases as the obstacle's speed increases: when the obstacle is fast, the avoidance maneuvers of the UAVs start early and finish quickly, resulting in short trajectories with few turnings. \(E^{2}CoPre\) has the lowest energy consumption among all four schemes in both scenarios. This is because the fitness function we designed minimizes energy consumption by minimizing the trajectory's curvature and altitude change. \(FFPSO\) and \(PPSO\) do not minimize trajectory smoothness or altitude change and hence have high energy consumption. Moreover, the environment field constructed by the swarm provides global coordination among swarm members for collision avoidance, so the trajectories of individual UAVs do not conflict with each other. \(FFPSO\) and \(PPSO\) have no coordination among UAVs and hence suffer frequent V2V conflicts, which result in zig-zag trajectories and high energy consumption. Compared with \(E^{2}Coop\), where V2V collisions are avoided using virtual forces, \(E^{2}CoPre\) resolves V2V collisions with _PSO-Alt_ in the 3D space. Fig. 8 shows that _PSO-Alt_ is more energy efficient than virtual forces as it reduces the velocity changes of UAVs. In scenario _Obstacle in Front_, \(E^{2}CoPre\) saves 83%, 54% and 15% energy on average compared to \(FFPSO\), \(PPSO\) and \(E^{2}Coop\), respectively. In scenario _Obstacle on Side_, \(E^{2}CoPre\) saves 78%, 51% and 19% energy on average compared to \(FFPSO\), \(PPSO\) and \(E^{2}Coop\), respectively.
#### 9.2.2 Safety
Safety is ensured when the minimum distances between UAVs and obstacles (V2O) are larger than the threshold distance \(d_{obs}\), and the minimum distances between UAVs (V2V) are larger than the threshold distance \(d_{v2v}\).
From Fig. 9(a) and 9(b), it is observed that the minimum V2O distances of all four algorithms are larger than the collision threshold \(d_{obs}\). The minimum V2O distances of all four algorithms decrease with the obstacle's velocity in scenario _Obstacle in Front_ because, when the obstacles are fast, there is less time for the UAVs to react; as a result, the UAVs end up closer to a faster obstacle. In scenario _Obstacle on Side_, only the minimum V2O distance of _FFPSO_ first decreases and then increases with the obstacle velocity. This is because collision avoidance in _FFPSO_ is achieved by virtual forces generated by the obstacle that repel the UAVs away: the UAVs are held off by the obstacle and continue their flight only after it has passed. The impact of the obstacle on the UAVs' trajectories becomes smaller when the obstacle's velocity is higher, as a fast obstacle passes quickly and flies away from the UAVs; hence, the minimum V2O distance of _FFPSO_ grows with the obstacle's velocity.
Moreover, the minimum V2O distances in \(E^{2}CoPre\) and \(E^{2}Coop\) are smaller than those in _FFPSO_ and _PPSO_. This implies that the trajectories found by \(E^{2}CoPre\) and \(E^{2}Coop\) are more energy efficient, as they require fewer unnecessary detours. This is because the fitness functions and the shared environment fields in \(E^{2}CoPre\) and \(E^{2}Coop\) coordinate the trajectories of UAVs on different contours, which is much more efficient than the virtual forces in _FFPSO_ and _PPSO_. The minimum V2O distances in \(E^{2}CoPre\) are close to those in \(E^{2}Coop\) on average but become smaller when the obstacle's velocity is large. This shows that, in scenarios with high-speed obstacles, \(E^{2}CoPre\) can find more efficient trajectories nearer to the obstacles while still ensuring safety. This is because the particles in PSO-_Alt_ are initialized in the vicinity of the optimal solutions by the results of trajectory prediction; the convergence of PSO-_Alt_ is therefore fast, and the particles are less likely to be trapped in local optima.
On the other hand, it is observed from Fig. 9(c) and 9(d)
Fig. 9: The minimum distances between UAVs and obstacles (V2O) and the minimum distance between UAVs (V2V) under different obstacle velocities in the two scenarios.
that the minimum V2V distances in \(E^{2}CoPre\) and \(E^{2}Coop\) are larger than the collision threshold \(d_{v2v}\), while those in _FFPSO_ and _PPSO_ all fall below \(d_{v2v}\). This is because the V2V collision avoidance process is triggered when the minimum V2V distances fall below \(d_{v2v}\) in \(E^{2}CoPre\) and \(E^{2}Coop\), whereas there is no hard constraint on V2V distances nor any V2V collision avoidance process in _FFPSO_ and _PPSO_. Moreover, the minimum V2V distance in \(E^{2}CoPre\) is much smaller than that in \(E^{2}Coop\). This is because \(E^{2}CoPre\) resolves V2V collisions by using _PSO-Alt_ to coordinate UAVs to different altitudes such that the minimum V2V distance is not smaller than \(d_{v2v}\), while \(E^{2}Coop\) resolves V2V collisions using virtual forces; _PSO-Alt_ is much more efficient than virtual forces.
#### 9.2.3 Performances With Packet Loss
Wireless vehicle-to-vehicle (V2V) communication is required in two parts of \(E^{2}CoPre\), environment field construction and _PSO-Alt_, where the UAVs exchange their own and the obstacles' positions and velocities. The quality of wireless communication channels is characterized by correlated packet losses due to channel fading. Because channel fading in wireless communications is correlated in time, packet loss usually happens in bursts: for some periods the real-time packet loss rate (PLR) is higher than the average, while for others it is lower. To simulate the bursty nature of packet loss in wireless communication channels, we model the quality of the communication channels as a Markov process with two states, \(\{Good,Bad\}\). A packet is successfully transmitted over the channel if the channel is in state \(Good\) and is lost in state \(Bad\). The two-state Markov process for channel quality is illustrated in Fig. 10, where \(1-p_{g}\) and \(1-p_{b}\) are the transition probabilities from state \(Good\) to \(Bad\) and from \(Bad\) to \(Good\), respectively. As shown in Fig. 10, when the channel is in state \(Good\), it has a probability of \(p_{g}\) of staying in state \(Good\) and a probability of \(1-p_{g}\) of entering state \(Bad\) at the next step. When the channel is in state \(Bad\), it has a probability of \(p_{b}\) of staying in state \(Bad\) and a probability of \(1-p_{b}\) of entering state \(Good\) at the next step.
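The two-state channel can be simulated in a few lines; the sketch below returns a per-step delivery mask, and the use of NumPy's default generator and a fixed seed are our own choices.

```python
import numpy as np

def simulate_channel(n_steps, p_g, p_b, seed=0):
    """Two-state Markov channel: True where a packet is delivered (Good).

    p_g: probability of remaining in state Good at the next step.
    p_b: probability of remaining in state Bad at the next step.
    """
    rng = np.random.default_rng(seed)
    good = True
    delivered = np.empty(n_steps, dtype=bool)
    for t in range(n_steps):
        delivered[t] = good
        stay = rng.random() < (p_g if good else p_b)
        if not stay:
            good = not good
    return delivered
```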
In the simulations, if a UAV does not receive the packets from others regarding the positions or velocities of each other and the obstacles due to packet loss, it uses the previous measurements for environment field construction and _PSO-Alt_. For simplicity, we assume the Markov state transition is symmetric, that is, \(p_{g}=1-p_{b}\). Except for the state transition probability, the PLR depends on each UAV's transmission frequency \(f_{Tx}\), defined as the number of packets each UAV transmits per second \((p/s)\). The performances of \(E^{2}CoPre\) with regard to \(p_{b}\) under different transmission frequencies \(f_{Tx}\) are shown in Fig. 11 and are summarized as follows.
* **Energy**. In both scenarios, the energy consumption when \(f_{Tx}=1\)\(p/s\) is higher than that when \(f_{Tx}\geq 20\)\(p/s\). Moreover, the energy consumption increases with \(p_{b}\) when \(f_{Tx}=1\)\(p/s\), and does not vary much with \(p_{b}\) when \(f_{Tx}\geq 20\)\(p/s\). This indicates that packet loss greatly influences energy consumption when the transmission frequency is small. The reason is that when only one packet is transmitted per second, it is highly likely not received by the UAVs due to channel fading. As a result, the trajectories of UAVs become zig-zag as the environment field keeps changing locations. When the transmission frequency is higher, the problem is alleviated as at least one packet can be received.
* **Minimum V2O Distances**. In scenario _Obstacle in Front_, the minimum V2O distances of \(E^{2}CoPre\) fall below the collision threshold \(d_{obs}\) when \(p_{b}\geq 0.2\) with a low transmission frequency of \(f_{Tx}=1\)\(p/s\). However, the minimum V2O distances are above \(d_{obs}\) regardless of \(p_{b}\) when the transmission frequency is larger than 20 \(p/s\). While in scenario _Obstacle on Side_, the minimum V2O distances are all below \(d_{obs}\) regardless of \(p_{b}\) and \(f_{Tx}\). This reflects that it is relatively more difficult for the UAVs to avoid obstacles approaching from the side than those coming directly in front. In scenario _Obstacle on Side_, a little shift of the environment field can cause a V2O collision. Therefore, \(E^{2}CoPre\) is more prone to packet loss in scenario _Obstacle on Side_ than in scenario _Obstacle in Front_, in terms of V2O collisions.
* **Minimum V2V Distances**. Overall, the minimum V2V distances when \(f_{Tx}=1\)\(p/s\) are smaller than those when \(f_{Tx}\geq 20\)\(p/s\) in both scenarios. However, the minimum V2V distances exceed the collision threshold \(d_{v2v}\). This indicates that PSO-_Alt_ is robust to packet loss in communication channels. This is because the V2V communications in resolving V2V collisions are just used to synchronize the solutions of PSO-_Alt_ found by UAVs, and the search space and fitness function of PSO-_Alt_ are relatively simple. Therefore, it is very likely for the UAVs to find the same solutions, even without communication.
#### 9.2.4 Parameter Analysis
Important parameters of \(E^{2}CoPre\) include \(d_{safe}\), \(d_{v2v}\), \(\lambda_{1}\) and \(\lambda_{2}\). Given \(v_{s}\) and \(v_{obs}\), a large \(d_{safe}\) is expected to prolong trajectories, and a large \(d_{v2v}\) is expected to increase the altitude change of trajectories. \(\lambda_{1}\) and \(\lambda_{2}\) are the linear weights of the energy-efficiency and safety concerns in our fitness function, Eq. (5). With \(\lambda_{1}+\lambda_{2}=1\), energy efficiency and safety form a trade-off. We now study the impact of these parameters on the performance of \(E^{2}CoPre\) using the scenario _Obstacle in Front_. Each parameter combination is repeated three times. The simulation results are shown in Fig. 12.
Fig. 10: The two-state Markov process for wireless communication channel quality.
* **Impact of \(d_{safe}\).** From Fig. 12(a), it can be seen that the energy consumption of \(E^{2}CoPre\) increases with \(d_{safe}\) under all obstacle speeds. This is because a larger \(d_{safe}\) forces the trajectories further away from the obstacle; the curvature and length of the trajectories are therefore larger, and hence more energy is consumed.
* **Impact of \(d_{v2v}\).** From Fig. 12(b) and Fig. 12(c), it can be seen that the energy consumption of \(E^{2}CoPre\) increases with the swarm size. This is because UAV members must remain at a certain distance from each other during avoidance; trajectories of individual UAVs have to be longer and more curved when there are more members in a swarm. Moreover, a larger \(d_{v2v}\) causes higher energy consumption. Generally, the energy consumption under \(\tau=10\:m\) is larger than under \(\tau=20\:m\) because the swarm is more compact and requires more maneuvering for the members to avoid each other.
* **Impact of \(\lambda_{1}\) and \(\lambda_{2}\).** We want to know what impact the selection of \(\lambda_{1}\) and \(\lambda_{2}\) has on energy consumption and UAV-to-UAV distance under different swarm sizes. From Fig. 12(d) and Fig. 12(e), we observe that energy consumption and the minimum UAV-to-UAV distance are not sensitive to the selection of \(\lambda_{1}\) and \(\lambda_{2}\). From Fig. 12(d), it is observed that when the swarm size is smaller than 5, the average energy consumption does not vary much; when the swarm size becomes larger than 5, the average energy consumption increases with the swarm size. As shown in Fig. 12(e), the minimum UAV-to-UAV distance clearly decreases as the swarm size increases.
#### 9.2.5 Ablations
Ablation tests are necessary to validate the importance of trajectory prediction in \(E^{2}CoPre\). Without trajectory prediction, the particles in PSO-_Level_ are initialized with random positions and velocities, and V2V collisions are detected using the current distances between UAVs. The results in Fig. 13 show that trajectory prediction is essential for reducing energy consumption and for keeping the minimum V2O/V2V distances above the corresponding thresholds in both scenarios. As illustrated by Fig. 13(a) and 13(b), the energy consumption of \(E^{2}CoPre\) (wo/ prediction) is larger than that of \(E^{2}CoPre\) in both scenarios. As illustrated by Fig. 13(c) and 13(d), the minimum V2O distances of \(E^{2}CoPre\) (wo/
Fig. 11: The performances of \(E^{2}CoPre\) with packet loss.
prediction) are smaller than those of \(E^{2}CoPre\) when the obstacle velocity is large, sometimes even falling below \(d_{obs}\). From Fig. 13(e) and 13(f), the minimum V2V distances of \(E^{2}CoPre\) (wo/ prediction) are smaller than those of \(E^{2}CoPre\), and sometimes fall below \(d_{v2v}\).
These results arise because trajectory prediction gives the UAVs a rough estimate of their trajectories over future steps. On the algorithmic level, each UAV's particles in _PSO-Level_ can be initialized around the optimal positions, which significantly increases the convergence speed and accuracy of _PSO-Level_. On the trajectory level, the trajectories of UAVs are less zig-zag, and hence more energy efficient, as _PSO-Level_ finds the optimal trajectories more quickly and accurately. Without trajectory prediction, the particles in _PSO-Level_ are likely to be trapped in local optima; the trajectories then display many sharp turnings and turnarounds, which are energy inefficient.
### _Trajectory Demonstration_
Fig. 14 and 15 demonstrate the trajectories generated by \(E^{2}CoPre\), where \(\lambda_{1}=\lambda_{2}=0.5\). Five UAVs are in a swarm with \(v_{s}=5\) \(m/s\) and \(v_{safe}=10\) \(m/s\). \(d_{v2v}\) and \(d_{obs}\) are set to 10 \(m\); \(d_{v2v}\) is deliberately set as large as 10 \(m\) to trigger PSO-_Alt_ and demonstrate how \(E^{2}CoPre\) resolves V2V collisions in 3D space. The swarm's conceptual center and the obstacle's geometrical center are shown as a red circle and a blue square, respectively. The UAVs' original trajectories before avoidance are shown as red dashed lines, and the planned trajectories as solid red curves. The predicted trajectories are depicted with dotted dark blue curves. The contours of the environment field are shown as dotted light blue curves to reflect the deformation of the environment field.
Fig. 14 shows the trajectories of 5 UAVs with one obstacle. At time \(t=19\), the distances between the predicted trajectories of the three UAVs in red circles are smaller than \(d_{v2v}\); PSO-_Alt_ is triggered, and the collisions among the three UAVs are resolved in 3D space at \(t=24\). Fig. 15 shows the trajectories of 5 UAVs with two obstacles. At time \(t=12\), the predicted trajectories of the two UAVs in red circles intersect; the collisions are resolved using PSO-_Alt_ in 3D space at time \(t=25\).
## 10 Conclusions
In this paper, we investigate the problem of trajectory planning for UAV swarms to avoid collisions in an energy-efficient manner. \(E^{2}CoPre\), a hybrid scheme that combines APF and PSO, is proposed. \(E^{2}CoPre\) is designed to take advantage of the searchability of PSO and the environmental representations of APF to implicitly coordinate the trajectory planning of UAVs in a swarm for collision avoidance. Results demonstrate that \(E^{2}CoPre\) generates safer and more energy-efficient trajectories than the compared schemes. Future work can be done to extend \(E^{2}CoPre\) to intelligent decision-making.
Fig. 12: Parameter analysis results.
Fig. 13: Ablation tests on trajectory prediction. |
2301.08344 | On the cusp of cusps: a universal model for extreme scattering events in
the ISM | The scattering structures in the ISM responsible for so-called ``extreme
scattering events" (ESEs), observed in quasars and pulsars, remain enigmatic.
Current models struggle to explain the high-frequency light curves of ESEs, and
a recent analysis of a double lensing event in PSR\,B0834+06 reveals features
of ESEs that may also be challenging to accommodate via existing models. We
propose that these features arise naturally when the lens has a cusp-like
profile, described by the elementary $A_3$ cusp catastrophe. This is an
extension of previous work describing pulsar scintillation as arising from
$A_2$ fold catastrophes in thin, corrugated plasma sheets along the line of
sight. We call this framework of describing the lens potentials via elementary
catastrophes ``doubly catastrophic lensing", as catastrophes (e.g. folds and
cusps) have long been used to describe universal features in the light curves
of lensing events that generically manifest, regardless of the precise details
of the lens. Here, we argue that the lenses themselves may be described by
these same elementary structures. If correct, the doubly catastrophic lensing
framework would provide a unified description of scintillation and ESEs, where
the lenses responsible for these scattering phenomena are universal and can be
fully described by a small number of unfolding parameters. This could enable
their application as giant cosmic lenses for precision measurements of coherent
sources, including FRBs and pulsars. | Dylan L. Jow, Ue-Li Pen, Daniel Baker | 2023-01-19T22:35:54Z | http://arxiv.org/abs/2301.08344v2 | # On the cusp of cusps: a universal model for extreme scattering events in the ISM
###### Abstract
The scattering structures in the ISM responsible for so-called "extreme scattering events" (ESEs), observed in quasars and pulsars, remain enigmatic. Current models struggle to explain the high-frequency light curves of ESEs, and a recent analysis of a double lensing event in PSR B0834+06 reveals features of ESEs that may also be challenging to accommodate via existing models. We propose that these features arise naturally when the lens has a cusp-like profile, described by the elementary \(A_{3}\) cusp catastrophe. This is an extension of previous work describing pulsar scintillation as arising from \(A_{2}\) fold catastrophes in thin, corrugated plasma sheets along the line of sight. We call this framework of describing the lens potentials via elementary catastrophes "doubly catastrophic lensing", as catastrophes (e.g. folds and cusps) have long been used to describe universal features in the light curves of lensing events that generically manifest, regardless of the precise details of the lens. Here, we argue that the lenses themselves may be described by these same elementary structures. If correct, the doubly catastrophic lensing framework would provide a unified description of scintillation and ESEs, where the lenses responsible for these scattering phenomena are universal and can be fully described by a small number of unfolding parameters. This could enable their application as giant cosmic lenses for precision measurements of coherent sources, including FRBs and pulsars.
keywords: waves - radio continuum: ISM - pulsars:general - fast radio bursts
## 1 Introduction
The enigmatic extreme scattering events (ESEs) that were first discovered in quasars in the late 80s (Fiedler et al., 1987) and in pulsars a few years later (Cognard et al., 1993) have presented a long-standing mystery in observations of radio sources. While they are known to be caused by scattering in the interstellar medium (ISM), the precise form of the plasma structures that cause these events and the physical origin of these structures remains unknown. Interest in observing and understanding ESEs has increased, with recent work highlighting their relative ubiquity and setting the stage for future surveys of these mysterious events (Bannister et al., 2016). Moreover, the excess time delays induced by ESEs have implications for precision gravitational wave detection through pulsar timing arrays. Precise modelling of these excess delays will be necessary to move beyond detection of a stochastic gravitational wave background to individual detections (Burke-Spolaor et al., 2019). Recently, novel phase retrieval techniques have been used for precision localization of the refractive images formed by the ESE lens (Zhu et al., 2022). In conjunction with such techniques, new observations from current and next-generation radio telescopes built for pulsar timing arrays and fast radio burst (FRB) detections, among other purposes, will allow us to test the variety of models that have been proposed to explain ESEs.
While future observations will hopefully shed light on the plasma structures causing ESEs, current observations pose several theoretical challenges. Assuming ESEs are caused by three-dimensional plasma inhomogeneities in the ISM (i.e. an over- or under-dense cloud of ionized plasma) leads to inferences for the pressure of such clouds that are several orders of magnitude in excess of typical pressures in the diffuse ISM (Clegg et al., 1998). If such highly pressurized clouds existed, then they would be unstable on the time-scales needed to explain ESE observations; this is known as the over-pressure problem. Thin, two-dimensional current sheets that are aligned with the line of sight have been proposed as a potential resolution to the over-pressure problem (Romani et al.,
1987; Pen & King 2012), but, thus far, such models struggle to explain certain features of the ESE light curves, in particular their rich frequency structure (Walker & Wardle 1998). For example, while a two-dimensional Gaussian profile may fit the low-frequency light curves observed in ESEs, they fail to match the high-frequency light curves (Clegg et al. 1998). Typically, such models invoke substructure in the ISM that becomes resolved at high frequencies to explain the complex morphologies of the high-frequency light curves; however, it remains desirable to be able to explain both the time and frequency structure of ESEs with a single lens model. Cold, self-gravitating clouds of neutral gas with an ionized skin have been proposed to explain the frequency structure of ESEs (Henriksen & Widrow 1995; Walker & Wardle 1998), but if correct would imply that a substantial fraction of the galaxy's mass is contained within these clouds.
ESEs are not the only scattering phenomenon associated with the ISM that radio sources are observed to undergo. Pulsars are observed to scintillate due to multi-path scattering in the ISM. It has generally been assumed that pulsar scintillation and extreme scattering events are distinct phenomena, caused by different plasma structures in the ISM. However, just as thin plasma sheets have been proposed as an explanation for ESEs, in recent decades, there has been growing observational evidence that a substantial fraction of scintillation observations (if not all) can be explained by refractive plasma sheets along the line of sight (Stinebring et al. 2001; Walker et al. 2004; Goldreich & Sridhar 2006; Brisken et al. 2010; Pen & Levin 2014). This is in contrast to traditional models of an extended Kolmogorov turbulent medium. A similar story has been playing out in the study of the turbulent ISM through magnetohydrodynamic (MHD) simulations. Recent MHD simulations suggest that the turbulent cascade is driven by intermittent sheet-like structures in the ISM (Dong et al. 2022).
If thin plasma sheets explain scintillation observations and are consistent with current understandings of the physics of turbulence in the ISM, might they not also explain ESEs? Here we propose a model for ESEs that arises naturally from the thin sheet picture which qualitatively explains several features of current ESE observations, including the complex frequency structure. The model we propose is a simple application of catastrophe theory at the density level of the lens description. That is, lensing by thin sheets can be effectively described by the projected density of the sheet onto a plane perpendicular to the line of sight. Mathematically, singularities in the projection map can be classified and described by a small set of elementary catastrophes (Thom et al. 1975). Fold (\(A_{2}\)) catastrophes in corrugated plasma sheets have been proposed as an explanation for pulsar scintillation observations (Goldreich & Sridhar 2006; Pen & Levin 2014; Simard & Pen 2018). Here we propose the next higher-order catastrophe, the \(A_{3}\) cusp, as an explanation for ESEs. We call this framework "doubly catastrophic" lensing, since catastrophe theory has long been applied to the theory of lensing to describe the magnification of sources near singularities in the lens map (Nye 1999). In addition to describing the magnification as a network of catastrophes, here we describe the lens itself as a catastrophe. One of the advantages of such a framework, is that the elementary catastrophes are universal and described by a small number of unfolding parameters. Therefore, if correct, the effective lenses describing ESEs may be exceptionally simple in form, even if the physical plasma sheets are formed by complex physical processes.
This paper is structured as follows. In Section 2 we introduce our model for ESEs and discuss some of its qualitative features. In Section 4 we discuss the possibility of using the doubly catastrophic lensing framework to explain both scintillation and ESEs. In Section 3 we analyse in detail observations of an extreme scattering event in the pulsar PSR B0834+06 with our model, and in Section 5 we discuss potential applications of this framework.
## 2 The \(A_{3}\) Lens
In geometric optics, the effect of a plasma lens localized to a single plane along the line of sight is determined by the lens equation
\[\hat{y}=\hat{x}+\frac{\overline{d}\,e^{2}}{m_{e}\epsilon_{0}\omega^{2}}\nabla_{\hat{x}}\Sigma_{e}(\hat{x}), \tag{1}\]
where \(\omega\) is the angular frequency of the light, \(\hat{y}=(\hat{x}_{s}d_{l}+\hat{x}_{o}d_{sl})/d_{s}\) is a weighted average of the transverse displacements of the source and observer, \(\overline{d}=d_{l}d_{sl}/d_{s}\) is an effective distance, and \(\Sigma_{e}(\hat{x})\) is the excess surface electron density in the lens plane. The coordinates and distances involved are shown in Fig. 1. The lens equation determines a mapping between the source plane and the lens plane, determining the set of rays that connect the source and observer.
In astrophysical lensing, due to the vast distances between the source and the observer, it is often sufficient to treat lensing in this "thin lens" approximation, where the lens is taken to be localized to a single plane perpendicular to the line of sight (the lens plane). The physical plasma that produces the lens effect is, of course, not a fully two-dimensional screen, but has some extent along the line of sight. As such, the surface density, \(\Sigma_{e}\), is a projection of the actual density onto the lens plane:
\[\Sigma_{e}(\hat{x})=\int\delta n_{e}(\hat{x},z)dz, \tag{2}\]
where \(\delta n_{e}\) is the excess electron density.
Fig. 1 shows an example of this projection process. Consider
Figure 1: Diagram of a corrugated sheet lens. When the distances between the observer, source, and lens are large compared to the extent of the sheet along the line of sight (as in most astrophysical scenarios), the lensing is effectively described by the projected density (shown in gray) of the sheet onto the lens plane perpendicular to the line of sight. As the sheet is rotated to be more aligned with the line of sight, the peaks of the projected density become larger. The refraction of light due to the lens causes multi-path propagation from source to observer, shown by the grey, dashed lines.
a thin sheet with a periodic profile that is inclined by some angle with respect to the line of sight (the blue curve in Fig. 1). The surface density along the lens plane (the grey curve) is obtained by projecting the sheet onto a plane perpendicular to the line of sight. In particular, for an infinitely thin sheet with a shape given by \(x=f(z)\), the projected density is proportional to \(|\frac{d\,x}{dz}|^{-1}\). Thus, the density is formally infinite at singularities of the projection.
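This projection is easy to reproduce numerically. In the Monte Carlo sketch below (all names our own), a histogram of \(x=f(z)\) over uniformly sampled \(z\) is proportional to \(\sum_{i}|f^{\prime}(z_{i})|^{-1}\); for a sinusoidal sheet, the formally infinite densities appear as sharp peaks at the fold points \(x=\pm 1\).

```python
import numpy as np

def projected_density(f, z_min, z_max, n_samples=200_000, n_bins=400):
    """Projected surface density of an infinitely thin sheet x = f(z)."""
    z = np.random.uniform(z_min, z_max, n_samples)
    x = f(z)
    sigma, edges = np.histogram(x, bins=n_bins, density=True)
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, sigma

# A corrugated sheet as in Fig. 1: the density diverges where dx/dz = 0.
centers, sigma = projected_density(np.sin, 0.0, 20.0 * np.pi)
```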
Catastrophe theory describes the mathematics of such singularities. Powerfully, catastrophe theory shows that the topological structure of singularities must conform to a few fundamental forms (the "elementary catastrophes") regardless of the precise details of the map in which the singularities arise. Catastrophe theory has already been used to great effect in lensing theory to predict the magnification of a source near a lens' caustics without needing to know the precise details of the lens potential. Our proposal here is to extend this use of catastrophe theory to the density level. That is, we wish to describe not only the magnification via elementary catastrophes, but the lens potential itself. We expect these elementary forms to appear in the projected plasma density, as catastrophes generically arise when projecting thin, sheet-like structures in the ISM along the line of sight. We will argue that by modelling the plasma structures responsible for ESEs by these catastrophes, it is possible to explain aspects of observations that have thus far been challenging to model. We call this framework of describing the projected plasma density by a network of caustics "doubly catastrophic lensing", as catastrophes arise both in the magnification produced by the lens and in the lens potential itself.
### Modelling the \(A_{3}\) lens
In this paper we will focus on the \(A_{3}\) catastrophe, a.k.a. the cusp catastrophe. The fold (\(A_{2}\)) and cusp catastrophes are the simplest of the elementary catastrophes. Lensing by a fold has been discussed elsewhere, and has been proposed as an explanation for pulsar scintillation (Goldreich & Sridhar, 2006; Pen & Levin, 2014; Simard & Pen, 2018). Here we propose lensing by an \(A_{3}\) catastrophe as an explanation for ESEs in pulsars and quasars. The basic idea is shown in Fig. 2: when the folds of a thin, folded sheet come to an end, they meet in a cusp. The sheet, when viewed under projection, forms an \(A_{3}\) cusp density profile. The cusp is described by two unfolding parameters, \(x_{1}\) and \(x_{2}\). The bottom panel of Fig. 2 shows the cusp density as a function of \(x_{1}\) for fixed \(x_{2}\).
The idea is to use the canonical intensity of the \(A_{3}\) catastrophe, which we will call \(\mu_{A_{3}}(x_{1},x_{2})\), as the lens potential. That is, our lens equation is:
\[\mathbf{y}=\mathbf{x}+\alpha\nabla\mu_{A_{3}}(\mathbf{x}). \tag{3}\]
The intensity of the \(A_{3}\) catastrophe is described by the canonical phase
\[\phi_{A_{3}}(t;x_{1},x_{2})=\frac{t^{4}}{4}-\frac{x_{2}t^{2}}{2}+x_{1}t, \tag{4}\]
where \(x_{1}\) and \(x_{2}\) are the unfolding parameters, and \(t\) is the coordinate in the phase screen. It follows that the cusp intensity is given by
\[\mu_{A_{3}}(x_{1},x_{2})=\sum_{t_{i}}|3t_{i}^{2}-x_{2}|^{-1} \tag{5}\]
where \(t_{i}\) are the solutions to the stationary phase equation \(t^{3}-x_{2}t+x_{1}=0\). We will also introduce an additional parameter \(\sigma\) and define
\[\tilde{\mu}_{A_{3}}(x_{1},x_{2};\sigma)\equiv\mu_{A_{3}}(x_{1},x_{2})\star W(x_{1};\sigma), \tag{6}\]
where \(W(x_{1};\sigma)\) is taken to be a simple Gaussian smoothing function with standard deviation \(\sigma\). This smoothing is performed to remove the infinite densities that arise in the cusp catastrophe and represents the fact that the physical sheet that gives rise to the cusp has a finite thickness.
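Eqs. (5) and (6) can be evaluated directly by solving the cubic stationary-phase equation; the root-finding tolerance, the evaluation grid, and the use of `scipy.ndimage.gaussian_filter1d` for the smoothing kernel \(W(x_{1};\sigma)\) are our illustrative choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mu_a3(x1, x2):
    """Cusp intensity of Eq. (5), summed over the real stationary points.

    Note: mu_a3 is formally infinite exactly on the caustic.
    """
    roots = np.roots([1.0, 0.0, -x2, x1])  # t^3 - x2*t + x1 = 0
    t = roots[np.abs(roots.imag) < 1e-9].real
    return float(np.sum(1.0 / np.abs(3.0 * t**2 - x2)))

def mu_a3_smoothed(x1_grid, x2, sigma):
    """Eq. (6): Gaussian smoothing along x1 (finite sheet thickness)."""
    mu = np.array([mu_a3(x1, x2) for x1 in x1_grid])
    dx = x1_grid[1] - x1_grid[0]
    return gaussian_filter1d(mu, sigma / dx)
```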
Together, Eqs. 3 and 6 provide a full description for the \(A_{3}\) lens in geometric optics. However, to further simplify our analysis we will treat the lens as quasi-one-dimensional. That is, we will assume that the direction of the rays is only modified in the \(x_{1}\) direction. In other words, we treat \(x_{2}\) as a fixed parameter of the lens, and we only need to solve the one-dimensional lens equation:
\[y_{1}=x+\alpha\partial_{x}\tilde{\mu}_{A_{3}}(x,x_{2};\sigma). \tag{7}\]
The second lens equation is simply \(y_{2}=x_{2}\). This simplification is possible because the derivative of \(\tilde{\mu}_{A_{3}}\) is typically much larger in the \(x_{1}\) direction than it is in the \(x_{2}\) direction. While not strictly necessary here, this simplification will come in handy when we consider a multi-plane lens in Section 3.
In this limit, the magnification due to the \(A_{3}\) lens is given by
\[\mu(y_{1};x_{2})=\sum_{x}|1+\alpha\partial_{x}^{2}\phi(x;x_{2})|^{-1}, \tag{8}\]
where the sum is taken over solutions to the lens equation, Eq. 7.
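Because the forward map in Eq. 7 is single-valued in \(x\), the images for a given \(y_{1}\) can be found by bracketing sign changes of \(y(x)-y_{1}\) on a grid. A sketch, reusing `x1_grid` and `mu_tilde` from above (the grid-based bracketing is our own implementation choice):

```python
import numpy as np

def images_and_magnification(y1, x_grid, phi, alpha):
    """Images of the 1D lens equation y1 = x + alpha*phi'(x), Eq. (7),
    and their summed magnification, Eq. (8); phi is sampled on x_grid."""
    dphi = np.gradient(phi, x_grid)
    y_of_x = x_grid + alpha * dphi          # forward lens map
    jac = np.gradient(y_of_x, x_grid)       # 1 + alpha*phi''(x)
    s = np.sign(y_of_x - y1)
    idx = np.where(s[:-1] * s[1:] < 0)[0]   # grid cells bracketing the images
    images = 0.5 * (x_grid[idx] + x_grid[idx + 1])
    return images, float(np.sum(1.0 / np.abs(jac[idx])))

imgs, mag = images_and_magnification(0.5, x1_grid, mu_tilde, alpha=1.0)
```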
Figure 2: Diagram showing how the \(A_{3}\) cusp catastrophe arises when a folded sheet is projected onto the lens plane. The grey mesh shows the physical sheet and the colour map below shows the density of the sheet when projected onto the lens plane. The cusp profile arises generically when two folds in the sheet come to an end. The variables \(x_{1}\) and \(x_{2}\) are the two unfolding parameters of the \(A_{3}\) catastrophe and can be thought of as the physical coordinates in the lens plane up to some arbitrary scaling. The red lines show cross sections through the sheet for fixed \(x_{2}\). In the bottom panel, we show the cross section through the physical sheet in red, and the projected density for an infinitely thin sheet below that in grey. The blue curves show the projected density for a sheet of finite thickness; the effect of which is to smooth out the sharply peaked grey curves. For \(x_{2}<0\), the sheet is made up of two folds that converge at \(x_{2}=0\) at the cusp point. For \(x_{2}>0\), the two folds disappear as the sheet flattens out.
It turns out that for the \(A_{3}\) potential, changes along the \(x_{2}\) direction can be described by changes in the amplitude of the lens, and a re-scaling of the \(x\) coordinate. This follows from the identity
\[\alpha\mu_{A_{3}}\big(x,\,x_{2}=\pm 1\big)=\mu_{A_{3}}\big(\alpha^{-3/2}x,\,x_{2}=\pm\tfrac{1}{\alpha}\big), \tag{9}\]
or, equivalently,
\[\mu_{A_{3}}(x,x_{2})=\frac{1}{x_{2}}\mu_{A_{3}}\big(x_{2}^{-3/2}x,\,\mathrm{sign}(x_{2})\big). \tag{10}\]
In other words, once a sign is chosen for \(x_{2}\), one is free to re-scale the amplitude \(\alpha\) and the \(x\) coordinate so that \(|x_{2}|=1\). This means that the effect of changing the amplitude \(\alpha\) is simply to re-scale the coordinates.
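The identity in Eq. 10 is easy to spot-check numerically with the `mu_A3` helper sketched earlier; the test point below is arbitrary.

```python
# Spot-check of Eq. (10) for x2 > 0, reusing mu_A3 from the earlier sketch:
x, x2 = 0.7, 2.0
lhs = mu_A3(x, x2)
rhs = mu_A3(x2 ** (-1.5) * x, 1.0) / x2
assert abs(lhs - rhs) < 1e-8 * max(abs(lhs), 1.0)
```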
Fig. 4 shows the lens map described by Eq. 7 for fixed \(x_{2}=1\) and different values of \(\alpha\). One may be interested in the location of the caustics that are formed by the lens (i.e. the turning points in the lens map). The location of these turning points depends on \(\alpha\) and the smoothing scale \(\sigma\), as the un-smoothed potential is formally infinite at certain points. Thus, the maximum amplitude of the smoothed potential depends strongly on \(\sigma\). Fig. 5 shows the level curves of the lens map (Eq. 7), as a function of the unfolding parameters, \(x_{1}\) and \(x_{2}\), for fixed \(\alpha=\pm 1\), where the sign of \(\alpha\) determines whether the lens is convergent or divergent (note that because of the scaling relations shown in Eqs. 9 and 10, varying either \(\alpha\) or \(x_{2}\), while holding the other fixed, covers the entirety of the parameter space of the lens). Fig. 5 tells us where a given source position \(y_{1}\) gets mapped to in the lens plane. That is, consider \(x_{2}\) to be some fixed value. Then we can read off the image positions in \(x_{1}\) for a given value of \(y_{1}\) by looking at the \(x_{1}\) value the contour associated with \(y_{1}\) reaches for that value of \(x_{2}\). A peculiar feature of the \(A_{3}\) lens is that, for a fixed source position, the separation between the image positions in \(x_{1}\) increases as \(x_{2}\) decreases within a small region about \(x_{2}=0\). This is in contrast to large values of \(x_{2}\), for which a decrease in \(x_{2}\) leads to the images moving closer together. This feature will lead to the characteristic hockey-stick shape shown later in Figs. 9 and 10.
Now, it is straightforward to compute how the location of the outermost caustics scales with \(\sigma\) and \(\alpha\). Roughly, the location of the outermost caustic is given by \(y_{1}^{*}\sim\max\{\alpha\partial_{x}\phi\}\). For the un-smoothed lens, the maximum derivative of the lens potential is infinite. For the smoothed lens, the maximum value of the derivative is nevertheless attained close to where the unsmoothed lens diverges. Since the lens potential near the divergence is described by a fold catastrophe, the lens potential is given by \(\phi(x;x_{2})\propto x^{-1/2}\) on one side of the divergence and zero on the other side. Thus, near the divergence, the smoothed derivative is given by \(\partial_{x}\phi(x;x_{2})\propto\phi(x;x_{2})\star\partial_{x}W(x;\sigma)\). It follows from this that the location of the outermost caustic scales as
\[y_{1}^{*}\sim\alpha\sigma^{-3/2}. \tag{11}\]
Knowing the location of the outermost caustic will be useful later when we are trying to infer the lens parameters from observed time delays in Section 3.
Fig. 6 shows the critical curve and caustic structures that arise due to magnification by the \(A_{3}\) lens. The top row shows the results for the divergent lens (\(\alpha<0\)), corresponding to an over-dense lens, and the bottom row shows the convergent lens (\(\alpha>0\)), corresponding to an under-dense lens. The light curve that arises as one changes impact parameter, \(y_{1}\) (i.e. as the source moves relative to the lens), is effectively entirely determined by the number and location of the caustics. When \(\sigma\ll 1\) and \(\alpha\gtrsim 1\), the caustics tend to be located far from the axis; for example, the outermost caustic is located at \(y_{1}^{*}\gg 1\). This means that in between the caustics the inverse lens map is nearly flat, so that the additional images are strongly demagnified and the total magnification is close to one (see, for example, the blue curve in Fig. 4, where the inverse lens map, \(x(y)\), is effectively flat for most of the region between the caustics). The result is that the light curve as the source moves relative to the lens is close to unity except at the caustics where the magnification suddenly diverges. Fig. 7 shows an example
Figure 4: The lens map of the \(A_{3}\) lens for fixed \(x_{2}=1\) and varying \(\alpha\). The effect of changing \(\alpha\) is to effectively re-scale the coordinates as shown in Eq. 9.
Figure 5: The level curves of the lens map, Eq. 7, as a function of the unfolding parameters \(x_{1}\) and \(x_{2}\).
Figure 3: The right panel shows the dynamic spectrum (intensity as a function of time and frequency) one would observe for the source trajectory shown by the black arrows on the left. The blue curve on the left shows the caustics of the \(A_{3}\) cusp profile, i.e., the points of maximum projected density shown in Fig. 2. We have chosen the trajectory to just graze the cusp point. Here we are describing the lens potential as a cusp catastrophe, but cusps also generically arise in the dynamic spectrum. This is clearly seen in the right panel, where the bright peaks of the magnification converge towards a cusp at high frequencies: this is the titular “cusp of the cusp”.
light curve for \(\alpha=0.3\), \(\sigma=0.03\). The lens is taken to be of size \(\ell=10\,\)AU, with the source moving at velocity \(v=200\,\)km s\({}^{-1}\) relative to the lens. The parameters are chosen to match the \(f=2.7\,\)GHz light curve of the original ESE observation presented in Fiedler et al. (1987). Since for plasma lensing \(\alpha\propto f^{-2}\), we can compute the light curve for multiple frequency bands. Computing the light curve for \(f=8.1\,\)GHz (shown in the bottom panel of Fig. 7), we see that, at high frequencies, multiple caustics in the light curve become apparent. It is not the case that these magnification caustics appear because new features in the lens become resolved as the frequency increases. Rather, as can be seen from the scaling relation in Eq. 9, an increase in frequency leads to an effective rescaling of the \(x_{1}\) coordinate. Thus, the additional magnification caustics seen at \(8.1\,\)GHz are still present at \(2.7\,\)GHz, but are further out, and are not seen at the impact parameters spanned by the observation. In other words, the low-frequency light curve is effectively a zoomed-in version of the high-frequency light curve.
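A sketch of this two-band computation in dimensionless units, reusing the helpers above (the \(x_{2}=1\), \(\sigma=0.05\) potential, rather than the dimensionful parameters of Fig. 7): since \(\alpha\propto f^{-2}\), the 8.1 GHz light curve follows from the 2.7 GHz one simply by rescaling \(\alpha\). The source-track range is our own choice.

```python
import numpy as np

# alpha = 0.3 at 2.7 GHz (from the text); alpha scales as f^{-2}
alphas = {"2.7 GHz": 0.3, "8.1 GHz": 0.3 * (2.7 / 8.1) ** 2}
y_track = np.linspace(-2.0, 2.0, 800)   # dimensionless source positions
light_curves = {
    band: np.array([images_and_magnification(y, x1_grid, mu_tilde, a)[1]
                    for y in y_track])
    for band, a in alphas.items()
}
```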
The high-frequency light curve shown in Fig. 7 shares qualitative similarities with the high-frequency observation of the original ESE in Fiedler et al. (1987). While we have not undertaken a quantitative best-fit analysis of the observations with our model, we argue that the \(A_{3}\) lens naturally explains the appearance of multiple magnification caustics at high frequencies without the need to invoke unknown substructure, as is done in many attempts to model ESEs. This also leads to a concrete prediction of our model: large-bandwidth observations of ESEs across multiple frequency bands should reveal that the high-frequency magnification caustics do not appear spontaneously as one crosses some focal frequency, but rather they should gradually move inwards from infinity.
### A physical picture
We will now briefly discuss a possible physical origin for \(A_{3}\) lenses. First, however, we note that such lenses will arise generically in any medium whose lensing is well-described by a thin-lens approximation. This is because the mathematics of catastrophe theory requires singularities of projection maps to take on a small number of generic forms (the elementary catastrophes). Given the vast distances involved in astrophysical lensing, all but the most extended lenses will be adequately described by the thin-lens approximation. Thus, the presence of \(A_{3}\) lenses does not strongly depend on a particular physical model; \(A_{3}\) lenses should arise generically in most models of the ISM. The question is not whether there are \(A_{3}\) lenses, but whether the \(A_{3}\) lenses predicted by a given physical model can explain the observed properties of ESEs.
In order for \(A_{3}\) lenses to be reasonable candidates to explain observed ESEs, we require that the transverse physical size of the lenses (projected onto the plane of the sky) be roughly on the order of \(1\,\)AU, as this is the typical physical scale that can be inferred from ESE observations. We also require that the thickness of the sheet that is projected to produce these lenses be much less than \(1\,\)AU. In other words, we require \(\sigma\ll 1\). While, in principle, there is nothing stopping us from modelling lensing events with a highly smoothed \(A_{3}\) lens, when the smoothing scale is large (\(\sigma\gtrsim 1\)), the unique features of the cusp become washed out, lessening the explanatory power of such a model. A third requirement is that whatever physical process causes the \(A_{3}\) lenses, the lenses must persist on a timescale of months to years in order to match the timescale of ESE observations.
A physical picture for corrugated sheets that satisfies these properties has already been proposed (Goldreich & Sridhar, 2006) and has been suggested as a potential explanation for pulsar scintillation (Pen & Levin, 2014; Simard & Pen, 2018). The basic idea, which we will summarize here, is that magnetic reconnection sheets in the ISM (boundaries between oppositely oriented magnetic field lines) sustain plasma current sheets. Ducted waves, driven by the tension produced by the magnetic fields, propagate through the current sheets, forming a corrugated pattern in the plasma density along the sheet. When the current sheet is closely aligned with the line of sight, these corrugated patterns produce the fold (\(A_{2}\)) and cusp (\(A_{3}\)) lenses when projected onto the lens plane (see e.g. Fig. 1). While the ducted waves propagate at the speed of sound in the plasma (\(c_{s}\sim 10\,T_{4}^{1/2}\,\)km s\({}^{-1}\), \(T_{4}\equiv T/10^{4}\,\)K), when the sheet is aligned with the line of sight, the transverse speed of the waves projected onto the lens plane can be made arbitrarily small, depending on the degree of alignment. This leads to the long timescales over which the ESE lens structures are observed to persist. These magnetic reconnection sheets are also predicted to arise on the spatial scales
Figure 6: The critical curves and caustics of the \(A_{3}\) lens for fixed \(|\alpha|=1\). The top row shows the results for the divergent lens, \(\alpha<0\), and the bottom row shows the results for the convergent lens, \(\alpha>0\).
Figure 7: An example light curve for two frequency bands, \(f=2.7\,\)GHz and \(f=8.1\,\)GHz, for the \(A_{3}\) lens. The light curve is computed for \(d_{l}=1\,\)kpc, \(d_{s}=1\,\)Gpc, and fixed \(x_{2}=10\,\)AU. We choose \(\alpha=0.3\) at \(f=2.7\,\)GHz, and a smoothing scale \(\sigma=0.03\), which for the chosen parameters corresponds to a peak electron surface density of \(\Sigma_{e}\approx 0.02\,\)pc cm\({}^{-3}\). The source position as a function of time is given by \(y=vt\) where \(v=200\,\)km s\({}^{-1}\).
required to explain ESE observations. While the energetic processes that stir the ISM (e.g. supernovae, ionization fronts, spiral density waves, etc.) are typically short-lived and occur on parsec scales or larger, magnetohydrodynamic simulations of turbulent dynamos demonstrate that stable magnetic reconnection sheets may occur well below the stirring scale, and, in particular, on the several-AU scale required by ESEs. Moreover, these sheets are indeed predicted to be "thin" relative to the transverse AU scale.
It remains to be seen how realistic such a model of the small-scale ISM is. Realistic, high-resolution simulations are needed. Recent magnetohydrodynamic simulations of the turbulent ISM have revealed the ubiquity of thin, filamentary structures on small scales intermittently permeating the diffuse medium (Dong et al., 2022; Fielding et al., 2022). However, the resolution of these simulations is typically at much larger scales than the \(\sim\)AU scales we require to explain ESEs. Nevertheless, these recent simulations give some confidence that these thin, intermittent structures may plausibly exist. However, we stress again that one of the strengths of the doubly catastrophic lensing framework is that it does not depend crucially on the details of the underlying physical model. We have summarized this particular model to give an outline of a plausible, but not necessary, scenario that could give rise to the kinds of lenses we are considering.
## 3 A double lensing event
Now that we have outlined the details of the \(A_{3}\) lens, we will turn to a particular lensing event of interest in the pulsar PSR B0834+06. This event has been a particularly fruitful object of study since its observation in 2005 (Brisken et al., 2010) with the William E. Gordon Telescope at the Arecibo Observatory, whose data we use here. The data were taken in a 32 MHz band centred at 316.5 MHz over the course of \(\sim 2\) hours. The dynamic spectrum was created using 5 s integrations with \(\sim 0.25\) kHz channels. In order to collapse the inverted arclets into single points to more easily identify images, we use the conjugate wavefield produced by Baker et al. (2022) using phase retrieval techniques to recover the electric field from the dynamic spectrum. The conjugate wavefield (the top-left panel of Fig. 9) shows the main parabolic arc that is ubiquitous in scintillation observations, in addition to a peculiar island of power located at a delay of roughly 1 ms and a Doppler shift of \(-40\) mHz. We will refer to this feature as the "millisecond feature". Zhu et al. (2022) use observations over four epochs in roughly fifteen-day intervals to demonstrate that this event is best explained by a double lens system. That is, they argue the pulsar is lensed by a main scattering screen, producing the primary scintillation arc, and a second lens producing the millisecond feature (a schematic of this is shown in Fig. 8). Zhu et al. (2022) use novel phase retrieval techniques to infer the distances to the two screens, as well as the angular positions of the many images produced by this lensing system. From this they argue that the secondary lens associated with the millisecond feature has similar properties to the plasma structures responsible for ESEs. In particular, its persistence over the more-than-month-long observation and large bending angles (\(\theta\approx 83\) mas) are consistent with other ESE observations.
Here we will consider the possibility that this millisecond feature is actually produced by an \(A_{3}\) lens. In order to do this, we first need to introduce the double lensing formalism. For a multi-plane lens, the induced phase along a particular path from source to observer is given by
\[S=\omega\sum_{i=1}^{N}\frac{d_{0i}d_{0(i+1)}}{cd_{i(i+1)}}\left[\frac{1}{2}(\boldsymbol{\theta}_{i+1}-\boldsymbol{\theta}_{i})^{2}+\frac{d_{i(i+1)}d_{0(N+1)}}{d_{0(i+1)}d_{i(N+1)}}\hat{\phi}_{i}(\boldsymbol{\theta}_{i})\right], \tag{12}\]

where \(d_{ij}\) is the distance from the \(i^{\rm th}\) plane to the \(j^{\rm th}\) plane, and \(i=0\) and \(i=N+1\) refer to the observer and source, respectively. The angular coordinate associated with the \(i^{\rm th}\) lens plane is \(\boldsymbol{\theta}_{i}\) and \(\hat{\phi}_{i}\) is the \(i^{\rm th}\) lens potential.
For a two-plane system, we can re-write this phase in terms of dimensionless parameters as follows:
\[S=\nu\left[\frac{\overline{d}_{2}}{\overline{d}_{1}}\Big[\frac{1}{2}(\boldsymbol{x}-\boldsymbol{z})^{2}+\rho\phi_{1}(\boldsymbol{z})\Big]+\frac{1}{2}(\boldsymbol{x}-\boldsymbol{y})^{2}+\alpha\phi_{2}(\boldsymbol{x})\right], \tag{13}\]
where we have defined the co-ordinates \(\boldsymbol{z}\equiv\boldsymbol{\theta}_{1}d_{01}/\ell\), \(\boldsymbol{x}\equiv\boldsymbol{\theta}_{2}d_{02}/\ell\), and \(\boldsymbol{y}\equiv\boldsymbol{\theta}_{3}d_{03}/\ell\) to be the physical positions in the respective lens/source planes, re-scaled by some physical scale \(\ell\). For our purposes, we will take the first lens plane to be the main scattering screen, and the second lens plane to be the \(A_{3}\) lens. It is, therefore, convenient to choose \(\ell\) to be a physical scale associated with the \(A_{3}\) lens. The barred distances are combined distances given by \(\overline{d}_{1}=d_{12}d_{02}/d_{01}\) and \(\overline{d}_{2}=d_{23}d_{02}/d_{03}\). The phase is multiplied by an overall factor of \(\nu=\omega\ell^{2}/(\overline{d}_{2}c)\) and the amplitudes of the lens potentials are given by
\[\alpha=\frac{\overline{d}_{2}e^{2}\Sigma_{2}^{*}}{2m_{e}\epsilon_{0}\omega^{2}\ell^{2}}, \tag{14}\]
\[\rho=\frac{d_{12}d_{03}}{d_{02}d_{13}}\frac{\overline{d}_{1}e^{2}\Sigma_{1}^{*}}{2m_{e}\epsilon_{0}\omega^{2}\ell^{2}}, \tag{15}\]
where \(\Sigma_{1}^{*}\) and \(\Sigma_{2}^{*}\) are the projected electron densities of the two lenses.
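For intuition about magnitudes, Eq. 14 can be evaluated directly in SI units. The numbers below are illustrative assumptions (a 1 AU lens at an effective distance of order 1 kpc, a DM-like column of \(0.01\) pc cm\(^{-3}\), observed at 1 GHz), not values from the paper.

```python
import numpy as np
from scipy.constants import e, m_e, epsilon_0, parsec, au

def alpha_plasma(Sigma2_star, omega, ell, dbar2):
    """Dimensionless plasma-lens amplitude, Eq. (14); inputs in SI units
    (Sigma2_star in m^-2, omega in rad/s, ell and dbar2 in m)."""
    return dbar2 * e**2 * Sigma2_star / (2.0 * m_e * epsilon_0 * omega**2 * ell**2)

Sigma = 0.01 * parsec * 1e6   # 0.01 pc cm^-3 -> m^-2 (1 cm^-3 = 1e6 m^-3)
print(alpha_plasma(Sigma, omega=2.0 * np.pi * 1e9, ell=au, dbar2=1e3 * parsec))
```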
Figure 8: Diagram showing the geometry of the extreme scattering event observed in PSR B0834+06. The pulsar is first scattered by the ESE lens, which we propose is an \(A_{3}\) lens, and is then scattered by the primary scattering screen which results in the parabolic arc that is ubiquitous in scintillation observations.
We could just as easily have defined \(\nu=\omega\ell^{2}/(\overline{d}_{1}c)\), factoring out an overall factor of \(\overline{d}_{1}\) as opposed to \(\overline{d}_{2}\); however, for our purposes it is convenient to treat the \(A_{3}\) lens as the primary lens, absorbing the geometric factors that appear in Eq. 15 into \(\rho\) rather than \(\alpha\). This, however, is purely a choice of convention, as the image locations and magnifications in geometric optics do not depend on the overall factor \(\nu\).
The locations of the geometric images are given by the lens equations, \(\nabla_{\mathbf{x}/\mathbf{z}}S=0\), which are:
\[D(x_{1}-z_{1})+(x_{1}-y_{1})+\alpha\frac{\partial\phi_{2}}{\partial x_{1}}=0, \tag{16}\]
\[D(x_{2}-z_{2})+(x_{2}-y_{2})+\alpha\frac{\partial\phi_{2}}{\partial x_{2}}=0, \tag{17}\]
\[x_{1}-z_{1}-\rho\frac{\partial\phi_{1}}{\partial z_{1}}=0, \tag{18}\]
\[x_{2}-z_{2}-\rho\frac{\partial\phi_{1}}{\partial z_{2}}=0, \tag{19}\]
where we have defined the ratio \(D\equiv\overline{d}_{2}/\overline{d}_{1}\).
Now, for the sake of simplicity, we will assume that the lenses are both highly anisotropic (i.e. one-dimensional) and that they are perpendicular to each other. That is, we will assume \(\phi_{2}(x_{1},x_{2})=\phi_{2}(x_{2})\) and \(\phi_{1}(z_{1},z_{2})=\phi_{1}(z_{1})\). In this way, the lens equations simplify to
\[x_{2}-y_{2}+\alpha\frac{d\phi_{2}}{dx_{2}}=0, \tag{20}\]
\[x_{1}-z_{1}-\rho\frac{d\phi_{1}}{dz_{1}}=0, \tag{21}\]
\[x_{1}-\frac{y_{1}+Dz_{1}}{D+1}=0, \tag{22}\]
\[x_{2}-z_{2}=0. \tag{23}\]
It is convenient to do this because the result is that the two lenses act independently of each other; that is, we can solve the lens equations for the \(x_{2}\) and \(z_{1}\) co-ordinates of the images independently using Eqs. 20 and 21, respectively, which then directly give us the \(x_{1}\) and \(z_{2}\) co-ordinates through Eqs. 22 and 23. In general, this simple separation of the lens equations into independent equations is not possible, since the two lenses will not generically be perfectly perpendicular to each other. However, for our purposes in this work, we are primarily interested in the qualitative aspects of the \(A_{3}\) lens, as opposed to a precise quantitative comparison, and Zhu et al. (2022) show that the two lenses are, indeed, roughly perpendicular to each other.
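A sketch of this sequential solve follows. The bracketing grid is our own choice; `dphi2` and `dphi1` return the bare potential gradients, with the amplitudes \(\alpha\) and \(\rho\) passed separately. Note that in the simulation below the first screen is in fact specified by fixed image positions rather than a potential \(\phi_{1}\).

```python
import numpy as np
from scipy.optimize import brentq

def double_lens_images(y1, y2, D, alpha, dphi2, rho, dphi1, grid):
    """Images of the decoupled system: Eq. (20) for x2, Eqs. (21)-(22)
    combined for z1; x1 and z2 then follow algebraically."""
    def roots(h):
        v = np.array([h(u) for u in grid])
        br = np.where(v[:-1] * v[1:] < 0)[0]
        return [brentq(h, grid[i], grid[i + 1]) for i in br]

    x2_im = roots(lambda u: u - y2 + alpha * dphi2(u))               # Eq. (20)
    z1_im = roots(lambda u: (y1 - u) / (D + 1.0) - rho * dphi1(u))   # Eqs. (21)-(22)
    return [{"x1": (y1 + D * z1) / (D + 1.0), "x2": x2, "z1": z1, "z2": x2}
            for z1 in z1_im for x2 in x2_im]
```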
In order to simulate the lensing event of PSR B0834+06, we will take \(\phi_{2}(x_{2})=\tilde{\mu}_{A_{3}}(x_{2};x_{1})\): the (smoothed) \(A_{3}\) lens. Again, we stress that we are treating the \(A_{3}\) lens as a quasi-one-dimensional lens, where the second co-ordinate \(x_{1}\) is treated as a lens parameter. For the main scattering screen, instead of specifying a lens potential, we will simply specify a set of co-ordinates, \(z_{1}\), fixing the locations (in the \(z_{1}\) direction) on the main scattering screen that the rays must pass through. The goal of this is to reproduce the main scintillation arc seen in the conjugate wavefield without having to over-commit ourselves, as it were, to a particular scintillation model for the main screen.
Once we have the location of the images by solving the lens equations, it is straightforward to compute where the images should appear in Doppler-delay space. The group delay of the images is given by
\[\begin{split}\tau&=\frac{\partial S}{\partial\omega}=\frac{\nu}{\omega}\Big[D\big[\tfrac{1}{2}(\mathbf{x}-\mathbf{z})^{2}-\rho\phi_{1}(\mathbf{z})\big]+\tfrac{1}{2}(\mathbf{x}-\mathbf{y})^{2}-\alpha\phi_{2}(\mathbf{x})\Big],\\ &\approx\frac{\nu}{\omega}\Big[\frac{D}{2}(\mathbf{x}-\mathbf{z})^{2}+\frac{1}{2}(\mathbf{x}-\mathbf{y})^{2}-\alpha\phi_{2}(\mathbf{x})\Big].\end{split} \tag{24}\]
Note that the dispersive terms appear with a relative minus sign compared to Eq. 13 since for plasma lensing the amplitudes of the lens potential have a frequency dependence \(\alpha,\rho\sim\omega^{-2}\). We drop the dispersive term related to the main scattering screen as we have not specified the lens potential \(\phi_{1}\). We assume that the delay from the main scattering screen is primarily geometric.
The Doppler shift of the images is given by \(f_{D}=\frac{d\mathbf{y}}{dt}\cdot\nabla_{\mathbf{y}}S\), which can be computed from the following:
\[\begin{split}\frac{\partial S}{\partial y_{1}}&\approx-\frac{\nu D}{D+1}\left(z_{1}-y_{1}\right),\\ \frac{\partial S}{\partial y_{2}}&\approx-\nu(x_{2}-y_{2}).\end{split} \tag{25}\]
Note that Eq. 25 follows from the assumption that \(\frac{\partial x_{2}}{\partial y_{2}}=\frac{\partial z_{1}}{\partial y_{1}}=0\). This assumption is equivalent to stating that the lensed images are stationary transverse to the lens. Alternatively, this is equivalent to stating that the lensed images are only weakly magnified, which for the \(A_{3}\) lens follows from the fact that the lens map is essentially flat away from the folds (see Fig. 4). This assumption fails only in a small region near the folds. We are also assuming that the two lens screens are stationary relative to each other; namely, we assume that the velocity \(\frac{d\mathbf{y}}{dt}\) is dominated by the velocity of the source relative to the combined position of the two screens.
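Putting Eqs. 24 and 25 together, the Doppler-delay coordinates of each image can then be computed as below. The factor of \(1/2\pi\) converting the phase rate into a frequency in Hz is our assumption about units, and `dy_dt` denotes the source velocity \(d\mathbf{y}/dt\) in dimensionless lens-plane units per second.

```python
import numpy as np

def delay_and_doppler(img, y1, y2, D, nu, omega, alpha, phi2, dy_dt):
    """Group delay (Eq. 24, dropping the unspecified rho*phi1 term) and
    Doppler shift (Eq. 25) for one image from double_lens_images."""
    x1, x2, z1, z2 = img["x1"], img["x2"], img["z1"], img["z2"]
    geom = 0.5 * D * ((x1 - z1) ** 2 + (x2 - z2) ** 2) \
         + 0.5 * ((x1 - y1) ** 2 + (x2 - y2) ** 2)
    tau = (nu / omega) * (geom - alpha * phi2(x2))
    dS_dy1 = -nu * D / (D + 1.0) * (z1 - y1)
    dS_dy2 = -nu * (x2 - y2)
    f_D = (dy_dt[0] * dS_dy1 + dy_dt[1] * dS_dy2) / (2.0 * np.pi)
    return tau, f_D
```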
We now have the tools to simulate the conjugate wavefield of our \(A_{3}\) lens plus scattering screen system. The top and middle panels of the right column of Fig. 9 show the location of the lensed images in Doppler-delay space. The plotted points correspond to where the power would be localized in the conjugate wavefield. The conjugate wavefield for the actual observation is shown in the left column for comparison. For our simulation, we have chosen the dimensionless parameters \(\alpha=0.7\), \(\sigma=0.05\), \(\nu=31{,}000\), \(D=5\), \(y_{1}=0\), and \(y_{2}=11\). The locations of the images on the scattering screen are chosen to be distributed uniformly at random over the range \(z_{1}\in[-16,16]\). These values are chosen to be consistent with the observing frequency \(f=311\) MHz and the distances, \(d_{01}=389\) pc, \(d_{02}=415\) pc, and \(d_{03}=620\) pc, measured by Zhu et al. (2022). We also choose the velocity to be in the direction \(\frac{d\mathbf{y}}{dt}\parallel[1,0.1]\) to be consistent with the velocity measured by Zhu et al. (2022). In order to convert the dimensionless time delay and Doppler shift to dimensionful quantities, we take the physical scale of the lens to be \(\ell=1\) AU, and the magnitude of the relative velocity of the source to the lens to be \(v=23\) km s\({}^{-1}\) (again, consistent with the velocity measured by Zhu et al. (2022)). From the parameters, we can also infer the value for the amplitude of the plasma under-density, \(\Sigma_{2}^{*}\sim 0.0001\) pc cm\({}^{-3}\). It is important to note that these chosen parameters are not the best-fit parameters for this model, but rather a reasonable estimate of the parameters in order to qualitatively compare the features of the model with the data.
There are two features of the data, naturally reproduced by our model, that we wish to point out. Firstly, we note that the millisecond feature is highly asymmetric. That is, if the two lenses responsible for the main scattering screen and the millisecond feature were truly one-dimensional (i.e. translationally invariant along one axis), then the millisecond feature would simply be a lensed copy of the main
parabolic arc, centred at a different delay and Doppler shift. This, however, is not the case; the millisecond feature is truncated as it moves towards larger Doppler shifts. This truncation is naturally accommodated by our model as the \(A_{3}\) lens, by design, ends in a cusp. The folds that produce the lensed images meet at the cusp rather than continuing on forever. Secondly, as the lensed images approach this truncation point (the cusp) in Doppler-delay space, the separation between pairs of images in delay first becomes larger before converging at the cusp. This is the characteristic hockey-stick shape that can be seen both in the data and the simulation in the middle row of Fig. 9. This increase in the delay between images as one approaches the cusp is a generic feature of our model, as the contours of the lens map effectively form a loop around the cusp, as seen in Fig. 5. That is, the images have a tendency to spread out before converging at the cusp.
We may also wish to examine the behaviour of the magnification of the images. The bottom left panel of Fig. 9 shows the total flux of the millisecond feature as a function of observing frequency, computed by summing the total power in the millisecond feature, normalized to a value of unity at some reference frequency. Essentially what happens is that as the frequency increases, pairs of localized islands of power in the conjugate wavefield merge successively. When each pair merges, the images rapidly increase in brightness before disappearing. This happens multiple times as one varies the frequency, creating the structure seen in the bottom left panel of Fig. 9: each spike in magnification corresponds to one of these mergers. A sequence of successive peaks in magnification as a function of frequency is the generic expectation of our \(A_{3}\) model. As the frequency increases, different pairs of images encounter the fold catastrophe of the lens, thereby merging and briefly attaining a formally infinite magnification before disappearing. This happens repeatedly until all the images associated with the millisecond feature disappear.
Although we have not undertaken the more involved process of quantitatively determining the best-fit parameters of our model to the data, we hope to have demonstrated that, qualitatively, the \(A_{3}\) lens can naturally accommodate features of the PSR B0834+06 event that have previously been challenging to model.
## 4 ESEs and Scintillation
One of the more powerful aspects of the doubly catastrophic lensing framework is that it has the potential to explain both ESEs and scintillation with a single, unified framework. While in this work we have focused on cusp (\(A_{3}\)) lenses as a natural explanation for ESEs, previous works have explored the possibility of explaining scintillation observations with ensembles of fold (\(A_{2}\)) lenses (Pen & Levin, 2014; Simard & Pen, 2018). The basic picture we propose is that corrugated plasma sheets are responsible for both scattering phenomena. When corrugated sheets are closely aligned with the line of sight, they form folds under projection. These folds result in the multi-path propagation that is seen in pulsar scintillation. While these folds are required to be highly elongated (i.e. effectively one-dimensional) to explain scintillation observations, they cannot continue forever. When folds end, they are mathematically required to merge in cusp (\(A_{3}\)) catastrophes. It is these \(A_{3}\) catastrophes that we propose as the origin of ESEs.
One immediate question that arises is why, if both scintillation and ESEs are caused by the same ISM structures, one phenomenon is so much more common than the other. Within the PSR B0834+06 observation we have been discussing, there is only one feature associated with an \(A_{3}\) lens (namely, the millisecond feature), whereas each of the scattered images along the main scintillation arc, in our picture, would be associated with a fold. Since there are hundreds of scattered images, this suggests that fold lenses are much more common than cusp lenses. Moreover, most pulsars are observed to scintillate, but are only observed to undergo ESE-like scattering about one percent of the time.
While this relative rarity of ESEs compared to scintillation might initially seem to pose an issue for any attempt to explain
Figure 9: The left column shows data for an observation of PSR B0834+06. The top panel shows the conjugate wavefield of the observation in Doppler-delay space. The middle panel is the same as the top panel, zoomed in on the millisecond-delay feature associated with the ESE lens. The bottom panel shows the sum of the power in the conjugate wavefield of the millisecond feature as a function of frequency, normalized to unity when the signal falls below the noise threshold. This is taken as a proxy for the magnification induced by the lens. The top and middle panels of the right column show the location of the images in Doppler-delay space for our double lensing model with an \(A_{3}\) lens plus a primary scattering screen. We choose the dimensionless parameters \(\alpha=0.7\), \(\sigma=0.05\), \(\nu=31{,}000\), \(D=5\), \(y_{1}=0\), \(y_{2}=11\), and the direction of the velocity to be \(\frac{d\mathbf{y}}{dt}\parallel[1,0.1]\). The locations of the images on the scattering screen are chosen to be distributed uniformly over \(z_{1}\in[-16,16]\). To convert to dimensionful parameters, we choose \(f=311\,\mathrm{MHz}\), \(d_{01}=389\,\mathrm{pc}\), \(d_{02}=415\,\mathrm{pc}\), and \(d_{03}=620\,\mathrm{pc}\), and a physical scale of the lens \(\ell=1\) AU. The magnitude of the velocity is chosen to be \(23\,\mathrm{km}\,\mathrm{s}^{-1}\). These parameters correspond to an amplitude of the plasma under-density of \(\Sigma_{2}^{*}\sim 0.0001\,\mathrm{pc}\,\mathrm{cm}^{-3}\) (note that the maximum density at the fold is about an order of magnitude higher than this). Since we choose the two lenses to be perpendicular to each other, it is well-defined to identify each image with one of the three images produced by the \(A_{3}\) lens: distinguished in the figure by the green, orange, and blue colours. The bottom right panel shows the total flux of the green and orange images (the images associated with the millisecond feature) as a function of frequency.
these two phenomena with a unified model, the doubly catastrophic framework actually provides a natural explanation. It is a well-known result of catastrophe theory that the cross-section for folds is much larger than the cross-section for cusps. That is, consider a projected plasma surface density profile given by \(\Sigma_{e}\). We can define the area on the sky such that the density is greater than some threshold, \(\Delta\), to be \(\sigma(\Sigma_{e}>\Delta)\). The scaling as a function of threshold for this cross-section can be computed for the fold and cusp catastrophes as (Narayan & Wallington, 1993):
\[\sigma_{A_{2}}(\Sigma_{e}>\Delta)\sim\Delta^{-2}, \tag{26}\]
\[\sigma_{A_{3}}(\Sigma_{e}>\Delta)\sim\Delta^{-5/2}. \tag{27}\]
That is, the cross-section for the cusp decreases faster as a function of threshold than the fold cross-section. Therefore, it is a generic expectation of catastrophe theory that folds will be observed much more frequently than cusps. This is also, notably, a precise and testable prediction of our model: the number of ESEs with an inferred maximum column density above some threshold should scale according to this power law.
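A heuristic way to see where these exponents come from (our sketch of the argument, using the self-similarity of Eq. 10, rather than a reproduction of the original derivation):

```latex
% Fold: on its dense side the projected density falls off as
% \Sigma_e \propto x^{-1/2}, so the strip with \Sigma_e > \Delta has
% width x \lesssim \Delta^{-2}, giving, per unit fold length,
\sigma_{A_2}(\Sigma_e>\Delta)\propto\Delta^{-2}.
% Cusp: writing \Sigma_e(x_1,x_2)=x_2^{-1}F(x_2^{-3/2}x_1) as in Eq. (10),
% the u-measure with F(u)>s inherits the fold scaling s^{-2}, and the
% cusp region (where the two folds have not yet separated) extends to
% x_2 \sim 1/\Delta, so
\sigma_{A_3}(\Sigma_e>\Delta)\sim\int_0^{\sim 1/\Delta}x_2^{3/2}\,(\Delta x_2)^{-2}\,\mathrm{d}x_2\propto\Delta^{-5/2}.
```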
## 5 Applications
We have argued that doubly catastrophic lensing is a potentially powerful framework for analyzing scattering phenomena in pulsars and other radio sources, as it provides a unified explanation for both scintillation and ESE observations, and also naturally accommodates qualitative features of the data that have thus far been challenging to explain. Another powerful aspect of lenses as catastrophes is that the mathematics of catastrophe theory constrains the form of the lenses to a small set of elementary catastrophes. These elementary catastrophes are universal and are described by a small number of unfolding parameters.
If the plasma structures in the ISM responsible for these scattering phenomena are indeed catastrophes, then this would represent a significant advancement in our ability to unambiguously infer the physical properties of the lenses and also to use them as astrometric tools. Contrast this with the present situation. Since we lack any prior information on the form of the lenses, in principle one has an infinite number of degrees of freedom when attempting to build a model to match ESE or scintillation data. As a result, inferences of, say, the electron density, \(\Sigma_{e}\), may vary by orders of magnitude between different lens models. Moreover, there has been little observational evidence, so far, that has allowed us to distinguish between the many proposed models. At the very least, the doubly catastrophic lensing formalism makes precise predictions that will soon be testable. If confirmed, the space of potential lens shapes collapses from infinite to a small number of catastrophes.
This would have particularly important implications for pulsar astrometry. One of the primary limitations of our ability to use lensing to obtain precise astrometric data is the fact that we typically do not know the dispersive contribution of the lens to the observed time delays. That is, we are typically forced to ascribe the observed time delays entirely to the geometric part of the time delay, e.g. the quadratic terms in Eq. 24. If the lenses are catastrophes, described by a small number of parameters, then it becomes possible to unambiguously infer the dispersive contributions to the delay. Especially for a system such as PSR B0834+06 which is highly over-determined, it would potentially be possible to infer the full lens potential.
One concrete example of an application of the doubly catastrophic lensing framework to infer the physics of scattering structures in the ISM is its potential ability to unambiguously distinguish between convergent (under-dense) and divergent (over-dense) lenses. Fig. 10 shows the millisecond feature of the PSR B0834+06 lensing event, modelled with the \(A_{3}\) lens using \(\alpha>0\) (left panel) and \(\alpha<0\) (right panel). The crosses show the value of the geometric delay, whereas the solid dots show the total delay. Since the group delay is negative for positive \(\alpha\) and positive for negative \(\alpha\), the total delay is less than the geometric delay for a convergent (under-dense) lens, and greater than the geometric delay for a divergent (over-dense) lens. The red and blue colours indicate the parity of the images, which refers to the direction of motion of the images with respect to the source: if the image moves in the same direction as the source it is said to have positive parity, and if it moves in the opposite direction it is said to have negative parity. Note that in either the over- or under-dense case, the positive parity images have much smaller group delay than the negative parity images. As a result, the orientation of the characteristic hockey-stick shape of the millisecond feature is flipped between the convergent and divergent lenses. Inspecting the data shown in Fig. 9 visually, the millisecond feature appears to be closer to the convergent (under-dense) case. However, since we have not attempted to precisely fit the data, it is not possible to make any strong inferences.
An additional possibility that the doubly catastrophic framework opens up is the following: if pulsar scintillation is caused by \(A_{2}\) folds and ESEs by \(A_{3}\) cusps, both arising from the same sheet structures in the ISM viewed under projection, then many observations of these phenomena will allow us to take advantage of the rich mathematics of catastrophe theory to probe the turbulent ISM on scales that are inaccessible to simulations. That is, with many scintillation and ESE observations, we can effectively generate a sky-map of caustic networks in the ISM. Such a map may be a powerful tool for studying the physics of the ISM.
## 6 Conclusion
In this paper, we have presented a model based on a simple application of catastrophe theory to thin plasma sheets to explain extreme scattering events. That is, we propose that several aspects of ESE observations can be explained using a lens potential with an \(A_{3}\) cusp profile. This is an extension of previous work (Pen & Levin, 2014; Simard & Pen, 2018) suggesting that \(A_{2}\) folds arising from corrugated plasma sheets may explain pulsar scintillation. We call this application of catastrophe theory to the lens potential "doubly catastrophic"
Figure 10: The simulated millisecond feature using the same parameters for our double-lens model as Fig. 9, except the left panel is for \(\alpha>0\) (the convergent lens) and the right panel is for \(\alpha<0\) (the divergent lens). The crosses show the value of the geometric delay, whereas the solid dots show the total delay (geometric plus group delay). The red and blue colours indicate the parity of the images, which is either positive or negative, given by the sign of the determinant of the Jacobian of the lens map.
trophic" lensing, as catastrophes also generically appear in the light curves of lensed sources.
The doubly catastrophic framework is well-motivated for several reasons. Firstly, the past decade of pulsar scintillation observations suggests the ubiquity of thin plasma sheets in the ISM. Since lensing is well-described by an effective projected density perpendicular to the line of sight, and since catastrophes generically arise when thin sheets are viewed under projection, \(A_{2}\) and \(A_{3}\) lenses (in addition to higher-order catastrophes which we have not considered here) should naturally arise. We argue that these catastrophic lens potentials should exist in the ISM, whether or not they are abundant enough to explain all scintillation or ESE observations. Secondly, recent work on the physics of the turbulent ISM through MHD simulations suggests that corrugated plasma sheets of the kind we consider are physically well-motivated. Lastly, the doubly catastrophic framework has several desirable theoretical features: it provides a universal framework that describes both scintillation and ESEs as aspects of the same phenomenon, and the application of catastrophe theory means that the lens potentials are generic and well-described by a small number of parameters.
In this work, we have described the features of the simplest \(A_{3}\) lens and have argued that it can explain many of the qualitative features of ESE observations, including the frequency structure of ESEs. The inability to account for features of ESE light curves at high frequencies has been a roadblock for thin sheet models of these events. We argue that the \(A_{3}\) lens overcomes this issue. We also argue that the \(A_{3}\) lens provides a natural explanation for features seen in the lensing of PSR B0834+06.
## Data availability
No new data were generated or analysed in support of this research.
## Acknowledgements
We receive support from the Ontario Research Fund--Research Excellence Program (ORF-RE), the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference numbers RGPIN-2019-067, CRD 523638-18, 555585-20], the Canadian Institute for Advanced Research (CIFAR), the Canada Foundation for Innovation (CFI), the National Science Foundation of China (Grant No. 11929301), Thoth Technology Inc., the Alexander von Humboldt Foundation, and the Ministry of Science and Technology (MOST) of Taiwan (110-2112-M-001-071-MY3). Computations were performed on the SOSCIP Consortium's [Blue Gene/Q, Cloud Data Analytics, Agile and/or Large Memory System] computing platform(s). SOSCIP is funded by the Federal Economic Development Agency of Southern Ontario, the Province of Ontario, IBM Canada Ltd., Ontario Centres of Excellence, Mitacs and 15 Ontario academic member institutions. This research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) [reference numbers 523638-18, 555585-20, RGPIN-2019-067].
|
2308.04075 | Boundary-preserving Lamperti-splitting schemes for some Stochastic
Differential Equations | We propose and analyse boundary-preserving schemes for the strong
approximations of some scalar SDEs with non-globally Lipschitz drift and
diffusion coefficients whose state-space is bounded. The schemes consists of a
Lamperti transform followed by a Lie--Trotter splitting. We prove
$L^{p}(\Omega)$-convergence of order $1$, for every $p \geq 1$, of the schemes
and exploit the Lamperti transform to confine the numerical approximations to
the state-space of the considered SDE. We provide numerical experiments that
confirm the theoretical results and compare the proposed Lamperti-splitting
schemes to other numerical schemes for SDEs. | Johan Ulander | 2023-08-08T06:21:07Z | http://arxiv.org/abs/2308.04075v3 | # Boundary-preserving Lamperti-splitting scheme for some stochastic differential equations
###### Abstract.
We propose and analyse an explicit boundary-preserving scheme for the strong approximations of some SDEs with non-globally Lipschitz drift and diffusion coefficients whose state-space is bounded. The scheme consists of a Lamperti transform followed by a Lie-Trotter splitting. We prove \(L^{p}(\Omega)\)-convergence of order \(1\), for every \(p\in\mathbb{N}\), of the scheme and exploit the Lamperti transform to confine the numerical approximations to the state-space of the considered SDE. We provide numerical experiments that confirm the theoretical results and compare the proposed Lamperti-splitting scheme to other numerical schemes for SDEs.
Key words and phrases:Stochastic differential equations, Lamperti transform, splitting scheme, boundary-preserving numerical scheme, \(L^{p}(\Omega)\)-convergence, explicit numerical scheme 2
the boundary of \(D\), the solution \(X\) takes values in the interior \(\mathring{D}\) of the domain \(D\). For precise definition of the setting, see Section 2.
Examples of applications include some instances of the Susceptible-Infected-Susceptible (SIS) epidemic model [6, 8, 10, 30, 31, 32], the Nagumo SDE [21, 22] and an Allen-Cahn type SDE [2, 5, 9, 21]. We consider the SIS epidemic model corresponding to the choices \(f(x)=x-x^{2}\) and \(g(x)=x-x^{2}\) and is also known as the simplest Wright-Fisher diffusion for a gene frequency model. The Nagumo SDE corresponds to the choices \(f(x)=-x(1-x)(1-x)\) and \(g(x)=-x+x^{2}\). The Allen-Cahn type SDE corresponds to the choices \(f(x)=x-x^{3}\) and \(g(x)=1-x^{2}\). We provide short discussions and motivations for these models in Section 4.
The proposed scheme combines a Lamperti transform with a time splitting procedure. The Lamperti transform applied to the SDE in (1) guarantees that the scheme is boundary-preserving. We employ a Lie-Trotter time splitting of the resulting transformed SDE to obtain tractable sub-problems with closed-form solutions. See Section 3 for the precise construction of the scheme.
The main results of the paper are the following:
* We propose an explicit approximation procedure for SDEs of the form in (1) that is boundary-preserving, see Proposition 2.
* We prove \(L^{p}(\Omega)\)-convergence of order \(1\) for every \(p\in\mathbb{N}\), see Theorem 5, and almost sure pathwise convergence of order \(1-\epsilon\), for every \(\epsilon>0\), see Corollary 7.
The literature on schemes based on the Lamperti transform and on time splitting schemes is extensive. Without being exhaustive, we mention the following articles [6, 13, 16, 26, 31, 32] on schemes based on the Lamperti transform and the following references [3, 4, 5, 11, 12, 13, 16, 23, 25] on time splitting schemes for differential equations. To the best of our knowledge, only the two recent articles [13, 16] combine these two approaches to construct a positivity-preserving scheme, for the Ait-Sahalia model and the Cox-Ingersoll-Ross (CIR) process, respectively. The CIR model considered in [16] has an affine function as drift coefficient and the diffusion coefficient is \(1/2\)-Hölder continuous. In the present paper, we consider drift and diffusion coefficients that can be locally Lipschitz continuous with superlinear growth.
Before closing the introduction, we would like to compare the proposed scheme to the literature on numerical schemes based on the Lamperti transform on similar problems. We first mention the paper [6], where the authors prove strong convergence of order \(1\) for a family of stochastic SIS equations using a Lamperti transform followed by smoothing the drift coefficient. The smoothing strategy in [6] enables the authors to obtain \(L^{2}(\Omega)\)-convergence of order \(1\) for quite general drifts (essentially requiring \(C^{2}\) on the closure of the domain) and for a diffusion coefficient of the form \(x(1-x)\), exploiting for example inverse moment bounds of the exact solution and exponential integrability of the transformed SDE. In this work we instead impose assumptions on the Lamperti transform and on the drift coefficient of the transformed SDE. After using the Lamperti transform, we apply a Lie-Trotter splitting. This approach enables us to establish a mild integral equation for the approximate solution that is similar to the mild integral equation for the exact solution of the SDE. From this we obtain \(L^{p}(\Omega)\)-convergence of order \(1\), for every \(p\in\mathbb{N}\), and almost sure pathwise convergence of order \(1-\epsilon\) for every \(\epsilon>0\). We also mention the articles [30, 31, 32] and [1, 26], where the authors apply the Lamperti transform followed by the (truncated) Euler-Maruyama (EM) schemes and the semi-implicit Euler-Maruyama (SEM) scheme, respectively, to the transformed SDEs. [30] considers SIS SDEs and the authors obtain improved, although not as general, results compared to [6] discussed above. [32] establishes \(L^{p}(\Omega)\)-convergence and almost sure pathwise
convergence for Lamperti (truncated) EM schemes for general SDEs defined in \((0,\infty)\). [1, 26] obtain \(L^{p}(\Omega)\)-convergence of order \(1\) for the Lamperti SEM scheme for some SDEs defined in domains under slightly different conditions on the drift and diffusion coefficients. By the assumptions in Section 2, such explicit and implicit Lamperti-based schemes are also applicable in the present setting. In future works, however, the proposed approach could possibly be extended to cases where Lamperti (truncated) EM and Lamperti SEM are not applicable. Moreover, the proposed scheme in this paper is explicit, meaning that the computational effort is lower than for the implicit scheme in [1, 26].
This paper is organized as follows. Section 2 is devoted to presenting the setting, assumptions and some properties of the considered SDE. In Section 3 we define the Lamperti-splitting scheme and state and prove the main results. Lastly, in order to support our theoretical results in Section 3, we provide numerical experiments in Section 4.
## 2. Setting
In this section we introduce the notation and the assumptions on the considered SDE (1). Let \((\Omega,\mathcal{F},\mathbb{P})\) be a fixed probability space equipped with a filtration \((\mathcal{F}_{t})_{t\geq 0}\) that satisfies the usual conditions. We denote by \(\mathbb{E}[\cdot]\) the expectation operator and \(C\) denotes a generic constant that may vary from line to line.
### Description of the SDE
We first discuss some preliminaries and introduce the main assumptions needed for the definition and analysis of the proposed Lamperti-splitting scheme (LS). The general idea of the Lamperti transform is to transform an SDE into another SDE with state-independent diffusion coefficient [20, 24]. More precisely, provided that everything is well-defined, the Lamperti transform of the SDE in equation (1) is given by
\[\Phi(x)=\int_{w_{0}}^{x}\frac{1}{g(w)}\,\mathrm{d}w \tag{2}\]
for \(x\in\mathring{D}\), where \(\mathring{D}=(l,r)\) for some \(l,r\in\mathbb{R}\) and for some \(w_{0}\in\mathring{D}\). By Ito's formula, the process \(Y(t)=\Phi(X(t))\) satisfies
\[\begin{cases}\mathrm{d}Y(t)=\tilde{H}(Y(t))\,\mathrm{d}t+\mathrm{d}B(t),\\ Y(0)=\Phi(x_{0}),\end{cases} \tag{3}\]
for some \(x_{0}\in\mathring{D}\), where we define
\[\tilde{H}(x)=\frac{f(\Phi^{-1}(x))}{g(\Phi^{-1}(x))}-\frac{1}{2}g^{\prime}( \Phi^{-1}(x)).\]
We require \(x_{0},w_{0}\in\mathring{D}\) because, by Assumption 1 below, \(\Phi\) has singularities on \(\partial D\). Also observe that \(w_{0}=x_{0}\) is a valid choice for the lower integration limit in the Lamperti transform. Let us also denote by
\[H(x)=\tilde{H}(x)-\mu\]
for some \(\mu\in\mathbb{R}\). The introduction of \(H\) allows us to transfer the constant \(\mu\) between the ODE part and the SDE part of the splitting scheme (see Section 3.1). We now list the assumptions that we need to guarantee that the above is well-defined.
**Assumption 1**.: _The map \(g:\mathbb{R}\to\mathbb{R}\) is continuously differentiable and strictly positive on \(\mathring{D}=(l,r)\), \(g\) is continuous on \(\bar{D}=[l,r]\) and, for any \(w_{0}\in\mathring{D}\), the following non-integrability condition is satisfied_
\[\int_{w_{0}}^{l}\frac{1}{g(w)}\,\mathrm{d}w=-\infty,\ \int_{w_{0}}^{r}\frac{1}{g( w)}\,\mathrm{d}w=\infty.\]
Assumption 1 implies, for example, that
* \(\Phi(x),\ \Phi^{\prime}(x)\) are well-defined for \(x\in\mathring{D}\).
* \(\Phi:\mathring{D}\to\mathbb{R}\) is bijective and continuous.
* \(\Phi^{-1}:\mathbb{R}\to\mathring{D}\) and the process \(Y(t)\) are well-defined.
We remark that it is essential for any numerical scheme that utilises the Lamperti transform that \(\Phi\) is well-defined. Assumption 1 is satisfied, for example, for the SIS SDEs considered in [6, 30, 31], of which we also cover some instances in this paper. Moreover, that \(\Phi:\mathring{D}\to\mathbb{R}\) is bijective and continuous implies, in this setting, that \(\partial D\) is unattainable by \(X\), without reference to Feller's boundary classification (see Section 2.2). For a detailed and elaborate discussion on Feller's boundary classification see, for example, [15].
A key step in the construction of the proposed LS scheme is to apply a Lie-Trotter time splitting to the SDE in (3): we iteratively solve the nonlinear ODE
\[\frac{\mathrm{d}y(t)}{\mathrm{d}t}=H(y(t)), \tag{4}\]
using an exact solution formula, and the SDE for Brownian motion with drift
\[\mathrm{d}Z(t)=\mu\,\mathrm{d}t+\mathrm{d}B(t).\]
We make the following assumption to guarantee a unique and global solution to the ODE in equation (4).
**Assumption 2**.: _The map \(H:\mathbb{R}\to\mathbb{R}\) is continuously differentiable and there exists a constant \(C\) such that_
\[\sup_{x\in\mathbb{R}}|H(x)|+\sup_{x\in\mathbb{R}}|H^{\prime}(x)|\leqslant C.\]
We remark that Assumption 2 implies that \(H\) is Lipschitz continuous. Assumption 2, combined with Assumption 1, implies the already mentioned essential property that the boundary points \(\partial D\) cannot be reached by the solution \(X\) to the original SDE in (1) (see Section 2.2). The following assumption is natural for the type of SDEs that we cover here and is used to obtain \(L^{p}(\Omega)\)-convergence of order 1.
**Assumption 3**.: _The map \(\Phi^{-1}:\mathbb{R}\to\mathring{D}\) is Lipschitz continuous._
We remark that for the examples that we have in mind (see Section 4), Lipschitz continuity of \(\Phi^{-1}\) as assumed in Assumption 3 follows from Assumption 2. Assumption 3 is satisfied in, for example, [1, 6, 30, 31] where the authors obtain \(L^{2}(\Omega)\)-convergence of order 1. On the other hand, Assumption 3 is, for example, not satisfied for the CIR model and the authors of [16] do not recover \(L^{2}(\Omega)\)-convergence of order 1.
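As a concrete illustration of Assumptions 1-3, consider the SIS/Wright-Fisher coefficients from the introduction, \(f(x)=g(x)=x-x^{2}\) on \(\mathring{D}=(0,1)\). With the choice \(w_{0}=1/2\) (ours, for convenience), the transform and the transformed drift can be written in closed form; a minimal sketch of this worked example:

```python
import numpy as np

def Phi(x):
    """Lamperti transform, Eq. (2), for g(w) = w - w^2 and w0 = 1/2:
    Phi(x) = log(x / (1 - x)), mapping (0, 1) bijectively onto R."""
    return np.log(x / (1.0 - x))

def Phi_inv(y):
    """Inverse transform (the sigmoid); Lipschitz with constant 1/4,
    so Assumption 3 holds."""
    return 1.0 / (1.0 + np.exp(-y))

def Htilde(y):
    """Transformed drift: f/g = 1 and g'(x) = 1 - 2x give
    Htilde(y) = 1 - (1 - 2*Phi_inv(y))/2 = 1/2 + Phi_inv(y),
    which is bounded with bounded derivative (Assumption 2)."""
    return 0.5 + Phi_inv(y)
```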
As mentioned, the Lamperti-splitting scheme requires that the ODE in equation (4) admits an explicit solution formula that we can implement, and this is the content of the following assumption.
**Assumption 4**.: _The ODE in equation (4) admits an explicit and globally well-defined solution formula for any initial value \(y(0)\in\mathbb{R}\)._
After this preparation, we can define the class of SDEs that we consider. We consider time-homogeneous stochastic differential equations in the Ito sense
\[\begin{cases}\operatorname{d}X(t)=f(X(t))\operatorname{d}t+g(X(t))\operatorname{d}B(t),\ t\in(0,T],\\ X(0)=x_{0}\in\mathring{D},\end{cases} \tag{5}\]
where \(T>0\) and \(f\) and \(g\) are such that Assumptions 1, 2, 3, and 4 are satisfied. We say that a stochastic process \((X(t))_{t\in[0,T]}\) is a (strong) solution of (5) if the corresponding integral equation
\[X(t)=x_{0}+\int_{0}^{t}f(X(s))\operatorname{d}s+\int_{0}^{t}g(X(s))\operatorname {d}B(s)\]
is satisfied, almost surely, for every \(t\in[0,T]\), where the second integral is an Ito integral. Naturally, the above definition requires that the involved integrals are well-defined. We refer the interested reader to [27] for details on well-posedness of (strong) solutions of SDEs with Lipschitz continuous coefficients. The well-posedness of (strong) solutions to (5) follows from the well-posedness of (strong) solutions \(Y\) to (3) with Lipschitz coefficients, \(X=\Phi^{-1}(Y)\) (see equations (2) and (3)), and the above assumptions.
### Boundary classification
We dedicate this section to a short discussion about whether or not the process \(X\) can hit the boundary points \(\partial D\), where the Lamperti transform \(\Phi\) is not well-defined. The boundary points are unattainable by \(X\) if and only if the stopping time
\[\tau=\inf\{t\in[0,T]:\ X(t)\in\partial D\}\]
is infinite almost surely. In the considered setting, we have the following characterisation of when \(\tau=\infty\) almost surely:
\[\mathbb{P}(\tau=\infty)=1\Longleftrightarrow\mathbb{P}(Y\text{ blows up})=0.\]
By Assumption 2, the drift coefficient \(H\) of the transformed process \(Y\) is Lipschitz continuous. Thus, with probability \(1\), \(Y\) does not blow up, and we conclude that \(\tau=\infty\) almost surely. Alternatively, Feller's boundary classification provides a general theory on the boundary behaviour of solutions to Ito SDEs and characterises this in terms of the drift \(f\) and diffusion \(g\) coefficients. For a detailed exposition of Feller's boundary classification we refer the interested reader to [15].
## 3. The boundary-preserving integrator
We are now in a position to present and state the properties of the boundary-preserving integrator for the SDE in (5).
We partition the interval \([0,T]\) into \(M\in\mathbb{N}\) subintervals \([t_{m},t_{m+1}]\), each of length \(\tau=T/M\). This means that \(t_{m}=m\tau\), for \(m=0,\dots,M\).
We propose an explicit scheme based on applying the Lamperti transform to the considered SDE in equation (5), followed by a Lie-Trotter splitting strategy for the resulting transformed SDE in equation (3).
We first provide a detailed description of the construction of the scheme in Section 3.1. We then provide the main results of this paper in Section 3.2; that is, the boundary-preserving property of the scheme (Proposition 2) and the \(L^{p}(\Omega)\)-convergence of order \(1\) (Theorem 5). As a corollary, we also obtain almost sure pathwise convergence of order \(1\) (Corollary 7).
### Description of the integrator
In the following we describe how the Lamperti-splitting (LS) scheme is constructed. We consider the SDE obtained by the transformation \(Y(t)=\Phi(X(t))\), i.e. equation (3),
\[\begin{cases}\operatorname{d}\!Y(t)=\left(H(Y(t))+\mu\right) \operatorname{d}\!t+\operatorname{d}\!B(t),\\ Y(0)=\Phi(x_{0}),\end{cases} \tag{6}\]
using Ito's lemma, where \(H\) and \(\mu\) are defined in Section 2. We construct an approximation \(Y^{LS}\) of the solution \(Y\) to the SDE in equation (6) and define the approximation \(X^{LS}\) of the solution \(X\) to the original SDE in equation (5) as \(X^{LS}=\Phi^{-1}(Y^{LS})\). We construct the LS scheme on the time grid points \(0=t_{0}<\ldots<t_{M}=T\), denoted by \(Y^{LS}_{m}\) for \(m=0,\ldots,M\). We let \(Y^{LS}_{0}=\Phi(x_{0})\) and we define \(Y^{LS}_{m}\), for \(m=1,\ldots,M\), recursively as follows: Suppose \(Y^{LS}_{m}\) at time \(t_{m}=m\tau\) is given. First we let \(y_{m}\) solve the nonlinear ODE
\[\begin{cases}\frac{\operatorname{d}\!y_{m}(t)}{\operatorname{d}\!t}=H(y_{m}( t)),\\ y_{m}(t_{m})=Y^{LS}_{m},\end{cases} \tag{7}\]
on the interval \([t_{m},t_{m+1}]\) with initial value \(Y^{LS}_{m}\). Second we let \(Z_{m}\) solve the SDE for Brownian motion with drift \(\mu\in\mathbb{R}\)
\[\begin{cases}\operatorname{d}\!Z_{m}(t)=\mu\operatorname{d}\!t+ \operatorname{d}\!B(t),\\ Z_{m}(t_{m})=y_{m}(t_{m+1}),\end{cases} \tag{8}\]
on the interval \([t_{m},t_{m+1}]\) with initial value \(y_{m}(t_{m+1})\). We define \(Y^{LS}_{m+1}\) at the next time grid point \(t_{m+1}=(m+1)\tau\) as
\[Y^{LS}_{m+1}=Z_{m}(t_{m+1})\equiv y_{m}(t_{m+1})+\mu\tau+B(t_{m+1})-B(t_{m}). \tag{9}\]
We then define the approximation \(X^{LS}_{m+1}=\Phi^{-1}(Y^{LS}_{m+1})\) of the solution of equation (5) at the grid point \(t_{m+1}\). Observe that \(y_{m}\) is well-defined and explicitly given by a solution formula by Assumption 4.
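For concreteness, one step of the scheme could be sketched as follows (a minimal sketch; the helper `ode_flow` and all names are our own illustrative choices, the closed-form flow of the ODE in (7) being available by Assumption 4):

```python
def ls_step(Y_m, tau, dB, ode_flow, mu):
    """One step of the Lamperti-splitting scheme, Eqs. (7)-(9).

    ode_flow(y, t) is assumed to return the exact solution at time t of
    dy/dt = H(y) started from y; dB is the increment B(t_{m+1}) - B(t_m)."""
    y_end = ode_flow(Y_m, tau)      # deterministic substep, Eq. (7)
    return y_end + mu * tau + dB    # Brownian-with-drift substep, Eqs. (8)-(9)
```

Sampling \(dB\sim\mathcal{N}(0,\tau)\) at every step and applying \(\Phi^{-1}\) to the iterates then recovers the approximation \(X^{LS}\).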
**Remark 1**.: _We could also consider the opposite order of the splitting; that is, first solve the SDE in equation (8) and then solve the ODE in equation (7). If we, in addition, work in the setting of [6] instead of our assumptions in Section 2, then we would directly obtain \(L^{2}(\Omega)\)-convergence of order \(1\) by applying Theorem \(3.1\) in [25]. However, Theorem \(3.1\) in [25] does not yield the stronger result of an \(L^{p}(\Omega)\)-convergence rate, for every \(p\in\mathbb{N}\), with the supremum inside the expected value; see Theorem 5 below._
For the proofs of the main results in Theorem 5 and Proposition 6, the following integral expression for \(Y^{LS}_{m+1}\), for \(m=0,\ldots,M-1\), will be used instead of the differential form in equation (9)
\[Y^{LS}_{m+1}=Y^{LS}_{m}+\int_{t_{m}}^{t_{m+1}}H(y_{m}(s))\operatorname{d}\!s+ \mu\tau+B(t_{m+1})-B(t_{m}). \tag{10}\]
The definition of \(X^{LS}\) as \(\Phi^{-1}(Y^{LS})\) combined with Assumptions 1, 2, 3, and 4 guarantees that the scheme is boundary-preserving. This is the content of the next proposition.
**Proposition 2**.: _Let \(M\in\mathbb{N}\), \(T\geq 0\), \(\tau=T/M\) and let \(x_{0}\in\mathring{D}\). Suppose that Assumptions 1, 2, 3, and 4 are satisfied. Let \(Y^{LS}\) be given by the splitting scheme in equation (9)
_and let \(X^{LS}=\Phi^{-1}(Y^{LS})\) be the numerical approximation of the original SDE in (5). Then_
\[X^{LS}_{m}\in\mathring{D},\]
_almost surely, for every \(m\in\{0,\ldots,M\}\)._
Proof.: Recall that \(\Phi^{-1}:\mathbb{R}\to\mathring{D}\) is bijective and continuous. Thus, if \(Y^{LS}_{m}\), for \(m=0,\ldots,M\), does not blow up, with probability 1, then the statement holds. We argue by induction: \(Y^{LS}_{0}=\Phi(x_{0})\) is finite by construction. Suppose that \(Y^{LS}_{m}\) is finite with probability 1 for some \(m=0,\ldots,M-1\). Then \(y_{m}(t_{m+1})\) is finite with probability 1 as it is the solution of a non-explosive ODE in (7) at time \(t_{m+1}\). Since \(Z_{m}(t)\), for \(t\in[t_{m},t_{m+1}]\), is a Brownian motion with drift \(\mu\) and unit diffusion starting at \(y_{m}(t_{m+1})\), we have that \(Y^{LS}_{m+1}=Z_{m}(t_{m+1})\) is finite with probability 1.
### Convergence result
In the following we prove that the proposed LS scheme has \(L^{p}(\Omega)\)-convergence of order 1 for the considered SDE in equation (5).
We would like to make a few remarks and comparisons to other related papers before we state and prove the convergence result.
**Remark 3**.: _The positive moments of the exact solution and of the LS approximation are immediately bounded as both are confined to the bounded domain \(\mathring{D}\)._
**Remark 4**.: _In contrast to the papers [6, 30, 31, 32], we do not need to bound the inverse moments of the exact solution and of the LS approximation to obtain our convergence result._
**Theorem 5**.: _Let \(M\in\mathbb{N}\), \(T\geq 0\), \(\tau=T/M\) and let \(x_{0}\in\mathring{D}\). Suppose Assumptions 1, 2, 3, and 4 are satisfied. Let \(X^{LS}=\Phi^{-1}(Y^{LS})\), where \(Y^{LS}\) is defined by the splitting scheme in equation (9), and let \(X\) be the exact solution of the considered SDE in equation (5). Then, for every \(p\in\mathbb{N}\), it holds_
\[\left(\mathbb{E}\left[\sup_{m=0,\ldots,M}|X^{LS}_{m}-X(t_{m})|^{p}\right] \right)^{\frac{1}{p}}\leq C\tau,\]
_where the constant \(C\) does not depend on \(\tau\)._
By Assumption 3, Theorem 5 follows from the corresponding error estimate for \(Y^{LS}-Y\), since
\[|X^{LS}_{m}-X(t_{m})|^{p}=|\Phi^{-1}(Y^{LS}_{m})-\Phi^{-1}(Y(t_{m}))|^{p}\leq C |Y^{LS}_{m}-Y(t_{m})|^{p},\]
where \(C\) depends on the Lipschitz constant of \(\Phi^{-1}\) and on \(p\). The \(L^{p}(\Omega)\)-convergence of \(Y^{LS}-Y\) is the content of the following proposition.
**Proposition 6**.: _Let \(M\in\mathbb{N}\), \(T\geq 0\), \(\tau=T/M\) and let \(x_{0}\in\mathring{D}\). Suppose Assumptions 1, 2, 3, and 4 are satisfied. Let \(Y^{LS}\) be given by the splitting scheme in equation (9). Then, for every \(p\in\mathbb{N}\), it holds_
\[\left(\mathbb{E}\left[\sup_{m=0,\ldots,M}|Y^{LS}_{m}-Y(t_{m})|^{p}\right] \right)^{\frac{1}{p}}\leq C\tau,\]
_where the constant \(C\) does not depend on \(\tau\)._
Proof.: As \(Y^{LS}_{0}=Y(0)\), it suffices to consider the case \(m\in\{1,\ldots,M\}\). Recall the following integral expression for \(Y^{LS}_{m}\) stated in equation (10) in Section 3.1
\[Y^{LS}_{m}=Y^{LS}_{m-1}+\int_{t_{m-1}}^{t_{m}}H(y_{m-1}(s))\,\mathrm{d}s +\mu\tau+B(t_{m})-B(t_{m-1})\\ =Y^{LS}_{m-1}+\int_{t_{m-1}}^{t_{m}}\left(H(y_{m-1}(s))+\mu\right) \mathrm{d}s+B(t_{m})-B(t_{m-1}).\]
By recursively applying the above formula, we obtain that
\[Y^{LS}_{m}=Y(0)+\sum_{k=0}^{m-1}\int_{t_{k}}^{t_{k+1}}\left(H(y_{k}(s))+\mu \right)\mathrm{d}s+B(t_{m}). \tag{11}\]
Recall that the Euler-Maruyama (EM) scheme, denoted by \(Y^{EM}\) below, for the SDE in (6) satisfies
\[\mathbb{E}\left[\sup_{m=0,\ldots,M}|Y^{EM}_{m}-Y(t_{m})|^{p}\right]\leqslant C \tau^{p} \tag{12}\]
for every \(p\in\mathbb{N}\). See, for example, Theorem 10.6.3 and the subsequent discussion in Chapter 10 of [20] for the above estimate for the Milstein scheme, which coincides with the EM scheme for additive noise. We remark that in order to apply this convergence theorem for the Milstein scheme, we need at most linear growth of \(H\), \(H^{\prime}\), and \(HH^{\prime}\), which follows directly from Assumption 2. Thus, from the inequality
\[|Y^{LS}_{m}-Y(t_{m})|^{p}\leqslant C\left(|Y^{LS}_{m}-Y^{EM}_{m}|^{p}+|Y^{EM}_ {m}-Y(t_{m})|^{p}\right)\]
and the estimate in equation (12), it suffices to show that
\[\mathbb{E}\left[\sup_{m=0,\ldots,M}|Y^{LS}_{m}-Y^{EM}_{m}|^{p}\right]\leqslant C \tau^{p}\]
to obtain the desired convergence rate. To this end, we extend the EM scheme using linear interpolation to be defined for all \(t\in[0,T]\) as
\[Y^{EM}(t)=Y^{EM}_{k}+(t-t_{k})\frac{Y^{EM}_{k+1}-Y^{EM}_{k}}{t_{k+1}-t_{k}}\]
for \(t\in[t_{k},t_{k+1}]\) and \(k=0,\ldots,M-1\). The following integral equation for \(Y^{EM}(t)\) applied to (6) then holds
\[Y^{EM}(t)=Y(0)+\int_{0}^{t}\left(H(Y^{EM}(\ell(s)))+\mu\right)\mathrm{d}s+B(t), \tag{13}\]
where \(\ell(s)=t_{k}\) for \(s\in[t_{k},t_{k+1})\). We insert \(t=t_{m}\) in equation (13) and re-write it as
\[Y^{EM}_{m}=Y(0)+\sum_{k=0}^{m-1}\int_{t_{k}}^{t_{k+1}}\left(H(Y^{EM}(\ell(s))) +\mu\right)\mathrm{d}s+B(t_{m}).\]
Thus, the difference \(Y^{LS}_{m}-Y^{EM}_{m}\) can be expressed as
\[Y^{LS}_{m}-Y^{EM}_{m}=\sum_{k=0}^{m-1}\int_{t_{k}}^{t_{k+1}}H(y_{k}(s))-H(Y^{EM }(\ell(s)))\,\mathrm{d}s.\]
We can therefore estimate
\[|Y_{m}^{LS}-Y_{m}^{EM}|^{p}\leqslant m^{p-1}\tau^{p-1}\sum_{k=0}^{m-1}\int_{t_{k}}^{t_{k+1}}|H(y_{k}(s))-H(Y^{EM}(\ell(s)))|^{p}\,\mathrm{d}s\\ \leqslant C\sum_{k=0}^{m-1}\int_{t_{k}}^{t_{k+1}}|H(y_{k}(s))-H(Y^{EM}(\ell(s)))|^{p}\,\mathrm{d}s, \tag{14}\]
where we used the inequality \((\sum_{k=1}^{m}a_{k})^{p}\leqslant m^{p-1}\sum_{k=1}^{m}a_{k}^{p}\) (the analogue of \((a+b)^{p}\leqslant 2^{p-1}(a^{p}+b^{p})\) for \(m\) terms), Jensen's inequality for integrals, and that \(m\tau=t_{m}\leqslant T\). With the goal of applying Gronwall's inequality, we first split
\[|H(y_{k}(s))-H(Y^{EM}(\ell(s)))|^{p}\leqslant C\left(|H(y_{k}(t_{k}))-H(Y^{EM} (\ell(s)))|^{p}+|H(y_{k}(s))-H(y_{k}(t_{k}))|^{p}\right),\]
for \(s\in[t_{k},t_{k+1})\), and use that Assumption 2 implies that \(H\) is Lipschitz continuous to obtain
\[|H(y_{k}(s))-H(Y^{EM}(\ell(s)))|^{p}\leqslant C\left(|y_{k}(t_{k})-Y^{EM}( \ell(s))|^{p}+|y_{k}(s)-y_{k}(t_{k})|^{p}\right).\]
By recalling that \(y_{k}(s)\), for \(s\in[t_{k},t_{k+1})\), solves the ODE with right hand side \(H\) and (random) initial condition \(y_{k}(t_{k})=Y_{k}^{LS}\) and that \(Y^{EM}(t_{k})=Y_{k}^{EM}\), we have
\[|y_{k}(t_{k})-Y^{EM}(\ell(s))|^{p}=|Y_{k}^{LS}-Y_{k}^{EM}|^{p}, \tag{15}\]
for \(s\in[t_{k},t_{k+1})\), and
\[|y_{k}(s)-y_{k}(t_{k})|^{p}=|\int_{t_{k}}^{s}H(y_{k}(r))\,\mathrm{d}r|^{p} \leqslant C\tau^{p}, \tag{16}\]
for \(s\in[t_{k},t_{k+1})\), by Jensen's inequality for integrals and since \(H\) is uniformly bounded. Inserting equations (15) and (16) into the integrals in equation (14) gives us that
\[\int_{t_{k}}^{t_{k+1}}|H(y_{k}(s))-H(Y^{EM}(\ell(s)))|^{p}\,\mathrm{d}s \leqslant C\left(\tau|Y_{k}^{LS}-Y_{k}^{EM}|^{p}+\tau^{p+1}\right).\]
The total error \(Y_{m}^{LS}-Y_{m}^{EM}\) can now be estimated by
\[|Y_{m}^{LS}-Y_{m}^{EM}|^{p}\leqslant C\tau^{p}+C\tau\sum_{k=0}^{m-1}|Y_{k}^{ LS}-Y_{k}^{EM}|^{p}.\]
Applying Gronwall's inequality and then taking the supremum over \(m=0,\ldots,M\) gives us
\[\sup_{m=0,\ldots,M}|Y_{m}^{LS}-Y_{m}^{EM}|^{p}\leqslant C\tau^{p}.\]
Taking the expected value yields the desired estimate
\[\mathbb{E}\left[\sup_{m=0,\ldots,M}|Y_{m}^{LS}-Y_{m}^{EM}|^{p}\right]\leqslant C\tau^{p}.\]
We can thus conclude that
\[\mathbb{E}\left[\sup_{m=0,\ldots,M}|Y_{m}^{LS}-Y(t_{m})|^{p}\right]\leqslant C\tau^{p},\]
which is the desired estimate.
By applying Lemma 2.1 in [19], we obtain, as a corollary to Theorem 5, almost sure pathwise convergence with rate \(1-\epsilon\) for every \(\epsilon>0\).
**Corollary 7**.: _Under the same assumptions and notation as in Theorem 5, there exists for every \(\epsilon>0\) a random variable \(\eta_{\epsilon}\), with \(\mathbb{E}\left[|\eta_{\epsilon}|^{p}\right]<\infty\) for every \(p\in\mathbb{N}\), such that_
\[\sup_{m=0,\ldots,M}|X_{m}^{LS}-X(t_{m})|\leq\eta_{\epsilon}\tau^{1-\epsilon}\]
_almost surely._
## 4. Numerical experiments
In this section we provide numerical experiments to support and verify the theoretical results in Section 3. We introduce a noise scale parameter \(\lambda>0\) in the considered SDE in equation (5) to avoid the need to run the numerical experiments for large time horizon \(T>0\). Hence, we consider the SDE
\[\begin{cases}\operatorname{d}X(t)=f(X(t))\operatorname{d}t+\lambda g(X(t))\operatorname{d}B(t),\ t\in(0,T],\\ X(0)=x_{0}\in\mathring{D}.\end{cases} \tag{17}\]
Recall that \(M\in\mathbb{N}\) and \(\tau=T/M\) denote the number of subintervals used to partition \([0,T]\) and the time step size, respectively, of a numerical scheme. We denote by \(\Delta B_{m}=B(t_{m+1})-B(t_{m})\) the increment of the Brownian motion over the interval \([t_{m},t_{m+1}]=[m\tau,(m+1)\tau]\). We compare boundary-preservation of the proposed Lamperti-splitting scheme, denoted LS below, as defined by \(X_{m+1}^{LS}=\Phi^{-1}(Y_{m+1}^{LS})\) and equation (9) to boundary-preservation of the following integrators for SDEs:
* the Euler-Maruyama scheme (denoted EM below), see for instance [20] \[X_{m+1}^{\text{EM}}=X_{m}^{\text{EM}}+f(X_{m}^{\text{EM}})\tau+\lambda g(X_{m} ^{\text{EM}})\Delta B_{m},\]
* the semi-implicit Euler-Maruyama scheme (denoted SEM below), see for instance [20] \[X_{m+1}^{\text{SEM}}=X_{m}^{\text{SEM}}+f(X_{m+1}^{\text{SEM}})\tau+\lambda g (X_{m}^{\text{SEM}})\Delta B_{m},\]
* the tamed Euler scheme (denoted TE below), see for instance [14, 28] \[X_{m+1}^{\text{TE}}=X_{m}^{\text{TE}}+f_{M}(X_{m}^{\text{TE}})\tau+g_{M}(X_{m }^{\text{TE}})\Delta B_{m},\] where \[f_{M}(x)=\frac{f(x)}{1+M^{-1/2}|f(x)|+M^{-1/2}|\lambda g(x)|^{2}}\] \[g_{M}(x)=\frac{\lambda g(x)}{1+M^{-1/2}|f(x)|+M^{-1/2}|\lambda g(x)|^{2}}.\]
We consider values of \(\lambda>0\) which illustrate that EM, SEM, and TE are not boundary-preserving.
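For concreteness, the update rules above could be implemented as follows (a minimal sketch; using `scipy.optimize.fsolve` to resolve the implicit SEM step is our own illustrative choice):

```python
from scipy.optimize import fsolve

def em_step(x, f, g, lam, tau, dB):
    # explicit Euler-Maruyama update for the SDE in (17)
    return x + f(x) * tau + lam * g(x) * dB

def sem_step(x, f, g, lam, tau, dB):
    # drift-implicit update: solve x_new = x + f(x_new)*tau + lam*g(x)*dB
    rhs = x + lam * g(x) * dB
    return fsolve(lambda y: y - f(y) * tau - rhs, x0=x)[0]

def te_step(x, f, g, lam, tau, dB, M):
    # tamed Euler update; the taming factor depends on the number of steps M
    denom = 1.0 + M ** (-0.5) * abs(f(x)) + M ** (-0.5) * (lam * g(x)) ** 2
    return x + f(x) * tau / denom + lam * g(x) * dB / denom
```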
We provide numerical results for three choices of the drift and diffusion coefficients \(f\) and \(g\):
* the Susceptible-Infected-Susceptible (SIS) SDE with \(f(x)=x-x^{2}\) and \(g(x)=x-x^{2}\),
* the Nagumo SDE with \(f(x)=-x(1-x)(1-x)\) and \(g(x)=-x+x^{2}\),
* an Allen-Cahn type SDE with \(f(x)=x-x^{3}\) and \(g(x)=1-x^{2}\).
The SIS SDE [6, 8, 10, 30, 31, 32] is a model for the spread of epidemics and is also used in gene frequency modelling (for example Wright-Fisher diffusion). We refer the interested reader to, for example, [8, 10] for detailed descriptions of such models. The Nagumo SDE [21, 22] and the Allen-Cahn type SDE [2, 5, 9, 21] are motivated by a finite difference discretisation of the corresponding stochastic partial differential equations (SPDEs). The stochastic Nagumo
equation is a stochastic model for the voltage in the axon of a neuron. The stochastic Allen-Cahn equation is a stochastic model for the time evolution of the interface between two phases of a material. We refer the interested reader to [21] for details on these SPDEs.
For the above three SDEs, we provide numerical experiments illustrating the boundary-preservation as well as the \(L^{2}(\Omega)\)-convergence of order 1 of the LS scheme as derived in Section 3. We present boundary-preservation in tables displaying the proportion out of 100 simulated sample paths that contain only values in the domain \(\mathring{D}\) and we present, in log-log plots, the \(L^{2}(\Omega)\)-errors
\[\left(\mathbb{E}\left[\sup_{m=1,\ldots,M}|X_{m}^{LS}-X_{m}^{ref}|^{2}\right] \right)^{1/2}\]
over the time grid points \(\{t_{m}:\ m=1,\ldots,M\}\). The reference solution \(X^{ref}\) is computed using the LS scheme with time step size \(\tau^{ref}=10^{-7}\). We have also computed the \(L^{2}(\Omega)\)-errors for the LS scheme with the reference solution computed using the Lamperti EM scheme (see, e.g., [30, 32]) and the Lamperti SEM scheme (see, e.g., [1, 26]), respectively, and obtained similar results. For approximation of the expectations for the \(L^{2}(\Omega)\)-convergence, we use 300 simulated samples. We have checked that 300 simulated samples is sufficient for the Monte Carlo error to be negligible.
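As a rough sketch, the reported error could be estimated as follows (array shapes and the helper name are our own illustrative choices; the reference paths are assumed to have been subsampled to the coarse grid):

```python
import numpy as np

def l2_sup_error(paths_num, paths_ref):
    """Monte Carlo estimate of (E[sup_m |X^LS_m - X^ref_m|^2])^(1/2).

    paths_num, paths_ref: arrays of shape (N, M + 1) holding N sample paths
    of the scheme and of the reference solution on the same time grid."""
    sup_err = np.max(np.abs(paths_num - paths_ref), axis=1)
    return float(np.sqrt(np.mean(sup_err ** 2)))
```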
For each numerical experiment, we use \(T=1\) and for each sample we use an initial value \(x_{0}\) that is uniformly distributed on \(\mathring{D}\).
For ease of presentation, lengthy and complicated formulas are collected in Appendix A.
**Example 1** (SIS SDE).: _Consider the SIS epidemic model given by_
\[\mathrm{d}X(t)=X(t)\left(1-X(t)\right)\mathrm{d}t+\lambda X(t)(1-X(t))\, \mathrm{d}B(t)\]
_with initial value \(X(0)=x_{0}\in(0,1)\) and \(T=1\); that is, \(f(x)=x(1-x)\) and \(g(x)=\lambda x(1-x)\) in the considered SDE in equation (5) are both quadratic. See Section A.1 for details about explicit formulas used for the implementation of the LS scheme for the SIS SDE. The exact solution \(X\) takes values in \((0,1)\), since the inverse Lamperti transform_
\[\Phi^{-1}(x)=\frac{w_{0}e^{x}}{w_{0}e^{x}+(1-w_{0})}\]
_takes values in \((0,1)\), for any \(w_{0}\in(0,1)\). See Section 2.2 and Section A.1 for more details. In Table 1, we observe that the LS scheme preserves the domain \((0,1)\) of the SIS SDE while the integrators EM, SEM, and TE do not. As expected, the number of samples that preserve the domain \((0,1)\) for EM, SEM, and TE, respectively, decreases as \(\lambda>0\) increases. In Table 1, we used \(\tau=10^{-3}\), \(T=1\), \(N=100\) samples and \(x_{0}\) uniformly distributed on \((0,1)\) for each sample._
_In Figure 1 we present the \(L^{2}(\Omega)\)-errors for the same values of \(\lambda\) as used in Table 1. The \(L^{2}(\Omega)\)-error rates in Figure 1 agree with the rates obtained in Theorem 5. We use \(T=1\), \(N=300\) samples to approximate the expected value and \(x_{0}\) is uniformly distributed on \((0,1)\) for each sample in Figure 1._
**Example 2** (Nagumo SDE).: _Consider the Nagumo SDE given by_
\[\mathrm{d}X(t)=-X(t)(1-X(t))(1-X(t))\,\mathrm{d}t-\lambda X(t)(1-X(t))\, \mathrm{d}B(t)\]
_with initial value \(X(0)=x_{0}\in(0,1)\) and \(T=1\); that is, \(f(x)=-x(1-x)(1-x)\) is cubic and \(g(x)=-\lambda x(1-x)\) is quadratic in the considered SDE in equation (5). See Section A.2 for
details about explicit formulas used in the implementation of the LS scheme for the Nagumo SDE. As is derived in Section A.2, the inverse Lamperti transform is given by_
\[\Phi^{-1}(x)=\frac{w_{0}}{(1-w_{0})e^{x}+w_{0}}\]
_and takes values in \((0,1)\), for any \(w_{0}\in(0,1)\). Hence, by Section 2.2, the exact solution \(X\) takes values in \((0,1)\). Similarly to the SIS SDE case, Table 2 shows that the integrators EM, SEM, and TE do not preserve the domain \((0,1)\) of the Nagumo SDE and the number of samples that preserve the domain \((0,1)\) decreases as \(\lambda>0\) increases. Moreover, Table 2 also confirms that the LS scheme preserves the domain \((0,1)\) of the Nagumo SDE. In Table 2, we used \(\tau=10^{-3}\), \(T=1\), \(N=100\) samples and \(x_{0}\) uniformly distributed on \((0,1)\) for each sample._
_In Figure 2 we present the \(L^{2}(\Omega)\)-errors for the same values of \(\lambda\) as used in Table 2. The \(L^{2}(\Omega)\)-error rates in Figure 2 agree with the rates obtained in Theorem 5. We use \(T=1\), \(N=300\) samples to estimate the expected value and \(x_{0}\) uniformly distributed on \((0,1)\) for each sample in Figure 2._
| \(\lambda\) | LS | EM | SEM | TE |
| --- | --- | --- | --- | --- |
| 6 | 100/100 | 100/100 | 100/100 | 100/100 |
| 7 | 100/100 | 94/100 | 89/100 | 92/100 |
| 8 | 100/100 | 71/100 | 63/100 | 70/100 |

Table 1. Proportion of samples containing only values in \((0,1)\) out of 100 simulated sample paths for the Lamperti-splitting scheme (LS), the Euler–Maruyama scheme (EM), the semi-implicit Euler–Maruyama scheme (SEM), and the tamed Euler scheme (TE) for the SIS SDE for different choices of \(\lambda>0\). The parameters used are: \(T=1\), \(\tau=10^{-3}\) and with \(x_{0}\) uniformly distributed on \((0,1)\) for each sample.
Figure 1. \(L^{2}(\Omega)\)-errors on the interval \([0,1]\) of the Lamperti-splitting scheme (LS) for the SIS SDE for different choices of \(\lambda>0\) and reference lines with slopes \(1/2\) and \(1\). Averaged over 300 samples.
**Example 3** (Allen-Cahn SDE).: _Consider the Allen-Cahn type SDE given by_
\[\mathrm{d}X(t)=\left(X(t)-X(t)^{3}\right)\mathrm{d}t+\lambda(1-X(t)^{2})\, \mathrm{d}B(t)\]
_with initial value \(X(0)=x_{0}\in(-1,1)\) and \(T=1\); that is, \(f(x)=x-x^{3}\) is cubic and \(g(x)=\lambda(1-x^{2})\) is quadratic in the considered SDE in equation (5). See Section A.3 for details about explicit formulas used in the implementation of the LS scheme for the Allen-Cahn SDE. Since the inverse Lamperti transform is given by_
\[\Phi^{-1}(x)=\frac{e^{2x}-1}{e^{2x}+1},\]
_for the case \(w_{0}=0\) (see Section A.3 for details), Section 2.2 implies that the exact solution \(X\) takes values in \((-1,1)\). Similarly to the two previous examples, Table 3 shows that the integrators EM, SEM, and TE do not preserve the domain \((-1,1)\) of the Allen-Cahn SDE and the number of samples that preserve the domain \((-1,1)\) decreases as \(\lambda>0\) increases. Table 3 also confirms that the LS scheme preserves the domain \((-1,1)\) of the Allen-Cahn type SDE.
| \(\lambda\) | LS | EM | SEM | TE |
| --- | --- | --- | --- | --- |
| 6 | 100/100 | 100/100 | 100/100 | 100/100 |
| 7 | 100/100 | 95/100 | 97/100 | 95/100 |
| 8 | 100/100 | 75/100 | 77/100 | 73/100 |

Table 2. Proportion of samples containing only values in \((0,1)\) out of 100 simulated sample paths for the Lamperti-splitting scheme (LS), the Euler–Maruyama scheme (EM), the semi-implicit Euler–Maruyama scheme (SEM), and the tamed Euler scheme (TE) for the Nagumo SDE for different choices of \(\lambda>0\). The parameters used are: \(T=1\), \(\tau=10^{-3}\) and with \(x_{0}\) uniformly distributed on \((0,1)\) for each sample.
Figure 2. \(L^{2}(\Omega)\)-errors on the interval \([0,1]\) of the Lamperti-splitting scheme (LS) for the Nagumo SDE for different choices of \(\lambda>0\) and reference lines with slopes \(1/2\) and \(1\). Averaged over 300 samples.
In Table 3, we used \(\tau=10^{-3}\), \(T=1\), \(N=100\) samples and \(x_{0}\) uniformly distributed on \((-1,1)\) for each sample._
_In Figure 3 we present the \(L^{2}(\Omega)\)-errors for the same values of \(\lambda\) as used in Table 3. The \(L^{2}(\Omega)\)-error rates in Figure 3 agree with the rates obtained in Theorem 5. We use \(T=1\), \(N=300\) samples to estimate the expected value and \(x_{0}\) uniformly distributed on \((-1,1)\) for each sample in Figure 3._
## Appendix A Additional formulas
Here we provide a detailed description of the LS scheme for the three considered examples in Section 4. We present explicit formulas for both \(y(t)\) in equation (7) and for \(X^{LS}=\Phi^{-1}(Y^{LS})\). We denote by \(\log\) the natural logarithm.
| \(\lambda\) | LS | EM | SEM | TE |
| --- | --- | --- | --- | --- |
| 3 | 100/100 | 100/100 | 100/100 | 100/100 |
| 3.3 | 100/100 | 97/100 | 97/100 | 95/100 |
| 3.6 | 100/100 | 74/100 | 89/100 | 82/100 |

Table 3. Proportion of samples containing only values in \((-1,1)\) out of 100 simulated sample paths for the Lamperti-splitting scheme (LS), the Euler–Maruyama scheme (EM), the semi-implicit Euler–Maruyama scheme (SEM), and the tamed Euler scheme (TE) for the Allen–Cahn type SDE for different choices of \(\lambda>0\), \(T=1\), \(\tau=10^{-3}\) and with \(x_{0}\) uniformly distributed on \((-1,1)\) for each sample.
Figure 3. \(L^{2}(\Omega)\)-errors on the interval \([0,1]\) of the Lamperti-splitting scheme (LS) for the Allen–Cahn type SDE for different choices of \(\lambda>0\) and reference lines with slopes \(1/2\) and \(1\). Averaged over 300 samples.
### SIS SDE
Recall that the SIS epidemic model is given by
\[\mathrm{d}X(t)=X(t)\left(1-X(t)\right)\mathrm{d}t+\lambda X(t)(1-X(t))\,\mathrm{d}B(t)\]
with initial value \(X(0)=x_{0}\in(0,1)\) and \(T=1\). The boundary points \(\{0,1\}\) are stationary points: if \(x_{0}\in\{0,1\}\), then \(X(t)=x_{0}\) for all times \(t>0\). Let \(w_{0}\in(0,1)\). Direct computations give
\[\frac{f(x)}{g(x)}-\frac{\lambda^{2}}{2}g^{\prime}(x)=\lambda^{2}x+(1-\lambda^ {2}/2),\]
\[\Phi(x)=\log(x)-\log(1-x)-\log(w_{0})+\log(1-w_{0}),\]
and
\[\Phi^{-1}(x)=\frac{w_{0}e^{x}}{w_{0}e^{x}+(1-w_{0})}.\]
If we let \(H(x)=\lambda^{2}\Phi^{-1}(x)\) and \(\mu=1-\lambda^{2}/2\), then the assumptions in Section 2 are fulfilled: Assumptions 1, 2 and 3 are easily checked and the choice of \(H\) implies that the ODE
\[\begin{cases}\frac{\mathrm{d}y(t)}{\mathrm{d}t}=H(y(t))=\lambda^{2}\Phi^{-1} (y(t)),\\ y(t_{m})=\Phi(x_{m}),\end{cases}\]
for \(t\in[t_{m},t_{m+1}]\), where \(x_{m}=X^{LS}(t_{m})\), has the explicit solution formula given by
\[y(t)=W\left(\frac{(1-x_{m})e^{\frac{1-x_{m}}{x_{m}}}}{x_{m}e^{\lambda^{2}(t-t_ {m})}}\right)+\lambda^{2}(t-t_{m})+\log\left(\frac{x_{m}}{1-x_{m}}\frac{1-w_{0 }}{w_{0}}\right)-\frac{1-x_{m}}{x_{m}}, \tag{18}\]
where \(W\) is the Lambert W function [7]. By inserting the explicit formula for \(y(t)\) in equation (18) into the defining formula for \(Y^{LS}(t)\) in equation (9), we obtain, after simplifications, that
\[X^{LS}(t_{m+1})=\Phi^{-1}(Y^{LS}(t_{m+1}))\\ =\frac{e^{(1-(\lambda^{2})/2)(t_{m+1}-t_{m})}e^{\lambda(B(t_{m+1} )-B(t_{m}))}}{e^{(1-(\lambda^{2})/2)(t_{m+1}-t_{m})}e^{\lambda(B(t_{m+1})-B(t_{ m}))}+W\left(\frac{(1-x_{m})e^{\frac{1-x_{m}}{x_{m}}}}{x_{m}e^{\lambda^{2}(t_{m+1} -t_{m})}}\right)}.\]
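A minimal sketch of this update, using `scipy.special.lambertw` for the Lambert W function (the helper name is our own):

```python
import numpy as np
from scipy.special import lambertw

def sis_ls_step(x_m, lam, tau, dB):
    """One LS step for the SIS SDE, combining Eq. (18) with Eq. (9)."""
    A = (1.0 - x_m) * np.exp((1.0 - x_m) / x_m) / (x_m * np.exp(lam**2 * tau))
    w = lambertw(A).real  # principal branch; A > 0 for x_m in (0, 1)
    e = np.exp((1.0 - lam**2 / 2.0) * tau + lam * dB)
    return e / (e + w)
```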
### Nagumo SDE
Recall that the Nagumo SDE is given by
\[\mathrm{d}X(t)=-X(t)(1-X(t))(1-X(t))\,\mathrm{d}t-\lambda X(t)(1-X(t))\, \mathrm{d}B(t)\]
with initial value \(X(0)=x_{0}\in(0,1)\) and \(T=1\). The boundary points \(\{0,1\}\) are stationary points: if \(x_{0}\in\{0,1\}\), then \(X(t)=x_{0}\) for all times \(t>0\). Let \(w_{0}\in(0,1)\). Similarly to the SIS SDE in Section A.1, direct computations give us
\[\frac{f(x)}{g(x)}-\frac{\lambda^{2}}{2}g^{\prime}(x)=\left(1+\frac{\lambda^{2 }}{2}\right)-\left(1+\lambda^{2}\right)x,\]
\[\Phi(x)=\log(1-x)-\log(x)-\log(1-w_{0})+\log(w_{0})\]
and
\[\Phi^{-1}(x)=\frac{w_{0}}{(1-w_{0})e^{x}+w_{0}}.\]
Now let \(H(x)=-(1+\lambda^{2})\Phi^{-1}(x)\) and \(\mu=(1+\frac{\lambda^{2}}{2})\). One checks that the assumptions in Section 2 are fulfilled: Assumptions 1, 2 and 3 are easily verified and the ODE
\[\begin{cases}\frac{\mathrm{d}y(t)}{\mathrm{d}t}=-(1+\lambda^{2})\Phi^{-1}(y(t)),\\ y(t_{m})=\Phi(x_{m}),\end{cases}\]
for \(t\in[t_{m},t_{m+1}]\), where \(x_{m}=X^{LS}(t_{m})\), has the explicit solution formula given by
\[y(t)=-W\left(\frac{(1-x_{m})e^{\frac{1-x_{m}}{x_{m}}}}{x_{m}e^{(1+\lambda^{2}) (t-t_{m})}}\right)-(1+\lambda^{2})(t-t_{m})+\log\left(\frac{1-x_{m}}{x_{m}} \frac{w_{0}}{1-w_{0}}\right)+\frac{1-x_{m}}{x_{m}}, \tag{19}\]
where \(W\) is the Lambert W function [7]. We insert the formula in equation (19) into equation (9) to obtain, after simplifications, that
\[X^{LS}(t_{m+1})=\Phi^{-1}(Y^{LS}(t_{m+1}))\\ =\left(W\left(\frac{(1-x_{m})e^{\frac{1-x_{m}}{x_{m}}}}{x_{m}e^{( 1+\lambda^{2})(t_{m+1}-t_{m})}}\right)e^{(1+(\lambda^{2})/2)(t_{m+1}-t_{m})+ \lambda(B(t_{m+1})-B(t_{m}))}+1\right)^{-1}.\]
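Analogously to the SIS case, this update could be sketched as follows (helper name ours):

```python
import numpy as np
from scipy.special import lambertw

def nagumo_ls_step(x_m, lam, tau, dB):
    """One LS step for the Nagumo SDE, combining Eq. (19) with Eq. (9)."""
    A = (1.0 - x_m) * np.exp((1.0 - x_m) / x_m) / (x_m * np.exp((1.0 + lam**2) * tau))
    w = lambertw(A).real  # principal branch; A > 0 for x_m in (0, 1)
    return 1.0 / (w * np.exp((1.0 + lam**2 / 2.0) * tau + lam * dB) + 1.0)
```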
### Allen-Cahn SDE
Recall that the Allen-Cahn type SDE is given by
\[\mathrm{d}X(t)=\left(X(t)-X(t)^{3}\right)\mathrm{d}t+\lambda(1-X(t)^{2}) \,\mathrm{d}B(t)\]
with initial value \(X(0)=x_{0}\in(-1,1)\) and \(T=1\). The boundary points \(\{-1,1\}\) are stationary points: if \(x_{0}\in\{-1,1\}\), then \(X(t)=x_{0}\) for all times \(t>0\). Observe that \(0\) is not stationary, since \(g(0)\neq 0\). In this case, we present the implementation formulas for the choice \(w_{0}=0\) as this simplifies the expressions. Straightforward computations give us
\[\frac{f(x)}{g(x)}-\frac{\lambda^{2}}{2}g^{\prime}(x)=(1+\lambda^{2})x,\]
\[\Phi(x)=\frac{1}{2}\left(\log(1+x)-\log(1-x)\right),\]
and
\[\Phi^{-1}(x)=\frac{e^{2x}-1}{e^{2x}+1}.\]
We let \(H(x)=(1+\lambda^{2})\Phi^{-1}(x)\) and \(\mu=0\). Then the assumptions in Section 2 are fulfilled: Assumptions 1, 2 and 3 are easily verified and the ODE
\[\begin{cases}\frac{\mathrm{d}y(t)}{\mathrm{d}t}=(1+\lambda^{2})\Phi^{-1}(y(t) ),\\ y(t_{m})=\Phi(x_{m}),\end{cases}\]
for \(t\in[t_{m},t_{m+1}]\), where \(x_{m}=X^{LS}(t_{m})\), has the explicit solution formula given by
\[y(t)=\log\left(\frac{1}{2}\left(\sqrt{e^{2(1+\lambda^{2})(t-t_{m})}\left(e^{- y_{m}}-e^{y_{m}}\right)^{2}+4}-e^{(1+\lambda^{2})(t-t_{m})}\left(e^{-y_{m}}-e^{ y_{m}}\right)\right)\right). \tag{20}\]
Combining equation (20) with equation (9) gives us
\[X^{LS}(t_{m+1})=\Phi^{-1}(Y^{LS}(t_{m+1}))=\frac{V(t_{m+1})e^{2\lambda(B(t_{m+ 1})-B(t_{m}))}-(1-x_{m})(1+x_{m})}{V(t_{m+1})e^{2\lambda(B(t_{m+1})-B(t_{m}))} +(1-x_{m})(1+x_{m})}, \tag{21}\]
where \(x_{m}=X^{LS}(t_{m})\) and where
\[V(t)=\left(\sqrt{\left(x_{m}\right)^{2}e^{2(1+\lambda^{2})(t-t_{m})}+(1-x_{m})(1+ x_{m})}+x_{m}e^{(1+\lambda^{2})(t-t_{m})}\right)^{2}.\]
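A minimal sketch of the resulting update in equation (21) (helper name ours):

```python
import numpy as np

def allen_cahn_ls_step(x_m, lam, tau, dB):
    """One LS step for the Allen-Cahn type SDE, Eqs. (20)-(21)."""
    e = np.exp((1.0 + lam**2) * tau)
    V = (np.sqrt(x_m**2 * e**2 + (1.0 - x_m) * (1.0 + x_m)) + x_m * e) ** 2
    w = V * np.exp(2.0 * lam * dB)
    return (w - (1.0 - x_m) * (1.0 + x_m)) / (w + (1.0 - x_m) * (1.0 + x_m))
```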
## Acknowledgements
The author would like to thank David Cohen for his comments, suggestions and sharing some of his code. The author would also like to thank Charles-Eduardo Brehier for finding and discussing an error in the first version of the paper. This work is partially supported by the Swedish Research Council (VR) (projects nr. \(2018-04443\)). The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at UPPMAX, Uppsala University, partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.
|
2305.05397 | Activity Induced Diffusion Recovery in Crowded Colloidal Suspension | We show that the forces generated by active enzyme molecules are strong enough
to influence the dynamics of their surroundings under artificial crowded
environments. We measured the behavior of polymer microparticles in a
quasi-two-dimensional system under aqueous environment, at various area
fraction values of particles. In the presence of enzymatic activity, not only
was the diffusion of the suspended particles enhanced in the shorter time-scale
regime, but the system also showed a transition from sub-diffusive to diffusive dynamics at
longer time-scale limits. Similar observations were also recorded with enzyme
functionalized microparticles. Brownian dynamics simulations have been
performed to support the experimental observations. | Arnab Maiti, Yuki Koyano, Hiroyuki Kitahata, Krishna Kanti Dey | 2023-05-09T12:43:38Z | http://arxiv.org/abs/2305.05397v1 | # Activity Induced Diffusion Recovery in Crowded Colloidal Suspension
###### Abstract
We show that the forces generated by active enzyme molecules are strong enough to influence the dynamics of their surroundings under artificial crowded environments. We measured the behavior of polymer microparticles in a quasi-two-dimensional system under aqueous environment, at various area fraction values of particles. In the presence of enzymatic activity, not only was the diffusion of the suspended particles enhanced in the shorter time-scale regime, but the system also showed a transition from sub-diffusive to diffusive dynamics in the longer time-scale limits. Similar observations were also recorded with enzyme-functionalized microparticles. Brownian dynamics simulations have been performed to support the experimental observations.
Cellular functions usually involve many enzyme-mediated catalytic reactions that control the rate of various chemical transformations [1]. Unlike biomolecular motors, most enzymes in nature operate in free states and rather diffuse to and away from their substrates during catalytic reactions [2]. For a long time, such molecules were believed neither to have any energy transduction ability during substrate turnover nor to influence significantly the dynamics of their surroundings. In a series of recent experiments, enzymes have, however, been found to generate forces during substrate turnover, which were significant enough to influence their dynamics and that of their surroundings in aqueous solutions [3; 4; 5; 6; 7; 8; 9]. The behavior is analogous to the generation of randomly fluctuating forces inside cells by the aggregation and adaptations of molecular motors, which are believed to drive diffusive-like, non-thermal motion of cellular components, affecting the overall metabolic state of the cell [10; 11; 12]. Populations of active bacteria have also been observed to considerably affect the dynamics of their surroundings either by direct interactions (hydrodynamic coupling) [13], collisions [14], or by changing the fluid rheology considerably [15; 16; 17; 18]. These observations provide strong motivation to investigate if enzymes, while catalyzing various chemical reactions within intracellular crowded environments, could generate sufficient mechanical forces to influence the motion of nearby particles. A positive answer to this will refine our understanding of organelle and small-molecule motion in cells and underscore fundamental principles of molecular transport, assembly and motility under crowded cytoplasmic environments [19; 20].
In this Letter, we demonstrate that forces generated by enzymatic reactions are sufficiently long-ranged and strong enough to influence the dynamics of their surroundings, under artificial crowded environments. This was demonstrated by using crowded colloidal suspensions of 3 \(\mu\)m polystyrene particles mixed with solutions of active enzymes like urease, where the amount of crowding was controlled by changing the area fractions of the suspended microparticles. At shorter time-scale regimes, the passive tracers were found to display diffusive dynamics, while at longer time scales, their behavior was found to be sub-diffusive. With an increased amount of crowding, the diffusion coefficient at shorter time scales and the diffusion exponent at longer time scales decreased gradually, as observed in other previous studies [21; 22]. However, with the onset of enzymatic reactions in the system (triggered by the addition of a calculated amount of substrate solution from outside), both the diffusion coefficients at shorter time scales and the diffusion exponents at longer time scales were found to increase. The recovery of the diffusion values and exponents was likely due to the decrease in the effective viscosity and in the particle caging effect, respectively, both facilitated by the enzyme-substrate reactions. Experiments were also conducted with enzyme-coated microparticles, at sufficiently higher area fraction limits. Even on this occasion, substrate turnover was found to generate sufficient mechanical force to enhance the diffusivity of the particles and influence their sub-diffusive dynamics at longer time scales. We consider both of these observations significant since, from a scientific standpoint, although the co-operativity between diffusing enzymes in various intracellular signalling pathways has been well documented, the degree to which their activity plays a role in cellular mechanics has not yet been investigated. Moreover, although it was hypothesized that localized energy transduction by enzymes was capable of generating long-range dynamic interactions with their surroundings, even in crowded conditions [23; 24; 25], to date, there have been no experimental studies validating such propositions. The previous theoretical work based on Langevin dynamics of active dimers [25] seems
to correspond to the experiments performed with the enzyme-coated microparticles in this study. However, to understand the experimental results obtained with passive microparticles in active enzyme suspensions, new simulations have been performed. Our results, therefore, promise a distinct paradigm shift in molecular biophysics research, whereby localized energy transduction by enzymes is expected to play a crucial role in understanding diffusion-mediated intracellular processes. The catalysis-induced force generation and recovery of particle diffusion under artificial crowded environments may also provide newer insights into the reported stochastic motion of the cytoplasm [26; 27], the glass transition of the cytoplasmic matrix during metabolism [28; 29], and the dynamics of nanoswimmers [30] and convective transport in cells [31; 32], opening up new research opportunities in active biomolecular mechanics [33].
We measured the mean squared displacement (MSD) of 3 \(\mu\)m polystyrene tracer particles in deionized water for five different area fractions \(\phi\) (0.02, 0.03, 0.16-0.20, 0.21-0.25, 0.35-0.38). Maintaining stable particle area fraction values during the experiments was challenging, as the particles were in motion. To ensure that the experimental measurements were statistically significant, we tracked hundreds of particles in each data set and performed necessary control experiments, the details of which are given in the Supplementary Material (SM) [34]. For low area fractions, particle motion remained mostly diffusive, whereas at higher values of \(\phi\), the plots showed sub-diffusive behavior with gradually decreasing slopes with increasing time steps (Fig. 1a). Considering the MSD changes in the log scale (Fig. 1a, inset), we observed that for \(\phi\leq 0.03\), the motion of the particles remained diffusive at all time steps. For higher values of \(\phi\), until a time step of 3 s, the particle motion remained diffusive. However, at intermediate time steps (20-50 s) the particle dynamics was dominated by the crowding in the system, making their motion sub-diffusive in nature. At sufficiently long time steps, the particle motion became diffusive again. Interestingly, at the highest area fraction limit (\(\phi\)=0.35-0.38), the particles did not show any diffusive motion for the entire range of the time steps used. These observations were further confirmed by calculating \(d\log(\mathrm{MSD})/d\log(\Delta t)\) (which yielded the corresponding diffusion exponent \(\alpha\)) and plotting it as a function of \(\log(\Delta t)\) (Fig. 1b). For data analysis, we considered the time intervals \(\Delta t=\) 1-3 s as the short time scale regime where the motion remained diffusive. The intervals \(\Delta t=\) 20-50 s were chosen as the long-time scale regime where the crowding effect dominated.
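As a rough sketch, the quantities described above could be extracted from tracked particle positions as follows (array shapes and the helper name are our own illustrative choices):

```python
import numpy as np

def msd_and_exponent(traj, lags):
    """Time-and-ensemble-averaged MSD and local diffusion exponent.

    traj: array of shape (n_particles, n_frames, 2) of xy positions;
    lags: array of lag times (in frames) at which to evaluate the MSD."""
    msd = np.empty(len(lags), dtype=float)
    for i, lag in enumerate(lags):
        disp = traj[:, lag:, :] - traj[:, :-lag, :]
        msd[i] = np.mean(np.sum(disp**2, axis=-1))
    # alpha(dt) = d log(MSD) / d log(dt)
    alpha = np.gradient(np.log(msd), np.log(lags))
    return msd, alpha
```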
In dilute particle concentration limit, the diffusion coefficient of 3 \(\mu\)m polystyrene particles was measured to be 0.088 \(\mu\)m\({}^{2}\)/s. However, from the Stokes-Einstein relation the diffusion coefficient was estimated to be 0.14 \(\mu\)m\({}^{2}\)/s for the same particle at room temperature T = 25 \({}^{\circ}\)C in water (viscosity \(\eta=\) 1 cP). Therefore, the experimentally measured diffusion coefficient is 37% less than the expected value for infinite dilution. This could be explained by considering the effect of the bottom surface and corresponding hindrance in particle diffusion [35; 36]. The negatively charged microparticles could interact with the negatively charged surface [37; 38], leading to the restricted diffusion of the former (see details in the SM [34]). We also estimated the increase in effective viscosity of the particle suspension given by \(\eta(\phi)=\eta_{0}(1+2.5\phi)\)[39] and hypothesized that the observed decrease in the diffusion coefficient at the shorter time-scale regime was due to the increase in effective viscosity with higher area fraction \(\phi\) (Fig. 1c). The supporting calculations are given in SM [34]. We also hypothesized that the increased \(\phi\) resulted in greater degree of caging effect [40; 41] at longer time-scale regimes, resulting in lowering the sub-diffusive exponent \(\alpha\) with \(\phi\), as observed (Fig. 1d).
From the above results, it became clear that crowding in a colloidal system could significantly influence both the short-time diffusion coefficients and long-time diffusion exponents of suspended particles. To check if enzymatic catalysis could generate sufficient forces to counter this sub-diffusive behavior and restore the diffusive dynamics of the system under crowded conditions, we performed experiments both with tracers suspended in active enzyme solutions and with high concentrations of enzyme-functionalized microparticles in substrate-rich media. A recent study has demonstrated enhanced propulsion of a catalase-powered motor confined within giant unilamellar vesicles [42]; here we demonstrate that a similar enhancement in motion can also be realized
Figure 1: (a) The MSD profiles of microparticle suspensions without activity for different area fractions. The inset shows the same plots in the log scale. (b) Diffusion exponents \(\alpha\) as a function of \(\log(\Delta t)\), which helped in identifying the short and long time-scale regimes. The variation of short time-scale diffusion coefficient and long time scale diffusion exponent are shown in (c) and (d), respectively. Particles with \(\phi=\) 0.35–0.38 did not show any diffusive motion for the entire range of the time steps used, and as such it has not been included in (c). Moreover, data corresponding to \(\phi=\) 0.28–0.33 has been included in (c) and (d) only.
for passive particles suspended in an active crowded environment. We selected molecules of active urease as nanomotors in our system owing to their robustness and high substrate turnover rate at room temperature (see SM [34]). As both ensemble and time averages were considered for MSD calculations, care was taken to fix a reaction rate that allowed the catalytic reaction to continue for a significant duration. Also, the enzyme and substrate concentrations were chosen in such a manner that ensured sufficient substrate turnover and the generation of nearly constant mechanical forces during the entire measurement period. Under crowding conditions, the MSD showed enhanced tracer dynamics in the presence of enzymatic activity (Fig. 2a). To observe the change in the diffusive parameters in the presence of substrate turnover, we measured the particle diffusion coefficients at shorter time scales (1-3 s) and the diffusion exponents at longer time scales (20-50 s), and compared them with those measured in the absence of catalysis. Fig. 2b shows the diffusion coefficients measured at different crowding conditions in the presence and absence of enzymatic activity. The force generated by free enzymes in solution was found to enhance the tracer diffusion by 15-25%, which was, as mentioned earlier, likely due to the lowering of the effective viscosity in the presence of active enzyme propulsion. At longer time-scale regimes, the diffusion exponents of the particles showed a 1-5% enhancement in the presence of catalytic activity (Fig. 2c). The corresponding sub-diffusion coefficients [43] are given in Fig. 2d. This indicated that in the presence of the force generated due to substrate catalysis, the particles were able to free themselves from the crowding effects imposed by their neighbors and displayed enhanced diffusive dynamics. To confirm that the free enzymes did not adsorb over the polymer bead surface during experiments and influence their propulsion, we also performed experiments with microparticles coated with a thin layer of bovine serum albumin (BSA), which also showed similar enhancement in particle motion during catalysis. The details of the experiments performed and results obtained are given in SM [34]. It was also noted that upon complete depletion of the substrate in the experimental chamber, both the tracer diffusion in the short time-scale regime and the diffusion exponent in the long time-scale regime decreased again, like in passive crowded suspensions, indicating that the particles started feeling the effect of crowding in the absence of the force generated by the enzymes. The results are given in SM [34].

Like molecules of free enzymes, microparticles coated with immobilized active enzymes have been reported to behave as motors and display nontrivial collective dynamics [44]. Instead of using free enzyme molecules as mechanical energy sources to counter the effect of crowding, we used an assembly of urease-functionalized active microparticles and investigated if, during substrate turnover, the particles could generate forces to overcome their mutual crowding effects. Although theoretical analysis performed earlier suggested such a possibility [25], to the best of our knowledge, it has not yet been demonstrated experimentally. Like passive microparticles suspended in active enzyme solution, the recovery of diffusive dynamics was also observed with active microparticles in different crowding conditions.
The microparticles were functionalized with active urease enzymes using biotin streptavidin linkage chemistry (see SM for details [34]). Diffusion studies were performed using different area fractions of particles that corresponded to different degrees of crowding. Fig. 3a shows the MSD plots of the particles while Figs. 3b and c show the short-time diffusion coefficients and long-time diffusion exponents measured at different area fractions. Clearly, like
Figure 2: (a) The MSD profiles of microparticle suspensions for different area fractions in the presence (wa) and absence (woa) of free enzyme activity in the system. The corresponding short time-scale diffusion coefficients and long time-scale diffusion exponents are shown in (b) and (c), respectively. (d) Sub-diffusion coefficients \(D_{\alpha}\) at various area fraction values.
Figure 3: (a) The MSD profiles of enzyme-immobilized microparticle suspensions for different area fractions in the presence (wa) and absence (woa) of substrate solution in the system. The corresponding short time-scale diffusion coefficients and long time-scale diffusion exponents are shown in (b) and (c), respectively. (d) Sub-diffusion coefficients \(D_{\alpha}\) at various area fraction values.
the particles in active enzyme suspensions, the enzyme-functionalized particles were able to generate sufficient mechanical forces, overcome the effect of crowding to a significant degree, and restore their diffusive dynamics. In the case of enzyme-coated particles, the short-time diffusion coefficients were found to be enhanced by 20-40% in the presence of substrate catalysis, while the long-time diffusion exponents increased by 2-6%. The corresponding sub-diffusion coefficients are given in Fig. 3d. From the experimental results, we therefore concluded that substrate catalysis by active enzyme molecules generated sufficiently large forces that could influence the dynamics of their surroundings under artificial crowded conditions. The effective viscosity estimated in the short time-scale regime due to crowding was found to be in the range of cytoplasmic viscosity [45; 46]. The enhanced diffusion of particles observed in this time-scale regime indicated that the forces generated due to catalytic turnover were able to lower the effective viscosity, thereby enhancing the particle propulsion.
To understand the diffusion enhancement of passive tracers by the active enzymes as shown in Fig. 2 from a microscopic viewpoint, we considered a two-dimensional model composed of tracers surrounded by dumbbell-shaped particles (Fig. 4a). The dumbbell-shaped particles changed their arm length incoherently, which roughly imitated the conformation changes in enzymes during catalysis. Hereafter, the dumbbell-shaped particles are called dimers.
The mathematical formulation corresponding to the tracer dynamics in active dimer suspension is as follows. In our model, the Langevin dynamics of the center positions of the tracer particles \(\mathbf{R}_{i}\) and the beads consisting the dimer \(\mathbf{r}_{j}^{(n)}\) with excluded volume effect are considered, where \(i(=1,\cdots,M)\), \(j(=1,\cdots,N)\) and \(n(=1,2)\) indicate the indices for a tracer, dimer, and bead consisting the dimer, respectively. The dynamics of the tracer particles are governed by the following over-damped Langevin equation:
\[\frac{d\mathbf{R}_{i}}{dt}=-\mu_{t}\frac{\partial U}{\partial\mathbf{R}_{i}}+\mathbf{\xi}_{t,i} (t) \tag{1}\]
where \(\mu_{t}\) is the mobility. The term \(\mathbf{\xi}_{t,i}\) is the thermal noise which satisfies \(\langle\xi_{t,i,\alpha}(t)\rangle=0\) and \(\langle\xi_{t,i,\alpha}(t)\xi_{t,j,\beta}(s)\rangle=2\mu_{t}k_{B}T\delta_{ij} \delta_{\alpha\beta}\delta(t-s)\) (\(\alpha,\beta=x,y\)). The function \(U\) denotes the potential reflecting the excluded volume effect of particles
\[U= \frac{1}{2}\sum_{i=1}^{N}\sum_{j(\neq i)=1}^{N}\sum_{m=1}^{2}\sum _{n=1}^{2}u\left(\left|\mathbf{r}_{i}^{(m)}-\mathbf{r}_{j}^{(n)}\right|;2r_{0}\right)\] \[+\frac{1}{2}\sum_{i=1}^{M}\sum_{j(\neq i)=1}^{M}u\left(\left|\mathbf{ R}_{i}-\mathbf{R}_{j}\right|;2R_{0}\right)\] \[+\sum_{i=1}^{N}\sum_{j=1}^{M}\sum_{m=1}^{2}u\left(\left|\mathbf{r}_{i }^{(m)}-\mathbf{R}_{j}\right|;r_{0}+R_{0}\right) \tag{2}\]
where
\[u(r;\rho_{0})=\left\{\begin{array}{ll}u_{0}(\rho_{0}-r)^{2},&(r<\rho_{0})\\ 0,&(r>\rho_{0})\end{array}\right. \tag{3}\]
Here, \(R_{0}\) and \(r_{0}\) are the radii of the tracers and of the beads constituting the dimers, respectively. Thus, the first term on the right side of Eq. (1) represents the repulsive force during particle collisions. In the same way, the dynamics of the dimers are given by the following over-damped Langevin equation:
\[\frac{d\mathbf{r}_{i}^{(n)}}{dt}=-\mu\frac{\partial E_{i}}{\partial\mathbf{r}_{i}^{(n) }}-\mu\frac{\partial U}{\partial\mathbf{r}_{i}^{(n)}}+\mathbf{\xi}_{i}^{(n)}(t) \tag{4}\]
where \(\mu\) is the mobility. The term \(\mathbf{\xi}_{i}^{(n)}\) is the thermal noise, which satisfies \(\left\langle\xi_{i,\alpha}^{(m)}(t)\right\rangle=0\), \(\left\langle\xi_{i,\alpha}^{(m)}(t)\xi_{j,\beta}^{(n)}(s)\right\rangle=2\mu k _{B}T\delta_{mn}\delta_{ij}\delta_{\alpha\beta}\delta(t-s)\). It should be noted that the particle mobilities satisfy the relation \(\mu=R_{0}\mu_{t}/r_{0}\) by Stokes' law. The first term on the right side of Eq. (4) expresses the force between the beads composing a dimer, where
\[E_{i}(t)=\frac{k}{2}\left(\left|\mathbf{r}_{i}^{(1)}-\mathbf{r}_{i}^{(2)}\right|-\ell _{i}(t)\right)^{2} \tag{5}\]
In the numerical calculations, we compared two cases: one is that the length of the dimer, \(\ell_{i}(t)\), changes in time, which corresponds to the conformation changes in the enzymes. Specifically, it changes as follows
\[\ell_{i}(t)=\ell_{0}+\ell_{1}\sin\psi_{i}(t) \tag{6}\]
\[\frac{d\psi_{i}}{dt}=\omega+\zeta_{i}(t) \tag{7}\]
where \(\zeta_{i}(t)\) is white Gaussian noise with \(\langle\zeta_{i}(t)\zeta_{j}(s)\rangle=2\eta\delta_{ij}\delta(t-s)\). The other case corresponds to tracer diffusion in the absence of enzymatic catalysis, i.e., \(\ell_{i}\) has a constant value \(\ell_{0}\).
Figure 4: (a) Snapshot of the suspension composed of the tracers (pink particles) and the dimers (light blue particles connected by yellow bonds). (b) Trajectories of the tracer particles. A spread (compact) trajectory is observed in the case of the lower (higher) area fraction with (without) activity, which reflects the magnitude of the caging effect. The spatial scales of the 4 panels are common.
We also investigated the dependence of diffusivity on the particle density to understand the crowding effect on the tracer particles.
The spatial scale is normalized by the radius \(r_{0}\) of a bead comprising a dimer. The radius of the tracer particle is \(R_{0}=3\), and the dimer's natural length is \(\ell_{0}=1.5\). The time is scaled such that the mobility of a bead comprising a dimer is unity, i.e., \(\mu=1\). Other parameters are set to be \(\ell_{1}=1.0\), \(u_{0}=1\), \(k=1\), \(\omega=0.1\), \(k_{B}T=0.01\), and \(\eta=0.1\). The area fraction of dimers was fixed to be 0.5, and that of tracer particles was varied as 0.4, 0.4125, 0.425, 0.4375, and 0.45.
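As a minimal sketch (our own illustration, using the parameters above with an assumed time step, and omitting the tracers and the excluded-volume potential \(U\) for brevity), the Euler-Maruyama time stepping of a single active dimer, Eqs. (4)-(7), could read:

```python
import numpy as np

mu, k, kBT, eta = 1.0, 1.0, 0.01, 0.1    # mobility, spring, and noise strengths
ell0, ell1, omega = 1.5, 1.0, 0.1        # dimer geometry and activity
dt, n_steps = 1e-3, 10_000               # assumed integration step (not from the text)
rng = np.random.default_rng(0)

r = np.array([[0.0, 0.0], [ell0, 0.0]])  # positions of the two beads
psi = 0.0                                # phase of the arm-length oscillation

for _ in range(n_steps):
    ell = ell0 + ell1 * np.sin(psi)      # Eq. (6): active arm length
    d = r[0] - r[1]
    dist = np.linalg.norm(d)
    force = -k * (dist - ell) * d / dist # spring force -dE/dr on bead r[0], Eq. (5)
    kick = np.sqrt(2.0 * mu * kBT * dt) * rng.standard_normal((2, 2))
    r[0] += mu * force * dt + kick[0]
    r[1] += -mu * force * dt + kick[1]
    psi += omega * dt + np.sqrt(2.0 * eta * dt) * rng.standard_normal()  # Eq. (7)
```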
By increasing the area fraction, the trajectory of the tracer particle becomes compact, while the activity makes the trajectory broader, as shown in Fig. 4b. To elucidate this further, the MSDs of the tracer particles as functions of the time interval were obtained by averaging over all tracer particles. As seen in Fig. 5a, the MSDs become smaller for larger area fractions of tracer particles in the absence of any dimer activity. The diffusion constant determined by the MSD at \(t=1\) is shown in Fig. 5b.
Since the diffusion constant should coincide with \(D_{0}=\mu_{t}k_{B}T=10^{-3}/3\) in the limit \(t\to 0\), the crowding effect already appears at \(t=1\). To check the sub-diffusion regime, the local gradient of the MSD, \(f(\Delta t)=d[\log(\text{MSD}(\Delta t))]/d[\log(\Delta t)]\), was examined. Examples of \(f(\Delta t)\) are plotted in Fig. 5c, which are qualitatively similar to the experimental results in Fig. 1b. We can see that the variations seem to take minimum values around \(\Delta t\simeq 1.3\) (1.8) in the case with (without) activity. Since \(f(\Delta t)\) is noisy, the sub-diffusion exponent was defined as the local minimum of a quadratic-function fit of \(f(\Delta t)\). The fitting range was \(\Delta t_{\text{min}}-0.5\leq\Delta t\leq\Delta t_{\text{min}}+0.5\), where \(\Delta t_{\text{min}}\) gives the location of the minimum of the raw \(f(\Delta t)\). As shown in Fig. 5d, the sub-diffusion exponent \(\alpha\) estimated in the presence of activity always exceeds that estimated without dimer activity. Moreover, the sub-diffusion exponent is a decreasing function of the area fraction of tracer particles for both active and non-active cases. This trend is qualitatively the same as in the experimental results. From the numerical results, we assert that the conformation changes in the enzymes are essential in deciding the dynamics of tracer particles in active crowded suspensions.
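The exponent-extraction procedure described above amounts to the following (a sketch; variable names are ours):

```python
import numpy as np

def subdiffusion_exponent(dt, f_vals, half_width=0.5):
    """Sub-diffusion exponent from a quadratic fit of the noisy local
    slope f(dt), taken as the minimum of the fitted parabola."""
    i_min = np.argmin(f_vals)                      # location of the raw minimum
    mask = np.abs(dt - dt[i_min]) <= half_width    # fitting window around it
    a, b, c = np.polyfit(dt[mask], f_vals[mask], 2)
    return c - b**2 / (4.0 * a)                    # minimum of a*x^2 + b*x + c
```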
In summary, we demonstrate that the force generated by enzymes during substrate turnover is sufficiently strong to influence the dynamics of their surroundings, under artificially crowded environments. These observations have several important implications and offer opportunities to investigate the consequences of biomolecular activities over intracellular transport, assembly and organization of components under crowded cytosolic environments.
###### Acknowledgements.
KKD thanks SERB, India (ECR/2017/002649), DST, India (DST/ICD/BRICS/PilotCall3/BioTheraBubble/2019) and IIT Gandhinagar for financial supports. HK was supported by JSPS KAKENHI Grant Number JP21H01004. We are thankful to Prof. Alexander Mikhailov and Prof. Raymond Kapral for insightful discussions. Help received from Dr. Chandan Kumar Mishra in developing the particle tracking methodology is gratefully acknowledged.
|
2308.14266 | SalesBot 2.0: A Human-Like Intent-Guided Chit-Chat Dataset | In recent research on dialogue systems and corpora, there has been a
significant focus on two distinct categories: task-oriented (TOD) and
open-domain (chit-chat) dialogues. TOD systems aim to satisfy specific user
goals, such as finding a movie to watch, whereas open-domain systems primarily
focus on generating engaging conversations. A recent study by Chiu et al.
(2022) introduced SalesBot, which provides simulators and a dataset with
one-turn transition from chit-chat to task-oriented dialogues. However, the
previously generated data solely relied on BlenderBot, which raised concerns
about its long-turn naturalness and consistency during a conversation. To
address this issue, this paper aims to build SalesBot 2.0, a revised version of
the published data, by leveraging the commonsense knowledge of large language
models (LLMs) through proper prompting. The objective is to gradually bridge
the gap between chit-chat and TOD towards better naturalness and consistency.
The newly released large-scale dataset with detailed annotations exhibits
smoother transitions between topics and is more human-like in terms of
naturalness and consistency. It can serve as a valuable resource for both
academic research and commercial applications. Furthermore, our proposed
framework can be applied to generate numerous dialogues with various target
intents. | Wen-Yu Chang, Yun-Nung Chen | 2023-08-28T02:48:49Z | http://arxiv.org/abs/2308.14266v1 | # SalesBot 2.0: A Human-Like Intent-Guided Chit-Chat Dataset
###### Abstract
In recent research on dialogue systems and corpora, there has been a significant focus on two distinct categories: task-oriented (TOD) and open-domain (chit-chat) dialogues. TOD systems aim to satisfy specific user goals, such as finding a movie to watch, whereas open-domain systems primarily focus on generating engaging conversations. A recent study by Chiu et al. (2022) introduced SalesBot, which provides simulators and a dataset with one-turn transition from chit-chat to task-oriented dialogues. However, the previously generated data solely relied on BlenderBot, which raised concerns about its long-turn naturalness and consistency during a conversation. To address this issue, this paper aims to build SalesBot 2.0, a revised version of the published data, by leveraging the commonsense knowledge of large language models (LLMs) through proper prompting. The objective is to gradually bridge the gap between chit-chat and TOD towards better naturalness and consistency. The newly released large-scale dataset with detailed annotations exhibits smoother transitions between topics and is more human-like in terms of naturalness and consistency. It can serve as a valuable resource for both academic research and commercial applications. Furthermore, our proposed framework can be applied to generate numerous dialogues with various target intents.1
Footnote 1: The source code and data are available at: [https://github.com/MuiLab/SalesBot2](https://github.com/MuiLab/SalesBot2).
## 1 Introduction
In recent years, dialogue systems have undergone significant advancements due to improvements in modeling techniques and computing power. However, most research in this field has focused on two distinct areas: task-oriented dialogues (TOD) and open-domain dialogues, also known as chitchat systems. Popular large-scale datasets for TOD include Schema-Guided Dialogue (SGD) Rastogi et al. (2020) and MultiWoz Budzianowski et al. (2018); Zang et al. (2020), which contain annotated information on the user intents and dialogue states. In TOD, the agent's goal is to identify the user's intention and fulfill their task by the end of the dialogue. Meanwhile, research on open-domain chitchat systems and datasets Li et al. (2017); Adiwardana et al. (2020); Zhang et al. (2018); Kim et al. (2022) aims to build models capable of engaging in free-form conversations. As pre-trained language models continue to improve, larger sets of dialogues are being used to train models with the ability to engage in free-form chatting Zhang et al. (2020); Roller et al. (2021). Despite significant advancements in both areas, there has been a lack of integration between them, which is crucial for real-world applications.
Recently, efforts have been made to integrate TOD and open-domain dialogues. For instance, Sun et al. (2021) incorporated chitchat responses into existing TOD datasets to enhance the conversation's naturalness and social engagement. Furthermore, there have been attempts to develop datasets and models capable of handling both TOD and chitchat scenarios. Li et al. (2022) developed the PivotBot model, capable of handling three pre-defined scenarios. One scenario involves adding chit-chat dialogues as context to a TOD, while another includes chitchat dialogues to facilitate domain transition. In contrast, the third scenario involves incorporating chitchat content to enhance a TOD, similar to the approach taken in the ACCENTOR model Sun et al. (2021). However, these approaches assume that the user has an explicit goal to accomplish, and chitchat responses merely enrich the conversation. In contrast, our scenario assumes that the user does not have any explicit goal or intention, and the agent must detect any potential intent, whether it is explicitly or implicitly shown by the user, and smoothly pivot the dialogue to the topic related to the detected intent.
With this idea, Chiu et al. (2022) first introduced a framework for generating data that transitions
from chit-chat to TOD dialogues. They utilized two open-domain BlenderBots to chat with each other and generate chit-chat dialogues, followed by a task-oriented intent detector to determine whether a transition to the TOD system should be made. However, the SalesBot dataset has some limitations, such as the absence of proper social engagement in 20% of the data, nonsensical detected intents due to short chit-chat context, and unnatural transition turns. Our paper proposes an improvement to the SalesBot dataset by leveraging large language models (LLMs) to generate more human-like chit-chat dialogues, along with intent-guided transition turns. This approach takes advantage of the LLMs' commonsense knowledge to create more natural and engaging chit-chat dialogues, addressing the issues with the SalesBot dataset.
Table 1 provides evidence that the agent's response to the user's statement regarding missing someone is indirect, indicating a deviation from natural conversational norms. Furthermore, the agent's final turn, "_What about your family?_" appears misplaced in the given context. However, in the updated SalesBot 2.0 version (as shown in the lower part of Table 1), the agent sympathizes with the user's loss and smoothly transitions the conversation from work to tourism. This exemplifies the effectiveness of the SalesBot 2.0 revisions in enhancing the naturalness of the conversation and improving the dialogue flow.
Our paper presents a novel approach to enhance the SalesBot dataset by leveraging large language models (LLMs) to produce more realistic and immersive chit-chat dialogues, in addition to intent-guided transition turns. Our approach involves utilizing LLMs to detect potential intents from a set of predefined options and then generating dialogues based on the identified intents. By doing so, we capitalize on the LLMs' vast knowledge base to create chit-chat that is more natural and compelling. This method effectively tackles some limitations of the SalesBot dataset, including the lack of engaging social interactions, the detection of irrelevant or nonsensical intents, and the production of unnatural transition turns. Our contributions can be summarized as four-fold:
* Proposing a simple prompting framework that allows for customization of intent-oriented chit-chat with a predefined intent.
* Introducing a new and better dataset that includes open-domain chit-chats, smoother transitions, and potential intent detected from the context.
* Leveraging large language models to ensure the naturalness and quality of the dataset.
* Defining aggressiveness through LLM evaluation, which can be helpful in developing dialogue systems with varying levels of aggressiveness in the future.
## 2 Proposed Framework
Our proposed pipeline is illustrated in Figure 1, which includes the following procedures: (1) Chit-Chat Dialogue Revision, (2) Potential Intent Detection, (3) Dialogue Continuation, and (4) Transition Boundary Detection. For all tasks, we use OpenAI's gpt-3.5-turbo API as our model for generating our data.
### Chit-Chat Dialogue Revision
As highlighted in Section 1, the SalesBot dataset (Chiu et al., 2022) has a considerable amount of data that lacks consistency with the given context. Previous research has shown that leveraging the commonsense knowledge of LLMs can lead to more natural dialogues (Kim et al., 2022). In this work, we take this approach one step further by tasking LLMs with identifying any inconsistent utterances in the dialogue and providing reasons for their identification. We then use this information to revise the entire dialogue, resulting in a more consistent and coherent dataset.

**Table 1: Chit-chat context comparison between SalesBot 1.0 and 2.0.**

*SalesBot 1.0: chit-chat context*

User: Thank you. It was a long time ago, but I still miss her. What do you do for a living?
Agent: I am a stay at home mom. It is very rewarding. How about you?
User: I work at the zoo. It's a lot of fun, and I get to spend time with my family.
Agent: What about your family?
→ FindAttraction

*SalesBot 2.0: rewritten chit-chat context*

User: Thank you. It was a long time ago, but I still miss her deeply.
Agent: I'm sorry for your loss. It sounds like she was very important to you. As for me, I'm also fortunate to have a family that I get to spend a lot of time with.
User: That's great to hear. I work at the zoo, and it's a fun job that allows me to support my family.
Agent: That sounds like a really cool job. What kind of animals do you work with at the zoo?
User: Mostly primates, but there are also elephants, lions, and bears.
Agent: Wow, that must be exciting to work with such a variety of animals. Have you had the chance to explore the tourist attractions in the area?
User: Not really, I've been so focused on work and family lately that I haven't had much time to go out and explore.
→ FindAttraction
However, our approach occasionally results in revised dialogues that are too short. This is due to the original SalesBot dataset containing a significant amount of data with insufficient chit-chat dialogue to provide context. To address this issue, we implemented a constraint that requires the LLMs to extend the dialogues if they consist of only one turn. The prompt used for our approach is provided below for reference.
You will be given a conversation between two people. Here is what you should do:
1. Identify the inconsistent utterances.
2. Give some reasons why they are inconsistent.
3. Modify the dialogue based on previously identified utterances.
4. The rewritten dialogue should be more than 6 turns.
Here is the conversation: <Dialogue>
You MUST follow the format as: <output_format>
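To make this step concrete, below is a minimal sketch of how the revision prompt could be sent to the gpt-3.5-turbo API. It assumes the legacy openai-python (pre-1.0) chat-completion interface; the helper name `revise_dialogue` and the plain-string handling of `<output_format>` are our own simplifications rather than part of the released pipeline.

```python
# Minimal sketch of the dialogue-revision call (assumes openai<1.0).
import openai

REVISION_TEMPLATE = (
    "You will be given a conversation between two people. "
    "Here is what you should do:\n"
    "1. Identify the inconsistent utterances.\n"
    "2. Give some reasons why they are inconsistent.\n"
    "3. Modify the dialogue based on previously identified utterances.\n"
    "4. The rewritten dialogue should be more than 6 turns.\n"
    "Here is the conversation: {dialogue}\n"
    "You MUST follow the format as: {output_format}"
)

def revise_dialogue(dialogue: str, output_format: str) -> str:
    """Ask gpt-3.5-turbo to identify inconsistencies and rewrite a dialogue."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": REVISION_TEMPLATE.format(
                dialogue=dialogue, output_format=output_format),
        }],
    )
    return response["choices"][0]["message"]["content"]
```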
### Potential Intent Detection
In the second stage of our procedure, we aim to identify potential task-related intents in the chit-chat dialogues. To achieve this, we collect a set of intents from the SGD dataset (Rastogi et al., 2020). However, we only include those intents that can trigger a transition to TOD, such as "FindMovie", and exclude others like "GetMovieTime", as our focus is on the first topic-related intent ("GetMovieTime" should come after "FindMovie"). Furthermore, since we consider the agent as a businessperson seeking potential opportunities, we exclude intents such as "TransferMoney" that are not suitable for our scenarios. For reference, Table 2 lists all the intents that are included in our study. The prompt used is shown below.
You will be given a dialogue and a list of topics of conversation. Please tell me which of the following topics will be the most reasonable one to be pivoted to in the dialogue.
Here is the dialogue: <Dialogue>
Here is the list of topics: <Intent List>
NOTE:
1. You MUST choose one of the above topics.
2. DO NOT create any topics that are not listed above.
3. You should choose the one that is the most related to the topic.
The output should follow the format below: <output_format>
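The following sketch illustrates this step; the intent list mirrors Table 2, while the `chat` helper and the substring-based validation of the free-form reply are simplifying assumptions on our part, not the released implementation.

```python
# Sketch of intent detection constrained to a predefined intent list
# (assumes the legacy openai<1.0 interface).
from typing import Optional
import openai

INTENT_LIST = [
    "FindAttraction", "FindRestaurants", "FindMovie", "LookupMusic",
    "SearchHotel", "FindEvents", "GetCarsAvailable",
    "SearchRoundtripFlights", "GetRide", "SearchOnewayFlight", "FindBus",
]

def chat(prompt: str) -> str:
    """Single-turn call to gpt-3.5-turbo."""
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def detect_intent(dialogue: str) -> Optional[str]:
    prompt = (
        "You will be given a dialogue and a list of topics of conversation. "
        "Please tell me which of the following topics will be the most "
        "reasonable one to be pivoted to in the dialogue.\n"
        f"Here is the dialogue: {dialogue}\n"
        f"Here is the list of topics: {', '.join(INTENT_LIST)}\n"
        "You MUST choose one of the above topics."
    )
    reply = chat(prompt)
    # Accept only replies naming a predefined intent; anything else is
    # treated as an unknown intent and dropped during postprocessing.
    for intent in INTENT_LIST:
        if intent in reply:
            return intent
    return None
```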
### Dialogue Continuation
Figure 1: Illustration of our proposed pipeline utilizing LLMs to generate human-like dialogues.

We utilize the revised dialogues and potential intents identified by LLMs as input to continue the chit-chat dialogue. To ensure a natural and coherent dialogue, we provide several instructions to guide the LLMs in their generation. Firstly, we instruct the agent to steer the conversation towards topics related to the identified intent, considering that the user may not have a specific intent in mind at the beginning or middle of the conversation. Secondly, we instruct the LLMs to find a topic that intersects between the current topic and the identified intent before transitioning the dialogue to the target intent to avoid abrupt changes in topics. Lastly, we ask LLMs to make the transition between topics as smooth as possible, potentially involving multiple turns in the dialogue. The detailed instructions are shown below.
### Transition Boundary Detection
To accurately initiate the TOD process, we require a trigger point that signifies when the user first mentions or implies something related to the detected intent. This boundary helps determine whether to start the TOD immediately or continue the dialogue. To establish this trigger point, we instruct LLMs to select a turn in the conversation where the user explicitly mentions something related to the detected intent. It is worth noting that we only consider turns in which the user explicitly mentions the intent, in order to avoid any confusion caused by indirectly related turns. For instance, if the intent is "FindMovie", LLMs may mistakenly consider playing video games to be implicitly related to watching a movie.
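As an illustration of how a selected boundary might be consumed downstream, here is a small sketch that maps an LLM-returned boundary utterance back to a turn index. The alternating user/agent convention and the substring matching are simplifying assumptions of ours, not the released implementation.

```python
# Sketch: locate the annotated transition boundary inside a dialogue.
from typing import List, Optional

def find_transition_index(turns: List[str], boundary: str) -> Optional[int]:
    """Return the index of the first user turn containing the boundary
    utterance, assuming turns alternate user (even index) / agent (odd)."""
    needle = boundary.strip().lower()
    for i, turn in enumerate(turns):
        if i % 2 == 0 and needle in turn.lower():
            return i
    return None
```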
## 3 SalesBot 2.0 Dataset
Our data generation framework enables us to create the SalesBot 2.0 dataset, a revised version of SalesBot that boasts improved intent coverage, longer chit-chat dialogues as context, and smoother and longer transition turns. To provide a glimpse of the quality of our dataset, we present an example dialogue in Table 3.
### Postprocessing
Additionally, we conduct postprocessing on the SalesBot 2.0 dataset to eliminate noise and formatting errors caused by LLMs. One issue we frequently encountered is the LLM's tendency to generate unknown intents, which can pose challenges because our pipeline assumes dialogues are generated based on a predefined list of intents, and such outliers lack a corresponding ontology. To maintain consistency and simplicity, we filter out any data that deviates from our predetermined output format.
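A sketch of this filter is shown below; the field names (`dialogue`, `intent`, `transition_candidates`) are hypothetical placeholders for the released schema, and the allowed-intent set mirrors Table 2.

```python
# Sketch of the postprocessing filter: drop generations whose intent is
# unknown or whose fields deviate from the expected output format.
ALLOWED_INTENTS = {
    "FindAttraction", "FindRestaurants", "FindMovie", "LookupMusic",
    "SearchHotel", "FindEvents", "GetCarsAvailable",
    "SearchRoundtripFlights", "GetRide", "SearchOnewayFlight", "FindBus",
}

def is_well_formed(example: dict) -> bool:
    required = {"dialogue", "intent", "transition_candidates"}
    return required.issubset(example) and example["intent"] in ALLOWED_INTENTS

raw_generations = [
    {"dialogue": [...], "intent": "FindMovie", "transition_candidates": [...]},
    {"dialogue": [...], "intent": "BuyHouse"},  # unknown intent: dropped
]
cleaned = [ex for ex in raw_generations if is_well_formed(ex)]
```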
| Intent | #Dialogues |
| --- | --- |
| FindAttraction | 1,440 |
| FindRestaurants | 1,297 |
| FindMovie | 1,138 |
| LookupMusic | 523 |
| SearchHotel | 356 |
| FindEvents | 394 |
| GetCarsAvailable | 103 |
| SearchRoundtripFlights | 92 |
| GetRide | 13 |
| SearchOnewayFlight | 25 |
| FindBus | 10 |

Table 2: Distribution of the target intents.
### Additional Annotations
We have conducted additional annotation to ensure that the LLM can accurately detect intents related to the context and initiate a task-oriented dialogue at a reasonable timing. This annotation was done using two prompts: "Does the user show the intent of <given_intent>?" and "Is it reasonable for the agent to suggest anything partially related to the intent <given_intent>?". The annotation results are summarized in Table 4. The results demonstrate that in nearly 80% of the dialogues, the user mentions the intent after the transition, indicating that the LLM can effectively detect the intent related to the context and initiate a task-oriented dialogue. Although there are some dialogues where the user does not explicitly mention the given intent, approximately 96% of the dialogues are still deemed reasonable for the agent to suggest anything related to the given intent. These annotations can be utilized by developers to decide how aggressively an agent should behave when promoting products.
### Dataset Statistics
Table 5 provides a statistical comparison between our proposed SalesBot 2.0 dataset and the original SalesBot 1.0. The data shows that SalesBot 2.0 contains more turns in total, and on average, has one more chit-chat turn compared to SalesBot 1.0. Additionally, SalesBot 2.0 has longer transition turns, with an average length of over three turns, while SalesBot 1.0 generates only one turn as a transition response to TOD.
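For instance, the averages in Table 5 could be recomputed from the annotated examples along the following lines; the field names are again hypothetical stand-ins for the released schema.

```python
# Sketch: recompute the Table 5 averages from annotated examples.
from typing import Dict, List, Tuple

def average_turns(dataset: List[Dict]) -> Tuple[float, float]:
    """Split each dialogue at its annotated transition boundary and
    average the chit-chat and transition segment lengths."""
    chit, trans = [], []
    for ex in dataset:
        boundary = ex["transition_index"]  # first TOD-related user turn
        chit.append(boundary)
        trans.append(len(ex["turns"]) - boundary)
    n = len(dataset)
    return sum(chit) / n, sum(trans) / n
```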
#### 3.3.1 Dialogue Quality Evaluation
We conduct an evaluation of our SalesBot 2.0 dataset by sampling 500 dialogues from both SalesBot 2.0 and SalesBot 1.0 for comparison. In the evaluation, we prompt LLMs to provide scores on two key aspects: **naturalness** and **consistency**, which are the known weaknesses of SalesBot 1.0. The results, shown in Table 6, demonstrate that SalesBot 2.0 achieves a naturalness score about 0.7 points higher than SalesBot 1.0, indicating that our revised dataset is more human-like overall. Moreover, SalesBot 2.0 exhibits a consistency score about 1.4 points higher than SalesBot 1.0, suggesting that the overall dialogue is more coherent. The prompt template used for evaluation is provided below for reference.
| | Naturalness↑ | Consistency↑ |
| --- | --- | --- |
| SalesBot 1.0 | 3.574 | 2.656 |
| SalesBot 2.0 | **4.258** | **4.026** |

Table 6: Quality comparison of SalesBot 1.0 and 2.0.
| #Dialogues | Yes | No | Total |
| --- | --- | --- | --- |
| HasIntent? | 4,194 | 1,197 | 5,391 |
| Suggest? | 5,167 | 224 | 5,391 |
| Both? | - | 182 | - |

Table 4: Transition timing quality of SalesBot 2.0.
| Avg. #Turns | Chit-chat | Trans. | Total |
| --- | --- | --- | --- |
| SalesBot 1.0 | 4.49 | 1.00 | 5.49 |
| SalesBot 2.0 | 5.22 | 4.55 | 9.29 |

Table 5: Number of turns in SalesBot 1.0 and 2.0.
You will be given a dialogue, where the agent is trying to pivot the dialogue to a certain topic. Your goal is as follows:
1. Based on the naturalness of the dialogue, score from 1 to 5 on a continuous scale.
2. Based on the consistency of the entire dialogue, score from 1 to 5 on a continuous scale.
3. You should only give points, and do not do anything else.
Output Format: <output_format>
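A sketch of the corresponding scoring loop is given below; the regex parsing of the judge's reply is an assumption about the `<output_format>`, and `eval_template` stands for the prompt template above.

```python
# Sketch of the LLM-as-judge evaluation: sample dialogues, prompt the
# judge with the template above, and average the two returned scores.
import random
import re
import openai

def chat(prompt: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp["choices"][0]["message"]["content"]

def judge_quality(dialogues, eval_template, n_samples=500):
    nat, con = [], []
    for d in random.sample(dialogues, min(n_samples, len(dialogues))):
        reply = chat(eval_template.format(dialogue=d))
        scores = re.findall(r"\d+(?:\.\d+)?", reply)
        if len(scores) >= 2:  # expect [naturalness, consistency]
            nat.append(float(scores[0]))
            con.append(float(scores[1]))
    return sum(nat) / len(nat), sum(con) / len(con)
```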
## 4 Related Work
Our study focuses on a conversation scenario where a conversational agent attempts to steer the discussion towards determining whether the user is interested in receiving recommendations. This scenario has been explored in various related works.
**Persuasive Dialogue Construction.** The three studies (Hiraoka et al., 2014; Yoshino et al., 2018; Wang et al., 2019) focused on persuasive dialogue construction in different scenarios. Hiraoka et al. (2014) annotated 34 dialogues in which a salesperson with expertise attempted to persuade a customer to purchase a camera. Yoshino et al. (2018) generated 200 dialogues through crowdsourcing, where one participant persuades the other to adopt a suggestion, such as cleaning a room. In comparison, Wang et al. (2019) collected 1,017 dialogues, where one participant was convinced to donate to a specific charity.
While all of these datasets are limited to specific scenarios, our framework can generate dialogues with any potential intent, making it more versatile. Additionally, our dataset is much larger, two to three times bigger than the previous ones, which makes it a valuable resource for training and evaluating non-collaborative conversational agents.
**Conversational Recommendation Datasets.** Previous studies have developed various datasets for conversational recommendation systems. For instance, Li et al. (2019) created a large-scale dataset with a focus on recommendation. Other researchers have utilized knowledge graphs to collect dialogues by extracting paths consisting of attribute and entity nodes from a knowledge base and asking annotators to generate recommendation dialogues following the flow of the extracted path (Wu et al., 2019; Zhou et al., 2020; Xu et al., 2020).
Moreover, Hayati et al. (2020) aimed to collect a socially interactive conversational recommendation dialogue dataset, called INSPIRED. They designed an annotation scheme based on social science theories regarding recommendation strategies and used it to annotate the collected dialogues. Their goal was to better understand how humans make recommendations in communication. Manzoor and Jannach (2022) further improved the dataset and released INSPIRED 2.0, claiming that the original dataset had numerous incorrect annotations.
In contrast, our work does not solely focus on the task of "recommendation", but rather on the ability of the agent to identify potential business opportunities and navigate the dialogue topics towards a desired outcome. Furthermore, our framework does not rely on human annotators to collect data, as it can automatically generate human-like dialogues. This sets our approach apart from previous datasets and provides a more versatile and scalable solution for developing conversational agents.
**Combination of Chit-Chat and TOD.** Recent studies have aimed to combine task-oriented and open-domain dialogues to enhance the naturalness and social engagement of the conversation. One approach is to incorporate chit-chat responses into existing task-oriented datasets, as seen in Sun et al. (2021). Another approach is to develop models that handle predefined scenarios integrating chit-chat and task-oriented dialogues (Li et al., 2022). However, these approaches assume that the user has a clear goal to accomplish and that chit-chat responses simply enrich the conversation. In contrast, our study assumes that the user has no explicit goal or intent, and the conversational agent must detect any potential intent and pivot the dialogue smoothly to the related topic. This requires a more nuanced approach that can identify and respond to implicit cues, making the conversation more natural and engaging.
## 5 Conclusion
This paper presents a novel framework for generating intent-oriented dialogues by utilizing the commonsense knowledge of LLMs. Our proposed SalesBot 2.0 dataset contains thousands of human-like dialogues, which exhibit smoother transitions, enhanced naturalness, and better consistency when compared to existing datasets. This work is a significant contribution to the development of more sophisticated and effective conversational agents.
2303.16703 | Flach system on Quaternionic Hilbert--Blumenthal surfaces and
distinguished periods | We study arithmetic properties of certain quaternionic periods of Hilbert
modular forms arising from base change of elliptic modular forms. These periods
which we call the distinguished periods are closely related to the notion of
distinguished representation that appear in work of
Harder--Langlands--Rapoport, Lai, Flicker--Hakim on the Tate conjectures for
the Hilbert--Blumenthal surfaces and their quaternionic analogues. In
particular, we prove an integrality result on the ratio of the distinguished
period and the quaternionic Petersson norm associated to the modular form. Our
method is based on an Euler system argument initiated by Flach by producing
elements in the motivic cohomologies of the quaternionic Hilbert--Blumenthal
surfaces with control of their ramification behaviours. We show that these
periods give natural bounds for certain subspaces of the Selmer groups of these
quaternionic Hilbert--Blumenthal surfaces. The lengths of these subspaces can
be determined by using the Taylor--Wiles method and can be related to the
quaternionic Petersson norms of the modular forms. | Haining Wang | 2023-03-29T13:54:15Z | http://arxiv.org/abs/2303.16703v3 | # Flach system on quaternionic Hilbert-Blumenthal surfaces and distinguished periods
###### Abstract.
We study arithmetic properties of certain quaternionic periods of Hilbert modular forms arising from base change of elliptic modular forms. These periods, which we call the distinguished periods, are closely related to the notion of distinguished representation which appears in the work of Harder-Langlands-Rapoport, Lai, and Flicker-Hakim on the Tate conjectures for the Hilbert-Blumenthal surfaces and their quaternionic analogues. In particular, we prove an integrality result on the ratio of the distinguished period and the quaternionic Petersson norm associated to the modular form. Our method is based on an Euler system argument initiated by Flach by producing elements in the motivic cohomologies of the quaternionic Hilbert-Blumenthal surfaces with control of their ramification behaviours. We show that these periods give natural bounds for certain subspaces of the Selmer groups of these quaternionic Hilbert-Blumenthal surfaces. The lengths of these subspaces can be determined by using the Taylor-Wiles method and can be related to the quaternionic Petersson norms of the modular forms.
2000 Mathematics Subject Classification: Primary 11G18, 11R34, 14G35
###### Contents
* 1 Introduction
* 1.1 Main results
* 1.2 Strategy of the proof
* 1.3 Notations and conventions
* 2 Quaternionic Hilbert-Blumenthal surfaces
* 2.1 A quaternionic Shimura surface
* 2.2 Integral model for the quaternionic Shimura surface
* 2.3 Supersingular locus of the quaternionic Shimura surface
* 3 Quaternionic Hirzebruch-Zagier divisor
* 3.1 Shimura curves and Drinfeld uniformization
* 3.2 Quaternionic Hirzebruch-Zagier divisor
* 3.3 Integral Tate cycles on the special fiber
* 4 Flach classes and reciprocity formula
* 4.1 Flach class of the quaternionic Shimura surface
* 4.2 Local behaviour of Flach classes
* 5 Distinguished representation and base change
* 5.1 Distinguished representation
* 5.2 Asai representation and base change
* 6 Bounding the adjoint Selmer groups
* 6.1 Generalities on Selmer groups
* 6.2 Statement of the main result
* 6.3 The Flach system argument
* 6.4 Comparison of quaternionic periods
## 1. Introduction
In this article, we study arithmetic properties of certain integral periods of quaternionic Hilbert modular forms arising from base change. These periods are closely related to the notion of distinguished representation which plays a prominent role in the work of Harder-Langlands-Rapoport [HLR], Lai [Lai], and Flicker-Hakim [FH] on the Tate conjectures for the Hilbert-Blumenthal surfaces and their quaternionic analogues. We will therefore call these periods the distinguished periods. While the representation-theoretic properties of these distinguished periods have been amply studied in the literature and have also been generalized and studied for higher rank groups using the relative trace formula, their arithmetic properties seem to be less well understood. We will provide some results in this direction below. More precisely, we will compare the distinguished period with another well-known period, namely the Petersson norm of the associated definite quaternionic modular form. We will show that the \(\lambda\)-adic valuation of the distinguished period is greater than or equal to the \(\lambda\)-adic valuation of the Petersson norm for a prime \(\lambda\) of the Hecke field over a fixed rational prime \(l\). It is well-known that the Petersson norm of a definite quaternionic modular form can be related to the congruence module of the quaternionic modular form, and hence our result shows that the distinguished period also captures this information. We suspect that the distinguished period should also contain the information of the base change congruence module studied by Hida [Hid] and Urban-Tilouine [UT], but the method of this article does not seem able to provide such information.
This result should be viewed as an analogue of the integrality result on the ratio of Petersson norms of modular and indefinite quaternionic modular forms proved by Prasanna in [Pra], where he also shows that the positive valuation part of this ratio should be related to the local Tamagawa factors which measure quantitatively the level lowering congruences of the modular form. Our result should also be reminiscent of the result of Ribet-Takahashi [RT] comparing the degree of the modular parametrization of an elliptic curve by a classical modular curve and the degree of the parametrization of the elliptic curve by an indefinite Shimura curve. The results of Prasanna have been extended to a program to understand the ratios of Petersson norms of general quaternionic Hilbert modular forms [IP1], [IP2]. It would be interesting to see if our method could be useful in this program.
The method used to study these distinguished periods is based on the so-called Flach system of Jacquet-Langlands type introduced in the companion article [Wang]. We will realize the \(\lambda\)-adic valuation of the distinguished period as a natural bound for the length of a certain subspace of the Bloch-Kato Selmer group of the middle degree cohomology group of a suitable quaternionic Hilbert-Blumenthal surface. This subspace turns out to be isomorphic to the Bloch-Kato Selmer group of the adjoint Galois representation attached to the definite quaternionic modular form whose base change defines the distinguished period. The length of the latter Selmer group can be determined using the Taylor-Wiles method and is equal to the \(\lambda\)-adic valuation of the Petersson norm of the definite quaternionic modular form, and hence is less than or equal to the \(\lambda\)-adic valuation of the distinguished period. This shows the desired integrality result for the ratio of periods mentioned above. We remark that a Flach system of Jacquet-Langlands type is a simplified geometric Euler system in the sense of [Weston3] that should reflect the Jacquet-Langlands correspondence in the cohomologies of Shimura varieties. In [Wang], this is reflected using arithmetic level raising results on the product of two Shimura curves. In this article, this is realized using the Tate conjecture for the special fiber of the quaternionic Hilbert-Blumenthal surface proved in [TX], see also [Lan]. The version of the Tate conjectures for special fibers of Shimura varieties needed to construct Flach systems similar to the one in this article has been proved for a large class of Shimura varieties in [TX] and [XZ]; therefore we expect the strategy of this article and [Wang] to be useful for studying integrality questions for ratios of periods coming from definite Shimura sets for other reductive groups. In a very similar setting, we will construct a Flach system of Jacquet-Langlands type on Picard modular surfaces and study the periods of automorphic forms on \(\mathrm{U}(3)\) coming from theta lifts of automorphic forms on \(\mathrm{U}(2)\).
It is also worth noting that automorphic periods usually have close connections to special values of \(L\)-functions, and Selmer groups can be studied in terms of these special values through Euler system arguments relating the automorphic periods, and hence the special values, to the size of the Selmer group via a so-called reciprocity formula. Here we turn things around and use known bounds for Selmer groups to study integral periods.
### Main results
We will introduce some notations before we state our main results. Let \(l\geq 5\) be a fixed prime. Let \(f\) be a normalized newform in \(\mathrm{S}_{2}(\Gamma_{0}(\mathrm{N}))\) whose associated automorphic representation of \(\mathrm{GL}_{2}(\mathbf{A})\) is denoted by \(\pi\). We will assume that \(\mathrm{N}\) admits a decomposition \(\mathrm{N}=\mathrm{N}^{+}\mathrm{N}^{-}\) with \((\mathrm{N}^{+},\mathrm{N}^{-})=1\) such that \(\mathrm{N}^{-}\) is square-free and consists of an odd number of prime factors. Let \(\overline{\mathrm{B}}\) be the definite quaternion algebra of discriminant \(\mathrm{N}^{-}\). Then \(f\) admits a normalized Jacquet-Langlands transfer \(f^{\dagger}\) to an automorphic form for the group \(\mathrm{G}(\overline{\mathrm{B}})=\overline{\mathrm{B}}^{\times}\) whose associated automorphic representation will be denoted by \(\pi^{\circ}=\pi^{\overline{\mathrm{B}}}\). We will be concerned with the base change \(\phi^{\dagger}\) of \(f^{\dagger}\) by a real quadratic field \(\mathrm{F}\). Let \(\mathrm{E}=\mathbf{Q}(f)\) be the Hecke field of \(f\) and \(\iota_{l}:\overline{\mathbf{Q}}\hookrightarrow\mathbf{C}_{l}\) be a fixed embedding which defines a place \(\lambda\) in \(\mathrm{E}\). We will denote by \(\mathrm{E}_{\lambda}\) the completion of \(\mathrm{E}\) at \(\lambda\) and by \(\mathcal{O}_{\lambda}\) its valuation ring. We will also denote by \(\lambda\) the maximal ideal of \(\mathcal{O}_{\lambda}\) and will fix a uniformizer \(\varpi\) for the ideal \(\lambda\). Let \(k_{\lambda}\) be the residue field of \(\mathcal{O}_{\lambda}\).
Let \(\overline{\mathrm{Q}}=\overline{\mathrm{B}}\otimes\mathrm{F}\) be the definite quaternion algebra obtained by base change to \(\mathrm{F}\) and let \(\mathrm{G}(\overline{\mathrm{Q}})\) be the algebraic group defined by the Weil restriction of \(\overline{\mathrm{Q}}^{\times}\) from \(\mathrm{F}\) to \(\mathbf{Q}\). Then \(\phi^{\dagger}\) is contained in the automorphic representation \(\pi^{\overline{\mathrm{Q}}}\) of \(\mathrm{G}(\overline{\mathrm{Q}})\) obtained by base change of \(\pi^{\circ}\) to \(\mathrm{F}\). The automorphic form \(\phi^{\dagger}\) can be realized as a smooth \(\mathcal{O}_{\lambda}\)-valued function on a certain Shimura set \(\mathrm{Z}(\overline{\mathrm{Q}})\) associated to the group \(\mathrm{G}(\overline{\mathrm{Q}})\) with suitable level structure. The Shimura
set \(\mathrm{Z}(\overline{\mathrm{B}})\) defined similarly from \(\mathrm{G}(\overline{\mathrm{B}})\) can be embedded in \(\mathrm{Z}(\overline{\mathrm{Q}})\) and this can be seen as an analogue of the Hirzebruch-Zagier morphism for the Shimura sets. We define the _distinguished period_ by
\[\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger})=\sum_{z\in\mathrm{Z}(\overline{ \mathrm{B}})}\phi^{\dagger}(z).\]
Recall that a cuspidal automorphic representation \(\Pi\) of \(\mathrm{GL}_{2}(\mathbf{A}_{\mathrm{F}})\) is called distinguished if there is a function \(\phi\) in \(\Pi\) such that
\[\mathcal{P}_{\mathrm{dis}}(\phi)=\int_{\mathrm{GL}_{2}(\mathbf{Q})\backslash \mathrm{GL}_{2}(\mathbf{A})}\phi(g)dg\]
is non-vanishing. It is known that \(\Pi\) is distinguished only if \(\Pi_{\infty}\) is in the discrete series and \(\Pi\) comes from a cuspidal automorphic representation \(\pi\) of \(\mathrm{GL}_{2}(\mathbf{A})\) via base change. The notion of distinguished representation has played a prominent role in the proof of the Tate conjecture for Hilbert-Blumenthal surfaces by [HLR] and for the quaternionic Hilbert-Blumenthal surfaces by [Lai] and [FH]. Moreover, it is known that being distinguished is preserved under the Jacquet-Langlands correspondence; see Proposition 5.1 for the precise statement.
The above discussion explains why the distinguished period is of great arithmetic and representation-theoretic interest, and also why we restrict to the situation where the period is defined by an automorphic form coming from base change. This set-up also leads us to consider a closely related period of \(f\), namely the Petersson norm of \(f^{\dagger}\), which is defined by
\[\mathcal{P}(f^{\dagger})=\sum_{z\in\mathrm{Z}(\overline{\mathrm{B}})}f^{ \dagger}(z)^{2}.\]
and is referred to as the _quaternionic period_ of \(f\) in [Wang]. Our main result will be concerned with the \(\lambda\)-integrality of the ratio \(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger})/\mathcal{P}(f^{\dagger})\) of these two periods. Before we state it, we introduce several assumptions that we impose on the Galois representation attached to \(f\), or equivalently to \(f^{\dagger}\). Let \(\rho_{\pi^{\circ}}:\mathrm{G}_{\mathbf{Q}}\to\mathrm{GL}_{2}(\mathrm{E}_{\lambda})\) be the Galois representation associated to \(f^{\dagger}\) by the Eichler-Shimura construction and the Jacquet-Langlands correspondence. We denote by \(\overline{\rho}_{\pi^{\circ}}\) the residual representation of \(\rho_{\pi^{\circ}}\). Let \(\Sigma^{+}\) be the set of primes dividing \(\mathrm{N}^{+}\). Let \(\Sigma^{-}_{\mathrm{ram}}\) be the set of primes \(r\) dividing \(\mathrm{N}^{-}\) such that \(l\mid r^{2}-1\) and \(\Sigma^{-}_{\mathrm{mix}}\) be the set of primes \(r\) dividing \(\mathrm{N}^{-}\) such that \(l\nmid r^{2}-1\).
**Assumption 1.1**.: We make the following assumptions on \(\bar{\rho}_{\pi^{\circ}}\).
1. \(\bar{\rho}_{\pi^{\circ}}|_{\mathrm{G}_{\mathbf{Q}(\zeta_{l})}}\) is absolutely irreducible;
2. The image of \(\bar{\rho}_{\pi^{\circ}}\) contains \(\mathrm{GL}_{2}(\mathbf{F}_{l})\);
3. \(\overline{\rho}_{\pi^{\circ}}\) is minimal at primes in \(\Sigma^{+}\) in the sense that all the liftings of \(\overline{\rho}_{\pi^{\circ}}|_{\mathrm{G}_{\mathbf{Q}_{r}}}\) are minimally ramified for \(r\in\Sigma^{+}\);
4. \(\overline{\rho}_{\pi^{\circ}}\) is ramified at primes in \(\Sigma^{-}_{\mathrm{ram}}\).
Let \(\mathrm{T}_{\pi^{\circ}}\) be a suitable Galois stable lattice in \(\rho_{\pi^{\circ}}\) which we will make precise in the main body of this article and \(\mathrm{T}_{\pi^{\circ},n}\) be the reduction of \(\mathrm{T}_{\pi^{\circ}}\) modulo \(\lambda^{n}\). We will set \(\mathrm{M}_{n}=\mathrm{Sym}^{2}(\mathrm{T}_{\pi^{\circ},n})\).
**Theorem 1.2**.: _Let \(f\in\mathrm{S}_{2}(\Gamma_{0}(\mathrm{N}))\) be a newform of weight \(2\) with \(\mathrm{N}=\mathrm{N}^{+}\mathrm{N}^{-}\) such that \(\mathrm{N}^{-}\) is squarefree and has an odd number of prime factors. Let \(f^{\dagger}\) be the automorphic form on \(\mathrm{Z}(\overline{\mathrm{B}})\) corresponding to \(f\) under the Jacquet-Langlands correspondence and let \(\phi^{\dagger}\) be the base change of \(f^{\dagger}\) considered as an automorphic form on \(\mathrm{Z}(\overline{\mathrm{Q}})\)._
1. _We assume that the residual Galois representation_ \(\overline{\rho}_{\pi^{\circ}}\) _satisfies Assumption_ 1.1_;_
2. _We further assume that_ \[\mathrm{H}^{1}(\mathbf{Q}(\mathrm{M}_{n})/\mathbf{Q},\mathrm{M}_{n})=0\] _for every_ \(n\geq 1\) _and where_ \(\mathbf{Q}(\mathrm{M}_{n})\) _is the splitting field of the Galois module_ \(\mathrm{M}_{n}\)_._
_Then we have the following inequality_
\[\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger}))\geq\mathrm{ ord}_{\lambda}(\mathcal{P}(f^{\dagger})).\]
It should also be interesting to study the integrality question for the ratio between the distinguished period \(\mathcal{P}_{\mathrm{dis}}(f_{\mathrm{F}})\) of the base change \(f_{\mathrm{F}}\) of \(f\) and the classical Petersson norm \(\langle f,f\rangle\) of \(f\). The \(p\)-adic reciprocity formulas proved in [LLZ], [LSZ] should be more suitable for this purpose. Our main result is the analogue of this ratio in the compact (definite) setting under the Jacquet-Langlands correspondence. We also suspect that when the \(\lambda\)-adic valuation of this ratio is positive, the positive part may be related to the base-change congruence ideal as defined by Hida [Hid]. We hope to return to this question in the near future.
### Strategy of the proof
We now briefly outline the proof of the main integrality result in Theorem 1.2. We define the divisible Galois module \(\mathcal{M}_{\pi^{\circ}}\) by the exact sequence

\[0\to\mathrm{T}_{\pi^{\circ}}\to\mathrm{V}_{\pi^{\circ}}\to\mathcal{M}_{\pi^{\circ}}\to 0.\]
The key step towards proving the integrality of the ratio of the distinguished period and the quaternionic period is to prove the following theorem bounding the Bloch-Kato Selmer group of \(\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{\circ}})\) in terms of \(\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger}))\).
**Theorem 1.3**.: _Let \(\nu=\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger}))\) and \(\eta=\varpi^{\nu}\)._
1. _We assume that the residual Galois representation_ \(\overline{\rho}_{\pi^{\circ}}\) _is absolutely irreducible._
2. _We further assume that_ \[\mathrm{H}^{1}(\mathbf{Q}(\mathrm{M}_{n})/\mathbf{Q},\mathrm{M}_{n})=0\] _for every_ \(n\geq 1\) _and where_ \(\mathbf{Q}(\mathrm{M}_{n})\) _is the splitting field of the Galois module_ \(\mathrm{M}_{n}\)_._
_Then \(\eta\) annihilates the Selmer group \(\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{\circ}}))\), in particular_
\[\mathrm{length}\;\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{Ad}^{0}(\mathcal{M}_{ \pi^{\circ}}))\leq\nu.\]
The proof of this theorem relies on the construction of the so-called Flach system of Jacquet-Langlands type on a certain Shimura surface, which we now describe in more detail. Let \(p\) be a prime which is inert in \(\mathrm{F}\). We then obtain another quaternion algebra \(\mathrm{B}\) by switching the invariants of \(\overline{\mathrm{B}}\) at \(p\) and \(\infty\). Then \(\mathrm{Q}=\mathrm{B}\otimes\mathrm{F}\) is a totally indefinite quaternion algebra over \(\mathrm{F}\) obtained from \(\overline{\mathrm{Q}}\) by switching the invariants at the two archimedean places of \(\mathrm{F}\). Note that \(\pi^{\overline{\mathrm{Q}}}\) then admits a Jacquet-Langlands transfer \(\pi^{\mathrm{Q}}\) as a representation of \(\mathrm{G}(\mathrm{Q})\), where we denote by \(\mathrm{G}(\mathrm{Q})\) the algebraic group given by the Weil restriction of \(\mathrm{Q}^{\times}\) from \(\mathrm{F}\) to \(\mathbf{Q}\). Let \((\rho_{\pi^{\mathrm{Q}}},\mathrm{V}_{\pi^{\mathrm{Q}}})\) be the Galois representation of \(\mathrm{G}_{\mathrm{F}}\) attached to \(\pi^{\mathrm{Q}}\). Then it is well-known that the Asai representation \(\mathrm{As}(\rho_{\pi^{\mathrm{Q}}})=\mathrm{As}(\mathrm{V}_{\pi^{\mathrm{Q}}})\) can be realized on the middle degree cohomology of a certain Shimura surface \(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q}\) which we will refer to as the _quaternionic Hilbert-Blumenthal surface_. Roughly speaking, this Shimura surface is defined by a moduli problem \(\mathrm{X}(\mathrm{Q})\) which classifies abelian fourfolds with quaternionic multiplication by \(\mathrm{Q}\) and additional structures.
The integral cohomology of \(X(Q)\otimes\mathbf{Q}\) with coefficients in \(\mathcal{O}_{\lambda}\) defines a natural lattice \(T_{\pi^{Q}}\) in \(V_{\pi^{Q}}\) and we define the divisible Galois module \(\mathcal{M}_{\pi^{Q}}\) using the exact sequence
\[0\to T_{\pi^{Q}}\to V_{\pi^{Q}}\to\mathcal{M}_{\pi^{Q}}\to 0.\]
In our set-up, the Asai representation \(As(\mathcal{M}_{\pi^{Q}})(-1)\) splits into
\[As(\mathcal{M}_{\pi^{Q}})(-1)=Ad^{0}(\mathcal{M}_{\pi^{\circ}})\oplus E_{ \lambda}/\mathcal{O}_{\lambda}(\omega_{F/Q})\]
where \(\omega_{F/Q}\) is the quadratic character associated to \(F\). Since \(As(T_{\pi^{Q}})\) is realized in the \(\pi^{Q}\)-isotypic part \(H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda}(2))\) of the middle degree cohomology of \(X(Q)\) and \(Ad^{0}(\mathcal{M}_{\pi^{\circ}})(1)\) is the Kummer dual of \(Ad^{0}(\mathcal{M}_{\pi^{\circ}})\), we are naturally led to construct elements in the Galois cohomology group \(H^{1}(\mathbf{Q},H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda}(2)))\). For this purpose, we consider the motivic cohomology group \(H^{3}_{\mathcal{M}}(X(Q)\otimes\mathbf{Q},\mathbf{Z}(2))\) of the surface \(X(Q)\otimes\mathbf{Q}\), whose elements can be represented by pairs \((Z,f)\) where \(Z\) is a curve on \(X(Q)\otimes\mathbf{Q}\) and \(f\) is a rational function on \(Z\) whose Weil divisor on \(X(Q)\otimes\mathbf{Q}\) is trivial. Let \(X(B)\otimes\mathbf{Q}\) be the Shimura curve associated to the indefinite quaternion algebra \(B\); then we have the quaternionic Hirzebruch-Zagier morphism
\[\theta:X(B)\otimes\mathbf{Q}\to X(Q)\otimes\mathbf{Q}\]
which can even be defined integrally. Then we define the _Flach element_ by the pair
\[\Theta^{[p]}=(\theta_{*}X(B)\otimes\mathbf{Q},p).\]
The construction of this element is inspired by that of [Flach1] which is used to bound the symmetric square Selmer group for an elliptic curve. However, for our purpose, we do not need to include the Siegel modular unit in our definition (which is also not available for us as we are using the compact Shimura curve). On the other hand, the original construction of the Flach element has applications to Iwasawa theory. In a closely related setting, this is carried out in [LLZ]. Note that they consider the case when the Hilbert modular form is not arising from base change and hence our result is of complimentary nature to their work. There is an Abel-Jacobi map
\[AJ_{\pi^{Q}}:H^{3}_{\mathcal{M}}(X(Q)\otimes\mathbf{Q},\mathbf{Z}(2))\to H^{1} (\mathbf{Q},H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{\mathbf{Q}},\mathcal{O}_{ \lambda}(2)))\]
defined using the Chern character map from the \(K\)-theory to etale cohomology. The class \(\kappa^{[p]}\) defined by \(AJ_{\pi^{Q}}(\Theta^{[p]})\) is called the _Flach class_. The local ramification behaviour of the class \(\kappa^{[p]}\) can be analyzed. In particular at \(p\), we are concerned with singular quotient \(\partial_{p}(\kappa^{[p]})\in H^{1}_{\sin}(\mathbf{Q}_{p},H^{2}_{\pi^{Q}}(X(Q) \otimes\overline{\mathbf{Q}}_{p},\mathcal{O}_{\lambda}(2)))\) of \(\kappa^{[p]}\). Note that \(H^{1}_{\sin}(\mathbf{Q}_{p},H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{\mathbf{Q}}_ {p},\mathcal{O}_{\lambda}(2))\) is isomorphic to \(H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1) ^{G_{F_{p}}}\). If we choose \(p\) carefully, then \(H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1) ^{G_{F_{p}}}\) is isomorphic to \(\mathcal{O}_{\lambda}[Z(\overline{Q})][\pi^{\overline{Q}}]\) the \(\pi^{\overline{Q}}\)-isotypic component of the space of the \(\mathcal{O}_{\lambda}\)-valued functions on \(Z(\overline{Q})\) by the Tate conjecture for \(X(Q)\otimes\overline{\mathbf{F}}_{p}\) proved in [TX]. In particular, we can view \(\partial_{p}(\kappa^{[p]})\) as an element of \(\mathcal{O}_{\lambda}[Z(\overline{Q})][\pi^{\overline{Q}}]\). There is natural pairing \((\cdot,\cdot)\) on this space with itself. Then we can prove the following key _reciprocity formula_
\[(\partial_{p}(\kappa^{[p]}),\phi^{\dagger})=\mathcal{P}_{\mathrm{dis}}(\phi^{ \dagger}).\]
Then an Euler system argument allows us to show that \(\eta=\varpi^{\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger }))}\) annihilates the Selmer group \(H^{1}_{f}(\mathbf{Q},Ad^{0}(\mathcal{M}_{\pi^{\circ}}))\) under some technical assumptions. This shows that \(\mathrm{leng}\;H^{1}_{f}(\mathbf{Q},Ad^{0}(\mathcal{M}_{\pi^{\circ}}))\) admits an upper-bound given by \(\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger})\). On the other hand, the Selmer group of the adjoint representation \(Ad^{0}(\mathcal{M}_{\pi^{\circ}})\) is closely related to the deformation theory of the residual representation \(\overline{\rho}_{\pi^{\circ}}\). In fact, as a by-product of the so-called \(R=T\)
theorem proved using the Taylor-Wiles method, it can be shown that \(\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{\circ}}))\) has a subspace \(\mathrm{H}^{1}_{\mathrm{mix}}(\mathbf{Q},\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{ \circ}}))\) whose length is given by the \(\lambda\)-valuation of certain congruence number \(\eta(\mathrm{N}^{+},\mathrm{N}^{-})\) that detects congruences between the modular form \(f\) and modular forms in \(\mathrm{S}_{2}(\Gamma_{0}(\mathrm{N}))\) which are new at primes dividing \(\mathrm{N}^{-}\). Finally, it is well-known that the \(\lambda\)-valuation of this congruence number is exactly that of the Petersson norm of \(f^{\dagger}\) under our assumptions. This finally proves that \(\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger}))\geq\mathrm{ ord}_{\lambda}(\mathcal{P}(f^{\dagger}))\).
### Notations and conventions
We will use common notations and conventions from algebraic number theory and algebraic geometry. The cohomologies appearing in this article will be understood as etale cohomologies. For a field \(\mathrm{K}\), we denote by \(\mathrm{K}^{\mathrm{ac}}\) the separable closure of \(\mathrm{K}\) and put \(\mathrm{G}_{\mathrm{K}}:=\mathrm{Gal}(\mathrm{K}^{\mathrm{ac}}/\mathrm{K})\) the Galois group of \(\mathrm{K}\). For a place \(v\) of \(\mathrm{K}\), \(\mathrm{Frob}_{v}\) is the arithmetic Frobenius at \(v\).
We let \(\mathbf{A}_{\mathrm{F}}\) be the ring of adeles over a number field \(\mathrm{F}\) and \(\mathbf{A}_{\mathrm{F}}^{\infty}\) be the subring of finite adeles. For a prime \(v\) of \(\mathrm{F}\), \(\mathbf{A}_{\mathrm{F}}^{\infty,v}\) is the prime-to-\(v\) part of \(\mathbf{A}_{\mathrm{F}}^{\infty}\). When \(\mathrm{F}=\mathbf{Q}\), then we will omit the subscripts from these notations.
When \(\mathrm{K}\) is a local field, we denote by \(\mathcal{O}_{\mathrm{K}}\) its valuation ring and by \(k\) its residue field. We let \(\mathrm{I}_{\mathrm{K}}\) be the inertia subgroup of \(\mathrm{G}_{\mathrm{K}}\). For a \(\mathrm{G}_{\mathrm{K}}\)-module \(\mathrm{M}\),
1. the finite part \(\mathrm{H}^{1}_{\mathrm{fin}}(\mathrm{K},\mathrm{M})\) of \(\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\) is defined to be \(\mathrm{H}^{1}(k,\mathrm{M}^{\mathrm{I}_{\mathrm{K}}})\);
2. the singular part \(\mathrm{H}^{1}_{\mathrm{sing}}(\mathrm{K},\mathrm{M})\) of \(\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\) is defined to be the quotient of \(\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\) by the image of \(\mathrm{H}^{1}_{\mathrm{fin}}(\mathrm{K},\mathrm{M})\) in \(\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\) via inflation;
3. for an element \(x\) of \(\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\), we call the image of \(x\) in \(\mathrm{H}^{1}_{\mathrm{sing}}(\mathrm{K},\mathrm{M})\) the singular residue of \(x\), written \(\partial_{p}(x)\).
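In other words, the finite and singular parts fit, essentially by definition together with the inflation-restriction sequence, into a short exact sequence; we record this only as a convenient restatement of the definitions above:

\[0\to\mathrm{H}^{1}_{\mathrm{fin}}(\mathrm{K},\mathrm{M})\to\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\to\mathrm{H}^{1}_{\mathrm{sing}}(\mathrm{K},\mathrm{M})\to 0,\]

so that the singular residue of a class measures precisely its failure to be unramified.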
Let \(\mathrm{K}\) be a number field and let \(\mathrm{K}_{v}\) be the completion of \(\mathrm{K}\) at a place \(v\). Suppose \(x\in\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\), then we will write \(\mathrm{loc}_{v}(x)\) the image of \(x\) under the restriction map
\[\mathrm{loc}_{v}:\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\to\mathrm{H}^{1}( \mathrm{K}_{v},\mathrm{M}).\]
We will write \(\partial_{v}(x)\) be the image of \(x\) in \(\mathrm{H}^{1}_{\mathrm{sing}}(\mathrm{K}_{v},\mathrm{M})\) under the composite map of
\[\mathrm{loc}_{v}:\mathrm{H}^{1}(\mathrm{K},\mathrm{M})\to\mathrm{H}^{1}( \mathrm{K}_{v},\mathrm{M})\]
and the natural map \(\mathrm{H}^{1}(\mathrm{K}_{v},\mathrm{M})\to\mathrm{H}^{1}_{\mathrm{sing}}( \mathrm{K}_{v},\mathrm{M})\).
### Acknowledgements
We would like to thank Ming-Lun Hsieh for introducing the notion of distinguished representation to the author and pointing out useful references.
## 2. Quaternionic Hilbert-Blumenthal surfaces
### A quaternionic Shimura surface
Let \(\mathrm{F}\) be a real quadratic field of discriminant \(\mathrm{D}_{\mathrm{F}}\). Let \(\mathrm{Q}=\mathrm{B}\otimes\mathrm{F}\) for an indefinite quaternion algebra \(\mathrm{B}\) with discriminant \(\mathrm{N}^{-}p\), which is squarefree with an even number of prime factors. Suppose that \(p\) is a prime inert in \(\mathrm{F}\). Given this \(\mathrm{Q}\), let \(\mathrm{G}(\mathrm{Q})\) be the algebraic group over \(\mathbf{Q}\) defined by the Weil restriction of \(\mathrm{Q}^{\times}\) from \(\mathrm{F}\) to \(\mathbf{Q}\). Fixing an open compact subgroup \(\mathrm{K}\) of \(\mathrm{G}(\mathrm{Q})(\mathbf{A}^{\infty})\), we can associate to it a Shimura variety \(\mathrm{Sh}_{\mathrm{K}}(\mathrm{Q})\) over \(\mathbf{Q}\) whose complex points are given by
\[\mathrm{Sh}_{\mathrm{K}}(\mathrm{Q})(\mathbf{C})=\mathrm{G}(\mathrm{Q})( \mathbf{Q})\backslash\mathcal{D}\times\mathrm{G}(\mathrm{Q})(\mathbf{A}^{ \infty})/\mathrm{K}.\]
where \(\mathcal{D}=\mathcal{H}^{\pm 2}\) denotes the product of two copies of the union of the upper and lower half planes.
Let \(\widetilde{\mathrm{G}}(\mathrm{Q})\) be the algebraic group over \(\mathbf{Q}\) whose \(\mathrm{R}\)-points are given by
\[\widetilde{\mathrm{G}}(\mathrm{Q})(\mathrm{R})=\{x\in(\mathrm{Q}\otimes\mathrm{R })^{\times}:\mathrm{N}^{\circ}(x)\in\mathrm{R}^{\times}\}\]
for any \(\mathbf{Q}\)-algebra \(\mathrm{R}\). Fixing an open compact subgroup \(\widetilde{\mathrm{K}}\) of \(\widetilde{\mathrm{G}}(\mathrm{Q})(\mathbf{A}^{\infty})\), we can associate to it a Shimura variety \(\mathrm{Sh}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\) which represents the following moduli problem over \(\mathbf{Q}\). It associates to a scheme \(\mathrm{S}\in\mathrm{Sch}/\mathbf{Q}\) the set of isomorphism classes of \(4\)-tuples \((\mathrm{A},\lambda,\iota,\overline{\eta})\) up to isogeny, where
1. A is an abelian scheme over \(\mathrm{S}\) of relative dimension \(4\);
2. \(\iota:\mathrm{B}\otimes\mathrm{F}\to\mathrm{End}^{0}(\mathrm{A})\) is a homomorphism such that \[\iota(b\otimes a)^{*}=\iota(b^{*}\otimes a)\] where the first \((\cdot)^{*}\) means the Rosati involution of \(\mathrm{End}^{0}(\mathrm{A})=\mathrm{End}_{\mathrm{S}}(\mathrm{A})\otimes \mathbf{Q}\) while the second \((\cdot)^{*}\) means the main involution on \(\mathrm{B}\);
3. \(\overline{\eta}\) is a \(\widetilde{\mathrm{K}}\)-equivalence class of \(\mathrm{B}\otimes\mathrm{F}\)-equivariant isomorphisms \[\eta:\widehat{\mathrm{V}}(\mathrm{A})\xrightarrow{\sim}\mathrm{B}\otimes\mathrm{F}(\mathbf{A}^{\infty})\] where \(\widehat{\mathrm{V}}(\mathrm{A})=\prod_{v}\mathrm{T}_{v}(\mathrm{A})\otimes\mathbf{Q}\) and which preserves the Weil pairing on the left-hand side and the reduced trace pairing on the right-hand side up to a scalar in \(\mathbf{A}^{\infty\times}\).
We will refer to [Kot] for the precise definitions of the notations used here. We require that the following Kottwitz conditions are satisfied
\[\det(\iota(b\otimes a);\mathrm{Lie}(\mathrm{A}))=\mathrm{N}^{\circ}(b)\mathrm{ N}_{\mathrm{F}/\mathbf{Q}}^{2}(a)\]
for \(b\otimes a\in\mathrm{B}\otimes\mathrm{F}\). It is known that the polarization data that are usually included in the above moduli problem can be omitted; see [Zin, Lemma 3.8], [LT, Remark 2.5].
_Remark 2.1_.: It is clear that the above moduli problem is the same as the GSpin-type Shimura variety defined by Kudla-Rapoport in [KR1, §1] via Morita equivalence.
For \(\widetilde{\mathrm{K}}\) sufficiently small, this moduli problem is representable by a quasi-projective smooth scheme over \(\mathbf{Q}\). Note that when \(\mathrm{B}\otimes\mathrm{F}=\mathrm{M}_{2}(\mathrm{F})\), we recover the classical Hilbert-Blumenthal surface. With \(\mathcal{D}=\mathcal{H}^{\pm 2}\) as above, the \(\mathbf{C}\)-points of \(\mathrm{Sh}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\) can be described by
\[\mathrm{Sh}_{\widetilde{\mathrm{K}}}(\mathrm{Q})(\mathbf{C})=\widetilde{ \mathrm{G}}(\mathrm{Q})(\mathbf{Q})\backslash\mathcal{D}\times\widetilde{ \mathrm{G}}(\mathrm{Q})(\mathbf{A}^{\infty})/\widetilde{\mathrm{K}}.\]
When \(\widetilde{\mathrm{K}}\) is the restriction of \(\mathrm{K}\), there is a canonical map \(\mathrm{Sh}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\to\mathrm{Sh}_{\mathrm{K}}( \mathrm{Q})\) extending the natural map
\[\widetilde{\mathrm{G}}(\mathrm{Q})(\mathbf{Q})\backslash\mathcal{D}\times \widetilde{\mathrm{G}}(\mathrm{Q})(\mathbf{A}^{\infty})/\widetilde{\mathrm{K}} \to\mathrm{G}(\mathrm{Q})(\mathbf{Q})\backslash\mathcal{D}\times\mathrm{G}( \mathrm{Q})(\mathbf{A}^{\infty})/\mathrm{K}\]
on \(\mathbf{C}\)-points. The PEL-type Shimura surface \(\mathrm{Sh}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\) will facilitate the study of the geometry of \(\mathrm{Sh}_{\mathrm{K}}(\mathrm{Q})\) whose cohomology carries the automorphic representation of interest. We will refer to \(\mathrm{Sh}_{\mathrm{K}}(\mathrm{Q})\) as the _quaternionic Hilbert-Blumenthal surface_ associated to \(\mathrm{Q}\). In the case when \(\mathrm{Q}=\mathrm{M}_{2}(\mathrm{F})\), \(\mathrm{Sh}_{\mathrm{K}}(\mathrm{Q})\) agrees with the classical Hilbert-Blumenthal surface. We remark that we do not exclude this case.
### Integral model for the quaternionic Shimura surface
Let \(\mathcal{O}_{\mathrm{B}}\subset\mathrm{B}\) be a maximal order stable under the involution \(*\) and consider the order \(\mathcal{O}_{\mathrm{Q}}=\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\) in \(\mathrm{B}\otimes\mathrm{F}\). Recall that \(p\) is a prime which is inert in \(\mathrm{F}\) and divides the discriminant of \(\mathrm{B}\). Let \(\widetilde{\mathrm{K}}^{p}\subset\widetilde{\mathrm{G}}(\mathrm{Q})(\mathbf{A}^{\infty,p})\) be a compact open subgroup; we can then define a moduli problem over \(\mathbf{Z}_{(p)}\) which associates to each \(\mathrm{S}\in\mathrm{Sch}/\mathbf{Z}_{(p)}\) the set of isomorphism classes of \(4\)-tuples \((\mathrm{A},\lambda,\iota,\overline{\eta}^{p})\) where
1. A is an abelian scheme over \(\mathrm{S}\) of relative dimension \(4\), up to prime-to-\(p\) isogeny;
2. \(\iota:\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\to\mathrm{End} (\mathrm{A})\otimes\mathbf{Z}_{(p)}\) is a homomorphism such that \[\iota(b\otimes a)^{*}=\iota(b^{*}\otimes a)\] where the first \((\cdot)^{*}\) means the Rosati involution of \(\mathrm{End}_{\mathrm{S}}(\mathrm{A})\otimes\mathbf{Z}_{(p)}\) while the second \((\cdot)^{*}\) means the main involution on \(\mathrm{B}\).
3. \(\overline{\eta}^{p}\) is a \(\widetilde{\mathrm{K}}^{p}\)-equivalence class of \(\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\)-equivariant isomorphisms \[\eta^{p}:\widehat{\mathrm{V}}^{p}(\mathrm{A})\xrightarrow{\sim}\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}(\mathbf{A}^{\infty,p})\] where \(\widehat{\mathrm{V}}^{p}(\mathrm{A})=\prod_{v\neq p}\mathrm{T}_{v}(\mathrm{A})\otimes\mathbf{Q}\) and \(\eta^{p}\) preserves the Weil pairing on the left-hand side and the reduced trace pairing on the right-hand side up to a scalar in \(\mathbf{A}^{\infty,p\times}\);
We will refer to [Kot] for the precise definitions of the notations used here. We require that the following Kottwitz conditions are satisfied
\[\det(\iota(b\otimes a);\mathrm{Lie}(\mathrm{A}))=\mathrm{N}^{\circ}(b)\mathrm{ N}_{\mathrm{F}/\mathbf{Q}}^{2}(a)\]
for \(b\otimes a\in\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\), which is understood as an identity of polynomial functions with coefficients in \(\mathcal{O}_{\mathrm{S}}\) as in [Kot]. It follows from [Kot, §5] that this moduli problem is representable by a smooth quasi-projective scheme \(\widetilde{\mathrm{X}}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\) over \(\mathbf{Z}_{(p)}\). Let \(\widetilde{\mathrm{K}}=\widetilde{\mathrm{K}}_{p}\widetilde{\mathrm{K}}^{p}\) where \(\widetilde{\mathrm{K}}_{p}\) is the intersection of \((\mathcal{O}_{\mathrm{Q}}\otimes\mathbf{Z}_{p})^{\times}\) with \(\widetilde{\mathrm{G}}(\mathrm{Q})(\mathbf{Q}_{p})\). Then the generic fiber \(\widetilde{\mathrm{X}}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\otimes\mathbf{Q}\) is given by \(\mathrm{Sh}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\) defined in the previous subsection. The canonical map \(\mathrm{Sh}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\to\mathrm{Sh}_{\mathrm{K}}(\mathrm{Q})\) extends to the integral model \(\mathrm{X}_{\mathrm{K}}(\mathrm{Q})\) of \(\mathrm{Sh}_{\mathrm{K}}(\mathrm{Q})\) and gives rise to a finite map
\[\widetilde{\mathrm{X}}_{\widetilde{\mathrm{K}}}(\mathrm{Q})\longrightarrow \mathrm{X}_{\mathrm{K}}(\mathrm{Q}) \tag{2.1}\]
between schemes over \(\mathbf{Z}_{(p)}\).
We will now fix a definite choice of the level structure for later purposes. We will do this by introducing a slight variant of the above moduli problem. Let \(\mathrm{N}^{+}\) be an integer that is coprime to \(\mathrm{N}^{-}p\) and put \(\mathrm{N}=\mathrm{N}^{+}\mathrm{N}^{-}\). Let \(d\geq 4\) be an integer. Then we consider the moduli problem over \(\mathbf{Z}[1/\mathrm{ND}_{\mathrm{F}}]\) that assigns to each \(\mathrm{S}\in\mathrm{Sch}/\mathbf{Z}[1/\mathrm{ND}_{\mathrm{F}}]\) the set of isomorphism classes of \(4\)-tuples \((\mathrm{A},\iota,\mathrm{C}_{\mathrm{N}^{+}},\alpha_{d})\) where
1. \(\mathrm{A}\) is an abelian scheme over \(\mathrm{S}\) of relative dimension \(4\), up to prime-to-\(p\) isogeny;
2. \(\iota:\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\to\mathrm{End} (\mathrm{A})\) is a homomorphism such that \[\iota(b\otimes a)^{*}=\iota(b^{*}\otimes a)\] where the first \((\cdot)^{*}\) means the Rosati involution of \(\mathrm{End}_{\mathrm{S}}(\mathrm{A})\) while the second \((\cdot)^{*}\) means the main involution on \(\mathrm{B}\);
3. \(\mathrm{C}_{\mathrm{N}^{+}}\) is an \(\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\)-stable finite flat subgroup scheme of \(\mathrm{A}[\mathfrak{n}^{+}]\) for \(\mathfrak{n}^{+}=\mathrm{N}^{+}\mathcal{O}_{\mathrm{F}}\) such that at every geometric point, it is isomorphic to \((\mathcal{O}_{\mathrm{F}}/\mathfrak{n}^{+})^{2}\);
4. \(\alpha_{d}:(\mathcal{O}_{\mathrm{F}}/\mathfrak{d})^{2}_{\mathrm{S}}\hookrightarrow \mathrm{A}[\mathfrak{d}]\) is an \(\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\)-equivariant injection of group schemes over \(\mathrm{S}\) for the ideal \(\mathfrak{d}=d\mathcal{O}_{\mathrm{F}}\).
This moduli problem is representable by a scheme \(\widetilde{\mathrm{X}}_{\mathrm{N}^{+},d}(\mathrm{Q})\). The corresponding integral model for the Shimura variety associated to the group \(\mathrm{G}(\mathrm{Q})\) will be denoted by \(\mathrm{X}_{\mathrm{N}^{+},d}(\mathrm{Q})\) and there is a finite morphism
\[\widetilde{\mathrm{X}}_{\mathrm{N}^{+},d}(\mathrm{Q})\to\mathrm{X}_{\mathrm{N}^ {+},d}(\mathrm{Q})\]
as in (2.1). Since we will not need the precise form of the level structure, we will refer the reader to [LT, Example 2.12] for the precise description of the open compact subgroup \(\mathrm{K}_{\mathrm{N}^{+},d}\) defining the level structure. When there is no danger of confusion, we will simply write \(\mathrm{X}(\mathrm{Q})\) for the scheme \(\mathrm{X}_{\mathrm{N}^{+},d}(\mathrm{Q})\).
### Supersingular locus of the quaternionic Shimura surface
Let \(\mathbf{F}\) be a fixed algebraic closure of \(\mathbf{F}_{p}\). We set \(\mathrm{D}=\mathrm{B}\otimes\mathbf{Q}_{p}\) to be the quaternion division algebra over \(\mathbf{Q}_{p}\). Let \(\mathcal{L}\) be an isocrystal of height \(4\) over \(\mathbf{F}\) with an action \(\iota:\mathrm{D}\otimes\mathbf{Q}_{p^{2}}\to\mathrm{End}(\mathcal{L})\) of \(\mathrm{D}\otimes\mathbf{Q}_{p^{2}}\) on \(\mathcal{L}\). Since we always have \(\mathrm{D}\otimes\mathbf{Q}_{p^{2}}=\mathrm{M}_{2}(\mathbf{Q}_{p^{2}})\), we have \(\mathcal{L}=\mathcal{N}^{2}\) where \(\mathcal{N}\) is equipped with an action of \(\mathbf{Q}_{p^{2}}\). We will use covariant Dieudonne theory throughout this article. A Dieudonne lattice \(\mathrm{M}\) is a lattice in \(\mathcal{N}\) with the property that \(p\mathrm{M}\subset\mathrm{FM}\subset\mathrm{M}\). A Dieudonne lattice is superspecial if \(\mathrm{F}^{2}\mathrm{M}=p\mathrm{M}\), in which case \(\mathrm{F}=\mathrm{V}\). We are concerned with Dieudonne lattices \(\mathrm{M}\) equipped with an additional action \(\iota:\mathbf{Z}_{p^{2}}\to\mathrm{End}(\mathrm{M})\). We can decompose \(\mathrm{M}\) as \(\mathrm{M}_{0}\oplus\mathrm{M}_{1}\) according to the action of \(\mathbf{Z}_{p^{2}}\).
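To fix conventions, the grading can be made explicit; this description is the standard one, the only choice being the embedding of \(\mathbf{Z}_{p^{2}}\) into the Witt vectors \(\mathrm{W}(\mathbf{F})\). Writing \(\sigma\) for the Frobenius of \(\mathbf{Z}_{p^{2}}\), one may take

\[
\mathrm{M}_{i}=\left\{m\in\mathrm{M}\ :\ \iota(a)m=\sigma^{i}(a)\,m\ \text{ for all }a\in\mathbf{Z}_{p^{2}}\right\},\qquad i\in\{0,1\}.
\]

Since \(\mathrm{F}\) is \(\sigma\)-semilinear and commutes with \(\iota\), it interchanges \(\mathrm{M}_{0}\) and \(\mathrm{M}_{1}\), and likewise for \(\mathrm{V}\); this is what produces the chains of inclusions displayed below.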
Let (Nilp) be the category of \(\mathrm{W}_{0}\)-schemes over which \(p\) is locally nilpotent. We fix a supersingular \(p\)-divisible group \(\mathbb{X}\) over \(\mathbf{F}\) of dimension \(2\) and height \(4\), equipped with an action of \(\mathbf{Z}_{p^{2}}\) and a polarization \(\lambda_{\mathbb{X}}\), which serves as the framing object. We consider the set-valued functor \(\mathcal{M}\) that sends \(\mathrm{S}\in(\mathrm{Nilp})\) to the set of isomorphism classes of quadruples \((\mathrm{X},\iota_{\mathrm{X}},\lambda_{\mathrm{X}},\rho_{\mathrm{X}})\) where:
1. \(\mathrm{X}\) is a \(p\)-divisible group of dimension \(2\) and height \(4\) over \(\mathrm{S}\);
2. \(\iota_{\mathrm{X}}:\mathbf{Z}_{p^{2}}\to\mathrm{End}(\mathrm{X})\) is an action of \(\mathbf{Z}_{p^{2}}\) on \(\mathrm{X}\) defined over \(\mathrm{S}\);
3. \(\lambda_{\mathrm{X}}:\mathrm{X}\to\mathrm{X}^{\vee}\) is a polarization of \(\mathrm{X}\);
4. \(\rho_{\mathrm{X}}:\mathrm{X}\times_{\mathrm{S}}\overline{\mathrm{S}}\to\mathbb{X}\times_{\mathbf{F}}\overline{\mathrm{S}}\) is a \(\mathbf{Z}_{p^{2}}\)-linear quasi-isogeny over \(\overline{\mathrm{S}}\), the special fiber of \(\mathrm{S}\) at \(p\).
We require that \(\iota_{\mathrm{X}}\) satisfies the Kottwitz condition
\[\det(\iota_{\mathrm{X}}(a);\mathrm{Lie}(\mathrm{X}))=\mathrm{N}_{\mathbf{Q}_{ p^{2}}/\mathbf{Q}_{p}}(a)\]
for \(a\in\mathbf{Z}_{p^{2}}\). For \(\rho_{\mathrm{X}}:\mathrm{X}\times_{\mathrm{S}}\overline{\mathrm{S}}\to\mathbb{X}\times_{\mathbf{F}}\overline{\mathrm{S}}\), we require that \(\rho_{\mathrm{X}}^{\vee}\circ\lambda_{\mathbb{X}}\circ\rho_{\mathrm{X}}=c(\rho_{\mathrm{X}})\lambda_{\mathrm{X}}\) for some scalar \(c(\rho_{\mathrm{X}})\in\mathbf{Q}_{p}^{\times}\). This moduli problem is representable by a formal scheme \(\mathcal{M}\), locally formally of finite type over \(\mathrm{Spf}(\mathrm{W}_{0})\). We will be mainly concerned with the underlying reduced closed subscheme \(\mathcal{M}_{\mathbf{F}}\) of \(\mathcal{M}\). Let \(x=(\mathrm{X},\iota_{\mathrm{X}},\lambda_{\mathrm{X}},\rho_{\mathrm{X}})\) be a point in \(\mathcal{M}(\mathbf{F})\) and let \(\mathrm{M}\) be the Dieudonne lattice of \(\mathrm{X}\). The \(\mathbf{Z}_{p^{2}}\)-action on \(\mathrm{X}\) gives rise to a grading of the Dieudonne module \(\mathrm{M}=\mathrm{M}_{0}\oplus\mathrm{M}_{1}\). For this lattice, we always have
\[p\mathrm{M}_{0}\subset^{1}\mathrm{VM}_{1}\subset^{1}\mathrm{M}_{0},\qquad p\mathrm{M}_{1}\subset^{1}\mathrm{VM}_{0}\subset^{1}\mathrm{M}_{1}.\]
We say \(i\in\{0,1\}\) is a _critical index_ for \(\mathrm{M}\) with respect to the \(\mathbf{Z}_{p^{2}}\)-action if \(\mathrm{V}^{2}\mathrm{M}_{i}=p\mathrm{M}_{i}\) and in this case we say \(i\) is a critical index of the point \(x\). It is clear that \(\mathrm{M}\) is superspecial if \(0\) and \(1\) are both critical indices of \(\mathrm{M}\) and in this case we say \(x\) is a superspecial point.
**Lemma 2.2**.: _We have the following statements:_
1. _for any Dieudonne lattice_ \(\mathrm{M}\) _associated to a point in_ \(\mathcal{M}(\mathbf{F})\)_, at least one_ \(i\in\{0,1\}\) _is critical for_ \(\mathrm{M}\)_;_
2. _we have a partition of the scheme_ \(\mathcal{M}_{\mathbf{F}}=\mathcal{M}_{\mathbf{F}}^{\circ}\cup\mathcal{M}_{\mathbf{F}}^{\bullet}\) _where_ \(\mathcal{M}_{\mathbf{F}}^{\circ}\) _consists of those points of_ \(\mathcal{M}_{\mathbf{F}}\) _such that_ \(i=0\) _is a critical index of the associated Dieudonne lattice and_ \(\mathcal{M}_{\mathbf{F}}^{\bullet}\) _consists of those points of_ \(\mathcal{M}_{\mathbf{F}}\) _such that_ \(i=1\) _is a critical index of the associated Dieudonne lattice;_
3. _the irreducible components of_ \(\mathcal{M}_{\mathbf{F}}^{\circ}\) _and_ \(\mathcal{M}_{\mathbf{F}}^{\bullet}\) _are projective lines. These two families of projective lines intersect at the superspecial points of_ \(\mathcal{M}_{\mathbf{F}}\)_._
Proof.: The first part (1) follows from [1, Lemma 4.2]. The second part (2) follows from (1). The third part (3) is [1, Proposition 4.4], but we briefly recall the construction of those projective lines: suppose \(\mathrm{M}\) is associated to a point in \(\mathcal{M}_{\mathbf{F}}^{\circ}\); then \(\Lambda_{0}=\mathrm{M}_{0}^{pV^{-2}}\) is a lattice over \(\mathbf{Z}_{p^{2}}\) and we associate to it the projective line \(\mathbf{P}(\Lambda_{0}/p\Lambda_{0})\). Suppose \(\mathrm{M}\) is associated to a point in \(\mathcal{M}_{\mathbf{F}}^{\bullet}\); then \(\Lambda_{1}=\mathrm{M}_{1}^{pV^{-2}}\) is a lattice over \(\mathbf{Z}_{p^{2}}\) and we associate to it the projective line \(\mathbf{P}(\Lambda_{1}/p\Lambda_{1})\). It is clear that these projective lines are defined over \(\mathbf{F}_{p^{2}}\).
We will refer to those projective lines in \(\mathcal{M}^{\circ}\) as projective lines of \(\circ\)-type and those projective lines in \(\mathcal{M}^{\bullet}\) as projective lines of \(\bullet\)-type.
Let \(\overline{\mathrm{Q}}\) be the quaternion algebra obtained from \(\mathrm{Q}\) by switching the invariants at the archimedean places of \(\mathrm{F}\); we then define the algebraic group \(\mathrm{G}(\overline{\mathrm{Q}})\) over \(\mathbf{Q}\) given by the Weil restriction of \(\overline{\mathrm{Q}}^{\times}\) from \(\mathrm{F}\) to \(\mathbf{Q}\). Let \(\mathrm{K}\) be an open compact subgroup of \(\mathrm{G}(\mathrm{Q})(\mathbf{A}^{\infty})\); then \(\mathrm{K}\) can be viewed as an open compact subgroup of \(\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{A}^{\infty})\). Let \(\overline{\mathrm{X}}(\mathrm{Q})\) be the special fiber of \(\mathrm{X}(\mathrm{Q})\). The Rapoport-Zink uniformization theorem [10, Theorem 6.1], [She, Theorem 1.2] furnishes the following description of the supersingular locus \(\overline{\mathrm{X}}^{\mathrm{ss}}(\mathrm{Q})\) of \(\overline{\mathrm{X}}(\mathrm{Q})\).
**Proposition 2.3**.: _The supersingular locus \(\overline{\mathrm{X}}^{\mathrm{ss}}(\mathrm{Q})\) of \(\overline{\mathrm{X}}(\mathrm{Q})\) is pure of dimension \(1\)._
1. _We have an isomorphism from the double quotient_ \[\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{Q})\backslash\mathcal{M}_{\mathbf{F}}\times\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{A}^{\infty,p})/\mathrm{K}^{p}\] _to_ \(\overline{\mathrm{X}}^{\mathrm{ss}}(\mathrm{Q})\) _which descends to an isomorphism over_ \(\mathbf{F}_{p^{2}}\)_;_
2. _the irreducible components of_ \(\overline{\mathrm{X}}^{\mathrm{ss}}(\mathrm{Q})\) _are projective lines which are parametrized by two copies of_ \[\mathrm{Z}(\overline{\mathrm{Q}})=\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{ Q})\backslash\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{A}^{\infty,p})/\mathrm{K}^{p} \mathrm{K}_{p}\] _where_ \(\mathrm{K}_{p}\) _is_ \((\mathcal{O}_{\overline{\mathrm{Q}}}\otimes\mathbf{Z}_{p})^{\times}\)_. We denote these two copies by_ \(\mathrm{Z}^{\circ}(\overline{\mathrm{Q}})\) _and_ \(\mathrm{Z}^{\bullet}(\overline{\mathrm{Q}})\)_, they parametrize those projective lines of_ \(\circ\)_-type and those projective lines of_ \(\bullet\)_-type respectively;_
3. _the superspecial locus of_ \(\overline{\mathrm{X}}^{\mathrm{ss}}(\mathrm{Q})\) _is a discrete set of points parametrized by_ \[\mathrm{Z}_{\mathrm{Iw}}(\overline{\mathrm{Q}})=\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{Q})\backslash\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{A}^{\infty,p})/\mathrm{K}^{p}\mathrm{Iw}_{p}\] _where_ \(\mathrm{Iw}_{p}\) _is the Iwahori subgroup of_ \(\mathrm{K}_{p}\)_._
Proof.: The first part (1) follows from the Rapoport-Zink uniformization theorem and the fact that the Weil descent datum for \(\mathcal{M}\) is effective and thus descends \(\mathcal{M}\) to \(\mathbf{Z}_{p^{2}}\). The second part (2) and the third part (3) follow from the first part and descriptions in Lemma 2.2 immediately.
By the previous proposition we can write \(\overline{\mathrm{X}}^{\mathrm{ss}}(\mathrm{Q})=\overline{\mathrm{X}}^{\circ} (\mathrm{Q})\cup\overline{\mathrm{X}}^{\bullet}(\mathrm{Q})\) where
1. \(\overline{\mathrm{X}}^{\circ}(\mathrm{Q})\) is a \(\mathbf{P}^{1}\)-bundle over \(\mathrm{Z}^{\circ}(\overline{\mathrm{Q}})\) which we will call the \(\circ\) components;
2. \(\overline{\mathrm{X}}^{\bullet}(\mathrm{Q})\) is a \(\mathbf{P}^{1}\)-bundle over \(\mathrm{Z}^{\bullet}(\overline{\mathrm{Q}})\) which we will call the \(\bullet\) components.
Note that the description in the previous Proposition 2.3 can be obtained using the so-called isogeny trick instead of the Rapoport-Zink uniformization theorem; see the main result of [TX], which is much more general.
## 3. Quaternionic Hirzebruch-Zagier divisor
### Shimura curves and Drinfeld uniformization
Let \(\mathrm{B}\) be the indefinite quaternion algebra over \(\mathbf{Q}\) with discriminant \(p\mathrm{N}^{-}\) considered in the last section. We let \(\mathrm{G}(\mathrm{B})\) be the algebraic group over \(\mathbf{Q}\) given by \(\mathrm{B}^{\times}\). We fix an open compact subgroup \(\mathrm{K}\) of \(\mathrm{G}(\mathrm{B})(\mathbf{A}^{\infty})\) of the form \(\mathrm{K}=\mathrm{K}^{p}\mathrm{K}_{p}\) where \(\mathrm{K}^{p}\) is sufficiently small and \(\mathrm{K}_{p}=(\mathcal{O}_{\mathrm{B}}\otimes\mathbf{Z}_{p})^{\times}\). We consider the moduli problem \(\mathrm{X}_{\mathrm{K}}(\mathrm{B})\) over \(\mathbf{Z}_{(p)}\) which assigns to each \(\mathrm{S}\) over \(\mathbf{Z}_{(p)}\) the set of triples \((\mathrm{A},\iota,\overline{\eta})\) where
1. \(\mathrm{A}\) is an abelian scheme over \(\mathrm{S}\) of relative dimension \(2\), up to prime-to-\(p\) isogeny;
2. \(\iota:\mathcal{O}_{\mathrm{B}}\hookrightarrow\mathrm{End}_{\mathrm{S}}(\mathrm{A})\) which is special in the sense of [BC];
3. \(\overline{\eta}\) is a \(\mathrm{K}^{p}\)-level structure on \(\mathrm{A}\), defined analogously to the level structures of the previous section.

This moduli problem is representable by a smooth projective scheme \(\mathrm{X}_{\mathrm{K}}(\mathrm{B})\) over \(\mathbf{Z}_{(p)}\) whose generic fiber is the Shimura curve attached to \(\mathrm{G}(\mathrm{B})\). We next recall the Cerednick-Drinfeld uniformization of its special fiber. Recall that \(\mathrm{D}=\mathrm{B}\otimes\mathbf{Q}_{p}\) is the quaternion division algebra over \(\mathbf{Q}_{p}\); let \(\mathcal{O}_{\mathrm{D}}\) be its maximal order and fix a uniformizer \(\Pi\in\mathcal{O}_{\mathrm{D}}\) with \(\Pi^{2}=p\). A _special formal \(\mathcal{O}_{\mathrm{D}}\)-module_ over \(\mathrm{S}\) is a formal \(p\)-divisible group \(\mathrm{X}\) over \(\mathrm{S}\) of height \(4\) equipped with an action \(\iota:\mathcal{O}_{\mathrm{D}}\to\mathrm{End}_{\mathrm{S}}(\mathrm{X})\)
such that \(\mathrm{Lie}(\mathrm{X})\) is a locally free \(\mathbf{Z}_{p^{2}}\otimes\mathcal{O}_{\mathrm{S}}\)-module of rank \(1\). We fix a special \(\mathcal{O}_{\mathrm{D}}\)-module \(\mathbb{X}\) over \(\mathbf{F}\) whose Dieudonne module is denoted by \(\mathbb{M}\). Consider the functor \(\mathcal{M}_{\mathrm{Dr}}\) on the category (Nilp) of \(\mathrm{W}_{0}\)-schemes \(\mathrm{S}\) over which \(p\) is locally nilpotent, where \(\mathcal{M}_{\mathrm{Dr}}(\mathrm{S})\) classifies the isomorphism classes of pairs \((\mathrm{X},\rho_{\mathrm{X}})\) where
1. \(\mathrm{X}\) is a special formal \(\mathcal{O}_{\mathrm{D}}\)-module over \(\mathrm{S}\);
2. \(\rho_{\mathrm{X}}:\mathrm{X}\times_{\mathrm{S}}\overline{\mathrm{S}}\to\mathbb{X}\times_{\mathbf{F}}\overline{\mathrm{S}}\) is an \(\mathcal{O}_{\mathrm{D}}\)-linear quasi-isogeny.
The functor \(\mathcal{M}_{\mathrm{Dr}}\) is represented by a formal scheme over \(\mathrm{W}_{0}\) which we also denote by \(\mathcal{M}_{\mathrm{Dr}}\). The formal scheme \(\mathcal{M}_{\mathrm{Dr}}\) breaks into a disjoint union
\[\mathcal{M}_{\mathrm{Dr}}=\bigsqcup_{i\in\mathbf{Z}}\mathcal{M}_{\mathrm{Dr},i}\]
according to the height \(i\) of the quasi-isogeny \(\rho_{\mathrm{X}}\). Each formal scheme \(\mathcal{M}_{\mathrm{Dr},i}\) is isomorphic to the _\(p\)-adic upper half plane_ \(\mathcal{H}_{p}\) base-changed to \(\mathrm{W}_{0}\). The group \(\mathrm{GL}_{2}(\mathbf{Q}_{p})\) acts naturally on the formal scheme \(\mathcal{M}_{\mathrm{Dr}}\) and each \(\mathcal{M}_{\mathrm{Dr},i}\) affords an action of the group
\[\mathrm{GL}_{2}^{0}(\mathbf{Q}_{p}):=\{g\in\mathrm{GL}_{2}(\mathbf{Q}_{p}): \mathrm{ord}_{p}(\det(g))=0\}.\]
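The appearance of \(\mathrm{GL}_{2}^{0}(\mathbf{Q}_{p})\) can be explained as follows; the statement is standard, although the precise scaling of the height depends on the normalization of quasi-isogenies, so we leave the constant unspecified. An element \(g\in\mathrm{GL}_{2}(\mathbf{Q}_{p})\) acts by modifying the framing \(\rho_{\mathrm{X}}\) by a self-quasi-isogeny of \(\mathbb{X}\), and this shifts the height by a fixed positive multiple of \(\mathrm{ord}_{p}(\det(g))\):

\[
g\cdot\mathcal{M}_{\mathrm{Dr},i}=\mathcal{M}_{\mathrm{Dr},\,i+c\,\mathrm{ord}_{p}(\det(g))}\qquad\text{for some fixed }c\in\mathbf{Z}_{>0}.
\]

In particular \(\mathrm{GL}_{2}^{0}(\mathbf{Q}_{p})\) is exactly the subgroup preserving each individual summand \(\mathcal{M}_{\mathrm{Dr},i}\).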
Let \((\mathrm{X},\rho)\in\mathcal{M}_{\mathrm{Dr}}(\mathbf{F})\) and let \(\mathrm{M}\) be the Dieudonne lattice of \(\mathrm{X}\); the action of \(\mathbf{Z}_{p^{2}}\) on \(\mathrm{X}\) induces a grading
\[\mathrm{M}=\mathrm{M}_{0}\oplus\mathrm{M}_{1} \tag{3.1}\]
that satisfies
\[p\mathrm{M}_{0}\subset^{1}\mathrm{VM}_{1}\subset^{1}\mathrm{M}_{0}\quad p \mathrm{M}_{1}\subset^{1}\mathrm{VM}_{0}\subset^{1}\mathrm{M}_{1} \tag{3.2}\]
Since the actions of \(\Pi\) and \(\mathrm{V}\) commute, we have the induced maps
\[\Pi:\mathrm{M}_{0}/\mathrm{VM}_{1}\to\mathrm{M}_{1}/\mathrm{VM}_{0},\qquad\Pi:\mathrm{M}_{1}/\mathrm{VM}_{0}\to\mathrm{M}_{0}/\mathrm{VM}_{1}. \tag{3.3}\]
Since both \(\mathrm{M}_{0}/\mathrm{VM}_{1}\) and \(\mathrm{M}_{1}/\mathrm{VM}_{0}\) are of dimension \(1\) and the composite of the two maps is zero (it is multiplication by \(\Pi^{2}=p\), and \(p\mathrm{M}_{i}\subset\mathrm{VM}_{i+1}\)), we can conclude that there is an \(i\in\{0,1\}\) such that \(\Pi\mathrm{M}_{i}\subset\mathrm{VM}_{i}\). Since both \(\Pi\mathrm{M}_{i}\) and \(\mathrm{VM}_{i}\) are of colength \(1\) in \(\mathrm{M}_{i+1}\), we conclude that they are in fact equal to each other. We say that \(i\) is a _critical index_ for \(\mathrm{M}\) with respect to the \(\mathrm{D}\)-action if \(\mathrm{VM}_{i}=\Pi\mathrm{M}_{i}\); a critical index always exists for \(\mathrm{M}\). We let \(\tau=\Pi^{-1}\mathrm{V}\), which acts as an automorphism on \(\mathrm{M}_{i}\) if \(i\) is a critical index. If \(0\) is a critical index, then we set \(\Lambda_{0}=\mathrm{M}_{0}^{\tau=1}\); this is a \(\mathbf{Z}_{p}\)-lattice of rank \(2\) and we associate to it the projective line \(\mathbf{P}(\Lambda_{0}/p\Lambda_{0})\). Then \(\mathrm{VM}_{1}/p\mathrm{M}_{0}\subset^{1}\mathrm{M}_{0}/p\mathrm{M}_{0}=\Lambda_{0}/p\Lambda_{0}\otimes\mathbf{F}\) gives a point on \(\mathbf{P}(\Lambda_{0}/p\Lambda_{0})(\mathbf{F})\). If \(1\) is a critical index, then we similarly put \(\Lambda_{1}=\Pi\mathrm{M}_{1}^{\tau=1}\) and we again associate to it the projective line \(\mathbf{P}(\Lambda_{1}/p\Lambda_{1})\). Similarly \(\mathrm{VM}_{0}/p\mathrm{M}_{1}\subset^{1}\mathrm{M}_{1}/p\mathrm{M}_{1}=\Lambda_{1}/p\Lambda_{1}\otimes\mathbf{F}\) gives a point on \(\mathbf{P}(\Lambda_{1}/p\Lambda_{1})(\mathbf{F})\). We summarize the above discussion in the following lemma.
**Lemma 3.1**.: _We have the following statements._
1. _For any Dieudonne lattice_ \(\mathrm{M}\) _associated to a point in_ \(\mathcal{M}_{\mathrm{Dr}}(\mathbf{F})\)_, at least one_ \(i\in\{0,1\}\) _is critical;_
2. _We have a partition of the scheme_ \(\mathcal{M}_{\mathrm{Dr},\mathbf{F}}=\mathcal{M}_{\mathrm{Dr},\mathbf{F}}^{\circ}\cup\mathcal{M}_{\mathrm{Dr},\mathbf{F}}^{\bullet}\) _where_ \(\mathcal{M}_{\mathrm{Dr},\mathbf{F}}^{\circ}\) _consists of those points of_ \(\mathcal{M}_{\mathrm{Dr},\mathbf{F}}\) _such that_ \(i=0\) _is a critical index and_ \(\mathcal{M}_{\mathrm{Dr},\mathbf{F}}^{\bullet}\) _consists of those points of_ \(\mathcal{M}_{\mathrm{Dr},\mathbf{F}}\) _such that_ \(i=1\) _is a critical index;_
3. _The irreducible components of_ \(\mathcal{M}^{\circ}_{\mathrm{Dr},\mathbf{F}}\) _and_ \(\mathcal{M}^{\bullet}_{\mathrm{Dr},\mathbf{F}}\) _are projective lines. These two families of projective lines intersect at the superspecial points of_ \(\mathcal{M}_{\mathrm{Dr},\mathbf{F}}\)_._
The Cerednick-Drinfeld uniformization theorem provides the following proposition describing the special fiber \(\overline{\mathrm{X}}(\mathrm{B})_{\mathbf{F}}\) of \(\mathrm{X}(\mathrm{B})\). Let \(\overline{\mathrm{B}}\) be the definite quaternion algebra obtained from \(\mathrm{B}\) by switching the invariants at \(p\) and \(\infty\). Then we define \(\mathrm{G}(\overline{\mathrm{B}})\) to be the algebraic group defined by \(\overline{\mathrm{B}}^{\times}\). Let \(\mathrm{K}\) be an open compact subgroup of \(\mathrm{G}(\mathrm{B})(\mathbf{A}^{\infty})\); then \(\mathrm{K}^{p}\) can be viewed as an open compact subgroup of \(\mathrm{G}(\overline{\mathrm{B}})(\mathbf{A}^{\infty,p})\) in an obvious way.
**Proposition 3.2**.: _We have the following descriptions of the scheme \(\overline{\mathrm{X}}(\mathrm{B})\)._
1. _We have an isomorphism from the double quotient_ \[\mathrm{G}(\overline{\mathrm{B}})(\mathbf{Q})\backslash\mathcal{M}_{\mathrm{ Dr},\mathbf{F}}\times\mathrm{G}(\overline{\mathrm{B}})(\mathbf{A}^{\infty,p})/ \mathrm{K}^{p}\] _to_ \(\overline{\mathrm{X}}(\mathrm{B})_{\mathbf{F}}\) _which descends to an isomorphism over_ \(\mathbf{F}_{p^{2}}\)_;_
2. _The irreducible components of_ \(\overline{\mathrm{X}}(\mathrm{B})\) _are projective lines which are parametrized by two copies of the Shimura set_ \[\mathrm{Z}(\overline{\mathrm{B}})=\mathrm{G}(\overline{\mathrm{B}})(\mathbf{Q} )\backslash\mathrm{G}(\overline{\mathrm{B}})(\mathbf{A}^{\infty,p})/\mathrm{K}^ {p}\overline{\mathrm{K}}_{p}\] _where_ \(\overline{\mathrm{K}}_{p}\) _is the group_ \((\mathcal{O}_{\overline{\mathrm{B}}}\otimes\mathbf{Z}_{p})^{\times}\)_. We denote these two copies by_ \(\mathrm{Z}^{\circ}(\overline{\mathrm{B}})\) _and_ \(\mathrm{Z}^{\bullet}(\overline{\mathrm{B}})\)_, they parametrize those projective lines corresponding to critical index_ \(i=0\) _resp. critical index_ \(i=1\)_;_
3. _The superspecial locus of_ \(\overline{\mathrm{X}}(\mathrm{B})\) _is a discrete set of points parametrized by_ \[\mathrm{Z}_{\mathrm{Iw}}(\overline{\mathrm{B}})=\mathrm{G}(\overline{\mathrm{B}})(\mathbf{Q})\backslash\mathrm{G}(\overline{\mathrm{B}})(\mathbf{A}^{\infty,p})/\mathrm{K}^{p}\mathrm{Iw}_{p}\] _where_ \(\mathrm{Iw}_{p}\) _is the Iwahori subgroup of_ \(\overline{\mathrm{K}}_{p}\)_._
Proof.: This is a well-known result following from the Cerednick-Drinfeld uniformization theorem for the Shimura curve \(\mathrm{X}(\mathrm{B})\). For example, see the proof of [Wang, Proposition 3.2] for more details.
By the previous proposition we can write \(\overline{\mathrm{X}}(\mathrm{B})=\overline{\mathrm{X}}^{\circ}(\mathrm{B}) \cup\overline{\mathrm{X}}^{\bullet}(\mathrm{B})\) where
1. \(\overline{\mathrm{X}}^{\circ}(\mathrm{B})\) is a \(\mathbf{P}^{1}\)-bundle over \(\mathrm{Z}^{\circ}(\overline{\mathrm{B}})\) which we will call the \(\circ\) components;
2. \(\overline{\mathrm{X}}^{\bullet}(\mathrm{B})\) is a \(\mathbf{P}^{1}\)-bundle over \(\mathrm{Z}^{\bullet}(\overline{\mathrm{B}})\) which we will call the \(\bullet\) components.
Before we close this subsection, we remark that the role of \(d\) in the level structure for the Shimura set \(\mathrm{Z}(\overline{\mathrm{B}})\) is completely auxiliary. Therefore we may sometimes want to work with the genuine level attached to the modular form \(f\) or \(f^{\dagger}\), and we introduce the following Shimura set
\[\mathrm{Z}_{\mathrm{N}^{+}}(\overline{\mathrm{B}})=\mathrm{G}(\overline{ \mathrm{B}})(\mathbf{Q})\backslash\mathrm{G}(\overline{\mathrm{B}})(\mathbf{A} ^{\infty,p})/\mathrm{K}_{\mathrm{N}^{+}}^{p}\overline{\mathrm{K}}_{p}\]
of level \(\mathrm{N}^{+}\) where the level \(\mathrm{K}_{\mathrm{N}^{+}}^{p}\) is given by an Eichler order of level \(\mathrm{N}^{+}\) in \(\mathcal{O}_{\overline{\mathrm{B}}}\).
### Quaternionic Hirzebruch-Zagier divisor
Let \((\mathrm{A},\iota,\mathrm{C}_{\mathrm{N}^{+}},\alpha_{d})\) be an element of \(\mathrm{X}(\mathrm{B})(\mathrm{S})\) for a scheme \(\mathrm{S}\) over \(\mathbf{Z}[1/\mathrm{N}d\mathrm{D}_{\mathrm{F}}]\), we define
1. \(\widetilde{\mathrm{A}}=\mathrm{A}\otimes\mathcal{O}_{\mathrm{F}}\) is the abelian scheme given by Serre's tensor construction applied to \(\mathrm{A}\);
2. \(\widetilde{\iota}:\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\hookrightarrow\mathrm{End}(\mathrm{A})\otimes\mathcal{O}_{\mathrm{F}}\hookrightarrow\mathrm{End}(\widetilde{\mathrm{A}})\) is the morphism given by \(\iota\otimes\mathcal{O}_{\mathrm{F}}\);
3. \(\widetilde{\mathrm{C}}_{\mathrm{N}^{+}}\subset\widetilde{\mathrm{A}}[\mathfrak{ n}^{+}]\) be the \(\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\)-stable finite flat subgroup of \(\widetilde{\mathrm{A}}\) given by \[\widetilde{\mathrm{C}}_{\mathrm{N}^{+}}=\mathrm{C}_{\mathrm{N}^{+}}\otimes \mathcal{O}_{\mathrm{F}};\]
4. \(\widetilde{\alpha}_{d}:(\mathcal{O}_{\mathrm{F}}/\mathfrak{d})^{2}\hookrightarrow\widetilde{\mathrm{A}}[\mathfrak{d}]\) is the \(\mathcal{O}_{\mathrm{B}}\otimes\mathcal{O}_{\mathrm{F}}\)-equivariant injection \[\widetilde{\alpha}_{d}:(\mathbf{Z}/d)^{2}\otimes\mathcal{O}_{\mathrm{F}}\hookrightarrow(\mathrm{A}\otimes\mathcal{O}_{\mathrm{F}})[d]=\widetilde{\mathrm{A}}[\mathfrak{d}]\] given by \(\alpha_{d}\otimes\mathcal{O}_{\mathrm{F}}\).
This defines an element \((\widetilde{\mathrm{A}},\widetilde{\iota},\widetilde{\mathrm{C}}_{\mathrm{N}^{+}},\widetilde{\alpha}_{d})\in\widetilde{\mathrm{X}}(\mathrm{Q})(\mathrm{S})\) over \(\mathbf{Z}[1/\mathrm{N}d\mathrm{D}_{\mathrm{F}}]\), which gives a map from \(\mathrm{X}(\mathrm{B})\otimes\mathbf{Z}[1/\mathrm{N}d\mathrm{D}_{\mathrm{F}}]\) to \(\widetilde{\mathrm{X}}(\mathrm{Q})\). We therefore obtain the _quaternionic Hirzebruch-Zagier morphism_ over \(\mathbf{Z}[1/\mathrm{N}d\mathrm{D}_{\mathrm{F}}]\)
\[\theta:\mathrm{X}(\mathrm{B})\otimes\mathbf{Z}[1/\mathrm{N}d\mathrm{D}_{ \mathrm{F}}]\rightarrow\mathrm{X}(\mathrm{Q})\]
by composing the previously defined map with the canonical map \(\widetilde{\mathrm{X}}(\mathrm{Q})\rightarrow\mathrm{X}(\mathrm{Q})\).
Let \((\mathrm{X},\iota,\rho_{\mathrm{X}})\) be an element of \(\mathcal{M}_{\mathrm{Dr}}(\mathrm{S})\) with \(\mathrm{S}\) in (Nilp); then we can restrict \(\iota:\mathcal{O}_{\mathrm{D}}\rightarrow\mathrm{End}_{\mathrm{S}}(\mathrm{X})\) to \(\mathbf{Z}_{p^{2}}\), and \((\mathrm{X},\iota_{|_{\mathbf{Z}_{p^{2}}}},\rho_{\mathrm{X}})\) gives rise to an element of \(\mathcal{M}(\mathrm{S})\); hence we can define a morphism \(\mathcal{M}_{\mathrm{Dr}}\rightarrow\mathcal{M}\). Let \(\mathrm{M}=\mathrm{M}_{0}\oplus\mathrm{M}_{1}\) be the Dieudonne lattice of \(\mathrm{X}\). Suppose that \(i\in\{0,1\}\) is a critical index for \(\mathrm{M}\) with respect to the action of \(\mathcal{O}_{\mathrm{D}}\), so that \(\mathrm{VM}_{i}=\Pi\mathrm{M}_{i}\). It follows that \(\mathrm{V}^{2}\mathrm{M}_{i}=p\mathrm{M}_{i}\), and therefore \(i\) is also a critical index for \(\mathrm{M}\) with respect to the restricted action of \(\mathbf{Z}_{p^{2}}\). Therefore the morphism \(\mathcal{M}_{\mathrm{Dr}}\rightarrow\mathcal{M}\) respects the partitions with respect to the two notions of critical indices and thus the \(\circ\) and \(\bullet\) components; that is, we have morphisms \(\mathcal{M}_{\mathrm{Dr}}^{\circ}\rightarrow\mathcal{M}^{\circ}\) and \(\mathcal{M}_{\mathrm{Dr}}^{\bullet}\rightarrow\mathcal{M}^{\bullet}\). Note that these maps were first considered in [Lan].
**Lemma 3.3**.: _We have the following statements on the reduction of the Hirzebruch-Zagier morphism._
1. _The image of the induced morphism on the special fiber_ \[\overline{\theta}:\overline{\mathrm{X}}(\mathrm{B})\rightarrow\overline{ \mathrm{X}}(\mathrm{Q}).\] _is contained in the supersingular locus_ \(\overline{\mathrm{X}}^{\mathrm{ss}}(\mathrm{Q})\) _of_ \(\overline{\mathrm{X}}(\mathrm{Q})\)_._
2. _The map_ \(\overline{\theta}\) _respects the partition according to the critical indices and induces a map_ \[\overline{\theta}^{?}:\overline{\mathrm{X}}^{?}(\mathrm{B})\rightarrow\overline{ \mathrm{X}}^{?}(\mathrm{Q})\] _which restricts to the canonical embedding of_ \[\vartheta^{?}:\mathrm{Z}^{?}(\overline{\mathrm{B}})\rightarrow\mathrm{Z}^{?}( \overline{\mathrm{Q}})\] _for_ \(?\in\{\circ,\bullet\}\)_._
Proof.: The first part (1) follows from Proposition 2.3 (1) and 3.2 (1). The second part (2) follows from the previous discussions on the compatibilities of the two notions of critical indices.
### Integral Tate cycles on the special fiber
Suppose that \(\pi^{\mathrm{Q}}\) is a cuspidal automorphic representation of \(\mathrm{G}(\mathrm{Q})\) of weight \((2,2)\) with trivial central character; we will always assume that \(\pi^{\mathrm{Q}}\) is non-dihedral. Suppose that \(\pi^{\mathrm{Q}}\) appears in the middle degree cohomology
\[\mathrm{H}^{2}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathrm{E}_{\lambda})=\bigoplus_{\Pi^{\mathrm{Q}}}\mathrm{H}^{2}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathrm{E}_{\lambda})[\Pi^{\mathrm{Q}}]\otimes(\Pi^{\mathrm{Q},\infty})^{\mathrm{K}_{\mathrm{N}^{+},d}}\]
of \(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}}\). We can associate to \(\pi^{\mathrm{Q}}\) a two dimensional \(\lambda\)-adic representation \((\rho_{\pi^{\mathrm{Q}},\lambda},\mathrm{V}_{\pi^{\mathrm{Q}}})\) of \(\mathrm{G}_{\mathrm{F}}\) using the construction of Blasius-Rogawski [BR] and Taylor [Tay]. The Asai representation of \(\mathrm{G}_{\mathbf{Q}}\) associated to the \(\mathrm{G}_{\mathrm{F}}\) representation \(\mathrm{V}_{\pi^{\mathrm{Q}}}\) will be denoted by
\[(\mathrm{As}(\rho_{\pi^{\mathrm{Q}},\lambda}),\mathrm{As}(\mathrm{V}_{\pi^{ \mathrm{Q}}})),\]
this is a representation of \(G_{Q}\) isomorphic to the tensor induction of \(V_{\pi^{Q}}\) from \(G_{F}\) to \(G_{Q}\). It will be important for us to note that \((\operatorname{As}(\rho_{\pi^{Q},\lambda})(-1),\operatorname{As}(V_{\pi^{Q}})(-1))\) is realized on the cohomology \(H^{2}(X(Q)\otimes\overline{Q},E_{\lambda}(1))\) by the same construction as in [BL].
**Lemma 3.4**.: _The \(\lambda\)-adic Galois representation_
\[H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{Q},E_{\lambda}(1))=H^{2}(X(Q)\otimes \overline{Q},E_{\lambda}(1))[\pi^{Q}]\]
_is isomorphic to \(m(\pi^{Q},d)\) copies of \(\operatorname{As}(V_{\pi^{Q}})(-1)\) for some \(m(\pi^{Q},d)\geq 1\)._
Proof.: The same proof as for [Liu1, Lemma 3.9] works here for our quaternionic Hilbert-Blumenthal surface as well. Note that \(H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{Q},E_{\lambda}(1))\) is a semi-simple Galois module by [Nek2].
Let \(\mathbb{T}_{F}=\mathbb{T}_{F}^{\mathfrak{d}\mathfrak{n}}\) be the Hecke algebra for the group \(G(Q)\) consisting of prime-to-\(\mathfrak{d}\mathfrak{n}\) Hecke operators, where \(\mathfrak{n}\) is the ideal \(N\mathcal{O}_{F}\) and \(\mathfrak{d}\) is the ideal \(d\mathcal{O}_{F}\). The Hecke eigensystem given by the traces of the Frobenius elements of the residual representation \(\overline{\rho}_{\pi^{Q}}\) of \(\rho_{\pi^{Q}}\) furnishes a morphism
\[\psi_{\pi^{Q}}:\mathbb{T}_{F}\to k_{\lambda}\]
and we put
\[\mathfrak{m}_{F}=\ker(\psi_{\pi^{Q}}:\mathbb{T}_{F}\to k_{\lambda}) \tag{3.4}\]
to be the maximal ideal given by the kernel of this morphism. To simplify the notations, we put
\[H^{i}_{\pi^{Q}}(X(Q)\otimes\overline{Q},\mathcal{O}_{\lambda}(j))=H^{i}(X(Q) \otimes\overline{Q},\mathcal{O}_{\lambda}(j))_{\mathfrak{m}_{F}}\]
for integers \(i,j\geq 0\) and call it the \(\pi^{Q}\)-isotypic component of \(H^{i}(X(Q)\otimes\overline{Q},\mathcal{O}_{\lambda}(j))\). In particular, \(H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{Q},\mathcal{O}_{\lambda}(1))\) defines an integral lattice in \(\operatorname{As}(V_{\pi^{Q}})(-1)\) which we will denote by \(\operatorname{As}(T_{\pi^{Q}})(-1)\).
**Definition 3.5**.: We say the number \(d\) that appears in the level structure for \(X(Q)\) is _clean_ for \(T_{\pi^{Q}}\) if
\[H^{2}_{\pi^{Q}}(X(Q)\otimes\overline{Q},\mathcal{O}_{\lambda}(1))\]
is isomorphic to \(m(\pi^{Q},d)\) copies of \(\operatorname{As}(T_{\pi^{Q}})(-1)\) with \(m(\pi^{Q},d)\) defined in Lemma 3.4.
This terminology is inherited from [Liu2], where it appears in a slightly different situation. Intuitively, this condition means that the \(d\)-new forms occurring in \(H^{2}(X(Q)\otimes\overline{Q},\mathcal{O}_{\lambda}(1))\) will not be congruent to the \(d\)-old forms. From here on, we will always choose \(d\) to be clean for \(T_{\pi^{Q}}\). Note that by the proper base change theorem, we can identify
\[H^{2}_{\pi^{Q}}(\overline{X}(Q)\otimes\overline{F}_{p},\mathcal{O}_{\lambda}( 1))=H^{2}(X(Q)\otimes\overline{F}_{p},\mathcal{O}_{\lambda}(1))_{\mathfrak{m}_ {F}}\]
with \(m(\pi^{Q},d)\) copies of \(\operatorname{As}(T_{\pi^{Q}})(-1)|_{G_{Q_{p}}}\).
We define similarly the \(\pi^{\overline{Q}}\)-isotypic component of the cohomology \(H^{0}(Z(\overline{Q}),\mathcal{O}_{\lambda})=\mathcal{O}_{\lambda}[Z(\overline {Q})]\) of the Shimura set \(Z(\overline{Q})\). We put
\[\mathcal{Z}^{?}_{\pi^{Q}}(\overline{Q})=\mathcal{O}_{\lambda}[Z^{?}(\overline {Q})]_{\mathfrak{m}_{F}}\]
for \(?\in\{\circ,\bullet\}\). When we identify these two spaces, we will omit the superscript \(?\) from the notation. The description of the supersingular locus of \(\overline{X}(Q)\) in Proposition 2.3 and the cycle class map provide the following map
\[\mathcal{C}:\mathcal{O}_{\lambda}[Z^{\circ}(\overline{Q})]\oplus\mathcal{O}_ {\lambda}[Z^{\bullet}(\overline{Q})]\to H^{2}(\overline{X}(Q)\otimes\overline {F}_{p},\mathcal{O}_{\lambda}(1))^{\operatorname{Fr}_{p^{2}}}\]
which can be considered as a geometric realization of the Jacquet-Langlands correspondence for the groups \(\mathrm{G}(\mathrm{Q})\) and \(\mathrm{G}(\overline{\mathrm{Q}})\).
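Unwinding the definitions (the following explicit formula is our paraphrase of the construction rather than a statement quoted from the references), the map \(\mathcal{C}\) sends a pair of functions on the two Shimura sets to the corresponding combination of classes of irreducible components:

\[
\mathcal{C}(\zeta\oplus\xi)=\sum_{z\in\mathrm{Z}^{\circ}(\overline{\mathrm{Q}})}\zeta(z)\,\mathrm{cl}\big([\mathbf{P}^{1}_{z}]\big)+\sum_{z\in\mathrm{Z}^{\bullet}(\overline{\mathrm{Q}})}\xi(z)\,\mathrm{cl}\big([\mathbf{P}^{1}_{z}]\big),
\]

where \(\mathbf{P}^{1}_{z}\) denotes the projective line indexed by \(z\) as in Proposition 2.3; the classes land in the \(\mathrm{Fr}_{p^{2}}\)-invariants because each component is defined over \(\mathbf{F}_{p^{2}}\).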
**Definition 3.6**.: We say the prime \(p\) is admissible for \(\mathrm{T}_{\pi^{\mathrm{Q}}}\) if the following conditions are satisfied:
1. \(p\) is an inert prime in \(\mathrm{F}\);
2. let \(a_{p^{2}}(\pi^{\mathrm{Q}})=\mathrm{tr}(\mathrm{Fr}_{p^{2}},\mathrm{T}_{\pi^{ \mathrm{Q}}})\), then \(a_{p^{2}}(\pi^{\mathrm{Q}})\not\in\{2p,-2p,-p^{2}-1,p^{2}+1\}\).
We now record the following key fact on the Tate conjecture for the special fiber of the surface \(\mathrm{X}(\mathrm{Q})\) proved in [TX] which directly inspires our construction of the Flach system on \(\mathrm{X}(\mathrm{Q})\).
**Proposition 3.7**.: _Suppose \(p\) is an admissible prime for \(\mathrm{T}_{\pi^{\mathrm{Q}}}\). Then the restriction of the map \(\mathcal{C}\) induces an isomorphism_
\[\mathcal{C}_{\pi^{\mathrm{Q}}}:\mathcal{Z}_{\pi^{\mathrm{Q}}}^{\circ}( \overline{\mathrm{Q}})\oplus\mathcal{Z}_{\pi^{\mathrm{Q}}}^{\bullet}( \overline{\mathrm{Q}})\xrightarrow{\sim}\mathrm{H}_{\pi^{\mathrm{Q}}}^{2}( \overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{p},\mathcal{O }_{\lambda}(1))^{\mathrm{Fr}_{p^{2}}}.\]
_Moreover, the action of the Frobenius element of \(\mathrm{Gal}(\mathbf{F}_{p^{2}}/\mathbf{F}_{p})\) on the right-hand side switches the two factors on the left-hand side._
Proof.: This follows from the main result of [TX]; see in particular §1.1 of [TX] for this particular case. Note that, as explained in [Liu1], we can upgrade the result of [TX] to integral coefficients using the assumption that \(\mathrm{tr}(\mathrm{Fr}_{p^{2}},\mathrm{T}_{\pi^{\mathrm{Q}}})\mod\lambda\not\in\{2p,-2p\}\).
## 4. Flach classes and reciprocity formula
### Flach class of the quaternionic Shimura surface
Let \(\mathrm{X}\) be a proper smooth variety of finite type over a field \(\mathrm{K}\). For an integer \(d\), consider the complex
\[\bigoplus_{x\in\mathrm{X}^{d-1}}\mathrm{K}_{2}k(x)\to\bigoplus_{x\in\mathrm{X }^{d}}k(x)^{\times}\xrightarrow{d_{1}}\bigoplus_{x\in\mathrm{X}^{d+1}} \mathbf{Z} \tag{4.1}\]
where \(\mathrm{X}^{i}\) denotes the set of points of codimension \(i\) on the variety \(\mathrm{X}\), \(\mathrm{K}_{2}k(x)\) is Milnor K-group of the field \(k(x)\), the first map is the so-called tame symbol map and the second is given by the divisor map. The _motivic cohomology group_
\[\mathrm{H}_{\mathcal{M}}^{2d+1}(\mathrm{X},\mathbf{Z}(d+1))\]
is defined to be the cohomology of this complex: elements of \(\mathrm{H}_{\mathcal{M}}^{2d+1}(\mathrm{X},\mathbf{Z}(d+1))\) are represented by formal sums \(\sum_{i}(\mathrm{Z}_{i},f_{i})\) of pairs of codimension \(d\) cycles \(\mathrm{Z}_{i}\) on \(\mathrm{X}\) and non-zero rational functions \(f_{i}\) on \(\mathrm{Z}_{i}\) such that \(\sum_{i}\mathrm{div}_{\mathrm{Z}_{i}}(f_{i})=0\) as a Weil divisor on \(\mathrm{X}\).
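As a simple but useful example (this observation is elementary and will be used for the Flach element below): if \(\mathrm{Z}\subset\mathrm{X}\) is an irreducible cycle of codimension \(d\) and \(c\in\mathrm{K}^{\times}\) is a non-zero constant, then \(\mathrm{div}_{\mathrm{Z}}(c)=0\), so the single pair

\[
(\mathrm{Z},c)\in\mathrm{H}_{\mathcal{M}}^{2d+1}(\mathrm{X},\mathbf{Z}(d+1))
\]

already defines a class. The Flach element considered in this section is precisely of this shape, with \(c=p\).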
The group \(\mathrm{H}_{\mathcal{M}}^{2d+1}(\mathrm{X},\mathbf{Z}(d+1))\) is also known as the higher Chow group \(\mathrm{CH}^{d+1}(\mathrm{X},1)\) of \(\mathrm{X}\). There is a Chern character map
\[\mathrm{ch}:\mathrm{H}_{\mathcal{M}}^{2d+1}(\mathrm{X},\mathbf{Z}(d+1))\to \mathrm{H}^{2d+1}(\mathrm{X},\mathbf{Z}_{l}(d+1)) \tag{4.2}\]
given by the coniveau spectral sequences in \(\mathrm{K}\)-theory and in etale cohomology. Next suppose that \(\mathcal{O}\) is a complete local ring with residue field \(k\) of characteristic \(p\) different from \(l\). Let \(\mathfrak{X}\) be a proper regular scheme over \(\mathcal{O}\), let \(\mathrm{X}\) be its generic fiber and \(\overline{\mathrm{X}}\) be its special fiber. The motivic cohomology \(\mathrm{H}_{\mathcal{M}}^{2d}(\overline{\mathrm{X}},\mathbf{Z}(d))\) in this case agrees with the usual Chow group \(\mathrm{CH}^{d}(\overline{\mathrm{X}})\). There is a map
\[\mathrm{div}:\mathrm{H}_{\mathcal{M}}^{2d+1}(\mathrm{X},\mathbf{Z}(d+1))\to \mathrm{H}_{\mathcal{M}}^{2d}(\overline{\mathrm{X}},\mathbf{Z}(d)) \tag{4.3}\]
defined by sending a pair \((\mathrm{Z},f)\) to the divisor of \(f\) on the closure \(\mathcal{Z}\) of \(\mathrm{Z}\) in \(\mathfrak{X}\). Note that this divisor is entirely supported on the special fiber \(\overline{\mathcal{Z}}\) of \(\mathcal{Z}\).
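For later use we record what this boundary map does to classes of the shape singled out above; this is an elementary computation, hedged only by the assumption that \(p\) generates the maximal ideal of \(\mathcal{O}\) and that \(\mathcal{Z}\) is flat over \(\mathcal{O}\). In that case the rational function \(p\) on \(\mathcal{Z}\) vanishes exactly along the special fiber, so

\[
\mathrm{div}(\mathrm{Z},p)=\sum_{\mathrm{W}\subset\overline{\mathcal{Z}}}\mathrm{ord}_{\mathrm{W}}(p)\,[\mathrm{W}],
\]

a cycle supported entirely on the special fiber \(\overline{\mathcal{Z}}\). This is the mechanism behind the computations of \(\mathrm{div}_{r}(\Theta^{[p]})\) and \(\mathrm{div}_{p^{2}}(\Theta^{[p]})\) in the proofs of Lemma 4.3 and Proposition 4.5 below.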
From the definition, an element of the motivic cohomology \(\mathrm{H}^{3}_{\mathcal{M}}(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q},\mathbf{Z}(2))\) can be represented by a curve on the surface \(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q}\) together with a rational function on this curve with trivial Weil divisor. We will consider the element represented by the pair
\[\Theta^{[p]}=(\theta_{*}\mathrm{X}(\mathrm{B})\otimes\mathbf{Q},p)\]
where \(\theta_{*}\mathrm{X}(\mathrm{B})\otimes\mathbf{Q}\) is the image of \(\mathrm{X}(\mathrm{B})\otimes\mathbf{Q}\) under the Hirzebruch-Zagier morphism \(\theta\) and we will refer to this element as the _Flach element_ in the motivic cohomology of the quaternionic Hilbert-Blumenthal surface \(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q}\).
**Proposition 4.1**.: _Suppose that the residual Galois representation \(\overline{\rho}_{\pi^{\mathrm{Q}}}\) attached to \(\pi^{\mathrm{Q}}\) has non-solvable image. Then the localized cohomology group_
\[\mathrm{H}^{i}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{ \mathbf{Q}},\mathcal{O}_{\lambda})=\mathrm{H}^{i}(\mathrm{X}(\mathrm{Q}) \otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda})_{\mathfrak{m}_{\mathbb{F}}}\]
_vanishes unless \(i=2\)._
Proof.: This is the main theorem of [CT], see [CT, Theorem 7.5.2].
**Lemma 4.2**.: _Assume that the residual Galois representation \(\overline{\rho}_{\pi^{\mathrm{Q}}}\) attached to \(\pi^{\mathrm{Q}}\) has non-solvable image. Then we have an isomorphism_
\[\mathrm{H}^{3}(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q},\mathcal{O}_{\lambda}(2))_{\mathfrak{m}_{\mathbb{F}}}=\mathrm{H}^{1}(\mathbf{Q},\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda}(2))).\]
Proof.: This follows from the Hochschild-Serre spectral sequence applied to the cohomology \(\mathrm{H}^{3}(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q},\mathcal{O}_{\lambda}(2))_{\mathfrak{m}_{\mathbb{F}}}\) and the fact that \(\mathrm{H}^{i}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda})_{\mathfrak{m}_{\mathbb{F}}}\neq 0\) only for \(i=2\).
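To spell out the step (a routine unraveling of the spectral sequence, recorded here for the reader's convenience): localizing the Hochschild-Serre spectral sequence

\[
\mathrm{E}_{2}^{i,j}=\mathrm{H}^{i}(\mathbf{Q},\mathrm{H}^{j}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda}(2)))\Longrightarrow\mathrm{H}^{i+j}(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q},\mathcal{O}_{\lambda}(2))
\]

at \(\mathfrak{m}_{\mathbb{F}}\) kills every row except \(j=2\) by Proposition 4.1, so in total degree \(3\) the abutment collapses to the single term \(\mathrm{E}_{2}^{1,2}\), which is the right-hand side of the lemma.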
The Chern character map in (4.2) induces a map
\[\mathrm{ch}_{\mathcal{O}_{\lambda}}:\mathrm{H}^{3}_{\mathcal{M}}(\mathrm{X}( \mathrm{Q})\otimes\mathbf{Q},\mathbf{Z}(2))\to\mathrm{H}^{3}(\mathrm{X}( \mathrm{Q})\otimes\mathbf{Q},\mathcal{O}_{\lambda}(2))_{\mathfrak{m}_{\mathbb{F}}}.\]
Suppose the Galois representation \(\overline{\rho}_{\pi^{\mathrm{Q}}}\) associated to \(\mathfrak{m}_{\mathbb{F}}\) has non-solvable image. Then it induces the following _Abel-Jacobi map_
\[\mathrm{AJ}_{\pi^{\mathrm{Q}}}:\mathrm{H}^{3}_{\mathcal{M}}(\mathrm{X}( \mathrm{Q})\otimes\mathbf{Q},\mathbf{Z}(2))\to\mathrm{H}^{1}(\mathbf{Q}, \mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{ \mathbf{Q}},\mathcal{O}_{\lambda}(2)))\]
in light of the previous lemma. The image of the Flach element \(\Theta^{[p]}\) under this map will be denoted by
\[\kappa^{[p]}=\mathrm{AJ}_{\pi^{\mathrm{Q}}}(\Theta^{[p]}) \tag{4.4}\]
and will be referred to as the _Flach class_; it will play a pivotal role for us later.
### Local behaviours of Flach classes
To use the Euler system argument, we will have to analyze the local behaviours of the Flach class \(\kappa^{[p]}\) at all primes \(r\) of \(\mathbf{Q}\). Let \(\mathrm{N}(\pi^{\mathrm{Q}})\) be the product of the primes at which the Asai representation \((\mathrm{As}(\rho_{\pi^{\mathrm{Q}},\lambda})(-1),\mathrm{As}(\mathrm{V}_{\pi^{\mathrm{Q}}})(-1))\) is ramified.
**Lemma 4.3**.: _For any prime \(r\nmid\mathrm{N}(\pi^{\mathrm{Q}})\), the Flach class is unramified:_
\[\mathrm{loc}_{r}(\kappa^{[p]})\in\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{r}, \mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{ \mathbf{Q}}_{r},\mathcal{O}_{\lambda}(2))).\]
Proof.: Since \(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}}_{r},\mathcal{O}_{\lambda}(2))\) is unramified at \(r\) as a \(\mathrm{G}_{\mathbf{Q}_{r}}\)-module, it follows that
\[\begin{split}\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{r},\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}}_{r},\mathcal{O}_{\lambda}(2)))&=(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}}_{r},\mathcal{O}_{\lambda}(1)))^{\mathrm{G}_{\mathbf{F}_{r}}}\\ &=(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{r},\mathcal{O}_{\lambda}(1)))^{\mathrm{G}_{\mathbf{F}_{r}}}.\end{split} \tag{4.5}\]
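Recall the standard identification used in the first equality (valid for any \(\mathcal{O}_{\lambda}[\mathrm{G}_{\mathbf{Q}_{r}}]\)-module \(\mathrm{M}\) that is unramified at \(r\neq l\)): since the \(l\)-part of the tame inertia quotient is isomorphic to \(\mathbf{Z}_{l}(1)\),

\[
\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{r},\mathrm{M}):=\frac{\mathrm{H}^{1}(\mathbf{Q}_{r},\mathrm{M})}{\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{r},\mathrm{M})}\cong\mathrm{H}^{1}(\mathrm{I}_{r},\mathrm{M})^{\mathrm{G}_{\mathbf{F}_{r}}}\cong(\mathrm{M}(-1))^{\mathrm{G}_{\mathbf{F}_{r}}};
\]

applying this to \(\mathrm{M}=\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}}_{r},\mathcal{O}_{\lambda}(2))\) gives the first equality in (4.5), while the second follows from proper base change.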
On the other hand, we have a commutative diagram by [Weston1, Theorem 3.1.1]

\[\begin{CD}\mathrm{H}^{3}_{\mathcal{M}}(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q}_{r},\mathbf{Z}(2))@>{\mathrm{AJ}_{\pi^{\mathrm{Q}}}}>{}>\mathrm{H}^{1}(\mathbf{Q}_{r},\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}}_{r},\mathcal{O}_{\lambda}(2)))\\ @V{\mathrm{div}_{r}}VV@VV{\partial_{r}}V\\ \mathrm{H}^{2}_{\mathcal{M}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\mathbf{F}_{r},\mathbf{Z}(1))@>{\mathrm{cl}}>{}>(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{r},\mathcal{O}_{\lambda}(1)))^{\mathrm{G}_{\mathbf{F}_{r}}}\end{CD}\]

where
1. the top horizontal map is the Abel-Jacobi map \(\mathrm{AJ}_{\pi^{\mathrm{Q}}}\) for \(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q}_{r}\);
2. the bottom horizontal map is given by the usual cycle class map of the special fiber \(\overline{\mathrm{X}}(\mathrm{Q})\otimes\mathbf{F}_{r}\);
3. the right vertical map \(\partial_{r}\) is the singular quotient map at \(r\) under the identification of (4.5).
By definition, it is clear that \(\mathrm{div}_{r}(\Theta^{[p]})\) vanishes in \(\mathrm{H}^{2}_{\mathcal{M}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\mathbf{F}_{r},\mathbf{Z}(1))\): indeed, since \(r\neq p\), the constant function \(p\) is invertible on the special fiber of \(\theta_{*}\mathrm{X}(\mathrm{B})\) at \(r\) and hence has trivial divisor there. Thus \(\partial_{r}(\kappa^{[p]})\) vanishes and hence \(\mathrm{loc}_{r}(\kappa^{[p]})\) lies in the finite part.
Next we turn to the more interesting case of \(\mathrm{loc}_{p}(\kappa^{[p]})\). First we give a slight refinement of Proposition 3.7.
**Lemma 4.4**.: _If \(p\) is an admissible prime for \(\mathrm{T}_{\pi^{\mathrm{Q}}}\), then we have an isomorphism_
\[\mathcal{Z}_{\pi^{\overline{\mathrm{Q}}}}(\overline{\mathrm{Q}})\xrightarrow{ \sim}(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q}) \otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1)))^{\mathrm{G}_{ \mathbf{F}_{p}}}.\]
Proof.: By Proposition 3.7, \((\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1)))^{\mathrm{G}_{\mathbf{F}_{p}}}\) can be identified with the invariant space of \((\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1)))^{\mathrm{Fr}_{p^{2}}}\) under the action of \(\mathrm{Gal}(\mathbf{F}_{p^{2}}/\mathbf{F}_{p})\), and hence, since this action switches the two factors, with the diagonal image of \(\mathcal{Z}_{\pi^{\overline{\mathrm{Q}}}}(\overline{\mathrm{Q}})\) in \(\mathcal{Z}_{\pi^{\overline{\mathrm{Q}}}}^{\circ}(\overline{\mathrm{Q}})\oplus\mathcal{Z}_{\pi^{\overline{\mathrm{Q}}}}^{\bullet}(\overline{\mathrm{Q}})\). The lemma is proved.
Now we state the reciprocity formula for the class \(\kappa^{[p]}\). Note that we have a bilinear pairing
\[(\cdot,\cdot):\mathcal{O}_{\lambda}[\mathrm{Z}(\overline{\mathrm{Q}})]\times \mathcal{O}_{\lambda}[\mathrm{Z}(\overline{\mathrm{Q}})]\to\mathcal{O}_{\lambda}\]
given by
\[(\zeta,\phi)=\sum_{z\in\mathrm{Z}(\overline{\mathrm{Q}})}\zeta(z)\phi(z).\]
**Proposition 4.5**.: _Suppose that \(p\) is an admissible prime for \(\mathrm{T}_{\pi^{\mathrm{Q}}}\). Let \(\phi\) be any element in \(\mathcal{Z}_{\pi^{\overline{\mathrm{Q}}}}(\overline{\mathrm{Q}})\). Then we have the following formula_
\[(\partial_{p}(\Theta^{[p]}),\phi)=\sum_{z\in\mathrm{Z}(\overline{\mathrm{B}})} \phi(z)\]
_where the sum is taken over elements in the image of_ \(\mathrm{Z}(\overline{\mathrm{B}})\) _in_ \(\mathrm{Z}(\overline{\mathrm{Q}})\) _via the canonical embedding_ \(\vartheta\)_._
Proof.: By definition, it is clear that \(\operatorname{div}_{p^{2}}(\Theta^{[p]})=[\overline{\mathrm{X}}^{\circ}(\mathrm{B})]+[\overline{\mathrm{X}}^{\bullet}(\mathrm{B})]\), where we write \([\overline{\mathrm{X}}^{?}(\mathrm{B})]\) for the class of the image under \(\overline{\theta}\), as an element in the motivic cohomology
\[\mathrm{H}^{2}_{\mathcal{M}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\mathbf{F }_{p^{2}},\mathbf{Z}(1))\]
which can be identified with the usual Chow group \(\mathrm{CH}^{1}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\mathbf{F}_{p^{2}})\). We will still consider the commutative diagram used in the previous case
\[\begin{CD}\mathrm{H}^{3}_{\mathcal{M}}(\mathrm{X}(\mathrm{Q})\otimes\mathbf{Q}_{p^{2}},\mathbf{Z}(2))@>{\mathrm{AJ}_{\pi^{\mathrm{Q}}}}>{}>\mathrm{H}^{1}(\mathbf{Q}_{p^{2}},\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}}_{p},\mathcal{O}_{\lambda}(2)))\\ @V{}V{\mathrm{div}_{p^{2}}}V@V{}V{\partial_{p^{2}}}V\\ \mathrm{H}^{2}_{\mathcal{M}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\mathbf{F}_{p^{2}},\mathbf{Z}(1))@>{\mathrm{cl}}>{}>(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1)))^{\mathrm{G}_{\mathbf{F}_{p^{2}}}}\end{CD} \tag{4.6}\]
and we are concerned with the singular residue \(\partial_{p^{2}}(\Theta^{[p]})\). By the commutativity of this diagram, \(\partial_{p^{2}}(\Theta^{[p]})\) agrees with the cycle class of \([\overline{\mathrm{X}}^{\circ}(\mathrm{B})]+[\overline{\mathrm{X}}^{\bullet}(\mathrm{B})]\) in \((\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1)))^{\mathrm{G}_{\mathbf{F}_{p^{2}}}}\). It follows that under the identification
\[\mathcal{Z}^{\circ}_{\pi^{\overline{\mathrm{Q}}}}(\overline{\mathrm{Q}})\oplus\mathcal{Z}^{\bullet}_{\pi^{\overline{\mathrm{Q}}}}(\overline{\mathrm{Q}})\xrightarrow{\sim}\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q})\otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1))^{\mathrm{Fr}_{p^{2}}}, \tag{4.7}\]
\(\partial_{p^{2}}(\Theta^{[p]})\) is given by \((\mathbf{1}_{\mathrm{Z}^{\circ}(\overline{\mathrm{B}})},\mathbf{1}_{\mathrm{Z}^{\bullet}(\overline{\mathrm{B}})})\) where \(\mathbf{1}_{\mathrm{Z}^{?}(\overline{\mathrm{B}})}\) is the characteristic function of the image of \(\mathrm{Z}^{?}(\overline{\mathrm{B}})\) under \(\vartheta^{?}\) for \(?\in\{\circ,\bullet\}\). This follows from the fact that the isomorphism in (4.7) is given by the cycle class map, together with the statements of Lemma 3.3. Thus under the identification
\[\mathcal{Z}_{\pi^{\overline{\mathrm{Q}}}}(\overline{\mathrm{Q}})\xrightarrow{ \sim}(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\overline{\mathrm{X}}(\mathrm{Q}) \otimes\overline{\mathbf{F}}_{p},\mathcal{O}_{\lambda}(1)))^{\mathrm{G} \mathbf{F}_{p}},\]
the singular residue \(\partial_{p}(\Theta^{[p]})\) is simply given by \(\mathbf{1}_{\mathrm{Z}(\overline{\mathrm{B}})}\) and hence we have
\[(\partial_{p}(\Theta^{[p]}),\phi)=(\mathbf{1}_{\mathrm{Z}(\overline{\mathrm{ B}})},\phi)=\sum_{z\in\mathrm{Z}(\overline{\mathrm{B}})}\phi(z).\]
## 5. Distinguished representation and base change
### Distinguished representation
We recall the definition of a distinguished representation and its quaternionic variant. Let \(\Pi\) be a cuspidal automorphic representation of \(\mathrm{GL}_{2}(\mathbf{A}_{\mathrm{F}})\) with trivial central character. Then \(\Pi\) is said to be distinguished if there exists an automorphic form \(\phi\) in the space of \(\Pi\) such that the period
\[\mathcal{P}_{\mathrm{dis}}(\phi)=\int_{\mathrm{GL}_{2}(\mathbf{Q})\backslash \mathrm{GL}_{2}(\mathbf{A}_{\mathrm{F}})}\phi(g)dg\]
is non-vanishing. It is known that \(\Pi\) is distinguished if it satisfies the following properties:
1. \(\Pi_{\infty}\) is in the discrete series;
2. \(\Pi\) is in the image of the base change of an automorphic representation \(\pi\) of \(\mathrm{GL}_{2}(\mathbf{A})\).
Recall that \(\overline{\mathrm{Q}}=\overline{\mathrm{B}}\otimes\mathrm{F}\). Suppose that \(\pi^{\overline{\mathrm{Q}}}\) is a cuspidal automorphic representation of \(\mathrm{G}(\overline{\mathrm{Q}})(\mathbf{A})\). Then we say \(\pi^{\overline{\mathrm{Q}}}\) is distinguished if there exists an automorphic form \(\phi\) in the space of \(\pi^{\overline{\mathrm{Q}}}\) such that the period
\[\mathcal{P}^{\overline{\mathrm{Q}}}_{\mathrm{dis}}(\phi)=\int_{\mathrm{G}( \overline{\mathrm{B}})(\mathbf{Q})\backslash\mathrm{G}(\overline{\mathrm{B}})( \mathbf{A})}\phi(g)dg\]
is non-vanishing. These representations appear naturally in the proofs of the Tate conjecture for Hilbert-Blumenthal surfaces and their quaternionic variants [HLR], [Lai], [FH]. In fact, it can be shown that the Tate classes defined over abelian extensions of \(\mathbf{Q}\) are all supported on the isotypic components of the distinguished representations.
The following proposition shows that the notion of being distinguished is compatible with respect to Jacquet-Langlands correspondence.
**Proposition 5.1**.: _Suppose that \(\Pi\) is a cuspidal automorphic representation of \(\operatorname{GL}_{2}(\mathbf{A}_{\mathrm{F}})\) which corresponds to a cuspidal automorphic representation \(\pi^{\overline{\mathrm{Q}}}\) of \(\operatorname{G}(\overline{\mathrm{Q}})\) under the Jacquet-Langlands correspondence. Then \(\pi^{\overline{\mathrm{Q}}}\) is distinguished with respect to \(\operatorname{G}(\overline{\mathrm{B}})\) if and only if \(\Pi\) is distinguished with respect to \(\operatorname{GL}_{2}(\mathbf{A})\) and, for each prime \(r\) dividing the discriminant \(\operatorname{D}(\overline{\mathrm{B}})\) of \(\overline{\mathrm{B}}\) which is inert in \(\mathrm{F}\), the local representation \(\pi^{\overline{\mathrm{Q}}}_{r}\) is not a principal series representation \(\operatorname{I}(\mu_{1},\mu_{2})\) with \(\mu_{i}\) trivial on \(\mathbf{Q}_{r}^{\times}\) for \(i=1,2\)._
Proof.: This is a special case of the main theorem of Flicker-Hakim, see [FH, Theorem 0.3].
### Asai representation and base change
By the above proposition and our reciprocity formula in Proposition 4.5, we will from here on restrict our attention to the case given by the following datum:
1. \(f\) is a weight \(2\) newform in \(\mathrm{S}_{2}(\Gamma_{0}(\mathrm{N}))\) of level \(\mathrm{N}\) which defines a cuspidal automorphic representation \(\pi\) of \(\operatorname{GL}_{2}(\mathbf{A})\); we suppose that \(\pi\) admits a Jacquet-Langlands transfer to a cuspidal automorphic representation \(\pi^{\circ}=\pi^{\overline{\mathrm{B}}}\) of \(\operatorname{G}(\overline{\mathrm{B}})(\mathbf{A})\);
2. \(\pi^{\overline{\mathbf{Q}}}\) is the cuspidal automorphic representation of \(\operatorname{G}(\overline{\mathbf{Q}})(\mathbf{A})\) given by the base change of \(\pi^{\overline{\mathbf{B}}}\) to \(\operatorname{G}(\overline{\mathbf{Q}})(\mathbf{A})\);
3. \(\pi^{\mathrm{Q}}\) is the cuspidal automorphic representation of \(\operatorname{G}(\mathrm{Q})(\mathbf{A})\) obtained from \(\pi^{\overline{\mathrm{Q}}}\) via the Jacquet-Langlands correspondence from \(\operatorname{G}(\overline{\mathrm{Q}})\) to \(\operatorname{G}(\mathrm{Q})\).
In this case the Galois representation of \(\operatorname{G}_{\mathrm{F}}\) attached to \(\pi^{\overline{\mathbf{Q}}}\) is the same as that of \(\pi^{\mathrm{Q}}\) which we will denote by \(\operatorname{V}_{\pi^{\mathrm{Q}}}\) and we will be occupied with its associated Asai representation \(\operatorname{As}(\operatorname{V}_{\pi^{\mathrm{Q}}})(-1)\). Let \((\rho_{\pi^{\circ}},\operatorname{V}_{\pi^{\circ}})\) be the Galois representation attached to \(\pi^{\circ}\), then by linear algebra we have an isomorphism
\[\operatorname{As}(\operatorname{V}_{\pi^{\mathrm{Q}}})(-1)\cong\operatorname{ Sym}^{2}(\operatorname{V}_{\pi^{\circ}})(-1)\oplus\operatorname{E}_{\lambda}( \omega_{\mathrm{F}/\mathbf{Q}}).\]
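A minimal sketch of the linear algebra behind this isomorphism, assuming the standard description of the Asai representation of a base-change lift together with the normalization \(\det(\operatorname{V}_{\pi^{\circ}})\cong\operatorname{E}_{\lambda}(1)\) that is in force here: since \(\operatorname{V}_{\pi^{\mathrm{Q}}}\cong\operatorname{V}_{\pi^{\circ}}|_{\mathrm{G}_{\mathrm{F}}}\), the tensor induction splits into the symmetric and alternating parts of \(\operatorname{V}_{\pi^{\circ}}\otimes\operatorname{V}_{\pi^{\circ}}\), with the alternating part twisted by the quadratic character \(\omega_{\mathrm{F}/\mathbf{Q}}\):

\[
\operatorname{As}(\operatorname{V}_{\pi^{\mathrm{Q}}})\cong\operatorname{Sym}^{2}(\operatorname{V}_{\pi^{\circ}})\oplus\textstyle\bigwedge^{2}(\operatorname{V}_{\pi^{\circ}})(\omega_{\mathrm{F}/\mathbf{Q}})\cong\operatorname{Sym}^{2}(\operatorname{V}_{\pi^{\circ}})\oplus\operatorname{E}_{\lambda}(1)(\omega_{\mathrm{F}/\mathbf{Q}}),
\]

and twisting by \((-1)\) yields the displayed decomposition.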
Note that \(\operatorname{Sym}^{2}(\operatorname{V}_{\pi^{\circ}})(-1)\) is also known as \(\operatorname{Ad}^{0}(\operatorname{V}_{\pi^{\circ}})\). Similarly, we also have such a decomposition integrally
\[\operatorname{As}(\operatorname{T}_{\pi^{\mathrm{Q}}})(-1)\cong\operatorname{ Sym}^{2}(\operatorname{T}_{\pi^{\circ}})(-1)\oplus\mathcal{O}_{\lambda}( \omega_{\mathrm{F}/\mathbf{Q}})\]
for the lattice \(\operatorname{Sym}^{2}(\operatorname{T}_{\pi^{\circ}})(-1)\) in \(\operatorname{Sym}^{2}\operatorname{V}_{\pi^{\circ}}(-1)\).
## 6. Bounding the adjoint Selmer groups
### Generalities on Selmer groups
We will abuse the notation and consider a general Galois representation \(\rho:\operatorname{G}_{\mathbf{Q}}\to\operatorname{GL}(\mathrm{V})\) over \(\operatorname{E}_{\lambda}\). Suppose that \(\mathrm{T}\) is a Galois stable lattice in \(\mathrm{V}\) and set \(\mathcal{M}=\mathrm{V}/\mathrm{T}\). We recall the definitions concerning the Bloch-Kato Selmer group for \(\mathcal{M}\). These Galois modules fit in the exact sequence
\[0\to\mathrm{T}\xrightarrow{i}\mathrm{V}\xrightarrow{\mathrm{pr}}\mathcal{M}\to 0.\]
Let \(M_{n}=\mathcal{M}[\lambda^{n}]\) and \(T_{n}=T/\lambda^{n}\). Let \(i_{n}:M_{n}\hookrightarrow\mathcal{M}\) and \(pr_{n}:T\to T_{n}\) be the natural inclusion and reduction maps.
We define the local Bloch-Kato conditions using the following recipe:
1. for \(v\neq l\), we define \(H^{1}_{f}(\mathbf{Q}_{v},V)=H^{1}_{\mathrm{fin}}(\mathbf{Q}_{v},V)\);
2. for \(v=l\), we define \(H^{1}_{f}(\mathbf{Q}_{l},V)=\ker(H^{1}(\mathbf{Q}_{l},V)\to H^{1}(\mathbf{Q}_{l},V\otimes B_{\mathrm{cris}}))\);
3. we define \(H^{1}_{f}(\mathbf{Q}_{r},\mathcal{M})=pr_{*}H^{1}_{f}(\mathbf{Q}_{r},V)\) for each prime \(r\);
4. we define \(H^{1}_{f}(\mathbf{Q}_{r},M_{n})=i_{n}^{*}H^{1}_{f}(\mathbf{Q}_{r},\mathcal{M})\) for each prime \(r\).
We will be interested in the Bloch-Kato Selmer group of \(\mathcal{M}\) defined by
\[H^{1}_{f}(\mathbf{Q},\mathcal{M})=\ker\{H^{1}(\mathbf{Q},\mathcal{M})\to\prod_ {r}\frac{H^{1}(\mathbf{Q}_{r},\mathcal{M})}{H^{1}_{f}(\mathbf{Q}_{r},\mathcal{ M})}\}.\]
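For orientation we recall the concrete shape of the finite condition away from \(l\); this is a standard fact about unramified Galois cohomology and is stated here without proof. If \(r\neq l\) and \(V\) is unramified at \(r\), then

\[
H^{1}_{\mathrm{fin}}(\mathbf{Q}_{r},V)=\ker\big(H^{1}(\mathbf{Q}_{r},V)\to H^{1}(\mathrm{I}_{r},V)\big)=H^{1}(\mathbf{F}_{r},V),
\]

so a class is finite at such \(r\) exactly when its restriction to inertia vanishes.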
The Bloch-Kato Selmer group of \(M_{n}\) is defined by the same recipe replacing \(\mathcal{M}\) by \(M_{n}\). Moreover we have
\[H^{1}_{f}(\mathbf{Q},\mathcal{M})=\varinjlim H^{1}_{f}(\mathbf{Q},M_{n})\]
and an exact sequence
\[0\to pr_{*}H^{1}_{f}(\mathbf{Q},V)\to H^{1}_{f}(\mathbf{Q},\mathcal{M})\to \Sha(\mathbf{Q},\mathcal{M})\to 0\]
defining the Tate-Shafarevich group \(\Sha(\mathbf{Q},\mathcal{M})\) for \(\mathcal{M}\).
### Statement of the main result
We now consider the datum given in the beginning of §5.2. Let \(f\in S_{2}(\Gamma_{0}(N))\) be a newform of level \(N\). Suppose that \(N\) admits a decomposition \(N=N^{+}N^{-}\). Under the assumption that \(N^{-}\) is square-free and has an odd number of prime divisors, \(f\) admits a normalized Jacquet-Langlands transfer \(f^{\dagger}\) to an automorphic form on \(Z(\overline{B})\). Here normalized means that there is an element \(z\in Z(\overline{B})\) such that \(f^{\dagger}(z)\) is non-zero modulo \(\lambda\). The base change of \(f^{\dagger}\) to \(F\) can be regarded as an automorphic form on \(Z(\overline{Q})\) which we will denote by \(\phi^{\dagger}\). We will choose \(\phi^{\dagger}\) such that it is normalized as well, in the same sense as for \(f^{\dagger}\). The automorphic form \(f^{\dagger}\) is contained in an automorphic representation of \(G(\overline{B})\) which we denote by \(\pi^{\circ}=\pi^{\overline{B}}\), and \(\phi^{\dagger}\) is contained in an automorphic representation \(\pi^{\overline{Q}}\) of \(G(\overline{Q})\) which is the base change of \(\pi^{\circ}\) to \(F\). Then the discussion of §5.2 applies and in particular \(\pi^{\overline{Q}}\) admits a Jacquet-Langlands transfer to \(\pi^{Q}\). We will use the following set of notations:
1. Let \((\rho_{\pi^{\circ}},V_{\pi^{\circ}})\) be the Galois representation associated to \(\pi^{\circ}\) and \(T_{\pi^{\circ}}\) be a Galois stable lattice we have fixed in §5.2. Then we define the divisible Galois module \(\mathcal{M}_{\pi^{\circ}}\) by \[0\to T_{\pi^{\circ}}\to V_{\pi^{\circ}}\to\mathcal{M}_{\pi^{\circ}}\to 0.\]
2. Similarly, let \((\rho_{\pi^{\mathbb{Q}}},V_{\pi^{\mathbb{Q}}})\) be the Galois representation associated to \(\pi^{Q}\) and \(T_{\pi^{\mathbb{Q}}}\) be the Galois stable lattice we have fixed in §5.2. Then we define the divisible Galois module \(\mathcal{M}_{\pi^{\mathbb{Q}}}\) by \[0\to T_{\pi^{\mathbb{Q}}}\to V_{\pi^{\mathbb{Q}}}\to\mathcal{M}_{\pi^{\mathbb{Q}}}\to 0.\]
3. We have a natural decomposition \[\mathrm{As}(\mathcal{M}_{\pi^{\mathbb{Q}}})(-1)=\mathrm{Sym}^{2}(\mathcal{M}_{ \pi^{\circ}})(-1)\oplus E_{\lambda}/\mathcal{O}_{\lambda}(\omega_{F/\mathbf{Q}}).\]
4. Note that \(\mathrm{Sym}^{2}(\mathcal{M}_{\pi^{\circ}})(-1)\) is the same as \(\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{\circ}})\). We will set \(M_{n}=\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{\circ}}[\lambda^{n}])\) and set \(N_{n}=\mathrm{Sym}^{2}(T_{\pi^{\circ},n})\) for \(T_{\pi^{\circ},n}=T_{\pi^{\circ}}\mod\lambda^{n}\) and \(n\geq 1\).
5. Recall we have an embedding \(\vartheta:Z(\overline{B})\to Z(\overline{Q})\) of Shimura sets as in Lemma 3.3 and thus we can consider the integral distinguished period defined by \[\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger})=\sum_{z\in Z(\overline{B})}\phi^{ \dagger}(z).\] where the sum is taken over elements of the image of \(\vartheta:Z(\overline{B})\to Z(\overline{Q})\).
The following theorem will be the main theorem on bounding the Selmer group using the distinguished period.
**Theorem 6.1**.: _Let \(f\in\mathrm{S}_{2}(\Gamma_{0}(\mathrm{N}))\) be a newform of weight \(2\) with \(\mathrm{N}=\mathrm{N}^{+}\mathrm{N}^{-}\) such that \(\mathrm{N}^{-}\) is squarefree and has an odd number of prime factors. Let \(f^{\dagger}\) be the normalized automorphic form on \(\mathrm{G}(\overline{B})\) corresponding to \(f\) under the Jacquet-Langlands correspondence and let \(\phi^{\dagger}\) be the base change of \(f^{\dagger}\), which is also normalized. Let \(\nu=\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger}))\) and \(\eta=\varpi^{\nu}\). Suppose the following conditions hold._
1. _The residual Galois representation_ \(\overline{\rho}_{\pi^{\circ}}\) _is absolutely irreducible;_
2. _the image of_ \(\overline{\rho}_{\pi^{\circ}}\) _contains_ \(\mathrm{GL}_{2}(\mathbf{F}_{l})\)_;_
3. _the Galois cohomology groups_ \[\mathrm{H}^{1}(\mathbf{Q}(\mathrm{M}_{n})/\mathbf{Q},\mathrm{M}_{n})=0\] _for every_ \(n\geq 1\)_, where_ \(\mathbf{Q}(\mathrm{M}_{n})\) _is the splitting field of the Galois module_ \(\mathrm{M}_{n}\)_._
_Then \(\eta\) annihilates the Selmer group \(\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{\circ}}))\), in particular_
\[\mathrm{length}\ \mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{Ad}^{0}(\mathcal{M}_{ \pi^{\circ}}))\leq\nu.\]
### The Flach system argument
To prove the main theorem, we will show that under the assumptions of the theorem, \(\eta\) annihilates each finite Selmer group \(\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{M}_{n})\) with \(n\geq 1\). Here the argument follows closely that of [Flach1] and [Weston1]. We define \(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{ \mathbf{Q}},\mathcal{O}_{n}(2))\) to be the reduction of \(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{ \mathbf{Q}},\mathcal{O}_{\lambda}(2))\) modulo \(\lambda^{n}\).
**Lemma 6.2**.: _For each \(n\geq 1\), the Galois module_
\[\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{ \mathbf{Q}},\mathcal{O}_{n}(1))\]
_is isomorphic to \(m(\pi^{\mathrm{Q}},d)\) copies of \(\mathrm{Sym}^{2}(\mathrm{T}_{\pi^{\circ},n})(-1)\oplus\mathcal{O}_{n}(\omega_{ \mathrm{F}/\mathbf{Q}})\). In particular, there is a projection from \(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{ \mathbf{Q}},\mathcal{O}_{n}(2))\) to \(\mathrm{N}_{n}=\mathrm{Sym}^{2}(\mathrm{T}_{\pi^{\circ},n})\)._
Proof.: Recall that we have an isomorphism
\[\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda}(1))\cong(\mathrm{Sym}^{2}\mathrm{T}_{\pi^{\circ}}(-1)\oplus\mathcal{O}_{\lambda}(\omega_{\mathrm{F}/\mathbf{Q}}))^{m(\pi^{\mathrm{Q}},d)}\]
by the definition of \(d\)-cleanness of \(\mathrm{T}_{\pi^{\mathrm{Q}}}\). The lemma then follows immediately from Proposition 4.1, which implies that \(\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}(\mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda}(1))\) is torsion free.
We consider the Flach class
\[\kappa^{[p]}\in\mathrm{H}^{1}(\mathbf{Q},\mathrm{H}^{2}_{\pi^{\mathrm{Q}}}( \mathrm{X}(\mathrm{Q})\otimes\overline{\mathbf{Q}},\mathcal{O}_{\lambda}(2)))\]
defined as in §4.4, and we will use this class to construct an annihilator of the Selmer group. By the above lemma, we can project this class to \(\mathrm{H}^{1}(\mathbf{Q},\mathrm{N}_{n})\) and we will denote by \(\kappa^{[p]}_{n}\) the resulting class. We also recall that \(\eta=\varpi^{\nu}\) where \(\nu=\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger}))\).
**Lemma 6.3**.: _Let \(p\) be an admissible prime for \(\mathrm{T}_{\pi^{\mathrm{Q}}}\). Then the singular part of the cohomology \(\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p},\mathrm{N}_{n})\) is free of rank one over \(\mathcal{O}_{n}\). And the quotient of \(\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p},\mathrm{N}_{n})\) by the module generated by \(\partial_{p}(\kappa_{n}^{[p]})\) is annihilated by \(\eta\)._
Proof.: By the definition of an \(n\)-admissible prime, one checks easily that the eigenvalue \(1\) appears exactly once among the Frobenius eigenvalues at \(p\) acting on \(\mathrm{N}_{n}\). Since we have
\[\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p},\mathrm{N}_{n})=\mathrm{Hom}(\mathbf{Z}_{l}(1),\mathrm{N}_{n})^{\mathrm{G}_{\mathbf{F}_{p}}}=(\mathrm{N}_{n}(-1))^{\mathrm{G}_{\mathbf{F}_{p}}},\]
\(\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p},\mathrm{N}_{n})\) is free of rank one over \(\mathcal{O}_{n}\). It follows from the reciprocity law proved in Proposition 4.5 that the quotient of \(\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p},\mathrm{N}_{n})\) by the module generated by \(\partial_{p}(\kappa_{n}^{[p]})\) is annihilated by \(\eta\).
**Lemma 6.4**.: _Let \(p\) be an admissible prime for \(\mathrm{T}_{\pi^{\mathrm{Q}}}\) and define_
\[\mathrm{H}^{1}_{\{p\}}(\mathbf{Q},\mathrm{M}_{n})=\ker\{\mathrm{H}^{1}( \mathbf{Q},\mathrm{M}_{n})\to\mathrm{H}^{1}(\mathbf{Q}_{p},\mathrm{M}_{n})\}.\]
_Then we have_
\[\eta\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{M}_{n})\subset\mathrm{H}^{1}_{\{p\} }(\mathbf{Q},\mathrm{M}_{n}).\]
Proof.: We consider the element \(\kappa_{n}^{[p]}\) in \(\mathrm{H}^{1}(\mathbf{Q},\mathrm{N}_{n})\). We have verified that \(\mathrm{loc}_{r}(\kappa_{n}^{[p]})\) lies in the finite part for all \(r\nmid p\mathrm{N}\). On the other hand, the same proofs as for [Flach1, Lemma 2.8] and [Flach1, Lemma 2.10] carry over here and show that \(\mathrm{loc}_{r}(\kappa_{n}^{[p]})\) lies in \(\mathrm{H}^{1}_{f}(\mathbf{Q}_{r},\mathrm{N}_{n})\) for \(r\mid\mathrm{N}\). Under the local Tate duality
\[\langle\cdot,\cdot\rangle_{r}:\mathrm{H}^{1}(\mathbf{Q}_{r},\mathrm{N}_{n}) \times\mathrm{H}^{1}(\mathbf{Q}_{r},\mathrm{M}_{n})\to\mathcal{O}_{n},\]
it is well known that our local conditions \(\mathrm{H}^{1}_{f}(\mathbf{Q}_{r},\mathrm{N}_{n})\) are orthogonal to \(\mathrm{H}^{1}_{f}(\mathbf{Q}_{r},\mathrm{M}_{n})\), and that at \(r=p\) the duality induces a perfect pairing
\[\langle\cdot,\cdot\rangle_{p}:\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p},\mathrm{N}_{n})\times\mathrm{H}^{1}_{\mathrm{fin}}(\mathbf{Q}_{p},\mathrm{M}_{n})\to\mathcal{O}_{n}.\]
Let \(s\) be any element in \(\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{M}_{n})\), then by global class field theory
\[\sum_{r}\langle\mathrm{loc}_{r}(s),\mathrm{loc}_{r}(\kappa^{[p]})\rangle_{r}=0.\]
This identity reduces to \(\langle\mathrm{loc}_{p}(s),\mathrm{loc}_{p}(\kappa^{[p]})\rangle_{p}=0\) by the discussion above. It follows that \(\langle\mathrm{loc}_{p}(s),\partial_{p}(\kappa^{[p]})\rangle_{p}=0\). Since we know that \(\eta\mathrm{H}^{1}_{\mathrm{sing}}(\mathbf{Q}_{p},\mathrm{N}_{n})\) is contained in the module generated by \(\partial_{p}\kappa^{[p]}\) by Lemma 6.3, \(\eta\mathrm{loc}_{p}(s)\) has to vanish by the perfectness of the above pairing. Therefore \(\eta\mathrm{H}^{1}_{f}(\mathbf{Q},\mathrm{M}_{n})\subset\mathrm{H}^{1}_{\{p\}}(\mathbf{Q},\mathrm{M}_{n})\).
**Lemma 6.5**.: _Let \(\mathrm{F}_{n}=\mathbf{Q}(\mathrm{M}_{n})\) be the splitting field for the Galois module \(\mathrm{M}_{n}\) and \(\Delta_{n}=\mathrm{Gal}(\mathrm{F}_{n}/\mathbf{Q})\). Then we have_
\[\mathrm{H}^{1}_{\{p\}}(\mathbf{Q},\mathrm{M}_{n})\subset\mathrm{H}^{1}(\Delta_ {n},\mathrm{M}_{n})\]
_where \(\mathrm{H}^{1}(\Delta_{n},\mathrm{M}_{n})\) is considered as a subgroup of \(\mathrm{H}^{1}(\mathbf{Q},\mathrm{M}_{n})\) via inflation._
Proof.: Let \(s\in\mathrm{H}^{1}_{\{p\}}(\mathbf{Q},\mathrm{M}_{n})\) and consider the exact sequence
\[0\to\mathrm{H}^{1}(\Delta_{n},\mathrm{M}_{n})\to\mathrm{H}^{1}(\mathbf{Q}, \mathrm{M}_{n})\to\mathrm{H}^{1}(\mathrm{F}_{n},\mathrm{M}_{n})^{\Delta_{n}}= \mathrm{Hom}_{\Delta_{n}}(\mathrm{G}_{\mathrm{F}_{n}},\mathrm{M}_{n})\to 0. \tag{6.1}\]
Let \(\psi:\mathrm{G}_{\mathrm{F}_{n}}\to\mathrm{M}_{n}\) be the image of \(s\) in \(\mathrm{Hom}_{\Delta_{n}}(\mathrm{G}_{\mathrm{F}_{n}},\mathrm{M}_{n})\). Then we need to show that \(\psi=0\). Let \(\tilde{s}\) be the cocycle representing \(s\) and let \(\mathrm{F}^{\prime}_{n}\) be the fixed field of the kernel of \(\tilde{s}\).
Let \(\Gamma=\operatorname{Gal}(\mathrm{F}_{n}^{\prime}/\mathrm{F}_{n})\); then it is clear that \(\psi\) factors through a map \(\psi:\Gamma\to\mathrm{M}_{n}\). Let \(\tau\) be the Frobenius element at \(p\) in \(\Delta_{n}\) and fix a lift \(\tau^{\prime}\) to \(\operatorname{Gal}(\mathrm{F}_{n}^{\prime}/\mathbf{Q})\). Let \(g\) be any element in \(\Gamma\). By the Chebotarev density theorem we can find a place \(v^{\prime}\) of \(\mathrm{F}_{n}^{\prime}\) such that \(\mathrm{Fr}_{\mathrm{F}_{n}^{\prime}/\mathbf{Q}}(v^{\prime})=\tau^{\prime}g\). Let \(v\) be the place of \(\mathrm{F}_{n}\) under \(v^{\prime}\), which necessarily lies over \(p\).
Since \(s_{p}:=\mathrm{res}_{p}(s)\) is trivial, \(\tilde{s}|_{\operatorname{Gal}(\mathrm{F}_{n,v^{\prime}}^{\prime}/\mathbf{Q} _{p})}\) is a coboundary. Thus we have
\[\tilde{s}(\tau^{\prime}g)\in(\tau^{\prime}g-1)\mathrm{M}_{n}=(\tau-1)\mathrm{ M}_{n}.\]
Taking \(g=1\) gives \(\tilde{s}(\tau^{\prime})\in(\tau-1)\mathrm{M}_{n}\). On the other hand, the cocycle relation gives
\[\tilde{s}(\tau^{\prime}g)=\tilde{s}(\tau^{\prime})+\tau\tilde{s}(g).\]
It then follows that \(\tau\tilde{s}(g)\in(\tau-1)\mathrm{M}_{n}\). Since \(\tilde{s}(g)=\tau\tilde{s}(g)-(\tau-1)\tilde{s}(g)\) and \((\tau-1)\tilde{s}(g)\in(\tau-1)\mathrm{M}_{n}\), we conclude that \(\tilde{s}(g)\in(\tau-1)\mathrm{M}_{n}\) for any \(g\in\Gamma\). Thus the image of \(\psi\) lies in \((\tau-1)\mathrm{M}_{n}\). Note that \(\psi\) is \(\Delta_{n}\)-equivariant and \(\mathrm{M}_{n}\) is irreducible. Since \(\mathrm{M}_{n}\neq(\tau-1)\mathrm{M}_{n}\) by the same reasoning as in Lemma 6.3, the image of \(\psi\) is zero.
Proof of Theorem 6.1.: For each \(n\), we pick an admissible prime \(p\) for \(\mathrm{T}_{\pi^{\mathrm{Q}}}\). By Lemmas 6.4 and 6.5, we have
\[\eta\mathrm{H}_{f}^{1}(\mathbf{Q},\mathrm{M}_{n})\subset\mathrm{H}^{1}(\Delta _{n},\mathrm{M}_{n}).\]
By the third assumption in the statement of the theorem, we know \(\mathrm{H}^{1}(\Delta_{n},\mathrm{M}_{n})\) is trivial. Thus \(\mathrm{H}_{f}^{1}(\mathbf{Q},\mathrm{M}_{n})\) is indeed annihilated by \(\eta\) for each \(n\). Hence \(\mathrm{H}_{f}^{1}(\mathbf{Q},\mathrm{Ad}^{0}(\mathcal{M}_{\pi^{\circ}}))\) is also annihilated by \(\eta\) as desired.
### Comparison of quaternionic periods
Recall the setup at the beginning of §6.2: \(f\in{\rm S}_{2}(\Gamma_{0}({\rm N}))\) is a modular form which is new at primes dividing \({\rm N}^{-}\) and admits a normalized Jacquet-Langlands transfer \(f^{\dagger}\) to an automorphic form on \({\rm Z}(\overline{\rm B})\). The base change of \(f^{\dagger}\) to \({\rm F}\) can be regarded as an automorphic form on \({\rm Z}(\overline{\rm Q})\) which we denote by \(\phi^{\dagger}\). The purpose of this final section is to compare the distinguished period
\[\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger})=\sum_{z\in\mathrm{Z}(\overline{ \mathrm{B}})}\phi^{\dagger}(z)\]
and the quaternionic period
\[\mathcal{P}(f^{\dagger})=\sum_{z\in\mathrm{Z}(\overline{\mathrm{B}})}f^{\dagger}(z)^{2}=\langle f^{\dagger},f^{\dagger}\rangle\]
which is known as the Petersson norm of \(f^{\dagger}\). We would like to compare \(\mathrm{ord}_{\lambda}(\mathcal{P}_{\mathrm{dis}}(\phi^{\dagger}))\) and \(\mathrm{ord}_{\lambda}(\mathcal{P}(f^{\dagger}))\) under suitable assumptions on the Galois representation
\[\rho_{\pi^{\circ}}:\mathrm{G}_{\mathbf{Q}}\to\operatorname{GL}_{2}(\mathrm{E}_ {\lambda})\]
associated to \(\pi^{\circ}\). We denote by \(\overline{\rho}_{\pi^{\circ}}\) the residual representation of \(\rho_{\pi^{\circ}}\). Let \(\Sigma^{+}\) be the set of primes dividing \(\mathrm{N}^{+}\) and let \(\Sigma^{-}_{\mathrm{ram}}\) be the set of primes \(r\) dividing \(\mathrm{N}^{-}\) such that \(l\mid r^{2}-1\).
**Assumption 6.6**.: We make the following assumptions on \(\bar{\rho}_{\pi^{\circ}}\).
1. \(\bar{\rho}_{\pi^{\circ}}|_{\mathrm{G}_{\mathbf{Q}(\zeta_{l})}}\) is absolutely irreducible;
2. The image of \(\bar{\rho}_{\pi^{\circ}}\) contains \(\operatorname{GL}_{2}(\mathbf{F}_{l})\);
3. \(\overline{\rho}_{\pi^{\circ}}\) is minimal at primes in \(\Sigma^{+}\) in the sense that all the liftings of \(\overline{\rho}_{\pi^{\circ}}|_{\mathrm{G}_{\mathbf{Q}_{r}}}\) are minimally ramified for \(r\in\Sigma^{+}\);
4. \(\overline{\rho}_{\pi^{\circ}}\) is ramified at primes in \(\Sigma^{-}_{\rm ram}\).
Now we can state the main result on comparing the periods \({\mathcal{P}}_{\rm dis}(\phi^{\dagger})\) and \({\mathcal{P}}(f^{\dagger})\).
**Theorem 6.7**.: _Let \(f\in{\rm S}_{2}(\Gamma_{0}({\rm N}))\) be a newform of weight \(2\) with \({\rm N}={\rm N}^{+}{\rm N}^{-}\) such that \({\rm N}^{-}\) is squarefree and has an odd number of prime factors. Let \(f^{\dagger}\) be the automorphic form on \({\rm Z}(\overline{\rm B})\) corresponding to \(f\) under the Jacquet-Langlands correspondence and let \(\phi^{\dagger}\) be the base change of \(f^{\dagger}\) considered as an automorphic form on \({\rm Z}(\overline{\rm Q})\)._
1. _We assume that the residual Galois representation_ \(\overline{\rho}_{\pi^{\circ}}\) _satisfies Assumption_ 6.6_;_
2. _We further assume that_ \[{\rm H}^{1}({\bf Q}({\rm M}_{n})/{\bf Q},{\rm M}_{n})=0\] _for every_ \(n\geq 1\)_, where_ \({\bf Q}({\rm M}_{n})\) _is the splitting field of the Galois module_ \({\rm M}_{n}\)_._
_Then we have the following inequality_
\[{\rm ord}_{\lambda}({\mathcal{P}}_{\rm dis}(\phi^{\dagger}))\geq{\rm ord}_{ \lambda}({\mathcal{P}}(f^{\dagger})).\]
To prove this theorem, we will study the Bloch-Kato Selmer group \({\rm H}^{1}_{f}({\bf Q},{\rm Ad}^{0}({\mathcal{M}}_{\pi^{\circ}}))\) from the perspective of the deformation theory of the residual representation \(\overline{\rho}_{\pi^{\circ}}\). Although the Bloch-Kato Selmer group itself has little direct connection with the deformation theory of \(\overline{\rho}_{\pi^{\circ}}\), we can introduce a smaller Selmer group as follows. We define the local conditions \({\rm H}^{1}_{\rm new}({\bf Q}_{v},{\rm Ad}^{0}({\mathcal{M}}_{\pi^{\circ}}))\) and \({\rm H}^{1}_{\rm new}({\bf Q}_{v},{\rm M}_{n})\) as in [Lun, Definition 3.6]; see also [KO, (3.3), (3.4)]. Let \(\Sigma^{-}_{\rm mix}\) be the set of prime divisors \(r\) of \({\rm N}^{-}\) such that \(l\nmid r^{2}-1\). Then we define the Selmer group \({\rm H}^{1}_{\mathcal{S}}({\bf Q},{\rm Ad}^{0}({\mathcal{M}}_{\pi^{\circ}}))\) by
\[\ker\{{\rm H}^{1}({\bf Q},{\rm Ad}^{0}({\mathcal{M}}_{\pi^{\circ}}))\to\prod_ {v\not\in\Sigma^{-}_{\rm mix}}\frac{{\rm H}^{1}({\bf Q}_{v},{\rm Ad}^{0}({ \mathcal{M}}_{\pi^{\circ}}))}{{\rm H}^{1}_{f}({\bf Q}_{v},{\rm Ad}^{0}({ \mathcal{M}}_{\pi^{\circ}}))}\times\prod_{v\in\Sigma^{-}_{\rm mix}}\frac{{\rm H }^{1}({\bf Q}_{v},{\rm Ad}^{0}({\mathcal{M}}_{\pi^{\circ}}))}{{\rm H}^{1}_{ \rm new}({\bf Q}_{v},{\rm Ad}^{0}({\mathcal{M}}_{\pi^{\circ}}))}\}.\]
Let \({\mathbb{T}}={\mathbb{T}}^{\rm N}\) be the prime-to-N Hecke algebra for the group \({\rm G}(\overline{\rm B})\). Let
\[\phi_{\pi^{\circ}}:{\mathbb{T}}\to k_{\lambda}\]
be the morphism provided by the Hecke eigensystem corresponding to the trace of Frobenius of \(\overline{\rho}_{\pi^{\circ}}\). Then we obtain a maximal ideal \({\mathfrak{m}}=\ker(\phi_{\pi^{\circ}})\). On the other hand, consider the global deformation problem given by
\[{\mathcal{S}}_{\rm mix}:=(\overline{\rho},\chi_{l},\Sigma^{+}\cup\Sigma^{-}_{ \rm ram}\cup\Sigma^{-}_{\rm mix}\cup\{l\},\{{\mathcal{D}}_{v}\}_{v\in\Sigma^{+ }\cup\Sigma^{-}_{\rm ram}\cup\Sigma^{-}_{\rm mix}\cup\{l\}})\]
that classifies the deformations of \(\overline{\rho}\) over an \({\mathcal{O}}\)-algebra which satisfy the following local deformation conditions:
1. for \(v=l\), \({\mathcal{D}}_{l}\) classifies deformations which are Fontaine-Laffaille crystalline;
2. for \(v\in\Sigma^{-}_{\rm ram}\cup\Sigma^{+}\), \({\mathcal{D}}_{v}\) is the local deformation problem that classifies deformations that are minimally ramified;
3. for \(v\in\Sigma^{-}_{\rm mix}\), \({\mathcal{D}}_{v}={\mathcal{D}}^{\rm new}_{v}\) is the local deformation problem that classifies deformations that are new in the sense of [Lun, Definition 3.6].
This global deformation problem is represented by the deformation ring \({\rm R}_{\rm mix}\). Moreover it follows from [Lun, Proposition 4.1], see also [KO, Theorem 3.14] that there is an isomorphism
\[{\rm R}_{\rm mix}\cong{\mathbb{T}}_{\mathfrak{m}}. \tag{6.2}\] |
2307.15925 | Formation of spiral dwarf galaxies: observational data and results of
numerical simulation | Recent studies show the possibility of the formation of fairly regular and
global spiral patterns in dwarf galaxies (dS type). Our sample of observed
dwarf objects of this class also includes galaxies with a central stellar bar.
The analysis of the observational data provides a small rotation velocity and a
small disk component mass for dS galaxies, which is in poor agreement with the
spiral structure generation mechanism in isolated dwarfs due to the development
of disk gravitational instability. Numerical simulation of the stellar-gaseous
disks self-consistent dynamics imposes restrictions on the stellar disk
thickness and the maximum gas rotation velocity, at which the gravitational
mechanism of spiral formation can still be effective. | Sergey Khrapov, Alexander Khoperskov, Natalia Zaitseva, Anatoly Zasov, Alexander Titov | 2023-07-29T08:04:49Z | http://arxiv.org/abs/2307.15925v1 | # Formation of spiral dwarf galaxies: observational data and results of numerical simulation 1
###### Abstract
Recent studies show the possibility of the formation of fairly regular and global spiral patterns in dwarf galaxies (dS type). Our sample of observed dwarf objects of this class also includes galaxies with a central stellar bar. The analysis of the observational data provides a small rotation velocity and a small disk component mass for dS galaxies, which is in poor agreement with the spiral structure generation mechanism in isolated dwarfs due to the development of disk gravitational instability. Numerical simulation of the stellar-gaseous disks self-consistent dynamics imposes restrictions on the stellar disk thickness and the maximum gas rotation velocity, at which the gravitational mechanism of spiral formation can still be effective.
## 1 Introduction
Dwarf galaxies are small in size and mass compared to classical spiral galaxies (S or SB types) and are usually considered structureless, irregular objects (Irr). Some late-type dwarfs (Sd - Sm types) have a rotating stellar disk without any regular, well-developed spiral structure. Such galaxies exhibit a flocculent type of spiral pattern, consisting of short, discontinuous arm segments [1]. Gravitational instability in large massive disks is able to produce a regular spiral pattern covering the entire disk, so a Grand Design spiral structure is common for Sa - Sc galaxy types [2, 3, 4, 5, 6].
Only a small fraction of dwarf galaxies show a global, relatively regular spiral pattern in their disks, and such objects belong to the fairly rare dS type [7, 8, 9]. A comparative analysis of observations of normal S and dS galaxies shows that such dwarfs are more than just smaller copies of large objects, since their spectral characteristics are similar to those of Irr galaxies [10]. The small size and mass of dwarf galaxies pose a theoretical problem for the formation of extended spirals due to gravitational instability [5, 6].
Here we consider the observed properties of the sample of dS galaxies compared to dwarf galaxies without a regular spiral structure [7]. Numerical simulation of the dynamics of dwarf stellar disks with a rich gaseous component makes it possible to determine the conditions under which gravitational instability can generate sufficiently extended spiral patterns in dS galaxies, whose morphology is similar to that of Grand Design galaxies.
## 2 Sample of dS-galaxies and its properties
Our sample is limited to objects (usually of type Sc - Irr) with absolute magnitude \(M_{B}>-18^{m}\), optical diameter \(D_{25}<12\,\)kpc, \(m_{B}<15^{m}\), and inclination angle \(i<75^{\circ}\) [7]. It is also important to have images in different spectral ranges for deeper study (Figures 1, 2). The logarithm of the isolation index \(\log(ii)\) characterizes the degree of environmental influence [11]; we exclude the Virgo, UMa, and Fornax clusters, as well as peculiar galaxies with signs of strong interaction.
Our sample of spiral dwarf galaxies includes 43 objects, which are compared with the sample of dwarfs without spirals (119 objects of Sm and Irr types; see the detailed description in [7]). Figure 1 shows dwarf galaxies with bars and two distinct arms. The images from SDSS 9 (the Sloan Digital Sky Survey), DECaLS (the Dark Energy Camera Legacy Survey), DSS (the Digitized Sky Survey), PanSTARRS (the Panoramic Survey Telescope and Rapid Response System), and LEGA (the DESI Legacy Imaging Surveys) [12] characterize the distributions of the stellar components. The bottom rows in Figs. 1 and 2 show the distributions of gas and young stars according to GALEX data (the Galaxy Evolution Explorer). Figure 2 shows galaxies with a more complex spiral structure, where the yellow lines highlight the positions of the spiral arms. Moreover, the spirals in the stellar and gaseous components are in good agreement with each other. Three-arm patterns indicate a rather massive dark halo.
Each galaxy in our two samples is characterized by the systemic velocity \(V_{sys}\), the diameter \(D_{25}\), the maximum rotation velocity \(V_{rot}\), the HI mass \(M_{HI}\), the estimate of the total gravitating mass \(M_{dyn}\), and the luminosity \(L_{K}\) according to the K-magnitudes of the 2MASS catalog [16]. Statistical analysis leads to the following conclusions (see also [7]).
-- Dwarf galaxies with developed spiral structure are the most massive objects in the sample.
-- The distributions of dS galaxies and objects without spirals indicate the absence of very significant differences for various pairs of parameters, for example, \(L_{K}-M_{dyn}\), \(L_{K}-M_{HI}\), \(D_{25}-M_{HI}\), \(M_{dyn}-M_{HI}\), \(V_{rot}-M_{HI}\) and others (Figure 3).
Figure 1: Dwarf galaxies with a spiral pattern from our sample.
-- The HI mass in dS galaxies is, on average, about a factor of two lower than in non-spiral dwarf galaxies, although the dynamical and photometric parameters are close for both samples.
-- The central stellar bar is found both in dS-type objects and in non-spiral galaxies.
-- Tidal influence is apparently not an essential factor in the formation of the considered galaxies.
-- The proportion of baryonic matter in spiral dwarfs is, on average, lower than in objects with irregular structure and in giant spiral galaxies, which may indicate the influence of the dark halo on the formation of the spiral patterns in dS-dwarfs.
-- Remote dwarf galaxies follow the same Tully-Fisher relation as Local Volume dwarfs, but their physical parameters are determined with larger uncertainties due to the low brightness of these objects, which significantly increases the scatter of points on the diagram.
Figure 3: Positions of various dwarf galaxies: blue icons are spirals, green squares are Sm-type, pink squares are irregular objects. _a_) In-plane distribution of the “baryon mass – momentum” parameters compared to the regression for isolated galaxies of the AMIGA sample [13] (black line); colored lines show the \(1\sigma\) deviation. Here \(M_{\mathrm{baryonic}}=\Upsilon_{K}^{*}L_{K}+\eta M_{HI}\), where \(\Upsilon_{K}^{*}=0.6\) [14] and \(\eta=1.33\). _b_) Tully-Fisher relation for the K-band compared to that obtained by Karachentsev et al. [15] for the sample of Local Volume dwarf galaxies (black line; colored lines show the \(1\sigma\) deviation).
Figure 2: Dwarf galaxies with a spiral pattern from our sample (continued).
## 3 Numerical modeling of the galactic disk
The numerical models are based on the self-consistent dynamics of the N-body gravitating system for the stellar disk and the gaseous component, which is described by the hydrodynamic equations [3, 7]. We used the direct method to calculate the self-gravity forces (each particle interacts with every other particle), which is the most accurate modeling approach [17]. The use of GPUs for the parallel code makes it possible to perform fairly fast numerical experiments with \(N=2^{20}-2^{23}\). The numerical model should ensure the collisionlessness of the stellar component [7, 18], which is achieved by cutting off the Newtonian potential at small radii \(r_{c}\). The surface density of the exponential stellar disk, \(\sigma(r)=\sigma_{0}\,\exp(-r/r_{d})\), is characterized by the radial scale \(r_{d}\). We use dimensionless units of length \(r=1\to 4\,\)kpc, velocity \(V=1\to 47\,\)km\(\,\)sec\({}^{-1}\) and time \(t=1\to 80\,\)Myr.
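For illustration, the force calculation with a small-radius cutoff can be sketched in a few lines of Python (this is our own minimal sketch, not the authors' GPU code; the Plummer-like softening form is an assumption about how the cutoff is realized):

```python
import numpy as np

def direct_accelerations(pos, mass, r_c, G=1.0):
    """Direct-summation N-body accelerations with the Newtonian interaction
    softened below the cutoff radius r_c, keeping the stellar system
    effectively collisionless. pos: (N, 3) positions; mass: (N,) masses."""
    dx = pos[None, :, :] - pos[:, None, :]           # r_ij = x_j - x_i
    r2 = np.einsum('ijk,ijk->ij', dx, dx) + r_c**2   # softened squared distance
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                    # exclude self-interaction
    # a_i = G * sum_j m_j r_ij / (|r_ij|^2 + r_c^2)^(3/2)
    return G * np.einsum('ij,ijk->ik', mass[None, :] * inv_r3, dx)
```

On a GPU the same double loop is typically mapped to one thread per target particle, which is what makes runs with \(N\sim 2^{23}\) feasible.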
Figure 4 demonstrates the presence or absence of the stellar disk heating for the corresponding values of \(r_{c}\) and \(N=N^{(*)}+N^{(h)}\) (where \(N^{(*)}\) is the number of particles in the stellar disk, \(N^{(h)}\) is the number of particles that form dark live halo).
We see a noticeable linear increase in the vertical velocity dispersion at very small cutoff radii due to the absence of collisionlessness in such a model. The value \(r_{c}=0.004\) ensures almost stationary behavior of the velocity dispersions (green lines). The heating is stronger for smaller values of the number of particles \(N^{(*)}\) and \(N^{(h)}\) (compare Figure 4a and Figure 4b).
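The heating diagnostic of Fig. 4 amounts to tracking the vertical velocity dispersion of disk particles over time, e.g. within radial bins (a minimal sketch with our own array conventions):

```python
import numpy as np

def sigma_z_profile(r, v_z, r_edges):
    """Vertical velocity dispersion of stellar particles in radial bins.
    Evaluated snapshot by snapshot, a secular growth of sigma_z signals
    collisional (numerical) heating of the disk."""
    idx = np.digitize(r, r_edges)
    return np.array([v_z[idx == i].std() if np.any(idx == i) else np.nan
                     for i in range(1, len(r_edges))])
```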
Figure 5 shows various spiral structures in the dwarf galaxy models. We obtain different stellar and gaseous disk morphologies in the numerical models by varying the relative masses of the stars, gas, and dark halo, as well as the radial and vertical scales that determine the distributions of the subsystem parameters. The model structures in the gaseous and stellar disks are close to the observed patterns of the corresponding galaxies (see Figures 1, 2). Other calculation examples are given in [7].
Figure 4: The time evolution of the stellar vertical velocity dispersion in disk models for different gravitational potential cutoff radii (\(r_{c}\)), numbers of particles in the disk (\(N^{(*)}\)) and in the live halo (\(N^{(h)}\)). _a_) \(N^{(*)}=N^{(h)}=2^{18}\), \(r_{c}=10^{-5}\) (red lines), \(r_{c}=4\cdot 10^{-3}\) (green lines), for different values of the radial exponential scale of the stellar disk \(r_{d}\); _b_) \(N^{(*)}=N^{(h)}=2^{20}\), \(r_{c}=10^{-5}\) (red lines), \(r_{c}=4\cdot 10^{-3}\) (green lines), for different values of \(r_{d}\).
## 4 Conclusion
Photometric and kinematic observational data do not allow us to confidently identify the factors that ensure the formation of global spiral patterns in dwarf galaxies, which are a rather rare phenomenon. There is only a modest gas deficit in dS galaxies compared to dIrr objects.
We have studied the possibility of global spiral structure formation in numerical models of isolated dwarf galaxies due to the development of gravitational instability in a gas-rich stellar disk. The presence of a spiral pattern in the dwarf models imposes some restrictions on the disk thickness, the radial velocity dispersion profile in the stellar disk, the sound speed in the gas, and the gas density. The results of the numerical simulations show that the maximum gas rotation velocity must be higher than 60 km sec\({}^{-1}\) in order to excite spiral waves with significant amplitude. A thicker stellar disk requires more gas to form the spiral pattern.
#### Acknowledgments
This research has made use of "Aladin sky atlas" developed at CDS, Strasbourg Observatory, France. The work was supported by the Ministry of Science and Higher Education of the Russian Federation (state assignment, project No. 0633-2020-0003, implementation of all numerical simulations) and by Russian Foundation for Basic Research (project 20-02-00080 A, observational data analysis).
|
2307.12092 | Binary vision: The merging black hole binary mass distribution via
iterative density estimation | Binary black hole (BBH) systems detected via gravitational-wave (GW) emission
are a recently opened astrophysical frontier with many unknowns and
uncertainties. Accurate reconstruction of the binary distribution with as few
assumptions as possible is desirable for inference on formation channels and
environments. Most population analyses have, though, assumed a power law in
binary mass ratio $q$, and/or assumed a universal $q$ distribution regardless
of primary mass. Kernel density estimation (KDE)-based methods allow us to
dispense with such assumptions and directly estimate the joint binary mass
distribution. We deploy a self-consistent iterative method to estimate this
full BBH mass distribution, finding local maxima in primary mass consistent
with previous investigations and a secondary mass distribution with a partly
independent structure, inconsistent with both power laws and with a constant
function of $q$. We find a weaker preference for near-equal mass binaries than
in most previous investigations; instead, the secondary mass has its own
"spectral lines" at slightly lower values than the primary, and we observe an
anti-correlation between primary and secondary masses around the ~$10M_\odot$
peak. | Jam Sadiq, Thomas Dent, Mark Gieles | 2023-07-22T14:47:07Z | http://arxiv.org/abs/2307.12092v1 | # Binary vision: The merging black hole binary mass distribution via iterative density estimation
###### Abstract
Binary black hole (BBH) systems detected via gravitational-wave (GW) emission are a recently opened astrophysical frontier with many unknowns and uncertainties. Accurate reconstruction of the binary distribution with as few assumptions as possible is desirable for inference on formation channels and environments. Most population analyses have, though, assumed a power law in binary mass ratio \(q\), and/or assumed a universal \(q\) distribution regardless of primary mass. Kernel density estimation (KDE)-based methods allow us to dispense with such assumptions and directly estimate the joint binary mass distribution. We deploy a self-consistent iterative method to estimate this full BBH mass distribution, finding local maxima in primary mass consistent with previous investigations and a secondary mass distribution with a partly independent structure, inconsistent with both power laws and with a constant function of \(q\). We find a weaker preference for near-equal mass binaries than in most previous investigations; instead, the secondary mass has its own "spectral lines" at slightly lower values than the primary, and we observe an anti-correlation between primary and secondary masses around the \(\sim\!10\,M_{\odot}\) peak.
Compact binaries, stellar-mass black holes, gravitational waves, density estimation

Jam Sadiq, Thomas Dent, Mark Gieles
## 1 Introduction
Ever since the first GW detection revealed a binary black hole source with the--previously unsuspected--component masses of around \(35\,M_{\odot}\)(Abbott et al., 2016, 2016), LIGO-Virgo-KAGRA observations of compact binaries have continued to yield surprises, of which the binary mass distribution arguably contains the most information bearing on formation environments and channels. In the first three observing runs of Advanced LIGO (Aasi et al., 2015) and Advanced Virgo (Acernese et al., 2015) the better part of 100 detections of binary compact object mergers via gravitational wave (GW) emission were made, as catalogued in the GWTC releases (Abbott et al., 2019, 2021, 2021, 2022). Having a set of detected events it is possible to study population properties of these compact binaries and eventually draw implications from these properties on binary astrophysical formation and evolution. Detailed investigations of the population properties of BBH mergers, the most commonly detected source type, were undertaken in Abbott et al. (2020, 2023), focusing on several population characteristics including their component masses and spins and possible dependence on redshift.
Among the parameters estimated and studied in connection with the population properties of binary compact objects, the component masses are obtained with the least uncertainty. Many parameterized and semi- or non-parametric models have been proposed to study the mass-dependence of the compact binary merger rate or the mass distribution of the merger population (see Abbott et al., 2023, and references therein). In parametric models, Bayesian hierarchical techniques are used to infer model hyper-parameter posteriors, and thus the population distribution (e.g. Mandel, 2010; Thrane and Talbot, 2019). On the other hand, non-parametric models are data-driven methods which learn population properties either without requiring any specific functional form, or (for semi-parametric models) allowing for generalised
deviations from a given parametric model (Powell et al., 2019; Tiwari and Fairhurst, 2021; Tiwari, 2021, 2022; Veske et al., 2021; Rinaldi and Del Pozzo, 2021; Edelman et al., 2021, 2023; Callister and Farr, 2023; Ray et al., 2023; Toubiana et al., 2023).
In Sadiq et al. (2022) we introduced a fast and flexible adaptive width kernel density estimation (awKDE) as a non-parametric method for reconstructing the binary black hole population distribution from observed gravitational wave data. A limitation of this method arose from the measurement uncertainty in each individual event's parameters. Given the relatively low signal-to-noise ratios, the observed component masses have significant uncertainties (e.g. Veitch et al., 2015) that can bias the overall population distribution if not properly accounted for.
In this work we propose a new method to reduce this bias in our population estimates by iteratively re-weighting the parameter samples of each observed event, using the awKDE density estimate to set the re-weighting probabilities. The idea is similar to the standard expectation-maximization algorithm (Dempster et al., 1977).
As an application of this new method, we estimate the full 2-dimensional component mass distribution without assumptions (aside from the use of Gaussian kernels) on its functional form. While attention has often focused on the primary component mass or on the more precisely measured chirp mass (see among others Dominik et al., 2015; Tiwari and Fairhurst, 2021; Tiwari, 2022, 2023; Edelman et al., 2023; Schneider et al., 2023; Farah et al., 2023), less attention has been paid to the full binary distribution, considered either via the secondary \(m_{2}\) or the mass ratio \(q\equiv m_{2}/m_{1}\). We expect these parameters to bear traces of the possible BBH formation channels (Kovetz et al., 2017), in that for dynamical (cluster) formation the two masses may be independent variates, up to a factor modelling the probability of binary formation and merger (Fishbach and Holz, 2020; Antonini et al., 2023) that typically favors near-equal masses (e.g. Rodriguez et al., 2016; O'Leary et al., 2016); whereas for isolated binary evolution, some nontrivial though probably highly model-dependent correlation of component masses may arise (e.g. van Son et al., 2022).
Typically, parametric models have assumed power-law \(m_{2}\) dependence at fixed \(m_{1}\)(Kovetz et al., 2017; Fishbach and Holz, 2017; Talbot and Thrane, 2018; Abbott et al., 2019), recovering mildly positive powers indicating a preference for equal masses. A more detailed study using GWTC-1 events concluded that the two BHs of a given binary prefer to be of comparable mass (Fishbach and Holz, 2020). More recent non-parametric or semi-parametric studies have relaxed these assumptions, either through allowing the power-law index to vary over chirp mass (Tiwari, 2022), or allowing \(p(q)\) to be a free (data-driven) function (Edelman et al., 2023; Callister and Farr, 2023) though enforcing the same dependence over all primary masses. Tiwari (2023) introduced a more flexible approach with \(p(q)\) modelled by a truncated Gaussian whose parameters depend on chirp mass, finding some significant variation. Ray et al. (2023) measured the full 2-d distribution with a binned (piecewise-constant) model over \(m_{1}\), \(m_{2}\) (including possible redshift dependence), although they did not consider the \(q\) distribution. Note that the mass ratio distribution presents nontrivial technical issues since (at least for lower mass BBH) typical event measurement errors are both large, and correlated with the BH orbit-aligned spin components (Cutler and Flanagan, 1994; Baird et al., 2013). Concerning measurement errors, we expect our iterative reweighting scheme to yield a significant advance in reconstructing the full mass distribution.
The remainder of the paper is organized as follows: in section 2 we explain our method and demonstrate it using simple one-dimensional mock data. In section 3 we apply our method to detected BBH in GWTC-3, compare the result with our previous studies and further use our new method in two mass dimensions. In section 4 we discuss the implications of our results and consider extensions of the method. We also describe additional mock data tests and supplementary results in appendices.
## 2 Method
### Statistical framework
Our general approach to population inference can be considered similar to maximum likelihood, with uncertainties quantified via empirical bootstrap methods (Efron, 1979). Given a set of observed events, if we neglect measurement uncertainty in each event's parameters, our population estimate is a KDE where the kernel bandwidth for each event is adjusted (Breiman et al., 1977; Abramson, 1982; Terrell and Scott, 1992; Sain and Scott, 1996) using an adaptive scheme to maximize the cross-validated likelihood (Sadiq et al., 2022). Note that a "maximum likelihood" KDE is not well defined, as the likelihood increases indefinitely in the limit of small bandwidth kernels centered on the observations; in this limit, the variance of the density estimate over realizations of the data becomes infinitely large. The bias-variance tradeoff is then addressed by adaptive kernel bandwidth, with a choice of hyperparameters--global bandwidth \(h\) and sensitivity parameter \(\alpha\), see Wang and Wang (2011)--optimized by grid search using leave-one-out
cross-validation to calculate a figure of merit. We then quantify counting uncertainties for the underlying inhomogeneous Poisson process using generalized bootstrap resampling (Chmandy et al., 2012).
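To make the construction concrete, a minimal one-dimensional sketch of such an adaptive-width Gaussian KDE is given below (our own illustration, not the released awKDE code; the local-bandwidth rule follows the Abramson-type scheme described by Wang and Wang 2011):

```python
import numpy as np

def adaptive_kde(data, x_eval, h, alpha):
    """Gaussian KDE with per-point bandwidths h_i = h * (g / f_pilot(x_i))**alpha,
    where g is the geometric mean of a fixed-bandwidth pilot density."""
    n = len(data)
    # Fixed-bandwidth pilot estimate evaluated at the data points
    z = (data[:, None] - data[None, :]) / h
    pilot = np.exp(-0.5 * z**2).sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
    g = np.exp(np.log(pilot).mean())
    h_i = h * (g / pilot) ** alpha          # adaptive per-point bandwidths
    # Evaluate the adaptive Gaussian mixture on the grid
    u = (x_eval[:, None] - data[None, :]) / h_i[None, :]
    return (np.exp(-0.5 * u**2) / (h_i * np.sqrt(2 * np.pi))).mean(axis=1)
```

The hyperparameters \(h\) and \(\alpha\) would then be selected by a grid search maximizing the leave-one-out cross-validated likelihood.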
We noted in Sadiq et al. (2022) that for nontrivial measurement errors this method, in addition to possible intrinsic biases due to the choice of a Gaussian KDE, will be biased towards an over-dispersed estimate of the true distribution. Here we motivate and present our strategy for correcting this bias. Our motivation is linked to Bayesian hierarchical population inference (Mandel, 2010), where measurement errors are treated by considering the true event properties \(\vec{\theta}_{i}\) for events labelled \(i=1,\ldots,N\) as nuisance parameters in the likelihood:
\[p(\{d\}|\vec{\lambda})=\prod_{i=1}^{N}\int p(d_{i}|\vec{\theta}_{i})p(\vec{ \theta}_{i}|\vec{\lambda})\,d\vec{\theta}_{i}, \tag{1}\]
where \(\{d\}\) is the set of data segments \(d_{i}\) corresponding to the events and \(\vec{\lambda}\) are population model hyperparameters (here for simplicity we omit selection effects). Inference is implemented using parameter estimation (PE) samples which were generated using a standard or fiducial prior \(p_{\rm PE}(\vec{\theta})\), often chosen as uniform over parameters of interest (see e.g. Veitch et al., 2015; Thrane and Talbot, 2019). Samples (labelled by \(k\)) are distributed as the posterior density using this prior, hence
\[\vec{\theta}_{i}^{k}\sim p(\vec{\theta}_{i}|d_{i},p_{\rm PE}(\vec{\theta})) \propto p(d_{i}|\vec{\theta}_{i})p(\vec{\theta}_{i}|\vec{\lambda})\cdot\frac{ p_{\rm PE}(\vec{\theta})}{p(\vec{\theta}_{i}|\vec{\lambda})}, \tag{2}\]
hence the integrals may be performed (up to a constant factor) by summing over samples _re-weighted_ by the ratio of the population distribution to the PE prior, \(p(\vec{\theta}_{i}|\vec{\lambda})/p_{\rm PE}(\vec{\theta})\).
Here, while not making use of this hierarchical likelihood, we remark that PE samples give a _biased_ estimate of each event's properties if the true population distribution \(p_{\rm pop}(\vec{\theta})\) (corresponding to \(p(\vec{\theta}|\vec{\lambda})\) in the parameterised case) is not equal to \(p_{\rm PE}(\vec{\theta})\). Then, if we have access to an estimated population distribution \(\hat{p}_{\rm pop}(\vec{\theta})\) that is more accurate than the PE prior, we will obtain more accurate estimates of event properties by drawing samples weighted proportional to \(\hat{p}_{\rm pop}(\vec{\theta})/p_{\rm PE}(\vec{\theta})\), as described in more detail below.
To summarize, a KDE obtained by drawing from PE samples will be biased because these samples are themselves biased, due to the PE prior not being equal to the true population distribution. However, the more accurate an estimate of the true distribution we are able to obtain, the smaller will be the bias in event parameters using reweighted PE samples, and ultimately the smaller will be the bias of the KDE.
### Iterative Reweighting
The above discussion suggests an iterative procedure where, beginning with both biased PE samples and a biased population KDE, one may be improved in turn using the other, until - ideally - reaching a stationary state, where both the sample draws and the corresponding population estimates are unbiased (up to more fundamental limitations of PE and of our KDE). This iterative strategy is similar to the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), a popular method to estimate parameters for statistical models when there are missing or incomplete data.
Our basic algorithm follows these steps:
1. For each GW event, draw Poisson distributed (with mean 1) PE samples weighted by the current estimate of population density \(\hat{p}_{\rm pop}\)
2. Create an awKDE from this sample set, optimizing the global bandwidth (and sensitivity parameter \(\alpha\), if not fixed)
3. Update the current population estimate using one or more KDEs and the selection function, and go to step 1.
In more detail, in step 1 we draw PE samples with probability proportional to the ratio of \(\hat{p}_{\rm pop}(\vec{\theta}_{i}^{k})\) to the PE prior distribution. Step 2 reproduces our previous awKDE method. Step 3 relates the KDE of _detected_ events to an estimate of the true population distribution, hence in general it requires us to compensate for the selection function over the event parameter space: i.e. we estimate the true distribution by the KDE of detected events divided by the probability of detection, as detailed in (Sadiq et al., 2022, section 3.1).
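Schematically, one iteration of this procedure could be coded as follows (a sketch under our reading of the steps above; `adaptive_kde_fit` and `p_det` are illustrative placeholders, not a released API, and normalization details are simplified):

```python
import numpy as np

rng = np.random.default_rng()

def one_iteration(pe_samples, pe_prior, p_pop, grid, p_det):
    """pe_samples: list of 1-d arrays of PE samples, one per event;
    pe_prior, p_pop, p_det: callables for the PE prior density, the current
    population estimate and the detection probability; grid: evaluation grid."""
    draws = []
    for samples in pe_samples:
        # Step 1: Poisson (mean 1) draws per event, weighted by p_pop / PE prior
        w = p_pop(samples) / pe_prior(samples)
        k = rng.poisson(1.0)
        if k > 0:
            draws.append(rng.choice(samples, size=k, p=w / w.sum()))
    draws = np.concatenate(draws)
    # Step 2: optimized adaptive-width KDE of the drawn points,
    # e.g. wrapping the adaptive_kde sketch above in a bandwidth grid search
    kde = adaptive_kde_fit(draws)           # placeholder returning a callable
    # Step 3: divide out the selection function and renormalize
    p_new = kde(grid) / p_det(grid)
    return p_new / np.trapz(p_new, grid)
```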
In step 3 we may choose to derive the updated population density \(\hat{p}_{\rm pop}(\vec{\theta})\) from only the most recently calculated KDE: then the iterative process is a Markov chain,1 and we may characterize it via the autocorrelation of various scalar quantities computed at each iteration. We use the optimized global bandwidth \(h\) (and adaptive sensitivity parameter \(\alpha\), if not fixed to unity) for this purpose.
Footnote 1: Although it may be thought of as a Markov chain Monte Carlo, our method is entirely unrelated to the Metropolis-Hastings algorithm.
After discarding a small number of initial iterations and then accumulating a number significantly greater than the autocorrelation time, we expect the collection of iterations to provide unbiased (though not necessarily independent) estimates of the population distribution. For subsequent iterations we then use the median
of \(\hat{p}_{\rm pop}(\vec{\theta}_{i}^{k})\) over a buffer of previous iterations (usually the previous 100) to determine the sample draw probabilities for the next iteration. This population estimate should be more precise than one using only a single previous KDE; in addition, using the buffer estimate, the samples for each successive iteration are essentially independent.
### One-dimensional mock data demonstration
We first test this iterative reweighting method on a simple mock dataset. We generate true event parameters by drawing 30 events each from a truncated power law and a Gaussian distribution, respectively; the power law is \(p(x)\sim x^{-0.5}\) and the Gaussian has mean (s.d.) of \(\mu=35\) (\(\sigma_{p}=3\)). We then add measurement errors to our true parameters with s.d. \(\sigma_{m}=5\), hence broader than the true Gaussian peak; 100 mock parameter samples with the same uncertainty are then generated around each "measured" value. First, applying awKDE as in Sadiq et al. (2022) to random draws from these mock parameter samples, as expected we find an over-dispersed estimate around the peak (Fig. 1, top).
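The mock data set described above can be generated in a few lines (a sketch; the power-law truncation bounds and the random seed are our own assumptions, as they are not stated in the text):

```python
import numpy as np

rng = np.random.default_rng(1)
n_ev, sigma_m, n_samp = 30, 5.0, 100
lo, hi = 5.0, 100.0                      # assumed truncation of the power law
# p(x) ~ x**-0.5 on [lo, hi] via inverse-CDF sampling
u = rng.uniform(size=n_ev)
x_pl = (np.sqrt(lo) + u * (np.sqrt(hi) - np.sqrt(lo))) ** 2
x_gauss = rng.normal(35.0, 3.0, size=n_ev)          # Gaussian component
x_true = np.concatenate([x_pl, x_gauss])
# 'Measured' values, then 100 mock PE samples per event, both with s.d. sigma_m
x_meas = x_true + rng.normal(0.0, sigma_m, size=x_true.size)
pe_samples = x_meas[:, None] + rng.normal(0.0, sigma_m,
                                          size=(x_true.size, n_samp))
```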
The second (bottom) plot of Fig. 1 shows the awKDE applying our iterative reweighting algorithm. Here the Gaussian height and s.d. are accurately reconstructed and the true distribution is well within the 90% percentiles of iteration samples, except at the step-function truncation of the power law which cannot be accurately represented by a Gaussian KDE.
We verify that the initial Markov process has accumulated several independent samples by plotting the autocorrelation of the optimized global bandwidth \(h\) vs. lag (separation along the chain) in Fig. 2. (We fix the adaptive sensitivity parameter \(\alpha\) to unity for 1-d data.) The autocorrelation drops near zero by a lag of \(\lesssim\!10\) iterations, thus a buffer of 100 iterations contains several independent population estimates. The estimate is more noisy for larger lags as fewer iterations are available.
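The convergence diagnostic itself is simple to compute (a minimal sketch):

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Normalized autocorrelation of a scalar chain, e.g. the series of
    optimized log global bandwidths over reweighting iterations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.mean(x ** 2)
    return np.array([np.mean(x[:x.size - k] * x[k:]) / var
                     for k in range(max_lag + 1)])

# e.g.: acf = autocorrelation(np.log(h_chain), max_lag=50)
```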
More detailed tests of iterative reweighting with a Gaussian KDE in two dimensions in the presence of correlated parameter errors are given in Appendix A.
## 3 Results from GWTC-3
As in Sadiq et al. (2022), as input to our analysis we use parameter estimation samples (LVK 2021a) for the set of BBH events catalogued in GWTC-3 (Abbott et al., 2021, 2023b) with false alarm rate below 1 per year. For the sensitivity estimate we employ a fit to search injection (simulated signal) results (Wysocki et al., 2019; Wysocki, 2020) released with the catalog (LVK 2021b).
### One-dimensional mass distribution
We start by evaluating the effect of the iterative reweighting method on the 1-d primary mass distribution, taking 100 random PE samples for each of the 69 BBH events; here, we assume a power-law distribution for secondary mass \(p(m_{2})\sim q^{1.5}\). We reproduce the awKDE results from Sadiq et al. (2022) and use this estimate to seed the reweighted iteration algorithm. After \(\sim\)1000 reweighting iterations we compute the median and symmetric 90% interval from the last 900 rate estimates (the first 100 are used to set up a buffer for
Figure 1: awKDE for 60 events from a mock data mixture distribution with 50% power law (\(\alpha=-0.5\)) and 50% Gaussian (\(\mu=35\), \(\sigma=3\)) samples. 100 mock parameter samples are generated for each event with measurement error \(\sigma_{m}=5\). Top: awKDE drawing random parameter samples without reweighting. Bottom: applying iterative reweighting. The solid (dot-dashed) lines represent the median (symmetric 90% confidence band) from 900 bootstrap iterations.
Figure 2: Autocorrelation of the (log) optimized global bandwidth series for the iterative Markov chain. The x-axis shows lag, i.e. separation of the iterations between which autocorrelation is calculated.
population weighting, as above): results are presented in Fig. 3.
Our estimate is generally consistent with other non-parametric or semi-parametric approaches, represented by the Flexible mixtures and Power Law + Spline models in Abbott et al. (2023a), and does not show the over-dispersion apparent in Figure 8 of Sadiq et al. (2022); specifically, we find a slightly higher and narrower peak around \(35\,M_{\odot}\), but no identifiable feature around \(20\,M_{\odot}\) (compare Tiwari, 2022; Toubiana et al., 2023).
### Two-dimensional Mass Reconstruction
Next, we apply our reweighting scheme on PE samples for both component masses and compute the two-dimensional (2-d) merger rate, using the estimated sensitive volume\(\times\)time (VT) as a function of the two masses. We will first discuss various technical aspects of extending the 1-d calculation without assuming any power-law dependence for \(m_{2}\).
Binary exchange symmetry--Typically when presenting binary parameter estimates, the convention \(m_{1}>m_{2}\) is applied. However, all aspects of binary formation physics and event detection and parameter estimation will be invariant under swapping the component labels, i.e. exchanging \(m_{1}\leftrightarrow m_{2}\) (and at the same time exchanging the spins). Thus, considering the differential merger rate \(\mathcal{R}(m_{1},m_{2})\) as a function over the whole plane, it must also have a reflection symmetry about the line \(m_{1}=m_{2}\). To respect this symmetry and remove biases resulting from the apparent lack of support at \(m_{2}>m_{1}\), we train and evaluate KDEs on _reflected_ sample sets which contain both the released PE samples, and copies of them with swapped components. Note also that a power-law \(m_{2}\) distribution implies the rate is a non-differentiable function at the equal mass line, whereas a KDE by construction is smooth and differentiable everywhere.
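In code, the reflection simply doubles the training set before the KDE fit (a sketch; the two-column array layout is our own convention):

```python
import numpy as np

def reflect_samples(log_m1, log_m2):
    """Symmetrize samples under component exchange m1 <-> m2, so that a KDE
    trained on the output is reflection-symmetric about the line m1 = m2."""
    pts = np.column_stack([log_m1, log_m2])
    return np.vstack([pts, pts[:, ::-1]])   # append the swapped copies
```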
Choice of KDE parameters--In previous work (Sadiq et al., 2022), we mainly considered a KDE constructed over linear mass (or distance) parameters; however, here we choose the logarithms of component masses. While this choice is not expected to have a large impact on the results, since the kernel bandwidth is free to locally adapt in either case, it is technically preferable for a few reasons: we avoid any possible KDE support at negative masses; there is a generally higher density of events towards lower masses considering the entire \(3-100\,M_{\odot}\) range; the density of observed events also shows less overall variation over log coordinates; and when evaluating the KDE on a grid with equal spacing, fewer points are required to maintain precision for the low-mass region.
For a 2-d KDE we also have a choice of kernel parameters, i.e. the Gaussian covariance matrix: given the similar or identical physical interpretation and range of values between \(\ln m_{1}\) and \(\ln m_{2}\), we choose a covariance proportional to the unit matrix, with an overall factor determined by the local adaptive bandwidth for each event.
PE prior--The PE samples released by LVK use a prior uniform in component masses (LVK, 2021) up to a factor dependent on cosmological redshift; we currently do not consider reweighting relative to the default PE cosmological model. As the prior is a density, it transforms with a Jacobian factor when changing variables to \(\ln m_{1},\ln m_{2}\); thus we must divide the estimated rate \(\mathcal{R}(\ln m_{1},\ln m_{2})\) by a prior \(\propto m_{1}m_{2}\) when obtaining reweighted draw probabilities.
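Explicitly, since a density uniform in \((m_{1},m_{2})\) acquires the Jacobian factor \(m_{1}m_{2}\) when expressed over \((\ln m_{1},\ln m_{2})\), the draw weight of PE sample \(k\) of a given event takes the form (our restatement of the prescription just given)
\[w^{k}\propto\frac{\hat{\mathcal{R}}(\ln m_{1}^{k},\ln m_{2}^{k})}{m_{1}^{k}\,m_{2}^{k}}.\]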
With these technical choices, we perform 1500 reweighting iterations in total, the first 600 using the Markov chain (i.e. the immediately preceding rate estimate) for sample draw weights, and the remaining 900 using the buffer median estimate. Fig. 4 shows the rate estimate computed with iterative reweighting for BBH events in GWTC-3. The autocorrelations of optimal global bandwidth and sensitivity parameter \(\alpha\) for the first 600 iterations are shown in Fig. 5: the correlation drops close to 0 at a lag of \(\sim\!30\) iterations.
The mass distribution shows several interesting features in addition to the expected peaks (overdensities) around primary masses of \(\sim\!10\,M_{\odot}\) and \(\sim\!35\,M_{\odot}\), with corresponding peaks over secondary mass. For primary
Figure 3: Differential rate over BBH primary mass using the iterative weighted KDE on GWTC-3 PE samples, assuming a power-law secondary distribution. We overplot our estimates with two of the models of Abbott et al. (2023), the Flexible mixtures (FM) and Power Law + Spline (PS) models. Our estimate is generally consistent with other non-parametric methods, though with a higher and narrower peak around \(35\,M_{\odot}\) and lacking any feature between the \(10\,M_{\odot}\) and \(35\,M_{\odot}\) peaks.
masses \(\sim\!35\,M_{\odot}\) up to \(80\,M_{\odot}\), the most likely secondary mass is \(\sim\!30\!-\!35\,M_{\odot}\). Thus, over this range the two component masses appear almost independently chosen. Around the \(m_{1}\sim 10\,M_{\odot}\) peak, there appears to be some _anti-correlation_ of the two components, i.e. higher \(m_{1}\) favors lower \(m_{2}\). Between the two peaks the distribution of mass ratios appears broader than at either one (as hinted at in Tiwari, 2023), although the apparent trend is based on a small number of events. We also see a narrow lower density region just above the \(\sim\!10\,M_{\odot}\) peak (cf. the local minimum at chirp mass \(\sim\!11\,M_{\odot}\) in Tiwari, 2023).
We also integrated the 2-d KDE rate estimate numerically over both \(m_{1}\) and \(m_{2}\) to obtain merger rates over component mass. As shown in Fig. 6, we recover features consistent with the 2-d estimates and with other methods. Each component mass distribution appears well modelled by a combination of two Gaussian peaks and (broken) power laws.
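The marginalization reduces to a quadrature over the log-mass grid: since \(\mathcal{R}(m_{1},m_{2})=\mathcal{R}(\ln m_{1},\ln m_{2})/(m_{1}m_{2})\), the rate over primary mass can be computed as in the following sketch (grid conventions are our own):

```python
import numpy as np

def rate_over_m1(rate_log, log_m1_grid, log_m2_grid):
    """Given dR/(dln m1 dln m2) on an (n1, n2) grid, return dR/dm1 using
    dR/dm1 = (1/m1) * integral of R(ln m1, ln m2) d(ln m2)."""
    inner = np.trapz(rate_log, log_m2_grid, axis=1)   # integrate out ln m2
    return inner / np.exp(log_m1_grid)                # Jacobian: d(ln m1)/dm1 = 1/m1
```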
To elucidate features in the 2-d distribution, we choose various representative values of primary mass to plot
Figure 4: Two dimensional rate estimated from PE samples for 69 BBH events from GWTC-3. Red + symbols show the median masses for each event. The contours and color scale shows the two-dimensional differential merger rate over \(\ln m_{1},\ln m_{2}\) from iterative reweighted KDE. Two main maxima are visible at \(m_{1}\,(m_{2})\sim 10\,(8)\,M_{\odot}\) and \(\sim\!35\,(32)\,M_{\odot}\) with a possible less significant overdensity around \(m_{1}\sim 20\,M_{\odot}\).
Figure 5: Autocorrelations of the KDE (log) global bandwidth and adaptive parameter \(\alpha\) using the initial 600 Markov chain iterations. The autocorrelation drops close to 0 by a lag of \(\sim\)30 iterations. The estimate becomes noisy at high lags as a smaller number of iterations is available.
Figure 6: Merger rates over component masses, obtained by numerical integration of our 2-d KDE rate estimates in Fig. 4: the medians and symmetric 90% confidence regions are shown.
Figure 7: Secondary mass \(m_{2}\) distributions estimated via iterative reweighted KDE for various fixed values of primary mass: for each \(m_{1}\) (in \(M_{\odot}\)) we plot the median and symmetric 90% confidence band. Note that the top plot has a logarithmic \(x\)-axis, unlike the bottom plot, which has a linear \(x\)-axis.
the distribution of \(m_{2}\) in Fig. 7. The similarity between secondary distributions for \(m_{1}\gtrsim 35\,M_{\odot}\) is evident.
We may also derive the distribution of mass ratio \(q\) from our 2-d rate estimate. We plot this for various representative values of primary mass in Fig. 8, and compare to a typical power law \(\propto q^{1.5}\). First, we see that the \(q\) distribution varies over primary mass; hence, models where it is forced to the same form over the whole mass range are likely to have nontrivial bias. For some primary masses, \(p(q)\) is consistent with a monotonic increasing function such as a positive power, but for others it clearly decreases over some of the range. Roughly, if \(m_{1}\) is close to a peak then \(p(q)\) is consistent with an increasing power law, but for other values the mass ratio rather shows a maximum at intermediate values, down to \(q\sim 0.5\) for \(m_{1}=15\) or \(m_{1}=70\).
This behaviour may suggest that the primary and secondary masses are independently drawn from similar distributions, modulo a \(q\)-dependent "pairing factor" (Fishbach and Holz, 2020) which influences the relative probability of binary merger. However the preference towards \(q\sim 1\) seen in previous work is not confirmed here. Callister and Farr (2023) reached a similar conclusion although assuming a "universal" \(p(q)\) over all primary masses.
Results including GW190814--We also estimate the 2-d and 1-d integrated merger rates using our iterative reweighted KDE method when the outlier event GW190814 (Abbott et al., 2020), which has a mass ratio \(\mathcal{O}(10)\) and a secondary mass barely above the likely neutron star maximum mass, is included in the BBH population. Detailed results are presented in Appendix B: roughly summarizing the trends seen there, the bulk of the estimated distribution remains little changed by the addition of the extra event, although the peak in secondary mass below \(10\,M_{\odot}\) is shifted towards lower values and becomes both higher and broader; this is likely due to a general increase in KDE bandwidth when optimized with cross-validation. (In parameterized models the estimated mass distribution is also highly sensitive to inclusion of GW190814 (Abbott et al., 2020, 2023).) It is not clear whether other methods for bandwidth choice would yield more accurate estimates; higher event statistics in the low \(m_{2}\) regime are clearly desirable.
## 4 Discussion
Summary of results--In this work we undertook a detailed investigation of the full 2-d mass distribution of merging compact binary black holes observed by LIGO-Virgo-KAGRA up to the O3 run without assuming any specific functional form for the secondary mass or mass ratio, enabled by a new method of iterative density estimation to address mass measurement uncertainties. Although we reproduce the broad features and local maxima seen in other parametric and non-parametric analyses, we find significantly less preference for near-equal masses than in most previous works; we also find that the mass ratio distribution cannot be described by a single function over the whole population (compare Tiwari, 2023). For a range of primary masses, we find non-monotonically varying secondary-mass and mass-ratio distributions, ruling out a power-law dependence. Furthermore, we find that for primary masses above \(35\,M_{\odot}\) the secondary mass distribution is nearly independent of \(m_{1}\), with a "preferred partner" mass of \(m_{2}\simeq 30-35\,M_{\odot}\). Conversely, near the low-mass peak \(m_{1}\simeq 10\,M_{\odot}\) we observe an anticorrelation between the two components, i.e. higher \(m_{1}\) implies lower \(m_{2}\).
Possible astrophysical interpretations--Our new estimate of the joint \(m_{1}\)-\(m_{2}\) distribution may be compared to model predictions in the literature; because our individual component marginal distributions are similar to previous findings, we focus here on the mass ratio. Broadly speaking, we can distinguish model predictions from the isolated binary and dynamical channels.
The isolated binary channel predicts relatively flat \(p(q)\) distributions (e.g. Belczynski et al., 2020; Olejak
Figure 8: Mass ratio \(q\) distributions estimated via iterative reweighted KDE for various fixed values of primary mass: for each \(m_{1}\) (in \(M_{\odot}\)) we plot the median and symmetric 90% confidence band. For comparison we overplot a power law dependence \(\propto q^{1.5}\).
et al., 2021) compared to the dynamical channel (see Fig. 2 in Baibhav et al., 2023 and Fig. 1 in Zevin et al., 2021). Our estimates show generally flatter \(q\) distributions than the GWTC-3 results presented in Abbott et al. (2023).
Because predictions for \(p(q)\) in the isolated binary channel depend strongly on the adopted parameters (see e.g. Broekgaarden et al., 2022), our results provide an important step towards constraining astrophysical parameters with GWs. For example, the steep \(p(q)\) found for very small common envelope efficiency parameter (\(\alpha_{\rm CE}\simeq 0.2\), Baibhav et al., 2023) and the chemically homogeneous evolution model (Mandel & de Mink, 2016) seem disfavoured, implying that these routes cannot account for the majority of the observed population.
The stable mass transfer channel is efficient for primary masses near \(\sim\!10\,M_{\odot}\). van Son et al. (2022) predict a dearth of near-equal-mass mergers, because the binary needs to be relatively unequal in mass during the second mass transfer phase for the orbit to shrink, yet not so unequal that mass transfer becomes unstable. This is only partly supported by our Fig. 4, in that the low-mass peak has support from equal mass out to \(q\simeq 0.5\). For some parameter choices their models predict bi-modality in \(p(q)\), with peaks at \(q\simeq 0.35\) and \(q\simeq 0.75\): our results suggest a peak at \(q\simeq 0.8\) for \(m_{1}\simeq 10\,M_{\odot}\) and at \(q\simeq 0.45\) for \(m_{1}\simeq 15\,M_{\odot}\) (see Fig. 8), suggesting that a more detailed comparison may yield interesting constraints.
For the dynamical channel, it is interesting to consider whether models now predict \(q\) distributions that are too steeply rising. Rodriguez et al. (2016) modelled BBH mergers that formed dynamically in globular clusters: they find a median mass ratio of 0.87, with 68% of sources having mass ratios \(q>0.8\). As shown in Fig. 8 we find comparable support for near-equal mass only at \(m_{1}\sim 10\,M_{\odot}\) or \(\sim\!35\,M_{\odot}\); elsewhere our median \(q\) is significantly lower.
Antonini et al. (2023) model BBH mergers in globular clusters in comparison to the GWTC-3 \(q\) distribution: their model distributions are flatter and underestimate the power-law LVK fits by an order of magnitude at \(q\simeq 1\). They find a final \(q\) distribution flatter than that of dynamically formed BBHs (\(p(q)\propto q^{4}\) for metal-poor clusters), both because the BH mass function is not always sampled densely enough for a secondary BH with a mass similar to the primary to be present in a cluster, and because of a slight bias against equal-mass BBHs due to their lower inspiral probability. The reported \(p(q)\) in their Fig. 1 is relatively flat for \(q\gtrsim 0.7\), qualitatively in agreement with our findings for \(m_{1}\gtrsim 20\,M_{\odot}\) (Fig. 8, lower panel; their models cannot reproduce observed rates for lower-mass primaries). Due to the predicted pair instability gap, all BBH mergers in their models with \(m_{1}\gtrsim 50\,M_{\odot}\) are hierarchical mergers, i.e. a BBH in which at least one of the components is a BBH merger remnant that was retained in the cluster (e.g. Antonini & Rasio, 2016; Rodriguez et al., 2019; Kimball et al., 2021). Mergers with second-generation primaries are expected to have a mass ratio \(q\simeq 0.5\), which is supported by our distribution for \(m_{1}=70\,M_{\odot}\) (Fig. 8).
The \(p(q)\) distribution is expected to be slightly flatter for dynamically formed BBHs in young (open) star clusters, because they have fewer BHs per cluster and their higher metallicities lead to steeper BH mass functions and therefore lower companion masses (e.g. Banerjee, 2021). An accurate picture of the mass ratio distribution is therefore important for understanding the relative contribution of dynamically formed BBHs in young (and metal-rich) versus old (and metal-poor) star clusters, and more generally the relative contributions of isolated and dynamically formed binaries in the population as a whole (Zevin et al., 2021; Baibhav et al., 2023).
An intriguing apparent feature in our reconstruction, the anticorrelation between \(m_{1}\) and \(m_{2}\) in the low-mass (\(m\sim 10\,M_{\odot}\)) peak, suggests a connection to isolated binary dynamics, though it would be premature to link it with a specific mechanism.
Technical issues and biases--As noted in the introduction, measurement errors of the binary mass ratio are correlated with those in (orbit-aligned) spins. Since we have so far not attempted to reconstruct or estimate the merging binary spin distribution, we implicitly assume that distribution is equal to the prior used for parameter estimation (uniform in magnitude and isotropic in direction): this is a potential source of bias which remains to be addressed by future work. The distribution of aligned spins has been found to be concentrated near zero, with a slight preference for positive aligned spin (Miller et al., 2020; Abbott et al., 2020, 2023); hence, the degree of bias may be limited. Callister et al. (2021) also note the intriguing possibility that the _true_ mass ratio and aligned spin (after allowing for measurement errors) are anti-correlated.
A converse question concerns inferences on BH spin distributions which either assume a specific distribution in \(q\), or a power law with index as a hyperparameter: if the \(p(q)\) model is significantly inaccurate, are such spin inferences biased? (Ng et al., 2018 and Miller et al., 2020 contain detailed discussion of potential biases in aligned spin population estimates.) The effect may not be large, as most BBH events by necessity have parameter values
close to the observed peaks, for which we find a \(q\) distribution that is not far from a power law.
_Extensions of the method--_As already noted, here we restricted the application of our KDE to the binary mass distribution; component spins, and distance or redshift are then the next relevant parameters for population analysis. We expect to encounter a technical issue in optimizing the Gaussian kernel for a multi-dimensional data set, where it will not be appropriate (or even meaningful, given the different units) to impose equal variances over different parameters as we currently do for (log) \(m_{1}\) and \(m_{2}\). For more than two dimensions a grid search may not be practicable; more sophisticated methods may be required in order to realize the potential of iterative KDE over a full set of population parameters.
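To make the scaling issue concrete, a minimal sketch of a leave-one-out cross-validation score for a Gaussian KDE with one bandwidth per dimension (not this paper's implementation; all names are illustrative) shows where the extra parameters enter:

```python
import numpy as np
from scipy.special import logsumexp

def loo_score(data, bandwidths):
    """Leave-one-out log-likelihood of a Gaussian KDE with a diagonal
    bandwidth matrix, i.e. one width per dimension.

    data: (n, d) array of samples; bandwidths: (d,) positive widths.
    """
    n, d = data.shape
    scaled = (data[:, None, :] - data[None, :, :]) / bandwidths
    log_kern = (-0.5 * np.sum(scaled**2, axis=-1)
                - 0.5 * d * np.log(2.0 * np.pi)
                - np.sum(np.log(bandwidths)))
    np.fill_diagonal(log_kern, -np.inf)  # exclude each point's own kernel
    return float(np.sum(logsumexp(log_kern, axis=1) - np.log(n - 1)))
```

Even this simple score has \(d\) bandwidths to optimize jointly, so an exhaustive grid search becomes impractical beyond two or three dimensions, which is the problem noted above.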
## Acknowledgements
We thank Daniel Wysocki for making available fitted sensitivity estimates for binary mergers in the O1-O3 data. We also benefited from conversations with Lieke van Son, Floor Broekgaarden and Fabio Antonini, and with Will Farr, Thomas Callister, Amanda Farah, Vaibhav Tiwari and others in the LVK Binary Rates & Populations group. This work has received financial support from Xunta de Galicia (CIGUS Network of research centers), by European Union ERDF and by the "Maria de Maeztu" Units of Excellence program CEX2020-001035-M and the Spanish Research State Agency. TD and JS are supported by research grant PID2020-118635GB-I00 from the Spanish Ministerio de Ciencia e Innovacion. JS also acknowledges support from the European Union's H2020 ERC Consolidator Grant "GRavity from Astrophysical to Microscopic Scales" (GRAMS-815673) and the EU Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-Curie Grant Agreement No. 101007855. MG acknowledges support from the Ministry of Science and Innovation (EUR2020-112157, PID2021-125485NB-C22, CEX2019-000918-M funded by MCIN/AEI/10.13039/501100011033) and from AGAUR (SGR-2021-01069).
The authors are grateful for computational resources provided by the LIGO Laboratory and supported by National Science Foundation Grants PHY-0757058 and PHY-0823459. This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation, as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan.
|
2305.04089 | Homology self closeness number and cofibration sequence | We study the homology self-closeness numbers of a simply connected CW complex
and those of the homotopy cofiber. Self-maps of spaces in cofibrations which
appear in homology decompositions are studied. We also consider Postnikov towers
and study the relation of homology self-closeness numbers between homology
decomposition and homotopy decomposition. | Gopal Chandra Dutta | 2023-05-06T16:24:50Z | http://arxiv.org/abs/2305.04089v2 | # Homology self closeness number and cofibration sequence
###### Abstract.
We study the homology self-closeness numbers of a simply connected CW complex and those of the homotopy cofiber. Self-maps of spaces in cofibrations which appear in homology decompositions are studied. We also consider Postnikov towers and study the relation of homology self-closeness numbers between homology decomposition and homotopy decomposition.
Key words and phrases:self-homotopy equivalence, cofibration, self-closeness number, homology decomposition, homotopy decomposition 2010 Mathematics Subject Classification: Primary 55R70, 55P10 ; Secondary: 55R05, 55Q05
## 1. Introduction
For a pointed space \(X\), the set of homotopy classes of base point preserving maps \(X\to X\) is denoted by \([X,X]\). This is a monoid under composition of maps. The subset of all invertible elements of this monoid is denoted by \(\mathrm{Aut}(X)\). It is the group of homotopy classes of self-equivalences on \(X\). For notational convenience, we do not distinguish between a map \(f\colon X\to X\) and its homotopy class in \([X,X]\). The group \(\mathrm{Aut}(X)\) has been studied by several authors (cf. [2, 16]). Choi and Lee introduced the following sub-monoids in [6]:
\[\mathcal{A}^{k}_{\#}(X):=\big{\{}f\in[X,X]:\ f_{\#}\colon\pi_{i}(X)\xrightarrow {\cong}\pi_{i}(X),\ \text{for all}\ 0\leq i\leq k\big{\}}.\]
For a connected CW complex \(X\) we have the chain:
\[[X,X]=\mathcal{A}^{0}_{\#}(X)\cdots\supseteq\mathcal{A}^{k}_{\#}(X)\supseteq \mathcal{A}^{k+1}_{\#}(X)\cdots\supseteq\mathcal{A}^{\infty}_{\#}(X)=\mathrm{ Aut}(X).\]
They associated to \(X\) a numerical homotopy invariant, the _self-closeness number_ \(N\mathcal{A}_{\#}(X)\), defined as
\[N\mathcal{A}_{\#}(X):=\min\big{\{}k\geq 0:\ \mathcal{A}^{k}_{\#}(X)=\mathrm{ Aut}(X)\big{\}}.\]
Similar kinds of monoids \(\mathcal{A}^{k}_{*}(X)\) and \(\mathcal{A}^{*}_{k}(X)\) for homology and cohomology were introduced in [14].
\[\mathcal{A}^{k}_{*}(X):=\big{\{}f\in[X,X]:\ f_{*}\colon H_{i}(X)\xrightarrow {\cong}H_{i}(X),\ \text{for all}\ 0\leq i\leq k\big{\}};\]
\[\mathcal{A}^{*}_{k}(X):=\big{\{}f\in[X,X]:\ f^{*}\colon H^{i}(X)\xrightarrow {\cong}H^{i}(X),\ \text{for all}\ 0\leq i\leq k\big{\}}.\]
We know that for a simply connected CW-complex, a self-map is a homotopy equivalence if and only if it is a homology equivalence (or a cohomology equivalence). Therefore, for a simply connected CW complex we have the two chains
\[[X,X]=\mathcal{A}^{0}_{*}(X)\cdots\supseteq\mathcal{A}^{k}_{*}(X)\supseteq \mathcal{A}^{k+1}_{*}(X)\cdots\supseteq\mathcal{A}^{\infty}_{*}(X)=\mathrm{ Aut}(X),\]
and
\[[X,X]=\mathcal{A}_{0}^{*}(X)\cdots\supseteq\mathcal{A}_{k}^{*}(X)\supseteq\mathcal{A} _{k+1}^{*}(X)\cdots\supseteq\mathcal{A}_{\infty}^{*}(X)=\operatorname{Aut}(X).\]
This motivates the definition of the _homology and cohomology self-closeness numbers_ (cf. [14]):
\[N\mathcal{A}_{*}(X):=\min\big{\{}k\geq 0:\ \mathcal{A}_{*}^{k}(X)=\operatorname{Aut}( X)\big{\}},\]
\[N\mathcal{A}^{*}(X):=\min\big{\{}k\geq 0:\ \mathcal{A}_{k}^{*}(X)=\operatorname{Aut}( X)\big{\}}.\]
In [13], Oda and Yamaguchi studied the homotopy self-closeness number of the following type of _fibration sequence_:
\[\cdots\to K(G,n+1)\to X\to Y\xrightarrow{\gamma}K(G,n+2), \tag{1}\]
where \(G\) is an abelian group and \(K(G,n)\) is the Eilenberg-MacLane space of type \((G,n)\). Note that \(X\) is the homotopy fiber of the map \(\gamma\).
Recall that for an abelian group \(G\) and integer \(n\geq 2\), the Moore space of type \((G,n)\) is the simply connected CW-complex \(M(G,n)\), unique up to homotopy, such that
\[\tilde{H}_{i}(M(G,n))=\begin{cases}G,\ if\ i=n,\\ 0,\ if\ i\neq n.\end{cases}\]
If \(G\) is free-abelian, then \(M(G,n)\) is just a wedge of copies of \(S^{n}\), and \(M(\mathbb{Z}_{m},n)=S^{n}\cup_{m}e^{n+1}\), where the \((n+1)\)-cell is attached to \(S^{n}\) by a degree \(m\) map. Thus for a finitely generated \(G\), the Moore space \(M(G,n)\) is a finite CW-complex, of dimension \(n\) if \(G\) is free-abelian and of dimension \(n+1\) if \(G\) has torsion. Note that \(\pi_{n}(X;G):=[M(G,n),X]\) is a group, and it is abelian for \(n\geq 3\). This is called the \(n\)_-th homotopy group of \(X\) with coefficients in \(G\)_.
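As a quick consistency check (a standard computation, included here for convenience), the cellular chain complex of \(M(\mathbb{Z}_{m},n)=S^{n}\cup_{m}e^{n+1}\) reduces in positive degrees to

\[0\to\mathbb{Z}\xrightarrow{\ \cdot m\ }\mathbb{Z}\to 0\qquad(\text{degrees }n+1\text{ and }n),\]

so \(\tilde{H}_{n}\cong\mathbb{Z}/m\mathbb{Z}\) and \(\tilde{H}_{k}=0\) for \(k\neq n\), as the definition requires.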
In this paper, we consider the dual _cofibration sequence_ of Equation 1:
\[M(G,n)\xrightarrow{\gamma}X\xrightarrow{\iota}Y\to M(G,n+1)\to\cdots. \tag{2}\]
Thus \(Y\) is the homotopy cofiber of the map \(\gamma\) and \(\iota\) is a cofibration. The group \(\operatorname{Aut}(Y)\) was studied by Oka, Sawashita and Sugawara in [11]. In this paper we study the relation between the homology self-closeness numbers of \(X\) and \(Y\). The homotopy self-closeness number for the cofibration sequence of Equation 2 was studied for \(G=\mathbb{Z}\) in [12].
In Section 2, we consider the relation between \(N\mathcal{A}_{*}(X)\) and \(N\mathcal{A}_{*}(Y)\). First we prove the inequality \(N\mathcal{A}_{*}(X)\leq N\mathcal{A}_{*}(Y)\) under the following assumptions:
1. \(1\leq\operatorname{conn}(X)<H_{*}\text{-}\dim(X)<n\).
2. The induced map \(\gamma_{\#}\colon[M(G,n),M(G,n)]\to[M(G,n),X]\) is surjective.
(see Theorem 2.4.) Moreover the reverse inequality \(N\mathcal{A}_{*}(Y)\leq N\mathcal{A}_{*}(X)\) holds if we assume the following conditions:
1. \(2\leq\operatorname{conn}(X)<H_{*}\text{-}\dim(X)<n\).
2. The induced map \(\gamma_{\#}\colon[M(G,n),M(G,n)]\to[M(G,n),X]\) is injective such that \(\gamma^{\#}\big{(}\operatorname{Aut}(X)\big{)}\subset\gamma_{\#}\big{(}[M(G,n),M(G,n)]\big{)}\).
(see Theorem 2.6). Here \(\gamma^{\#}:[X,X]\to[M(G,n),X]\) is induced by \(\gamma\). Further if we substitute the condition (ii) of Theorem 2.6 by the assumption that the induced map \(\gamma_{\#}\colon[M(G,n),M(G,n)]\to[M(G,n),X]\) is bijective then we get the equality \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(Y)\) (see Theorem 2.7). As a special case, if we replace the condition
(ii) of Theorem 2.4 by \([M(G,n),X]\cong\mathbb{Z}/m\mathbb{Z}\) and \([M(G,n),M(G,n)]\cong\mathbb{Z}/q\mathbb{Z}\) where \(m,q\geq 0\), then we deduce the following Proposition 2.9:
1. \(N\mathcal{A}_{*}(X)\leq N\mathcal{A}_{*}(Y),\text{ if }q=0\neq m\text{ or }q>m.\)
2. \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(Y)\) if \(q=m\) and \(\operatorname{conn}(X)\geq 2\) (or \(G\) is free).
In Proposition 2.14 we take \(G=\mathbb{Z}/p\mathbb{Z}\) for a prime \(p\neq 2\), and we obtain the relation \(N\mathcal{A}_{*}(Y)\leq N\mathcal{A}_{*}(X)\) under some conditions.
In Section 3, we consider the _homology decomposition_\(\{X_{n}\}\) of a simply connected CW-complex \(X\), where \(X_{n}\) is the \(n\)-th _homology section_. In Theorem 3.3 we obtain the relations between homology self-closeness numbers of consecutive homology sections. Moreover in Lemma 3.6 we prove the relationship between the group of self-homotopy equivalences of \(X\) and \(X_{n}\). Using this Lemma we obtain the equality \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X_{n})\) under some conditions, see Theorem 3.7.
In Section 4, we consider the _homotopy decomposition (Postnikov tower)_\(\{X^{(n)}\}\) of a simply connected CW-complex \(X\), where \(X^{(n)}\) is the \(n\)-th _homotopy section_. In Lemma 4.3 we show that \(N\mathcal{A}_{*}(X^{(n)})=N\mathcal{A}_{\#}(X^{(n)})\) whenever \(H_{k}(X)\) is finitely generated for all \(k\). In Theorem 4.4 we find the analogues of Theorem 3.3 for homotopy sections. Choi and Lee have studied the relation of the homotopy self-closeness number of \(X\) with \(X^{(n)}\) [7, Theorem 3.5]. Using this fact, we deduce Theorem 4.6. We prove that \(N\mathcal{A}_{*}(X^{(n)})=N\mathcal{A}_{*}(X_{n})\) under some conditions in Corollary 4.7. Further, if \(H_{n}(X)\) is free-abelian for some \(n\in\mathbb{N}\) then \(N\mathcal{A}_{*}(X^{(n)})\leq N\mathcal{A}_{*}(X_{n})\), see Lemma 4.8. Finally we compute homology self-closeness numbers of Postnikov towers for some CW-complexes with the help of homology decompositions.
### Acknowledgements
The first author would like to thank IIT Kanpur for a Ph.D. fellowship.
## 2. Homology self-closeness number
In this section, we study the relation between the homology self-closeness numbers of \(X,Y\) for the cofibration sequence of Equation 2. Recall that the _connectivity_ of a space is defined as
\[\operatorname{conn}(X):=\min\big{\{}k\geq 0\ :\ \pi_{k+1}(X)\neq 0\big{\}}.\]
Thus \(\pi_{k}(X)=0\) for all \(k\leq\operatorname{conn}(X)\). Moreover we denote the _homological dimension_ by
\[H_{*}\text{-}\dim(X):=\max\big{\{}k\geq 0\ :\ H_{k}(X)\neq 0\big{\}}.\]
Therefore \(H_{k}(X)=0\) for all \(k>H_{*}\text{-}\dim(X)\). For any simply connected CW-complex, we have
\[2\leq N\mathcal{A}_{*}(X)\leq H_{*}\text{-}\dim(X).\]
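Both bounds are attained already for spheres; the following standard computation is included for orientation and is used implicitly in later examples. Any \(f\colon S^{n}\to S^{n}\) acts on \(H_{n}(S^{n})\cong\mathbb{Z}\) by multiplication by \(\deg f\), so

\[f\in\mathcal{A}_{*}^{n}(S^{n})\iff\deg f=\pm 1\iff f\in\operatorname{Aut}(S^{n}),\qquad\mathcal{A}_{*}^{n-1}(S^{n})=[S^{n},S^{n}]\neq\operatorname{Aut}(S^{n}),\]

hence \(N\mathcal{A}_{*}(S^{n})=n\).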
Recall that a map \(f\colon X\to X^{\prime}\) is called a _homotopical \(n\)-equivalence_ if \(f_{\#}\colon\pi_{k}(X)\to\pi_{k}(X^{\prime})\) is an isomorphism for all \(k<n\) and an epimorphism for \(k=n\). A map \(f\colon X\to X^{\prime}\) is called a _homological \(n\)-equivalence_ if \(f_{*}\colon H_{k}(X)\to H_{k}(X^{\prime})\) is an isomorphism for all \(k<n\) and an epimorphism for \(k=n\). If \(X^{\prime}\) is simply connected then the mapping cone \(C_{f}\) of a map \(f\colon X\to X^{\prime}\) is simply connected (cf. [4, Proposition B.4]).
**2.1 Lemma**.:
_Let \(f\colon X\to X^{\prime}\) be a map between simply connected spaces. Then the following are equivalent:_
1. \(f\) _is homotopical_ \(n\)_-equivalence._
2. \(f\) _is homological_ \(n\)_-equivalence._
3. \(C_{f}\) _is_ \(n\)_-connected._
Proof.: By Lemma 6.4.11, Proposition 6.4.14 and Theorem 6.4.15 of [4].
From now onwards, \(X\) will denote a simply connected CW-complex and \(G\) a finitely generated abelian group.
**2.2 Lemma**.: _Let \(n\geq 2\) and \(M(G,n)\xrightarrow{\gamma}X\xrightarrow{\iota}Y\) be a cofibration with \(\operatorname{conn}(X)\geq 2\). Given self-maps \(g\colon X\to X,\ f\colon Y\to Y\) such that \(f\circ\iota=\iota\circ g\), there exists a map \(h\colon M(G,n)\to M(G,n)\) such that \(\gamma\circ h=g\circ\gamma\)._
Proof.: Observe that \(\iota\colon X\to Y\) is a homological \(n\)-equivalence and hence a homotopical \(n\)-equivalence by Lemma 2.1. Therefore \(\operatorname{conn}(X)\geq 2\) implies that \(\operatorname{conn}(Y)\geq 2\). Note that \(\dim\big{(}M(G,n)\big{)}\leq n+1\). Therefore using [10, Lemma 2.3] we get the desired result.
_2.3 Remark_.: If \(G\) is a free abelian group then \(\operatorname{conn}(X)\geq 1\) is sufficient instead of \(\operatorname{conn}(X)\geq 2\) for Lemma 2.2.
**2.4 Theorem**.: _Let \(n\geq 2\) and \(M(G,n)\xrightarrow{\gamma}X\xrightarrow{\iota}Y\) be a cofibration sequence satisfying the following conditions:_
1. \(1\leq\operatorname{conn}(X)<H_{*}\text{-}\dim(X)<n.\)__
2. _The induced map_ \(\gamma_{\#}\colon[M(G,n),M(G,n)]\to[M(G,n),X]\) _is surjective._
_Then \(N\mathcal{A}_{*}(X)\leq N\mathcal{A}_{*}(Y)\)._
Proof.: Observe that \(N\mathcal{A}_{*}(X)\leq H_{*}\text{-}\dim(X)<n.\) Let us assume that \(N\mathcal{A}_{*}(Y)=m\) and \(g\in\mathcal{A}_{*}^{m}(X)\). We may assume \(m<n\); otherwise we get the desired result.
Since \(\gamma_{\#}\) is onto, there exists \(h\in[M(G,n),M(G,n)]\) such that \(g\circ\gamma=\gamma\circ h\). Therefore there is an induced map \(f\in[Y,Y]\) such that \(f\circ\iota=\iota\circ g\). Hence we get a homotopy commutative diagram
\[\begin{array}{ccccc}M(G,n)&\xrightarrow{\gamma}&X&\xrightarrow{\iota}&Y\\ \downarrow h&&\downarrow g&&\downarrow f\\ M(G,n)&\xrightarrow{\gamma}&X&\xrightarrow{\iota}&Y\end{array}\tag{3}\]
Note that \(\iota_{*}\colon H_{k}(X)\to H_{k}(Y)\) is an isomorphism for all \(k\leq n-1\). Therefore \(f_{*}\colon H_{k}(Y)\to H_{k}(Y)\) is an isomorphism for all \(k\leq m\). So \(f\in\mathcal{A}_{*}^{m}(Y)=\operatorname{Aut}(Y)\). Hence, using the commutativity of the right-hand square of the diagram, we have \(g\in\mathcal{A}_{*}^{n-1}(X)=\operatorname{Aut}(X)\). Consequently,
\[N\mathcal{A}_{*}(X)\leq m=N\mathcal{A}_{*}(Y).\]
**2.6 Theorem**.: _Let \(n\geq 2\) and \(M(G,n)\xrightarrow{\gamma}X\xrightarrow{\iota}Y\) be a cofibration sequence satisfying the following conditions:_

1. \(2\leq\operatorname{conn}(X)<H_{*}\text{-}\dim(X)<n.\)

2. _The induced map_ \(\gamma_{\#}\colon[M(G,n),M(G,n)]\to[M(G,n),X]\) _is injective such that_ \(\gamma^{\#}\big{(}\operatorname{Aut}(X)\big{)}\subset\gamma_{\#}\big{(}[M(G,n),M(G,n)]\big{)}\)_._

_Then \(N\mathcal{A}_{*}(Y)\leq N\mathcal{A}_{*}(X)\)._

Proof.: Let \(N\mathcal{A}_{*}(X)=l<n\) and \(f\in\mathcal{A}_{*}^{l}(Y)\). Since \(\iota\colon X\to Y\) is a homotopical \(n\)-equivalence and \(H_{*}\text{-}\dim(X)<n\), the induced map \(\iota_{\#}\colon[X,X]\to[X,Y]\) is surjective, so there exists \(g\in[X,X]\) with \(\iota\circ g=f\circ\iota\). By Lemma 2.2 there is a map \(h\colon M(G,n)\to M(G,n)\) with \(\gamma\circ h=g\circ\gamma\), so \(h,g,f\) fit into a homotopy commutative diagram of the shape of Diagram 3. Since \(\iota_{*}\colon H_{k}(X)\to H_{k}(Y)\) is an isomorphism for all \(k\leq n-1\), we get \(g\in\mathcal{A}_{*}^{l}(X)=\operatorname{Aut}(X)\). Let \(\bar{g}\) be a homotopy inverse of \(g\). By condition (ii), \(\gamma^{\#}(\bar{g})=\bar{g}\circ\gamma\in\gamma_{\#}\big{(}[M(G,n),M(G,n)]\big{)}\), so there exists \(\bar{h}\) with \(\gamma\circ\bar{h}=\bar{g}\circ\gamma\). Then \(\gamma\circ\bar{h}\circ h=\bar{g}\circ g\circ\gamma=\gamma\) and similarly \(\gamma\circ h\circ\bar{h}=\gamma\); injectivity of \(\gamma_{\#}\) now gives \(h\in\operatorname{Aut}(M(G,n))\). Applying the five lemma to the long exact homology sequences of the cofibration, we conclude \(f\in\operatorname{Aut}(Y)\). Consequently \(N\mathcal{A}_{*}(Y)\leq l=N\mathcal{A}_{*}(X)\).

**2.7 Theorem**.: _Let \(n\geq 2\) and \(M(G,n)\xrightarrow{\gamma}X\xrightarrow{\iota}Y\) be a cofibration sequence such that \(2\leq\operatorname{conn}(X)<H_{*}\text{-}\dim(X)<n\) and the induced map \(\gamma_{\#}\colon[M(G,n),M(G,n)]\to[M(G,n),X]\) is bijective. Then \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(Y)\)._

Proof.: From Theorem 2.4 we have \(N\mathcal{A}_{*}(X)\leq N\mathcal{A}_{*}(Y)\).
For the converse part, surjectivity of the map \(\gamma_{\#}\colon[M(G,n),M(G,n)]\to[M(G,n),X]\) implies that
\[\gamma^{\#}\big{(}\operatorname{Aut}(X)\big{)}\subset[M(G,n),X]=\gamma_{\#} \big{(}[M(G,n),M(G,n)]\big{)}.\]
Therefore using Theorem 2.6 we have \(N\mathcal{A}_{*}(Y)\leq N\mathcal{A}_{*}(X)\). Hence we get the desired result.
_2.8 Remark_.: If \(G\) is a free abelian group then the first condition of Theorems 2.6 and 2.7 can be replaced by \(1\leq\operatorname{conn}(X)\leq H_{*}\text{-}\dim(X)<n\).
**2.9 Proposition**.: _Let \(n\geq 2\) and \(M(G,n)\xrightarrow{\gamma}X\xrightarrow{\iota}Y\) be a cofibration sequence such that \(1\leq\operatorname{conn}(X)<H_{*}\operatorname{-dim}(X)<n\). Given two non-negative integers \(q,m\) such that_
\[[M(G,n),M(G,n)]\cong\mathbb{Z}/q\mathbb{Z}\langle\operatorname{Id}_{M}\rangle, \ \pi_{n}(X,G)=[M(G,n),X]\cong\mathbb{Z}/m\mathbb{Z}\langle\gamma\rangle.\]
_Then_
1. \(N\mathcal{A}_{*}(X)\leq N\mathcal{A}_{*}(Y)\) _if_ \(q=0\neq m\) _or_ \(q>m\)_._
2. \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(Y)\) _if_ \(q=m\) _and_ \(\operatorname{conn}(X)\geq 2\)__(_or_ \(G\) _is free_)_._
Proof.: (a) It is sufficient to show that \(\gamma_{\#}\) is surjective. Let \(g\in[M(G,n),X]\). Then there exists an integer \(s\) such that \(g=s\gamma\). Observe that
\[g=s\gamma=\gamma\circ s\operatorname{Id}_{M}=\gamma_{\#}(s\operatorname{Id}_ {M}).\]
Thus \(\gamma_{\#}\) is surjective. Therefore using Theorem 2.4 we get the desired result.
2. Note that \(\gamma_{\#}(f_{1}+f_{2})=\gamma\circ(f_{1}+f_{2})\simeq\gamma\circ f_{1}+\gamma\circ f_{2}\). So \(\gamma_{\#}\) is a homomorphism. Assume that \(q=m\); then surjectivity implies \(\gamma_{\#}\) is injective. Using Theorem 2.7 we get the equality.
The following Corollary is a homological version of [12, Theorem 4].
**2.10 Corollary**.: _Let \(n\geq 2\) and let \(X\) be a simply connected CW-complex such that \(\dim(X)\leq n-1\) and \(\pi_{n}(X)\cong\mathbb{Z}\). If \(\gamma\colon S^{n}\to X\) is a generator of \(\pi_{n}(X)\), then_
\[N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X\cup_{\gamma}e^{n+1}).\]
Proof.: It is enough to show that \(\gamma_{\#}\colon[S^{n},S^{n}]\to[S^{n},X]\) is a bijection. For surjectivity, let \(g\in[S^{n},X].\) Therefore there exists an integer \(s\) such that \(g=s\gamma\). We know that \(s\gamma=\gamma\circ s\operatorname{Id}_{S^{n}}\), where \(\operatorname{Id}_{S^{n}}\) is a generator of \(\pi_{n}(S^{n})\). Thus \(g=\gamma_{\#}(s\operatorname{Id}_{S^{n}})\). Hence \(\gamma_{\#}\) is surjective.
For injectivity, let \(f_{1},f_{2}\in[S^{n},S^{n}]\) be such that \(\gamma_{\#}(f_{1})=\gamma_{\#}(f_{2})\). This implies \(\gamma\circ f_{1}=\gamma\circ f_{2}\). Moreover, observe that \(f_{1}=k_{1}\operatorname{Id}_{S^{n}},\ f_{2}=k_{2}\operatorname{Id}_{S^{n}}\) for some \(k_{1},k_{2}\in\mathbb{Z}\). Therefore \(\gamma\circ k_{1}\operatorname{Id}_{S^{n}}=\gamma\circ k_{2}\operatorname{Id}_{S^{n}}\). This implies \(k_{1}\gamma=k_{2}\gamma\), and hence \(k_{1}=k_{2}\). Then using Theorem 2.7 we get the desired result.
_2.11 Remark_.: Corollary 2.10 can also be deduced directly from Proposition 2.9.
Recall the generalised Freudenthal suspension theorem: let \(Y\) be an \(n\)-connected CW-complex and \(X\) a finite-dimensional CW-complex. Then the suspension map \(\Sigma\colon[X,Y]\to[\Sigma X,\Sigma Y]\) is an isomorphism if \(\dim(X)<2n+1\) and an epimorphism if \(\dim(X)=2n+1\) ([8, Theorem 1.21]).
**2.12 Proposition**.: _Consider the Hopf map \(S^{3}\xrightarrow{\eta_{2}}S^{2}\) and all its suspensions \(S^{n+1}\xrightarrow{\eta_{n}}S^{n}\), where \(n\geq 3\). Then_
\[N\mathcal{A}_{*}(S^{n}\cup_{\eta_{n}}e^{n+2})=\begin{cases}2,\ if\ n=2\\ n+2,\ if\ n\geq 3.\end{cases}\]
Proof.: For \(n=2\), the Hopf map \(S^{3}\xrightarrow{\eta_{2}}S^{2}\) generates the group \(\pi_{3}(S^{2})\cong\mathbb{Z}\). Therefore using Corollary 2.10 we have \(N\mathcal{A}_{*}(S^{2})=N\mathcal{A}_{*}(S^{2}\cup_{\eta_{2}}e^{4})\). Hence \(N\mathcal{A}_{*}(S^{2}\cup_{\eta_{2}}e^{4})=2.\)
For \(n\geq 3\), the iterated suspension of \(\eta_{2}\) is \(S^{n+1}\xrightarrow{\eta_{n}}S^{n}\). Using [9, Corollary 4J.4] we have \(\pi_{n+1}(S^{n})\cong\mathbb{Z}_{2}\langle\eta_{n}\rangle\). Therefore we have \(N\mathcal{A}_{*}(S^{n})\leq N\mathcal{A}_{*}(S^{n}\cup_{\eta_{n}}e^{n+2})\) by Proposition 2.9(a). Hence
\[n\leq N\mathcal{A}_{*}(S^{n}\cup_{\eta_{n}}e^{n+2})\leq n+2,\ \ \text{for all $n \geq 3$.}\]
Let \(P=S^{n}\cup_{\eta_{n}}e^{n+2}\). Since \(H_{n+1}(P)=0\) we have \(\mathcal{A}_{*}^{n+1}(P)=\mathcal{A}_{*}^{n}(P)\), so it is sufficient to show that \(N\mathcal{A}_{*}(P)\neq n\). By [1, Section 8] we have
\[[P,P]\cong\mathbb{Z}\langle\mathrm{Id}_{P}\rangle\oplus\mathbb{Z}\langle \iota\circ\bar{\xi}\rangle,\]
where \(\mathrm{Id}_{P}\colon P\to P\) is the identity map, \(\iota\colon S^{n}\to P\) is the inclusion map, \(q\colon P\to S^{n+2}\) is the quotient map, and \(\bar{\xi}\in[P,S^{n}],\ \widetilde{\xi}\in[S^{n+2},P]\) satisfy the following relations
\[\bar{\xi}\circ\iota=2\cdot\mathrm{Id}_{S^{n}},\ q\circ\widetilde{\xi}=2\cdot\mathrm{Id}_{S^{n+2}},\ \iota\circ\bar{\xi}+\widetilde{\xi}\circ q=2\cdot\mathrm{Id}_{P}\,. \tag{4}\]
Observe that \(\iota_{*}\colon H_{n}(S^{n})\xrightarrow{\cong}H_{n}(P)\) and \(q_{*}\colon H_{n+2}(P)\xrightarrow{\cong}H_{n+2}(S^{n+2})\) are isomorphisms. Let \(\tilde{H}_{0}(S^{0})\cong\mathbb{Z}\langle a_{0}\rangle.\) Consider the suspension map \(\tilde{H}_{0}(S^{0})\xrightarrow{\Sigma^{k}}H_{k}(S^{k})\) defined as \(a_{0}\mapsto a_{k}=\Sigma^{k}(a_{0})\), which is an isomorphism for each \(k\geq 1\). Observe that
\[H_{k}(P)=\begin{cases}\mathbb{Z}\langle b\rangle,\ if\ k=n\\ \mathbb{Z}\langle c\rangle,\ if\ k=n+2\\ 0\,\ \ \text{otherwise}\,\end{cases}\]
where \(b=\iota_{*}(a_{n})\), and \(c=(q_{*})^{-1}(a_{n+2})\). From the relation 4 we have
\[(\iota\circ\bar{\xi})_{*}(b)=(\iota\circ\bar{\xi}\circ\iota)_{*}(a_{n})=(2 \cdot\iota\circ\mathrm{Id}_{S^{n}})_{*}(a_{n})=2\cdot\iota_{*}(a_{n})=2\cdot b.\]
Moreover \(\widetilde{\xi}_{*}(a_{n+2})=(q_{*})^{-1}\circ q_{*}\circ\widetilde{\xi}_{*}( a_{n+2})=2\cdot(q_{*})^{-1}(a_{n+2})=2\cdot c.\)
Therefore
\[(\iota\circ\bar{\xi})_{*}(c)=(2\cdot\mathrm{Id}_{P}-\widetilde{\xi}\circ q)_ {*}(c)=2\cdot c-\widetilde{\xi}_{*}\circ q_{*}(c)=2\cdot c-\widetilde{\xi}_ {*}(a_{n+2})=0.\]
So for \(f\in\mathcal{A}_{*}^{n}(P)\) we have \(f_{*}\colon H_{k}(P)\xrightarrow{\cong}H_{k}(P)\) for all \(k\leq n\). Note that
\[f=m\,\mathrm{Id}_{P}+r(\iota\circ\bar{\xi}),\ \text{where $m,r\in\mathbb{Z}$.}\]
Thus \(f_{*}(b)=mb+2rb=(m+2r)b\). This implies that \(m+2r=+1\) or \(\,-1\). Therefore \(f=3\,\mathrm{Id}_{P}-(\iota\circ\bar{\xi})\in\mathcal{A}_{*}^{n}(P)\). But \(f\notin\mathrm{Aut}(P)\) as \(f_{*}(c)=3c.\) Hence \(N\mathcal{A}_{*}(P)\neq n\).
_2.13 Remark_.: Note that \(\Sigma(C_{\eta_{n}})=C_{\Sigma\eta_{n}}=C_{\eta_{n+1}}\) for all \(n\geq 2\). Therefore \(\Sigma^{n-2}\mathbb{C}P^{2}=S^{n}\cup_{\eta_{n}}e^{n+2}\) for all \(n\geq 2\). Hence
\[N\mathcal{A}_{*}(\Sigma^{n-2}\mathbb{C}P^{2})=\begin{cases}2,\ if\ n=2,\\ n+2,\ if\ n\geq 3.\end{cases}\]
**2.14 Proposition**.: _Let \(n\geq 3\) and \(M(G,n)\xrightarrow{\gamma}X\xrightarrow{\iota}Y\) be a cofibration sequence such that \(2\leq\operatorname{conn}(X)<H_{*}\text{-}\dim(X)<n\). Assume that any one of the following conditions holds:_

1. For \(q\geq 0,\ [M(G,n),M(G,n)]\cong\mathbb{Z}/q\mathbb{Z}\langle\mathrm{Id}_{M}\rangle\) and \(\gamma\) is a generator of any \(\mathbb{Z}\) direct summand of the abelian group \(\pi_{n}(X,G)=[M(G,n),X]\).
1. For \(q\geq 0,\ [M(G,n),M(G,n)]\cong\mathbb{Z}/q\mathbb{Z}\langle\mathrm{Id}_{M}\rangle\) and \(\gamma\) is a generator of any \(\mathbb{Z}\) direct summand of the abelian group \(\pi_{n}(X,G)=[M(G,n),X]\).
2. _For a prime_ \(p>2\)_, if_ \(G=\mathbb{Z}/p\mathbb{Z}\) _and_ \(\gamma\) _is a generator of any_ \(\mathbb{Z}\) _direct summand of the abelian group_ \(\pi_{n}(X,G)=[M(G,n),X]\)_._
_Then \(N\mathcal{A}_{*}(Y)\leq N\mathcal{A}_{*}(X)\)._
Proof.: Let \(N\mathcal{A}_{*}(X)=l<n\) and \(f\in\mathcal{A}_{*}^{l}(Y)\). As in the proof of Theorem 2.6 we have \(g\colon X\to X\) and \(h\colon M(G,n)\to M(G,n)\) which satisfy the commutative Diagram 3. Moreover \(g\in\mathcal{A}_{*}^{l}(X)=\operatorname{Aut}(X)\). Let \(\bar{g}\) be the homotopy inverse of \(g\). It is sufficient to show that \(h\in\operatorname{Aut}(M(G,n))\).
1. Let \(\pi_{n}(X,G)\cong\mathbb{Z}\langle\gamma\rangle\oplus U\) for some subgroup \(U\). Then we have \(h=s\operatorname{Id}_{M}\) for some \(s\in\mathbb{Z}\). Moreover \(\bar{g}\circ\gamma=t\gamma+u\) for some \(t\in\mathbb{Z}\) and some \(u\in U\). It follows that \[\gamma=\bar{g}\circ g\circ\gamma=\bar{g}\circ\gamma\circ h=(t\gamma+u)\circ s \operatorname{Id}_{M}=ts\gamma+su.\] This implies that \(ts=1\) and \(su=0\). So \(s=+1\) or \(-1\), hence \(h\in\operatorname{Aut}(M(G,n))\).
2. From universal coefficient theorem for homotopy we have \[0\to\operatorname{Ext}\big{(}G,\pi_{n+1}(Z)\big{)}\to\pi_{n}(Z,G)\to \operatorname{Hom}\big{(}G,\pi_{n}(Z)\big{)}\to 0.\] Take \(Z=M(\mathbb{Z}/p\mathbb{Z},n)\). From [3, Section 1] we have \[\pi_{n+1}(Z)=\pi_{n+1}(M(\mathbb{Z}/p\mathbb{Z},n))=\mathbb{Z}/p\mathbb{Z} \otimes\mathbb{Z}/2\mathbb{Z}=0.\] Therefore \([M(\mathbb{Z}/p\mathbb{Z},n),M(\mathbb{Z}/p\mathbb{Z},n)]\cong\operatorname{ Hom}(\mathbb{Z}/p\mathbb{Z},\mathbb{Z}/p\mathbb{Z})\cong\mathbb{Z}/p\mathbb{Z} \langle\operatorname{Id}_{M}\rangle\). Hence using part (i) we get the desired result.
_2.15 Remark_.: If \(G\) is a free abelian group then the condition of Proposition 2.14 can be replaced by \(1\leq\operatorname{conn}(X)<H_{*}\text{-}\dim(X)<n\), where \(n\geq 2\).
The following Corollary is a homological version of Theorem 5 in [14].
**2.16 Corollary**.: _Let \(n\geq 2\) and let \(X\) be a simply connected CW-complex such that \(\dim(X)\leq n-1\). If \(\gamma\colon S^{n}\to X\) is a generator of a direct summand \(\mathbb{Z}\subset\pi_{n}(X)\), then_
\[N\mathcal{A}_{*}(X\cup_{\gamma}e^{n+1})\leq N\mathcal{A}_{*}(X).\]
Proof.: Follows from Proposition 2.14(i), if we take \(G=\mathbb{Z}\).
_2.17 Example_.: By the finiteness theorem of Serre ([15, Lemma 1.1.8] and [17, Section 6]) we have
\[\pi_{4n-1}(S^{2n})\cong\mathbb{Z}\oplus F_{n},\text{ for all }n\geq 1.\]
where \(F_{n}\) is a finite abelian group. Let \(\gamma\) be a generator of the \(\mathbb{Z}\) direct summand of the abelian group \(\pi_{4n-1}(S^{2n})\). Therefore \(N\mathcal{A}_{*}(S^{2n}\cup_{\gamma}e^{4n})\leq N\mathcal{A}_{*}(S^{2n})\) by using Corollary 2.16. Hence
\[N\mathcal{A}_{*}(S^{2n}\cup_{\gamma}e^{4n})\leq 2n.\]
Observe that \(\mathcal{A}_{*}^{2n-1}(S^{2n}\cup_{\gamma}e^{4n})=[S^{2n}\cup_{\gamma}e^{4n}\,\ S^{2n}\cup_{\gamma}e^{4n}]\). Let \(f\colon S^{2n}\cup_{\gamma}e^{4n}\to S^{2n}\cup_{\gamma}e^{4n}\) be the constant map at the base point. Then \(f\in[S^{2n}\cup_{\gamma}e^{4n}\,\ S^{2n}\cup_{\gamma}e^{4n}]\) but \(f\notin\operatorname{Aut}(S^{2n}\cup_{\gamma}e^{4n})\). Hence
\[N\mathcal{A}_{*}(S^{2n}\cup_{\gamma}e^{4n})=N\mathcal{A}_{*}(S^{2n})=2n.\]
## 3. Homology decomposition
In this section we consider homology decomposition of a space and its relation with homology self-closeness numbers. We first recall the definition.
**3.1 Definition**.: Let \(X\) be a simply connected CW-complex. A _homology decomposition_ of \(X\) consists of a sequence of simply connected CW-complexes \(\{X_{n}\}\) and structure maps
\[j_{n}\colon X_{n}\to X,\iota_{n}\colon X_{n}\to X_{n+1},\ k_{n+1}:M\big{(}H_{n +1}(X),n\big{)}\to X_{n},\]
that satisfy the following conditions:
1. \(j_{n*}\colon H_{k}(X_{n})\to H_{k}(X)\) is an isomorphism for all \(k\leq n\) and \(H_{k}(X_{n})=0\) for all \(k>n\).
2. \(M\big{(}H_{n+1}(X),n\big{)}\xrightarrow{k_{n+1}}X_{n}\xrightarrow{\iota_{n}} X_{n+1}\) is a cofibration sequence of the cellular map \(k_{n+1}\) such that the induced map \(k_{n+1*}\colon H_{n}(M(H_{n+1}(X),n))\to H_{n}(X_{n})\) is trivial.
3. The following diagram commutes.
The collection \(\big{\{}X_{n},j_{n},\iota_{n},k_{n}\big{\}}\) is called a homology decomposition of \(X\). The spaces \(X_{n}\) are called the \(n\)-th homology sections of \(X\) and the maps \(k_{n+1}\colon M\big{(}H_{n+1}(X),n\big{)}\to X_{n}\) are called homological \(k\)-invariants. It is well known that every simply connected CW-complex admits a homology decomposition (see [4, 9]). Note that if all homology groups are free then the cellular skeletons serve as a homology decomposition, with the cell attaching maps as \(k\)-invariants.
_3.2 Remark_.: It follows from the definition that \(N\mathcal{A}_{*}(X_{n})\leq n\).
The following theorem deduces relations between the homology self-closeness numbers of two consecutive homology sections.
**3.3 Theorem**.: _Let \(X\) be a simply connected \(CW\)-complex. For a homology decomposition \(\big{\{}X_{n},j_{n},\iota_{n},k_{n}\big{\}}\) of \(X\) we have the following:_
1. _If the induced map_ \[k_{m+1\#}\colon\big{[}M\big{(}H_{m+1}(X),m\big{)},M\big{(}H_{m+1}(X),m\big{)} \big{]}\to\big{[}M\big{(}H_{m+1}(X),m\big{)},X_{m}\big{]}\] _is surjective for some_ \(m\in\mathbb{N}\)_, then_ \(N\mathcal{A}_{*}(X_{m})\leq N\mathcal{A}_{*}(X_{m+1})\)_._
2. _If the induced map_ \[k_{m+1\#}\colon\big{[}M\big{(}H_{m+1}(X),m\big{)},M\big{(}H_{m+1}(X),m\big{)} \big{]}\to\big{[}M\big{(}H_{m+1}(X),m\big{)},X_{m}\big{]}\] _is injective such that_ \[k_{m+1}^{\#}(\operatorname{Aut}(X_{m}))\subset k_{m+1\#}\Big{(}\big{[}M\big{(} H_{m+1}(X),m\big{)},M\big{(}H_{m+1}(X),m\big{)}\big{]}\Big{)}\]
_where \(H_{k}(X)\) is a free abelian group for \(k=m,m+1\), then_
\[N\mathcal{A}_{*}(X_{m+1})\leq N\mathcal{A}_{*}(X_{m}).\]
3. _If \(H_{k}(X)\) is a free abelian group for \(k=m,m+1\) and the induced map \(k_{m+1\#}\) is bijective, then \(N\mathcal{A}_{*}(X_{m})=N\mathcal{A}_{*}(X_{m+1})\)._
Proof.: (a) Let \(N\mathcal{A}_{*}(X_{m+1})=l\leq m+1\) and \(g\in\mathcal{A}_{*}^{l}(X_{m})\). If \(l\geq m\) then \(\mathcal{A}_{*}^{l}(X_{m})=\operatorname{Aut}(X_{m})\), so we get the desired result.
Let us assume that \(l<m\). Surjectivity of \(k_{m+1\#}\) implies that there exists \(h\in\big{[}M\big{(}H_{m+1}(X),m\big{)},M\big{(}H_{m+1}(X),m\big{)}\big{]}\) such that \(k_{m+1}\circ h=g\circ k_{m+1}\). Passing to the cofibers, \(g\) induces a map \(f\colon X_{m+1}\to X_{m+1}\), and we get a homotopy commutative diagram
\[\begin{array}{ccccc}M\big{(}H_{m+1}(X),m\big{)}&\xrightarrow{k_{m+1}}&X_{m}&\xrightarrow{\iota_{m}}&X_{m+1}\\ \downarrow h&&\downarrow g&&\downarrow f\\ M\big{(}H_{m+1}(X),m\big{)}&\xrightarrow{k_{m+1}}&X_{m}&\xrightarrow{\iota_{m}}&X_{m+1}\end{array}\tag{5}\]
Observe that \(\iota_{m*}\colon H_{k}(X_{m})\xrightarrow{\cong}H_{k}(X_{m+1})\) for all \(k\leq m\). Therefore \(f\in\mathcal{A}_{*}^{l}(X_{m+1})=\operatorname{Aut}(X_{m+1})\). Using the commutativity of the right-hand square of the diagram we have \(g\in\mathcal{A}_{*}^{m}(X_{m})=\operatorname{Aut}(X_{m})\). Consequently,
\[N\mathcal{A}_{*}(X_{m})\leq l=N\mathcal{A}_{*}(X_{m+1}).\]
2. Observe that \(H_{*}\text{-}\dim(X_{m})\leq m\) and the map \(\iota_{m}\colon X_{m}\to X_{m+1}\) is a homotopical \(m\)-equivalence. So the induced map \(\iota_{\#}\colon[X_{m},X_{m}]\to[X_{m},X_{m+1}]\) is surjective. Let \(N\mathcal{A}_{*}(X_{m})=r\leq m\) and \(f\in\mathcal{A}_{*}^{r}(X_{m+1})\). By surjectivity of \(\iota_{\#}\) there exists \(g\in[X_{m},X_{m}]\) such that \(\iota_{m}\circ g=f\circ\iota_{m}\). From Lemma 2.2 there is a map \(h\colon M\big{(}H_{m+1}(X),m\big{)}\to M\big{(}H_{m+1}(X),m\big{)}\) such that we have the homotopy commutative Diagram 5. Since \(\iota_{m*}\colon H_{k}(X_{m})\xrightarrow{\cong}H_{k}(X_{m+1})\) for all \(k\leq m\), we get \(g\in\mathcal{A}_{*}^{r}(X_{m})=\operatorname{Aut}(X_{m})\). Thus there is a \(\bar{g}\in\operatorname{Aut}(X_{m})\) such that \(g\circ\bar{g}=\operatorname{Id}_{X_{m}}=\bar{g}\circ g\). From the given assumption \[k_{m+1}^{\#}(\bar{g})\in k_{m+1\#}\big{(}\big{[}M\big{(}H_{m+1}(X),m\big{)},M\big{(}H_{m+1}(X),m\big{)}\big{]}\big{)}.\] So there exists \(\bar{h}\in\big{[}M\big{(}H_{m+1}(X),m\big{)},M\big{(}H_{m+1}(X),m\big{)}\big{]}\) such that \(k_{m+1\#}(\bar{h})=k_{m+1}^{\#}(\bar{g})\), i.e. \(k_{m+1}\circ\bar{h}=\bar{g}\circ k_{m+1}\). Observe that \[k_{m+1}\circ\bar{h}\circ h=\bar{g}\circ k_{m+1}\circ h=\bar{g}\circ g\circ k_{m+1}=k_{m+1}.\] Similarly \(k_{m+1}\circ h\circ\bar{h}=k_{m+1}.\) Therefore injectivity of \(k_{m+1\#}\) implies that \(h\in\operatorname{Aut}\big{(}M\big{(}H_{m+1}(X),m\big{)}\big{)}\). Using the five lemma on the long exact sequence of homology groups we have \(f\in\operatorname{Aut}(X_{m+1})\). Consequently \[N\mathcal{A}_{*}(X_{m+1})\leq N\mathcal{A}_{*}(X_{m}).\]
3. Note that \(k_{m+1}^{\#}(\operatorname{Aut}(X_{m}))\subset\big{[}M\big{(}H_{m+1}(X),m\big{)},X _{m}\big{]}\). Surjectivity of \(k_{m+1\#}\) implies that \(k_{m+1\#}\Big{(}\big{[}M\big{(}H_{m+1}(X),m\big{)},M\big{(}H_{m+1}(X),m\big{)} \big{]}\Big{)}=\big{[}M\big{(}H_{m+1}(X),m\big{)},X_{m}\big{]}\). Therefore using (a) and (b) we get the desired result.
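For reference, the five-lemma step used in part (b) (and again in later sections) is applied to the ladder formed by the long exact homology sequence of the cofibration:

\[\cdots\to H_{k}\big{(}M(H_{m+1}(X),m)\big{)}\xrightarrow{k_{m+1*}}H_{k}(X_{m})\xrightarrow{\iota_{m*}}H_{k}(X_{m+1})\to H_{k-1}\big{(}M(H_{m+1}(X),m)\big{)}\to\cdots\]

with vertical maps \(h_{*}\), \(g_{*}\), \(f_{*}\) to a second copy of the same sequence; once \(h_{*}\) and \(g_{*}\) are isomorphisms in every degree, the five lemma forces \(f_{*}\) to be an isomorphism as well.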
The following proposition gives a relation between two consecutive homology sections which is independent of the \(k_{\#}\) and \(k^{\#}\) maps.
**3.4 Proposition**.: _Let \(X\) be a simply connected CW-complex. If \(H_{m}(X)\) is finitely generated free abelian group for some \(m\in\mathbb{N}\) then either \(N\mathcal{A}_{*}(X_{m+1})=m+1\) or \(N\mathcal{A}_{*}(X_{m+1})\leq N\mathcal{A}_{*}(X_{m})\)._
Proof.: Observe that the map \(\iota_{m}\colon X_{m}\to X_{m+1}\) is a homotopical \(m\)-equivalence. Then \(\iota_{m\#}\colon[X_{m},X_{m}]\to[X_{m},X_{m+1}]\) is a surjective map. Assume that \(N\mathcal{A}_{*}(X_{m+1})<m+1\). Let \(N\mathcal{A}_{*}(X_{m})=l\leq m\) and \(f\in\mathcal{A}_{*}^{l}(X_{m+1})\). Then there exists \(g\colon X_{m}\to X_{m}\) which makes the following diagram homotopy commutative:
\[\begin{array}{ccc}X_{m}&\xrightarrow{\iota_{m}}&X_{m+1}\\ \downarrow g&&\downarrow f\\ X_{m}&\xrightarrow{\iota_{m}}&X_{m+1}\end{array}\tag{6}\]
Thus \(g\in\mathcal{A}_{*}^{l}(X_{m})=\operatorname{Aut}(X_{m})\). Therefore \(f\in\mathcal{A}_{*}^{m}(X_{m+1})=\operatorname{Aut}(X_{m+1})\). Hence
\[N\mathcal{A}_{*}(X_{m+1})\leq l=N\mathcal{A}_{*}(X_{m}).\]
**3.5 Example**.: Consider \(X=S^{n}\lor S^{n+1}\) for \(n\geq 2\). Observe that \(X_{n}=S^{n}\) and \(X_{n+1}=X\). From Proposition 3.4 we have \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X_{n+1})=n+1\) or \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X_{n+1})\leq N\mathcal{A}_{*}(X_{n})=n\). Let \(f\colon X\to X\) be a map defined as
\[f(x)=\begin{cases}x,\ if\ x\in S^{n},\\ *,\ if\ x\in S^{n+1}.\end{cases}\]
Then clearly \(f\in\mathcal{A}_{*}^{n}(X)\) but \(f\notin\mathcal{A}_{*}^{n+1}(X)\). Hence \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X_{n+1})=n+1\).
Moreover consider \(Y=\bigvee_{k=2}^{\infty}S^{k}\). For any \(m\geq 2\) we have \(Y_{m}=\bigvee_{k=2}^{m}S^{k}\). Now we proceed by induction. From the above we have \(N\mathcal{A}_{*}(Y_{3})=3\); let \(N\mathcal{A}_{*}(Y_{l})=l\). Then \(Y_{l+1}=Y_{l}\bigvee S^{l+1}\), and consider a map \(g\colon Y_{l+1}\to Y_{l+1}\) defined as
\[g(y)=\begin{cases}y,\ if\ y\in Y_{l},\\ *,\ if\ y\in S^{l+1}.\end{cases}\]
Thus clearly \(g\in\mathcal{A}_{*}^{l}(Y_{l+1})\) but \(g\notin\mathcal{A}_{*}^{l+1}(Y_{l+1})\). Hence \(N\mathcal{A}_{*}(Y_{l+1})=l+1\) using Proposition 3.4. Therefore by induction we have \(N\mathcal{A}_{*}(Y_{m})=m\) for \(m\geq 2\) and \(N\mathcal{A}_{*}(Y)=\infty\).
Recall that the homotopical dimension of \(X\) is defined by
\[\pi_{*}\text{-}\dim(X):=\max\big{\{}k:\ \pi_{k}(X)\neq 0\big{\}}.\]
**3.6 Lemma**.: _Let \(X\) be a simply connected CW-complex such that for each \(k\), \(H_{k}(X)\) is a finitely generated free abelian group. If \(\pi_{*}\text{-}\dim(X)=m\) then for each \(n\geq m+1\)_
\[\operatorname{Aut}(X)\cong\operatorname{Aut}(X_{n}).\]
Proof.: Since \(H_{k}(X)\) is a free abelian group for all \(k\), the homology decomposition of \(X\) gives a CW-decomposition of \(X\). Therefore for any map \(f\colon X\to X\), we have \(f^{\prime}_{n}\colon X_{n}\to X_{n}\) such that \(f\circ j_{n}\simeq j_{n}\circ f^{\prime}_{n}\) by cellular approximation, where \(f^{\prime}_{n}\colon X_{n}\to X_{n}\) is the restriction to \(X_{n}\) of a cellular map \(f^{\prime}\colon X\to X\) with \(f\simeq f^{\prime}\). Note that if \(f\colon X\to X\) is a self-homotopy equivalence then \(f^{\prime}_{n}\colon X_{n}\to X_{n}\) is a self-homotopy equivalence. Consider the map
\[\phi\colon[X,X]\to[X_{n},X_{n}]\ \text{ defined as }\phi(f)=f^{\prime}_{n}.\]
Let \(n\geq m+1\). To show the surjectivity of \(\phi\), let \(r\in[X_{n},X_{n}]\); then \(r\colon X_{n}\to X_{n}\subset X\). Since \(\pi_{k}(X)=0\) for all \(k\geq m+1\), we have \(H^{k+1}\big{(}X,X_{n};\pi_{k}(X)\big{)}=0\) for all \(k\geq n\). Therefore by obstruction theory we have a map \(\widetilde{r}\colon X\to X\) such that \(\phi(\widetilde{r})=r\). For injectivity, let \(f,g\in[X,X]\) be such that \(\phi(f)=\phi(g)\). This implies \(f^{\prime}_{n}\simeq g^{\prime}_{n}\colon X_{n}\to X_{n}\subset X\). Let \(F\colon X_{n}\times I\to X\) be a homotopy between \(f^{\prime}_{n}\) and \(g^{\prime}_{n}\). Consider a map \(G\colon X\times\partial I\cup X_{n}\times I\to X\) defined as
\[G(x,t)=\begin{cases}f^{\prime}(x),\ \forall(x,t)\in X\times\{0\}\\ g^{\prime}(x),\ \forall(x,t)\in X\times\{1\}\\ F(x,t),\ \forall(x,t)\in X_{n}\times I.\end{cases}\]
Note that \(H^{k+1}\big{(}X\times I,X\times\partial I\cup X_{n}\times I;\pi_{k}(X)\big{)}=0\) for all \(k\geq n+1\). Therefore there exists an extension \(\widetilde{G}\colon X\times I\to X\) of \(G\) by obstruction theory. Hence \(\widetilde{G}\) is a homotopy between \(f^{\prime}\) and \(g^{\prime}\). Thus \(f\simeq g\), so \(\phi\) is injective. Consequently, for \(n\geq m+1\) we get the bijective map
\[\phi\colon[X,X]\to[X_{n},X_{n}].\]
Moreover, observe that \(\phi(f\circ g)=\phi(f^{\prime}\circ g^{\prime})=(f^{\prime}\circ g^{\prime})_{n}=f^{\prime}_{n}\circ g^{\prime}_{n}=\phi(f)\circ\phi(g).\) Hence
\[\phi\colon\operatorname{Aut}(X)\xrightarrow{\cong}\operatorname{Aut}(X_{n}).\]
Recall the following theorem of Serre: for a simply connected CW-complex \(X\), the homology group \(H_{k}(X)\) is a finitely generated abelian group for all \(k\) if and only if \(\pi_{k}(X)\) is a finitely generated abelian group for all \(k\).
The following theorem relates the homology self-closeness number of \(X\) to those of its homology sections.
**3.7 Theorem**.: _Let \(X\) be a simply connected CW-complex such that \(\pi_{*}\text{-}\dim(X)=m\). Then following holds:_
1. \(2\leq N\mathcal{A}_{*}(X)\leq m+1\)_._
2. \(2\leq N\mathcal{A}_{*}(X)\leq m\)_, if the group_ \(\pi_{m}(X)\) _is finitely generated._
3. _If_ \(H_{k}(X)\) _is finitely generated free abelian group for each_ \(k\) _then for_ \(n\geq m+1\) _we have_ \(N\mathcal{A}_{*}(X_{n})=N\mathcal{A}_{*}(X)\)_._
4. \(N\mathcal{A}_{*}(X_{n})\leq m\) _for_ \(n\leq m\)_._
Proof.:
1. Since \(X\) is simply connected, we have \(N\mathcal{A}_{*}(X)\geq 2\). If \(f\in\mathcal{A}_{*}^{m+1}(X)\) then \(f\in\mathcal{A}_{\#}^{m}(X)=\operatorname{Aut}(X)\). Hence \(2\leq N\mathcal{A}_{*}(X)\leq m+1\).
2. Let \(f\in\mathcal{A}_{*}^{m}(X)\). Then \(f_{*}\colon H_{k}(X)\to H_{k}(X)\) is isomorphism for all \(k\leq m\). Thus \(f_{\#}\colon\pi_{k}(X)\to\pi_{k}(X)\) is isomorphism for all \(k\leq m-1\) and \(f_{\#}\colon\pi_{m}(X)\to\pi_{m}(X)\) is surjective. Since \(\pi_{m}(X)\) is finitely generated therefore \(f_{\#}\colon\pi_{m}(X)\to\pi_{m}(X)\) is also an isomorphism. So \(f\in\mathcal{A}_{\#}^{m}(X)=\operatorname{Aut}(X)\) and we get the desired result.
3. Note that \(\pi_{k}(X)\) is a finitely generated abelian group for all \(k\), as \(H_{k}(X)\) is a finitely generated abelian group for each \(k\), by Serre. Assume that \(n\geq m+1\). Let \(N\mathcal{A}_{*}(X)=l\leq m\) and \(g\in\mathcal{A}_{*}^{l}(X_{n})\). Then \(\phi^{-1}(g)\in\mathcal{A}_{*}^{l}(X)=\operatorname{Aut}(X)\) from Lemma 3.6. Thus \(g\in\operatorname{Aut}(X_{n})\). So \(N\mathcal{A}_{*}(X_{n})\leq l=N\mathcal{A}_{*}(X)\). If possible, assume that \(N\mathcal{A}_{*}(X_{n})<l\). If \(h\in\mathcal{A}_{*}^{l-1}(X)\) then \(\phi(h)\in\mathcal{A}_{*}^{l-1}(X_{n})=\operatorname{Aut}(X_{n})\). Therefore \(h\in\operatorname{Aut}(X)\), which contradicts the fact that \(N\mathcal{A}_{*}(X)=l\). Therefore for \(n\geq m+1\) we have \[N\mathcal{A}_{*}(X_{n})=N\mathcal{A}_{*}(X).\]
4. By Remark 3.2 we have \(N\mathcal{A}_{*}(X_{n})\leq n\leq m\) whenever \(n\leq m\).
**3.8 Example**.: Let \(X=\mathbb{C}P^{\infty}.\) Then \(\pi_{*}\operatorname{\text{-}}\dim(X)=2\). Note that
\[H_{k}(\mathbb{C}P^{\infty})=\begin{cases}\mathbb{Z},\ if\ k\ \text{is even},\\ 0,\ if\ k\ \text{is odd}.\end{cases}\]
From Theorem 3.7 we have \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X_{n})\) for all \(n\geq 3\), where \(X_{n}\) is the \(n\)-th homology decomposition of \(X\). Observe that \(X_{2n+1}=X_{2n}=\mathbb{C}P^{n}\) for all \(n\geq 1\). Therefore
\[N\mathcal{A}_{*}(\mathbb{C}P^{\infty})=N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X_ {3})=N\mathcal{A}_{*}(X_{2})=N\mathcal{A}_{*}(\mathbb{C}P^{1})=N\mathcal{A}_{ *}(S^{2})=2.\]
Moreover \(N\mathcal{A}_{*}(\mathbb{C}P^{n})=N\mathcal{A}_{*}(X)=2\) for all \(n\geq 2\) by Theorem 3.7. Consequently,
\[N\mathcal{A}_{*}(\mathbb{C}P^{n})=N\mathcal{A}_{*}(X_{2n})=2,\ \text{for}\ n\in \mathbb{N}\cup\{\infty\}.\]
_3.9 Remark_.: Consider \(X=\mathbb{C}P^{\infty}\). Then \(X_{2n+1}=X_{2n}=\mathbb{C}P^{n}\) for \(n\geq 1\). Therefore
\[N\mathcal{A}_{*}(X_{n})=\begin{cases}0,\ if\ n<2\\ 2,\ if\ n\geq 2.\end{cases}\]
**3.10 Lemma**.: _Let \(X\) be a simply connected \(CW\) complex such that \(H_{k}(X)\) is finitely generated abelian group for each \(k\in\mathbb{N}\). Then \(N\mathcal{A}_{\#}(X_{n})\leq N\mathcal{A}_{*}(X_{n})\leq n\)._
Proof.: Since \(H_{k}(X)\) is a finitely generated abelian group for each \(k\), it follows from the definition of homology decomposition that \(H_{k}(X_{n})\) is a finitely generated abelian group for each \(k\). Hence we get the desired result using [14, Theorem 41(1)].
**3.11 Example**.: Consider \(X=K(G,n)\) for \(n\geq 2\). Note that the Hurewicz map \(h_{*}\colon\pi_{n+1}(X)\to H_{n+1}(X)\) is an epimorphism, thus \(H_{n+1}(X)=0\). For the homology sections
\(\{X_{m}\}\) we have
\[X_{m}=\begin{cases}*,\ if\ m<n,\\ M(G,n),\ if\ m=n,n+1.\end{cases}\]
By definition, the map \(j_{n+2}\colon X_{n+2}\to X\) is a homological \((n+2)\)-equivalence, and hence a homotopical \((n+2)\)-equivalence (cf. Lemma 2.1). So \(j_{n+2\#}\colon\pi_{k}(X_{n+2})\xrightarrow{\cong}\pi_{k}(X)\) is an isomorphism for all \(k\leq n+1\). Hence \(X_{n+2}\) is an \((n-1)\)-connected CW-complex and \(\pi_{n+1}(X_{n+2})=0\). Therefore
\[\mathcal{A}_{\#}^{n-1}(X_{n+2})=\cdots=\mathcal{A}_{\#}^{1}(X_{n+2})=[X_{n+2},X_{n+2}],\text{ and }\mathcal{A}_{\#}^{n}(X_{n+2})=\mathcal{A}_{\#}^{n+1}(X_{n+2}).\]
So we have \(N\mathcal{A}_{\#}(X_{n+2})\geq n\). Moreover, by Lemma 3.10 we have \(N\mathcal{A}_{\#}(X_{n+2})=n\) or \(n+2.\) Note that \(\mathcal{A}_{\#}^{n+1}(X_{n+2})=\mathcal{A}_{\#}^{n+2}(X_{n+2})\). Thus \(N\mathcal{A}_{\#}(X_{n+2})=n\), and \(\pi_{k}(X_{n+2})\) is a finitely generated abelian group for all \(k\leq N\mathcal{A}_{\#}(X_{n+2})\). Therefore we have \(N\mathcal{A}_{*}(X_{n+2})=N\mathcal{A}_{\#}(X_{n+2})=n\) from [14, Theorem 41]. Similarly we can prove that \(N\mathcal{A}_{*}(X_{m})=n\) for \(m>n+2\). Consequently,
\[N\mathcal{A}_{*}(X_{m})=\begin{cases}0,\ if\ m<n,\\ n,\ if\ m\geq n.\end{cases}\]
## 4. Homotopy decomposition
In this section we consider the homotopy decomposition (Postnikov tower) of a space \(X\), which was studied previously (cf. [5]). This can be thought of as a construction dual to homology decomposition. Here we prove some results relating homotopy and homology decompositions. These results help us to compute homology self-closeness numbers of Postnikov towers; some of them can be thought of as generalizations of results in [7]. We first recall the definition.
**4.1 Definition**.:
Let \(X\) be a simply connected CW-complex. A homotopy decomposition (Postnikov decomposition) of \(X\) consists of a sequence of simply connected CW-complexes \(\{X^{(n)}\}\) and maps \(g_{n}\colon X\to X^{(n)}\) such that
1. Each inclusion map \(g_{n}\colon X\to X^{(n)}\) induces isomorphisms \(g_{n\#}\colon\pi_{k}(X)\to\pi_{k}(X^{(n)})\) for all \(k\leq n\) and \(\pi_{k}(X^{(n)})=0\) for \(k>n\).
2. There exist maps \(p_{n+1}\colon X^{(n+1)}\to X^{(n)}\) such that \[K\big{(}\pi_{n+1}(X),n+1\big{)}\to X^{(n+1)}\xrightarrow{p_{n+1}}X^{(n)}\] is a fiber sequence.
3. The structure maps are compatible, i.e. \(p_{n+1}\circ g_{n+1}\simeq g_{n}\), so that the following diagram commutes:

\[\begin{array}{ccc}&&X^{(n+1)}\\ &\nearrow^{g_{n+1}}&\downarrow p_{n+1}\\ X&\xrightarrow{g_{n}}&X^{(n)}\end{array}\]
4. The fibration sequence in (ii) is equivalent to the principal fibration \[K\big{(}\pi_{n+1}(X),n+1\big{)}\to X^{(n+1)}\to X^{(n)}\xrightarrow{k^{n+1}}K\big{(}\pi_{n+1}(X),n+2\big{)},\] determined by a map \(k^{n+1}\), where \([k^{n+1}]\in H^{n+2}\big{(}X^{(n)};\pi_{n+1}(X)\big{)}\).
We write this as \(\{X^{(n)},g_{n},p_{n},k^{n}\}\) and call it a homotopy decomposition (Postnikov decomposition) of \(X\). The maps \(k^{n+1}\colon X^{(n)}\to K\big{(}\pi_{n+1}(X),n+2\big{)}\) are called \(k\)-invariants. The space \(X^{(n)}\) is called the \(n\)-th homotopy section and is obtained as the homotopy fiber of \(k^{n}\).
_4.2 Remark_.: From the definition we have \(N\mathcal{A}_{\#}(X^{(n)})\leq n\).
The following Lemma gives an equality between the homology and homotopy self-closeness numbers of homotopy sections.
**4.3 Lemma**.: _Let \(X\) be a simply connected CW-complex such that \(H_{k}(X)\) is a finitely generated abelian group for each \(k\). Then \(N\mathcal{A}_{*}(X^{(n)})=N\mathcal{A}_{\#}(X^{(n)})\)._
Proof.: Let \(j_{n}\colon X_{n}\to X\) and \(g_{n}\colon X\to X^{(n)}\) be the structure maps of the homology decomposition and the homotopy decomposition of \(X\), respectively. Observe that \(g_{n}\colon X\to X^{(n)}\) is a homotopical \((n+1)\)-equivalence. From Lemma 2.1, \(g_{n}\) is a homological \((n+1)\)-equivalence. Therefore \((g_{n}\circ j_{n})_{*}\colon H_{k}(X_{n})\xrightarrow{\cong}H_{k}(X^{(n)})\) for all \(k\leq n\). Moreover \(\pi_{k}(X^{(n)})\) is a finitely generated abelian group for all \(k\), as \(\pi_{k}(X)\) is a finitely generated abelian group for all \(k\). From [14, Theorem 41(2)] we have \(N\mathcal{A}_{*}(X^{(n)})\leq N\mathcal{A}_{\#}(X^{(n)})\leq n\).
Further, since \(H_{k}(X)\) is a finitely generated abelian group for all \(k\), so is \(H_{k}(X_{n})\) for all \(k\). Hence \(H_{k}(X^{(n)})\) is a finitely generated abelian group for all \(k\leq n\). From [14, Theorem 41(1)] we have \(N\mathcal{A}_{\#}(X^{(n)})\leq N\mathcal{A}_{*}(X^{(n)})\). Consequently
\[N\mathcal{A}_{\#}(X^{(n)})=N\mathcal{A}_{*}(X^{(n)}).\]
Next we deduce relations between the homology self-closeness numbers of two consecutive homotopy sections in the Postnikov decomposition.
**4.4 Theorem**.: _Let \(X\) be a simply connected CW-complex such that \(H_{k}(X)\) is finitely generated for all \(k\). For a homotopy decomposition \(\{X^{(n)},g_{n},p_{n},k^{n}\}\) of \(X\) we have the following:_

1. _If the induced map_ \[k^{m+1\#}\colon\big{[}K\big{(}\pi_{m+1}(X),m+2\big{)},K\big{(}\pi_{m+1}(X),m+2\big{)}\big{]}\to\big{[}X^{(m)},K\big{(}\pi_{m+1}(X),m+2\big{)}\big{]}\] _is surjective for some_ \(m\in\mathbb{N}\)_, then_ \(N\mathcal{A}_{*}(X^{(m)})\leq N\mathcal{A}_{*}(X^{(m+1)})\)_._
2. _If the induced map_ \[k^{m+1\#}\colon\big{[}K\big{(}\pi_{m+1}(X),m+2\big{)},K\big{(}\pi_{m+1}(X),m+2\big{)}\big{]}\to\big{[}X^{(m)},K\big{(}\pi_{m+1}(X),m+2\big{)}\big{]}\] _is injective such that_ \[k^{m+1}_{\#}(\operatorname{Aut}(X^{(m)}))\subset k^{m+1\#}\big{(}\big{[}K\big{(}\pi_{m+1}(X),m+2\big{)},K\big{(}\pi_{m+1}(X),m+2\big{)}\big{]}\big{)}\] _then_ \(N\mathcal{A}_{*}(X^{(m+1)})\leq N\mathcal{A}_{*}(X^{(m)})\)_._
3. _If the induced map_ \(k^{m+1\#}\) _is bijective then_ \(N\mathcal{A}_{*}(X^{(m)})=N\mathcal{A}_{*}(X^{(m+1)})\)_._

Proof.: Note that \(N\mathcal{A}_{*}(X^{(n)})=N\mathcal{A}_{\#}(X^{(n)})\) for the homotopy sections \(\{X^{(n)}\}\) by Lemma 4.3. The rest of the proof follows directly from [13, Theorem 3.7 & Theorem 3.9].
Combining Remark 4.2 and Lemma 4.3, we have \(N\mathcal{A}_{*}(X^{(k)})\leq k\).
**4.5 Proposition**.:
_Let \(X\) be a simply connected CW-complex such that \(H_{k}(X)\) is finitely generated for all \(k\). Then either \(N\mathcal{A}_{*}(X^{(n+1)})=n+1\) or \(N\mathcal{A}_{*}(X^{(n+1)})\leq N\mathcal{A}_{*}(X^{(n)})\)._
Proof.: Observe that the map \(p_{n+1}^{\#}\colon[X^{(n)},X^{(n)}]\to[X^{(n+1)},X^{(n)}]\) is surjective, see [4, Proposition 8.2.2]. As in the proof of Proposition 3.4 we have either \(N\mathcal{A}_{\#}(X^{(n+1)})=n+1\) or \(N\mathcal{A}_{\#}(X^{(n+1)})\leq N\mathcal{A}_{\#}(X^{(n)})\). Moreover using Lemma 4.3 we get the desired result.
The following Theorem is a homological version of [7, Theorem 3.5].
**4.6 Theorem**.: _Let \(X\) be a simply connected CW-complex of dimension \(m\) such that \(H_{k}(X)\) is finitely generated for all \(k\). Then_
1. \(N\mathcal{A}_{*}(X^{(n)})=N\mathcal{A}_{*}(X)\)_, for each_ \(n\geq m\)_._
2. \(N\mathcal{A}_{*}(X)\leq N\mathcal{A}_{*}(X^{(n)})\leq n\)_, if_ \(N\mathcal{A}_{*}(X)\leq n<m\)_._
3. \(N\mathcal{A}_{*}(X^{(n)})<N\mathcal{A}_{*}(X)\)_, if_ \(n<N\mathcal{A}_{*}(X)\)_._
Proof.: From [14, Corollary 42] we have \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{\#}(X)\). Therefore using Lemma 4.3 and [7, Theorem 3.5] we get the desired result.
**4.7 Corollary**.: _Let \(X\) be a simply connected CW-complex such that \(H_{k}(X)\) is finitely generated for all \(k\). Assume that any one of the following conditions holds:_
1. \(\pi_{*}\text{-}\dim(X)<n<\infty\) _and_ \(H_{k}(X)\) _is free for each_ \(k\)_._
2. \(\dim(X)\leq n<\infty\)_._
_Then, \(N\mathcal{A}_{*}(X^{(n)})=N\mathcal{A}_{*}(X_{n})\)._
Proof.: (a) Observe that the structure map of the Postnikov tower \(g_{n}\colon X\to X^{(n)}\) is a homotopy equivalence if \(n\geq\pi_{*}\text{-}\dim(X)\). Thus for \(n\geq\pi_{*}\text{-}\dim(X)\) we have \(N\mathcal{A}_{*}(X)=N\mathcal{A}_{*}(X^{(n)})\). Therefore from Theorem 3.7 we have \(N\mathcal{A}_{*}(X^{(n)})=N\mathcal{A}_{*}(X_{n})\) for \(n\geq\pi_{*}\text{-}\dim(X)+1\).
(b) Note that for \(n\geq\dim(X)\) we have \(X_{n}=X\). Hence using Theorem 4.6 we get the desired result.
The following Lemma gives an explicit relation between the homology self-closeness numbers of the homotopy and homology sections.
**4.8 Lemma**.: _Let \(X\) be a simply connected CW-complex such that \(H_{k}(X)\) is finitely generated for all \(k\). If \(H_{m}(X)\) is free for some \(m\in\mathbb{N},\) then_
\[N\mathcal{A}_{*}(X^{(m)})\leq N\mathcal{A}_{*}(X_{m}).\]
Proof.: As in the proof of Lemma 4.3 we have that \(g_{m}\circ j_{m}\colon X_{m}\to X^{(m)}\) is a homotopical \(m\)-equivalence. Thus the map \((g_{m}\circ j_{m})_{\#}\colon[X_{m},X_{m}]\to[X_{m},X^{(m)}]\) is surjective. Let \(N\mathcal{A}_{*}(X_{m})=l\leq m\) and \(f\in\mathcal{A}_{*}^{l}(X^{(m)})\). Then there exists \(h\colon X_{m}\to X_{m}\) making the following diagram homotopy commutative.
\[\begin{array}{ccc}X_{m}&\xrightarrow{\ h\ }&X_{m}\\ \big\downarrow{\scriptstyle g_{m}\circ j_{m}}&&\big\downarrow{\scriptstyle g_{m}\circ j_{m}}\\ X^{(m)}&\xrightarrow{\ f\ }&X^{(m)}\end{array}\tag{7}\]
Therefore \(h\in\mathcal{A}_{*}^{l}(X_{m})=\operatorname{Aut}(X_{m})\). Since both vertical maps in diagram (7) induce isomorphisms on \(H_{k}\) for \(k\leq m\), it follows that \(f\) induces isomorphisms on \(H_{k}\) for all \(k\leq m\). So \(f\in\mathcal{A}_{*}^{m}(X^{(m)})=\operatorname{Aut}(X^{(m)})\), since \(N\mathcal{A}_{*}(X^{(m)})=N\mathcal{A}_{\#}(X^{(m)})\leq m\). Hence \(N\mathcal{A}_{*}(X^{(m)})\leq l=N\mathcal{A}_{*}(X_{m})\).
**4.9 Example**.: For \(n\in\mathbb{N}\cup\{\infty\}\), consider \(X=\mathbb{C}P^{n}\). From Lemma 4.8 we have
\[N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})\ \text{ for all }k.\]
For \(k\geq 2\) we know that \(X^{(k)}\) is simply connected, thus \(N\mathcal{A}_{*}(X^{(k)})\geq 2\). Therefore,
\[2\leq N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})=2\ \text{ for }k\geq 2.\]
Hence
\[N\mathcal{A}_{*}(X^{(k)})=\begin{cases}0,&\text{if }k<2,\\ 2,&\text{if }k\geq 2.\end{cases}\]
**4.10 Example**.: Let \(G\) and \(H\) be finitely generated abelian groups. Consider \(X=M(G,n)\bigvee M(H,m)\) where \(m>n>1\). Observe that,
\[X^{(k)}=\begin{cases}*,&\text{if }k<n,\\ K(G,n),&\text{if }k=n.\end{cases}\]
Moreover \(H_{k}(X)=0\) for all \(n<k<m\). Therefore,
\[X_{k}=\begin{cases}*,&\text{if }k<n,\\ M(G,n),&\text{if }n\leq k<m,\\ X,&\text{if }k=m.\end{cases}\]
Note that, for \(k\geq n\) we have \(n\leq N\mathcal{A}_{*}(X^{(k)})\) since \(X^{(k)}\) is \((n-1)\)-connected. From Lemma 4.8 we have \(N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})\) for \(n<k<m\). Thus
\[n\leq N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})=N\mathcal{A}_{*}( M(G,n))=n,\text{ for }n<k<m.\]
From Theorem 4.6(a) we have \(N\mathcal{A}_{*}(X^{(k)})=N\mathcal{A}_{*}(X)\) for all \(k\geq m+1\). Note that \(\mathcal{A}_{*}^{n}(X)=\cdots=\mathcal{A}_{*}^{m-1}(X)\) and \(\operatorname{Aut}(X)=\mathcal{A}_{*}^{m}(X)\subsetneq\mathcal{A}_{*}^{m-1}(X)\), see Example 3.5. Therefore,
\[N\mathcal{A}_{*}(X^{(k)})=N\mathcal{A}_{*}(X)=m,\text{ for }k\geq m+1.\]
Moreover \(N\mathcal{A}_{*}(X^{(m)})=m\) using Theorem 4.6(b). Consequently,
\[N\mathcal{A}_{*}(X^{(k)})=\begin{cases}0,&\text{if }k<n,\\ n,&\text{if }n\leq k<m,\\ m,&\text{if }k\geq m.\end{cases}\]
In general we have the following result.
**4.11 Proposition**.: _Let \(G_{1},\cdots,G_{m}\) be finitely generated abelian groups. Consider \(X=\bigvee_{i=1}^{m}M(G_{i},n_{i})\), where \(n_{i+1}>n_{i}>1\). Then we have,_
\[N\mathcal{A}_{*}(X^{(k)})=\begin{cases}0,&\text{if }k<n_{1},\\ n_{i},&\text{if }n_{i}\leq k<n_{i+1},\\ n_{m},&\text{if }k\geq n_{m}.\end{cases}\]
Proof.: Note that \(\mathcal{A}_{*}^{n_{i+1}}(X)\subsetneq\mathcal{A}_{*}^{n_{i}}(X)\) for \(i=1,\ldots,m-1\), see Example 3.5. We have already proved the result for the case \(m=2\). It is sufficient to prove it for the case \(m=3\); the general case then follows inductively. Let \(X=\bigvee_{i=1}^{3}M(G_{i},n_{i})\). Observe that \(X^{(k)}=*\) for \(k<n_{1}\) and \(X^{(n_{1})}=K(G_{1},n_{1})\). Moreover \(X^{(k)}\) is \((n_{1}-1)\)-connected for \(k\geq n_{1}\). Thus \(n_{1}\leq N\mathcal{A}_{*}(X^{(k)})\) for \(k\geq n_{1}\). From Lemma 4.8
\[n_{1}\leq N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})=N\mathcal{A}_{ *}(X_{n_{1}})=n_{1},\ \text{for}\ n_{1}<k<n_{2}.\]
Moreover using Proposition 4.5 we have,
\[N\mathcal{A}_{*}(X^{(n_{2})})=n_{2}\ \text{or}\ N\mathcal{A}_{*}(X^{(n_{2})}) \leq N\mathcal{A}_{*}(X^{(n_{2}-1)})=n_{1}.\]
If possible, assume that \(\mathcal{A}_{*}^{n_{1}}(X^{(n_{2})})=\operatorname{Aut}(X^{(n_{2})})\). Let \(f\in\mathcal{A}_{*}^{n_{1}}(X)\) but \(f\notin\mathcal{A}_{*}^{n_{2}}(X)\). By the functorial construction of the Postnikov tower, there exists \(f^{(n_{2})}\) which makes the following diagram homotopy commutative (cf. [4, Proposition 7.2.11]).
\[\begin{array}{ccc}X&\xrightarrow{\ f\ }&X\\ \big\downarrow{\scriptstyle g_{n_{2}}}&&\big\downarrow{\scriptstyle g_{n_{2}}}\\ X^{(n_{2})}&\xrightarrow{\ f^{(n_{2})}\ }&X^{(n_{2})}\end{array}\tag{8}\]
Thus \(f^{(n_{2})}\in\mathcal{A}_{*}^{n_{1}}(X^{(n_{2})})=\operatorname{Aut}(X^{(n_{2})})\), which forces \(f\in\mathcal{A}_{*}^{n_{2}}(X)\) and contradicts the fact that \(f\notin\mathcal{A}_{*}^{n_{2}}(X)\). Hence \(N\mathcal{A}_{*}(X^{(n_{2})})=n_{2}\). Further, using Lemma 4.8 we have,
\[N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})=N\mathcal{A}_{*}(X_{n_ {2}})=n_{2},\ \text{for}\ n_{2}<k<n_{3}.\]
If possible assume that \(N\mathcal{A}_{*}(X^{(k)})<n_{2}\) for some \(k\in(n_{2},n_{3})\). Then
\[\mathcal{A}_{*}^{n_{1}}(X^{(k)})=\cdots=\mathcal{A}_{*}^{n_{2}-1}(X^{(k)})=\operatorname{Aut}(X^{(k)})\quad\text{(since }H_{j}(X^{(k)})\cong H_{j}(X)\text{ for all }j\leq k\text{)}.\]
Let \(h\in\mathcal{A}_{*}^{n_{1}}(X)\) but \(h\notin\mathcal{A}_{*}^{n_{2}}(X)\). Then, by a similar argument as above, we arrive at a contradiction. Hence \(N\mathcal{A}_{*}(X^{(k)})=n_{2}\) for \(n_{2}<k<n_{3}\).
From Theorem 4.6(a) we have \(N\mathcal{A}_{*}(X^{(k)})=N\mathcal{A}_{*}(X)=n_{3}\) for all \(k\geq n_{3}+1\). Further using Theorem 4.6(b) we get \(N\mathcal{A}_{*}(X^{(n_{3})})=n_{3}\). Consequently
\[N\mathcal{A}_{*}(X^{(k)})=\begin{cases}0,&\text{if }k<n_{1},\\ n_{1},&\text{if }n_{1}\leq k<n_{2},\\ n_{2},&\text{if }n_{2}\leq k<n_{3},\\ n_{3},&\text{if }k\geq n_{3}.\end{cases}\]
**4.12 Example**.: Consider \(X=S^{m}\times S^{n}\), where \(m>n>1\). Observe that
\[X^{(k)}=\begin{cases}*,&\text{if }k<n,\\ K(\mathbb{Z},n),&\text{if }k=n.\end{cases}\]
Therefore \(N\mathcal{A}_{*}(X^{(k)})=0\) for all \(k<n\). Note that \(X^{(k)}\) is \((n-1)\)-connected for \(k\geq n\). So \(n\leq N\mathcal{A}_{*}(X^{(k)})\) for all \(k\geq n\). From Lemma 4.8 we have
\[N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})=N\mathcal{A}_{*}(X_{n}) =N\mathcal{A}_{*}(S^{n})=n\ \text{for all}\ n\leq k<m.\]
Moreover using Theorem 4.6 we have \(N\mathcal{A}_{*}(X^{(k)})=N\mathcal{A}_{*}(X)\) for all \(k\geq m+n\). Therefore,
\[N\mathcal{A}_{*}(X^{(k)})=N\mathcal{A}_{*}(S^{m}\times S^{n})=N\mathcal{A}_{ \#}(S^{m}\times S^{n})=m\ \text{for all}\ k\geq m+n,\]
(see [12, Proposition 5]).
Further, if \(m\leq k<m+n\), then \(m\leq N\mathcal{A}_{*}(X^{(k)})\) by Theorem 4.6, and using Lemma 4.8 we have,
\[N\mathcal{A}_{*}(X^{(k)})\leq N\mathcal{A}_{*}(X_{k})=N\mathcal{A}_{*}(X_{m}) =N\mathcal{A}_{*}(S^{m}\lor S^{n})=m.\]
So \(N\mathcal{A}_{*}(X^{(k)})=m\) for all \(m\leq k<m+n\). Consequently,
\[N\mathcal{A}_{*}(X^{(k)})=\begin{cases}0,&\text{if }k<n,\\ n,&\text{if }n\leq k<m,\\ m,&\text{if }k\geq m.\end{cases}\]
**4.13 Example**.: For \(l>0\) consider \(X=\Sigma^{l}(S^{m}\times S^{n})\), where \(m>n>1\). Note that
\[\Sigma(S^{m}\times S^{n})\simeq S^{n+1}\bigvee S^{m+1}\bigvee S^{m+n+1}\quad\text{(see [9, Proposition 4I.1])}.\]
Therefore \(X=S^{n+l}\bigvee S^{m+l}\bigvee S^{m+n+l}\). From Proposition 4.11 we have
\[N\mathcal{A}_{*}(X^{(k)})=\begin{cases}0,&\text{if }k<n+l,\\ n+l,&\text{if }n+l\leq k<m+l,\\ m+l,&\text{if }m+l\leq k<m+n+l,\\ m+n+l,&\text{if }k\geq m+n+l.\end{cases}\] |
2304.02909 | In-Grain Ferroelectric Switching in Sub-5 nm Thin AlScN Films at 1 V | Analog switching in ferroelectric devices promises neuromorphic computing
with highest energy efficiency, if limited device scalability can be overcome.
To contribute to a solution, we report on the ferroelectric switching
characteristics of sub-5 nm thin Al$_{0.74}$Sc$_{0.26}$N films grown on
Pt/Ti/SiO2/Si and epitaxial Pt/GaN/sapphire templates by sputter-deposition. In
this context, we focus on the following major achievements compared to
previously available wurtzite-type ferroelectrics: 1) Record low switching
voltages down to 1 V are achieved, which is in a range that can be supplied by
standard on-chip voltage sources. 2) Compared to the previously investigated
deposition of thinnest Al$_{1-x}$Sc$_x$N films on epitaxial templates, a
significantly larger coercive field to breakdown field ratio is observed for
Al$_{0.74}$Sc$_{0.26}$N films grown on silicon substrates, the technologically
most relevant substrate-type. 3) The formation of true ferroelectric domains in
wurtzite-type materials is for the first time demonstrated on the atomic scale
by scanning transmission electron microscopy investigations of a sub-5 nm thin
partially switched film. The direct observation of inversion domain boundaries
within single nm-sized grains supports the theory of a gradual domain-wall
motion limited switching process in wurtzite-type ferroelectrics. Ultimately,
this should enable the analog switching necessary for mimicking neuromorphic
concepts also in highly scaled devices. | Georg Schönweger, Niklas Wolff, Md Redwanul Islam, Maike Gremmel, Adrian Petraru, Lorenz Kienle, Hermann Kohlstedt, Simon Fichtner | 2023-04-06T07:45:08Z | http://arxiv.org/abs/2304.02909v1 | # In-Grain Ferroelectric Switching in Sub-5 nm Thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N Films at 1 V
###### Abstract
Analog switching in ferroelectric devices promises neuromorphic computing with highest energy efficiency, if limited device scalability can be overcome. To contribute to a solution, we report on the ferroelectric switching characteristics of sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N films grown on Pt/Ti/SiO\({}_{2}\)/Si and epitaxial Pt/GaN/sapphire templates by sputter-deposition. In this context, we focus on the following major achievements compared to previously available wurtzite-type ferroelectrics: 1) Record low switching voltages down to 1 V are achieved, which is in a range that can be supplied by standard on-chip voltage sources. 2) Compared to the previously investigated deposition of thinnest Al\({}_{1\textrm{-}x}\)Sc\({}_{x}\)N films on epitaxial templates, a significantly larger coercive field to breakdown field ratio is observed for Al\({}_{0.74}\)Sc\({}_{0.26}\)N films grown on silicon substrates, the technologically most relevant substrate-type. 3) The formation of true ferroelectric domains in wurtzite-type materials is for the first time demonstrated on the atomic scale by scanning transmission electron microscopy investigations of a sub-5 nm thin partially switched film. The direct observation of inversion domain boundaries within single nm-sized grains supports the theory of a gradual domain-wall motion limited switching process in wurtzite-type ferroelectrics. Ultimately, this should enable the analog switching necessary for mimicking neuromorphic concepts also in highly scaled devices.
**Keywords:** ferroelectric; neuromorphic computing; thin film; Scandium; domains
## 1 Introduction
In recent years, ferroelectrics have become one of the main foci of advancing semiconductor technology towards higher performance and energy efficiency.[1, 2, 3] This applies especially to neuromorphic and in-memory computing, where the field-driven ferroelectric effect promises analog operation with the lowest input power. However, the entrance of ferroelectric functionality into the active areas of commercial devices other than binary ferroelectric random-access memories (FRAMs) is yet to take place. One of the major challenges in this context is an excess of device-to-device variability of key parameters like the threshold voltage in small devices as well as the loss of their capability to operate in an analog fashion. This variability becomes pronounced when the ferroelectrically active area of a device approaches the size of the grains inside the ferroelectric films and the domains therein. This grain size typically is in the range of tens of nanometer for the fluorite-type ferroelectrics, which have
been at the focus of scientific attention in recent years.[4] The factors contributing to device variability are a lack of crystalline texture, stress inhomogeneities and less than complete phase purity, which lead to different material parameters between different grains.[4; 5] The possibility of analog operation in turn becomes compromised due to the nucleation limited switching of the fluorite-type films. This implies that ferroelectric domains quickly reach their final shape while nucleating, leading to a digital behavior in devices where only a small number of ferroelectric domains remain inside the active area.[6]
Since their discovery in 2019, the new wurtzite-type ferroelectrics have raised expectations of a possible solution to the aforementioned issues.[7] Wurtzite-type ferroelectric films can typically be grown phase pure, well textured and the narrow distribution of their displacement current response upon ferroelectric switching promises a narrow distribution of the local ferroelectric properties, all of which should contribute to improved device repeatability. At the same time, wurtzite-type films can easily be deposited at complementary metal oxide semiconductor (CMOS) back-end-of-line (BEOL) compatible conditions, feature extreme temperature stability themselves and thicker films are already in large-volume industrial production.[8; 9] Further, for film thicknesses above 100 nm, the switching kinetics of the material can be modeled to be domain wall motion limited,[10] which implies a gradual or analog switching on the atomic level. Despite these conceptual advantages, major challenges remain to be solved until the material class is able to fully meet the demands of advanced microelectronic devices. In this context, it is highly necessary to further reduce the ferroelectric switching voltage of wurtzite-type thin films to meet the capabilities of typical on-chip voltage supplies (in the range of 1 V), while retaining the aforementioned advantages like phase purity and domain wall motion limited switching.
In this study we demonstrate for the first time that wurtzite-type sub-5 nm thin (8 to 9 unit cells corresponding to 4 - 4.5 nm) Al\({}_{0.74}\)Sc\({}_{0.26}\)N films sputter-deposited on silicon (Si) substrates retain ferroelectric functionality with switching voltages as low as 1 V and feature in-grain, nm-sized domains upon partial switching. We were thus able to reduce the switching voltage and film thickness of films on Si by around 50% compared to literature (5 nm thin films grown by non-BEOL compatible molecular beam epitaxy (MBE) and \(\approx\) 10 nm thin films grown by sputtering on Si).[11; 12; 13] Further, by performing atomic resolution scanning transmission electron microscopy (STEM) on epitaxial sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N films, we obtained the first images of domain walls in any wurtzite-type ferroelectric to confirm the presence of nm-sized domains within individual grains.
Our investigation starts with a structural as well as an electrical comparison of 10 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N grown epitaxially on Pt/GaN/sapphire and grown non-epitaxially on Pt/Ti/SiO\({}_{2}\)/Si to demonstrate the improved ferroelectric properties of the latter. Further downscaling to the sub-5 nm range of ferroelectric Al\({}_{0.74}\)Sc\({}_{0.26}\)N films grown on Si is investigated. The scaling of the coercive voltage, including a decrease of \(E_{c}\) below 10 nm film thickness, ultimately allowed us to achieve switching voltages as low as 1 V and is discussed in detail. Epitaxial growth was investigated as well, as it allows resolving the ferroelectric polarization reversal on the atomic level in partially switched sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N layers via STEM. The identification of regions with opposite polarity inside a single grain and the necessary occurrence of a domain boundary in between gives first insights into the size, shape, location and evolution of ferroelectric domains in Al\({}_{0.74}\)Sc\({}_{0.26}\)N and potentially in the whole class of wurtzite-type ferroelectrics.
## 2 Results & Discussions
### Effect of non-epitaxial growth on Si vs. epitaxial growth on GaN on the ferroelectric response of Al\({}_{0.74}\)Sc\({}_{0.26}\)N
For the direct integration of ferroelectric wurtzite-type films into CMOS technology, the possibility to deposit them on Si substrates without epitaxial templating is crucial. While one might assume that epitaxial growth and thus higher interface and film quality will automatically result in improved electrical properties, this section shows that the opposite can be the case for Al\({}_{1\text{-x}}\)Sc\({}_{\text{x}}\)N.
This can be concluded from the electrical response as well as from the interface quality investigations of 10 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N films. In Figure 1 a, the cross-sections of 10 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N films grown epitaxially on Pt/GaN/sapphire as well as grown non-epitaxially on Pt/Ti/SiO\({}_{2}\)/Si are compared. All films were capped _in situ_ to prevent oxidation of the Al\({}_{0.74}\)Sc\({}_{0.26}\)N surface, which is crucial to obtain an undisturbed ferroelectric response especially of \(<\) 10 nm thin films, where the thickness of the native oxide can be in the range of the total film thickness.[14; 15; 16]
For the epitaxially grown film stacks, the interfaces are smooth with an overall low surface roughness of the respective layers, which is known to result in a reduction of the leakage currents in capacitors.[17] Structurally, the epitaxial films with a 10 nm thin Pt bottom electrode layer also have a superior crystalline quality compared to non-epitaxial ones, i.e., higher c-axis texture, which we investigate in detail in a separate work.[18] Nonetheless, the films deposited on Si substrates exhibit more pronounced
ferroelectric switching peaks (Figure 1 b). Thus, despite their higher interface roughness and poorer interface texture compared to the epitaxial ones, a complete polarization inversion is demonstrated for the non-epitaxial 10 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N grown on Pt/Ti/SiO\({}_{2}\)/Si, yet not for the epitaxial films. This is apparent from the drop in current density after ferroelectric switching at the coercive field (\(E_{c}\)) with maximum \(J\), followed by a local minimum before the contribution from leakage currents leads to a further increase in \(J\). In comparison, no local minimum is observed for the epitaxial film. Although the leakage currents for the 10 nm thin epitaxial Al\({}_{0.74}\)Sc\({}_{0.26}\)N grown on Pt/GaN/sapphire are lower (at a fixed voltage) compared to films grown on Si, \(E_{c}\) is also higher and approaches the electrical breakdown field. Thus, as demonstrated in our recent work, we were able to fully switch the polarization of 10 nm thin epitaxial films via capacitance vs. electric field \(C-E\) measurements, but not via \(J-E\) loops.[14] We attribute the improved \(E_{c}\) of the films grown on Si compared to the ones grown on sapphire to differences in the respective Al\({}_{0.74}\)Sc\({}_{0.26}\)N film stress, which is well known to result in a shift of \(E_{c}\).[7] Although both heterostructures were grown under exactly the same Al\({}_{0.74}\)Sc\({}_{0.26}\)N deposition conditions (same run), the thermal expansion coefficients of the silicon substrate (non-epitaxial growth, \(\alpha_{sub}=2.6\times 10^{-6}\)/K) and the sapphire substrate (epitaxial growth, \(\alpha_{sub}=7.3\times 10^{-6}\)/K) differ, leading to strong differences in the thermally induced film stress after cooling down from the Al\({}_{0.74}\)Sc\({}_{0.26}\)N (\(\alpha_{film}=4.9\times 10^{-6}\)/K) deposition temperature of 450 \({}^{\circ}\)C.[19, 20, 21] Thus, in addition to the film stress induced by interface strain, grain boundaries and defects, tensile stress is thermally induced in Al\({}_{1\text{-x}}\)Sc\({}_{\text{x}}\)N if grown on a silicon substrate, while compressive stress is thermally induced if grown on a sapphire substrate.
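To make the sign of this thermal-stress argument explicit, here is a minimal numerical sketch (a Python illustration, assuming the linear thermal-mismatch approximation \(\epsilon_{th}\approx(\alpha_{film}-\alpha_{sub})\,\Delta T\), cooldown from the 450 \({}^{\circ}\)C deposition temperature to room temperature, and the expansion coefficients quoted above):

```python
# Thermal mismatch strain after cooldown, linear approximation:
#   eps_th ~ (alpha_film - alpha_sub) * (T_dep - T_room)
# A positive eps_th means the film ends up under in-plane tensile stress.
ALPHA_FILM = 4.9e-6      # Al0.74Sc0.26N, 1/K (value quoted in the text)
SUBSTRATES = {"Si": 2.6e-6, "sapphire": 7.3e-6}  # 1/K
DT = 450.0 - 25.0        # K, deposition temperature minus room temperature

for name, alpha_sub in SUBSTRATES.items():
    eps = (ALPHA_FILM - alpha_sub) * DT
    kind = "tensile" if eps > 0 else "compressive"
    print(f"{name}: eps_th = {eps:+.1e} ({kind})")
# -> Si: +9.8e-04 (tensile); sapphire: -1.0e-03 (compressive),
#    consistent with the sign argument made above.
```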
The ability to tune the coercive field of Al\({}_{1\text{-x}}\)Sc\({}_{\text{x}}\)N by exploiting the differing thermal expansion of various substrates has also been reported recently by Yasuoka et al.[22] In consequence, thermally induced tensile stress extends the in-plane lattice, resulting in a reduction of \(E_{c}\), similar to an increase in Sc concentration. To conclude, we attribute the more pronounced ferroelectric displacement current peak of the non-epitaxial 10 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N to a more favorable position of \(E_{c}\) compared to the onset of leakage (compare the local minima in the current response) and with respect to the breakdown strength.
### Ferroelectric properties of sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N films grown on Si
Next, we present and discuss the electric characterization results of sputter-deposited 8 - 9 unit cells (4 - 4.5 nm) thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N films grown on Si wafers. Details on the exact thickness determination by using STEM can be found in section 2.4.
In Figure 2 a, \(J-E\) loops of sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N grown on Pt/Ti/SiO\({}_{2}\)/Si are illustrated. In direct measurements (black curve), the clear hysteresis is already indicative of ferroelectric switching. The leakage current flow through the dielectric
Figure 1: a) TEM cross-section of 10 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N grown (top) non-epitaxially within a Pt/Al\({}_{0.74}\)Sc\({}_{0.26}\)N/Pt/Ti/SiO\({}_{2}\)/Si and (bottom) epitaxially within a Pt/Al\({}_{0.74}\)Sc\({}_{0.26}\)N/Pt/GaN/sapphire capacitor stack. Only the capacitor structures are depicted. b) The \(J-E\) loops of the capacitors depicted in a), measured at 100 kHz.
as well as the displacement current contributions due to the relative permittivity (\(\epsilon_{r}\)) can be separated from the hysteretic (i.e., ferroelectric) displacement currents by recording non-switching loops (i.e., by pre-poling the respective measured positive and negative branch). After subtraction of the non-switching (red curve) from the switching currents (black curve), the typical shape of the ferroelectric displacement current peaks is obtained (blue curve), which allows the extraction of \(E_{c}\).
The \(C-V\) loop of the sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N based capacitor depicted in Figure 2 b further confirms the ferroelectric nature of the hysteretic event. A distinct butterfly-shaped loop, typical for ferroelectric switching, is visible. Clearly distinguishable, non-hysteretic non-switching loops are depicted as well for the \(C-V\) loop. Furthermore, the polarization inversion of a sub-5 nm thin film is unambiguously demonstrated by atomically resolved STEM investigations discussed in section 2.4. Thus, it is demonstrated that such thin ferroelectric wurtzite-type films can be grown by sputter deposition on oxidized silicon in a manner compatible with CMOS technology, which is a clear advantage over high-temperature (\(\geq 500\)\({}^{\circ}\)C) MBE deposition processes on single crystal templates [11, 15].
Furthermore, the ferroelectric switching of Al\({}_{0.74}\)Sc\({}_{0.26}\)N films grown on silicon with, for wurtzite-type materials, record low voltages down to 1 V is a major milestone towards ferroelectric Al\({}_{1\text{-x}}\)Sc\({}_{\text{x}}\)N based future devices operable with the on-chip voltage supply of integrated circuits [23, 24].
### Coercive field scaling in ultrathin Al\({}_{0.74}\)Sc\({}_{0.26}\)N
The low switching voltage down to 1 V reached in our films is not only due to a simple reduction in thickness, but also due to the favorable scaling of \(E_{c}\) with thickness, which we will therefore discuss in more detail in this section. In particular, the appearance of a depolarization field and its effects on the electrical response of films below 10 nm film thickness are discussed, as is the relative dielectric permittivity - which itself is related to the coercive field through the shape of the ionic potential wells [3].
In Figure 3, \(J-E\) loops of 100 nm- down to sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N based capacitors are depicted. From 100 nm down to 10 nm the coercive field is increasing with decreasing film thickness, but interestingly, below 10 nm the coercive field is significantly decreasing again, as indicated by the red arrows in Figure 3. A comparable trend with thickness scaling down to sub-5 nm is also observed for epitaxial films grown on Pt/GaN/sapphire (see Supplement, Figure 8), hence it is concluded
Figure 2: a) \(J-E\) loops of sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N grown on Pt/Ti/SiO\({}_{2}\)/Si measured at 100 kHz on 5 μm diameter pads. A leakage current compensated curve (blue) by subtracting the non-switching- (red) from the switching currents (black) is included. b) \(C-V\) loop of the sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N based capacitor described in a) measured on a 10 μm diameter pad. Unipolar non-switching cycles (red) by measuring each branch (positive and negative voltages) twice with the same polarity are included to stress the non-volatile nature of the permittivity enhancement due to ferroelectricity.
that the scaling properties are rather independent of the substrate (silicon vs. sapphire), crystalline quality and associated growth modes (non-epitaxial vs. epitaxial).
A slight increase in \(E_{c}\) with decreasing film thickness down to 10 nm is consistent with the scaling properties reported so far for Al\({}_{1\text{-}\text{x}}\)Sc\({}_{\text{x}}\)N.[14, 13, 12, 11] Yasuoka et al. attributed this behavior to a change in the lattice parameters for thinner films due to stress gradients arising from the lattice mismatch between Pt (2.78 Å) and Al\({}_{1\text{-}\text{x}}\)Sc\({}_{\text{x}}\)N (3.22 Å for \(x=0.2\)). Despite the high lattice mismatch, an epitaxial-like growth between Pt grains of the bottom-electrode layer and Al\({}_{1\text{-}\text{x}}\)Sc\({}_{\text{x}}\)N grains is suggested, eventually resulting in an increase in compressive strain in the basal plane when reducing the film thickness. In our films, the lattice parameters do not change significantly for thicknesses down to 10 nm. However, for sub-5 nm thickness the Al\({}_{0.74}\)Sc\({}_{0.26}\)N lattice parameters determined via STEM are \(a\approx 319\) pm and \(c\approx 505\) pm (details on the determination can be found in the Experimental section). This implies (relative to the equilibrium \(a\)-lattice parameter of \(\approx 324\) pm at a Sc concentration of \(x=0.26\)) an in-plane compressive strain of \(\approx 1.5\%\).[25, 26] Nevertheless, we measure \(E_{c}\) to decrease below 10 nm film thickness down to less than 2 MV/cm in \(C-E\) curves, as illustrated in Fig. 4 c. This decrease in \(E_{c}\) below 10 nm thickness differs from the very recently reported thickness scaling study down to 5 nm thin epitaxial films grown via MBE.[11] In this work, Wang et al. also attributed the increase in \(E_{c}\) to a stress gradient forming due to the smaller in-plane lattice parameter of the Mo bottom electrode compared to the one of Al\({}_{1\text{-}\text{x}}\)Sc\({}_{\text{x}}\)N.
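As a quick numerical cross-check of the quoted strain value, a one-line Python estimate (assuming the usual definition \(\epsilon=(a_{film}-a_{0})/a_{0}\) and the lattice parameters given above):

```python
# In-plane strain of the sub-5 nm film relative to the equilibrium
# a-lattice parameter at x = 0.26 (both values in pm, as quoted above).
a_film, a_eq = 319.0, 324.0
strain = (a_film - a_eq) / a_eq
print(f"in-plane strain = {strain:.2%}")  # -> -1.54%, i.e. ~1.5% compressive
```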
The reduction in the electric field necessary for switching in films thinner than 10 nm is especially pronounced when considering the onset of the hysteresis opening in the \(J-E\) loops, as visible in Figure 3. For sub-5 nm film thickness the hysteresis opens at 2.1 MV/cm, while for 100 nm film thickness, the opening starts at 4.3 MV/cm. This implies a more gradual switching capability below 10 nm film thickness.
A decrease in \(E_{c}\) in ultrathin ferroelectrics was reported by Dawber et al., who included depolarization field corrections into the Janovec-Kay-Dunn scaling.[27, 28, 29] The depolarization field (\(E_{d}\)) resulting from a finite screening length in the electrodes adds up to the applied electric field if the condition \(4\pi P_{s}\gg\epsilon_{e}\epsilon_{0}E\) is fulfilled. For Al\({}_{0.74}\)Sc\({}_{0.26}\)N, with a spontaneous polarization (\(P_{s}\)) of \(\approx 110\) μC/cm\({}^{2}\) and electric fields (\(E\)) up to 6 MV/cm, this condition is clearly satisfied (\(1507\gg 14\)). Thus, similar to what was experimentally observed in ferroelectric PVDF films, a thickness dependent depolarization field qualitatively fits very well to the drop of \(E_{c}\) below 10 nm film thickness in Al\({}_{1\text{-}\text{x}}\)Sc\({}_{\text{x}}\)N.[30] Very recently, it has also been demonstrated by first-principles calculations that reducing the thickness (\(d\)) of usually non-switchable wurtzite III-V semiconductors (e.g., AlSb) could result in polarization switching capability (i.e., ferroelectricity), due to the depolarization field, which scales as \(1/d\).[31]
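The order-of-magnitude comparison behind this condition can be reproduced numerically; a small Python sketch (with \(P_{s}=110\) μC/cm\({}^{2}\) and \(E=6\) MV/cm taken from the text, all quantities expressed in μC/cm\({}^{2}\); the effective permittivity \(\epsilon_{e}\approx 26\) entering the right-hand side is an assumption chosen here for illustration):

```python
import math

# Check of the condition 4*pi*P_s >> eps_e * eps0 * E (all in uC/cm^2).
P_S = 110.0               # spontaneous polarization, uC/cm^2
EPS0 = 8.854e-14 * 1e6    # vacuum permittivity, uC/(V*cm)
E = 6e6                   # applied field, V/cm (= 6 MV/cm)
EPS_E = 26.0              # effective permittivity (assumed value)

lhs = 4 * math.pi * P_S   # left-hand side
rhs = EPS_E * EPS0 * E    # right-hand side
print(f"4*pi*P_s = {lhs:.0f} uC/cm^2  >>  eps_e*eps0*E = {rhs:.0f} uC/cm^2")
# -> ~1382 >> ~14, the same order-of-magnitude comparison as quoted above.
```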
Figure 3: \(J-E\) loops of 100 nm- down to sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N based capacitors deposited on Pt/Ti/SiO\({}_{2}\)/Si. All measurements were performed at 100 kHz on 5 μm diameter pads (\(<\) 10 nm Al\({}_{0.74}\)Sc\({}_{0.26}\)N thickness) and 10\(\times\)10 μm\({}^{2}\) pads (\(>\) 10 nm Al\({}_{0.74}\)Sc\({}_{0.26}\)N thickness). The decreasing trend of \(E_{c}\) with decreasing film thickness below 10 nm is indicated by red arrows.
The reason why Wang et al. did observe an increasing \(E_{c}\) down to 5 nm is most likely the small remanent polarization (\(P_{r}\)) of their films. A \(P_{r}\) of around 20 μC/cm\({}^{2}\) at the given composition cannot itself be a consequence of solely the depolarization field without being accompanied by severe retention issues - which the group did not observe in their recent paper [11]. An intrinsically lower \(P_{r}\), however, will lead to a proportionally reduced depolarization field [32]. Thus, the increased compressive stress in thinner films, leading to a higher \(E_{c}\), is not compensated to the same degree as in our work.
The decrease in \(E_{c}\) in our work is also reflected in the increase in \(\epsilon_{r}\) below 10 nm film thickness, as illustrated in Figure 4 a. In addition, \(\epsilon_{r}\) increases to even higher values after cycling, which is especially pronounced for the thinner films (Figure 4 b). A similar increase in the relative permittivity with cycling has also been observed for the wurtzite-type ferroelectric Al\({}_{1-x}\)B\({}_{x}\)N [33]. Through analysis of the Rayleigh parameters, this increase has been related to an increase in domain wall area compared to pristine samples at 0 V bias. If persistent domain walls indeed form during cycling and these domain walls extend vertically in the film, similar to what is reported in the following section, an enhancement in permittivity with lower film thickness would be a natural consequence, due to an increase in the ratio of domain wall area to film volume with reduced thickness. This change in the ratio would imply a larger relative volume that is frustrated by the domain wall and in turn features a higher permittivity due to a shallower ionic potential.
With decreasing film thickness, not only the leakage current but also the hysteretic area increases, especially for sub-5 nm film thickness, as illustrated in Figure 3. This increase in apparent displacement current cannot be explained by polarization reversal alone, as it would imply a physically unlikely large spontaneous polarization in excess of 1000 μC/cm\({}^{2}\). The
Figure 4: a) Relative permittivity as well as loss tangent as a function of Al\({}_{0.74}\)Sc\({}_{0.26}\)N film thickness for as-deposited- and for pre-cycled (10 times) capacitors grown on Pt/Ti/SiO\({}_{2}\)/Si. b) Absolute change of \(\epsilon_{r}\) (state with full positive polarization, 0 V bias) with cycling as a function of the Al\({}_{0.74}\)Sc\({}_{0.26}\)N film thickness. c) The coercive field dependence on Al\({}_{0.74}\)Sc\({}_{0.26}\)N thickness for the capacitors described in a) determined via \(J-E\) (100 kHz) and via \(C-E\) (sweep time 20 s, small signal 100 mV and 900 kHz) loops. The coercive field determined via \(C-E\) loops is approximated by the peak positions of the butterfly-loop. d) First ten \(C-V\) cycles of pristine capacitors consisting of 20 nm thin-, e) 10 nm thin- and f) sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N used for determining the change of \(\epsilon_{r}\) with cycling as depicted in b). The capacitor area was 695 μm\({}^{2}\) (20 nm thickness), 341 μm\({}^{2}\) (10 nm thickness) and 99 μm\({}^{2}\) (sub-5 nm thickness).
enhanced apparent polarization has therefore to be attributed to a dynamic current contribution triggered by the polarization reversal of the Al\({}_{0.74}\)Sc\({}_{0.26}\)N film. Currently, the most likely explanation of this behavior is the temporary formation of conductive domain walls during switching [34]. As discussed above, with decreasing film thickness, the relative domain-wall density will increase and domains are more likely to extend from the top to the bottom interface. Both effects can facilitate an increased electrical current flow in the form of compensation charges for the strong polarization discontinuity along the domain walls. This concept of conducting domain walls is closely related to the well known polarization doping schemes in III-N semiconductors [35, 36, 37]. Further analysis of this effect is the focus of ongoing work.
### Atomic scale investigation of ferroelectric domains in sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N
Analog-like ferroelectric switching is an elegant approach for emulating synapses, and thus a stable partially switched state is an essential material property in the context of neuromorphic computing [38, 39]. Hence, it is important to image and understand the atomistic switching processes and the evolution of polarization discontinuities (i.e., ferroelectric domain walls). This section explores the microscopic consequences of ferroelectric switching in sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N films as well as their general structural properties via high-resolution STEM. The main focus of this study is the first observation of domain walls in individual Al\({}_{0.74}\)Sc\({}_{0.26}\)N grains in any wurtzite-type ferroelectric. In order to clearly observe the local polarization on the unit cell scale, the analysis was conducted on an epitaxial (in-plane ordered), yet still polycrystalline film (0002-oriented columnar grains), which allows for direct imaging conditions because of the identical film/substrate crystallographic orientation. An overview image of the sub-5 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N film showing individual epitaxial grains with c-axis texture confirmed across the entire prepared area as well as \(C-E\) loops demonstrating the ferroelectric switching in 10 nm- down to sub-5 nm thin epitaxially grown films is provided in the Supplement (Figures 9 and 7). While previous attempts to resolve the local polarization in wurtzite-type ferroelectrics were successful in identifying the polarization direction of single unit cells, the observation of domain walls has so far been elusive.
The epitaxial nature of the heterostructure allowed for an accurate thickness determination on the level of monolayers due to the atomically sharp interfaces (cf. Figure 1 a). Despite the columnar growth mode of sputtered films, the good structural quality of the in-plane oriented growth enables the direct observation of the unit-cell polarity within single grains on the atomic scale [40]. In order to draw conclusions on the switching process itself (besides just confirming up and down polarization flips), the investigated capacitor was only partially switched from the Nitrogen (N)-polar to the metal (M)-polar state. For this, the capacitor was pre-switched to full N-polarity by applying a positive field which is high enough to saturate the polarization reversal, with subsequent application of a negative field which is \(\approx\) 0.8 MV/cm below the saturation point.
The atomic scale STEM analysis of the partially switched capacitor is given in Figure 5. An annular bright-field (ABF)-STEM micrograph of the capacitor cross-section is depicted in Figure 5 a. The individual thickness of the Pt electrodes is determined to be about 11 nm and 25 nm for the epitaxial Pt bottom electrode and the top electrode, respectively. They sandwich the Al\({}_{0.74}\)Sc\({}_{0.26}\)N layer with a total thickness of 8 - 9 unit cells, which was determined by counting the number of monolayers. This corresponds to 4 to 4.5 nm at a \(c\)-lattice parameter of \(\approx\) 505 pm. This number agrees well with the targeted thickness, considering the deposition rate calibrated on thicker films. Therefore, we conclude that there is no significant delay of film growth due to nucleation. The Sc content was verified by EDS analysis of a \(\approx\) 10\(\times\)4 nm\({}^{2}\) frame to be on the order of \(x\approx\) 26 at.\(\%\).
Atomic scale investigations of the polar domain structure were conducted using ABF-STEM imaging paired with multi-frame image alignment [41] on the partially switched Al\({}_{0.74}\)Sc\({}_{0.26}\)N film. Here, the use of the ABF detector allows to routinely image atomic positions of light elements such as nitrogen, which is the crucial prerequisite to observe the polarity on the unit cell level in ferroelectric Al\({}_{1\text{-}x}\)Sc\({}_{\text{x}}\)N [40]. Figure 5 b depicts atomic models of the N- and M-polar oriented wurtzite-type structures sketched along the [2-1-10] viewing direction required for the investigation of unit cell polarity. As already discussed in related work [40, 26, 18], sputtered nanocrystalline films of Al\({}_{1\text{-}x}\)Sc\({}_{\text{x}}\)N exhibit small grain diameters of 2 - 6 nm featuring an in-plane tilt between the individual grains in the order of 6\({}^{\circ}\) which restricts the observable area to single grains with exact orientation to the incident electron beam [14]. In this respect, the ABF-STEM image contrast formation crucially depends on exact orientation conditions [42, 43]. In this investigation, the directly interpretable sample area was further limited by 1 - 2 nm large Pt agglomerates present evenly spaced in the center of the Al\({}_{0.74}\)Sc\({}_{0.26}\)N layer. These Pt artifacts were introduced during sample preparation using the FIB thinning method.
Individual grains with aligned zone axis orientation were identified in the Al\({}_{0.74}\)Sc\({}_{0.26}\)N layer after centering the GaN crystal lattice into the [2-1-10] orientation. The unit cell polarity was identified from the non-rigidly registered multi-frame ABF-STEM data sets, by the analyses of intensity profiles drawn across the (Al,Sc)-N dumbbells (see Supplement - Figure 10 for a
demonstration on the GaN substrate). The ABF-STEM micrograph contrast was inverted and a color scheme (inverted-ABF-STEM) was applied to enhance the image visibility as described in the experimental section. No noise filter was applied for the analysis of intensity profiles to avoid potential artifacts by reducing the information limit. Intensity profile analysis is regularly performed to determine the polarity of materials with wurtzite-type crystal structure because of the strong contrast difference between metal and Nitrogen or Oxygen atoms [44].
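To illustrate how such a profile analysis can be automated, a minimal sketch is given below (a hypothetical NumPy implementation for illustration only, not the authors' actual pipeline; `image` is assumed to be a registered inverted-ABF-STEM frame, and the pixel coordinates framing one dumbbell are assumed known):

```python
import numpy as np

def dumbbell_orientation(image: np.ndarray, col: int, y_top: int, y_bot: int,
                         width: int = 3) -> str:
    """Classify one (Al,Sc)-N dumbbell from a vertical intensity profile.

    In inverted-ABF contrast the heavier metal column appears brighter than
    the nitrogen column, so the peak ordering along the c-axis reveals the
    dumbbell orientation.  Mapping 'metal-up'/'metal-down' onto M-/N-polarity
    must be calibrated against a reference of known polarity (e.g., the GaN
    substrate, as done in the text)."""
    # Average a few pixel columns to suppress noise, then compare the
    # integrated intensity of the upper and lower half of the dumbbell.
    profile = image[y_top:y_bot, col - width:col + width + 1].mean(axis=1)
    half = len(profile) // 2
    upper, lower = profile[:half].sum(), profile[half:].sum()
    return "metal-up" if upper > lower else "metal-down"
```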
Using the described method, the presence of N-polar and M-polar regions within a single grain is observed in Figures 5 c and 6. They confirm the presence of inversion domain boundaries (IDBs) with a varying, yet always significant horizontal component. This is highly surprising given the fact that the horizontal component should give rise to an extreme polarization discontinuity at the domain wall, which likely requires an as-yet-unknown (charge) compensation or reconfiguration mechanism for stabilization. Generally, M-polarity is clearly identified in the upper region in the inverted-ABF-STEM images, while the remaining N-polarity is predominantly located at the bottom interface. Figures 5 d and 6 b present the aforementioned profile analysis along the highlighted vertical atomic columns showing a clear N-polar (blue profiles) polarization near the bottom interface and a switch to M-polarity (pink profiles) closer to the upper interface. At the position of polarization inversion from N- to M-polarity (the profiles are drawn in the scheme "up-down" starting left of the (Al,Sc)-N dumbbell), the alternating dumbbell orientation (and so the drawn profiles) is intercepted, hence the polarity abruptly inverts to the M-polar state following an "up-down-down-up" scheme as indicated by the arrows. This change of the polarization within the sub-5 nm grains indicates that even in very thin films with grain diameters in the single digit nm range, Al\({}_{1\text{-x}}\)Sc\({}_{\text{x}}\)N gradually switches in a domain wall motion limited fashion. This suggests that the material, and possibly the wurtzite-type ferroelectrics in general, are potentially very suitable for analog switching in single-digit nanometer scaled devices. Further, as already assumed for thicker films [40], the nucleation of polar inversion domains switching from N- to M-polarity is found to be initiated at the top electrode interface and from there propagates towards the bottom interface. These results demonstrate the first direct observation of IDBs in wurtzite-type ferroelectrics. From the application point of view, the gradual in-grain switching and small domain size is highly attractive to address multiple states in lateral dimensions \(<10\) nm\({}^{2}\), which emphasizes the potential of Al\({}_{0.74}\)Sc\({}_{0.26}\)N for the realization of highly scaled synapse-emulating neuromorphic computing devices [38].
Figure 5: a) ABF-STEM micrograph showing the Pt/Al\({}_{0.74}\)Sc\({}_{0.26}\)N/Pt/GaN capacitor stack in cross-section. The inset shows the superimposed EDS maps of Pt, Al and Ga. b) Sketches of the atomic structure in the M- and N-polar state along the line of sight. c) Inverted-ABF-STEM micrograph of the full Al\({}_{0.74}\)Sc\({}_{0.26}\)N layer featuring an inclined inversion domain boundary separating regions of M-polarity (upper right) and N-polarity (lower left). Superimposed sketches of the (Al,Sc)-N dumbbells help to visualize the polarization direction. d) Intensity profile analysis of the polarization direction of individual (Al,Sc)-N dumbbells inside the single-column frame. Profiles are always drawn from left to right (see arrows on the unfiltered single column image); color code: M(-polarity) = pink, N(-polarity) = blue.
## 3 Conclusion
In summary, ferroelectric switching in sputter-deposited, 8 to 9 unit cells (4 to 4.5 nm) thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N grown non-epitaxially on Pt/Ti/SiO\({}_{2}\)/Si and epitaxially on Pt/GaN/sapphire is demonstrated. The ferroelectric nature of the switching event was independently confirmed by electric \(J-E\) and \(C-E\) measurements, as well as by STEM investigations, resolving the polarization inversion at the atomic scale. This is the first report on a sub-5 nm thin wurtzite-type ferroelectric film switching fully on Si, which also features record-low switching voltages down to \(\approx\) 1 V. Both aspects can be expected to greatly aid the future integration of the material class into advanced CMOS technology. Despite the better structural quality of the thin film texture and interface structure of epitaxial films, the growth of sub-5 nm non-epitaxial Al\({}_{0.74}\)Sc\({}_{0.26}\)N on silicon results in an improved ratio between the coercive and breakdown fields. Hence, the structural quality is not a limiting factor for good ferroelectric performance. \(E_{c}\) in our films increased only slightly with decreasing film thickness down to 10 nm, while it decreased when the film thickness was further reduced down to sub-5 nm, thereby significantly lowering the required switching voltages. This behavior fits qualitatively to the depolarization-corrected JKD model described by Dawber et al. [27], who explain the decrease in \(E_{c}\) by the increase in the depolarization field resulting from the finite screening length of the electrodes. The increasing permittivity in thinner films supports this hypothesis. The permittivity is also found to increase with cycling, especially for thinner films, which we relate to an increase in the relative volume of domain walls with respect to the total film volume.
Our high resolution ABF-STEM investigation of epitaxially grown Al\({}_{0.74}\)Sc\({}_{0.26}\)N allowed us to resolve IDBs in a wurtzite-type ferroelectric for the first time. The resulting presence of nanoscale domains spanning only fractions of individual nm-sized grains strongly suggests that domain wall motion still limits the switching kinetics in wurtzite-type films thinner than 5 nm. The strong horizontal component of the observed domain walls further motivates the existence of a (charge) compensation mechanism for the strong polarization discontinuity at the boundary.
To conclude, the given evidence of in-grain switching of sub-5 nm thin films with sub-5 nm lateral grain dimensions demonstrates stable and gradual partial switching capabilities of extremely low volumes. This switching mechanism, together with the positive effects of thickness downscaling on \(E_{c}\) that result in ferroelectric switching voltages as low as 1 V, makes ferroelectric Al\({}_{1-x}\)Sc\({}_{x}\)N a highly interesting choice for nanoscale ferroelectric synaptic devices that require analog switching - as does the availability of a CMOS BEOL compatible deposition process already used in mass production.
Figure 6: a) Inverted-ABF-STEM micrograph of the Al\({}_{0.74}\)Sc\({}_{0.26}\)N layer featuring a horizontal inversion domain boundary separating regions of M-polarity within the top unit cells and N-polarity in the bottom film. Sketches of the (Al,Sc)-N dumbbells assist in visualizing the polarity. b) Intensity profile analysis of the (Al,Sc)-N dumbbells inside the vertical single-column frames on the left (\(i\)) and right side (\(ii\)) of the grain. Both analyses hint towards the lateral progression of an IDB. Profiles are drawn from left to right (see arrows) on the unfiltered image; M(-polarity) = pink, N(-polarity) = blue.
## 4 Experimental Section
As electrodes, 100 nm thick Pt layers on a 10 nm thick Ti seed layer sputter-deposited on SiO\({}_{2}\)/Si wafers were provided by Fraunhofer ISIT, Germany. Epitaxy-ready templates consisting of GaN(4 μm)/sapphire were commercially bought. The substrates were diced into 1\(\times\)1 \(\mathrm{cm}^{2}\) chips with prior surface protection using a photoresist. Cleaning in acetone and isopropanol using an ultrasonic bath was performed, followed by rinsing in DI-water. Subsequently, the non-epitaxial Pt templates were cleaned by performing an Ar/O\({}_{2}\)-plasma-etching in a Sentech S100 reactor; details can be found elsewhere [45]. The Al\({}_{0.74}\)Sc\({}_{0.26}\)N layers as well as the bottom epitaxial Pt and the Pt top layers were grown in-house by sputter deposition using an Oerlikon (now Evatec) MSQ 200 multisource system; details about the process can be found in a previous publication [14]. The epitaxial growth on Pt/GaN/sapphire as well as the non-epitaxial growth on Pt/Ti/SiO\({}_{2}\)/Si was obtained by using the same deposition process. The Pt top layers were deposited _in situ_ subsequently to the Al\({}_{0.74}\)Sc\({}_{0.26}\)N deposition after reaching a base pressure of at least 5\(\times\)10\({}^{-7}\) mbar. Round- as well as square top electrodes were structured with lithography and ion-beam etching (IBE, Oxford Instruments Ionfab 300). The dry-etching was stopped right after the loss of Pt signal, detected via a secondary-ion mass spectrometer (SIMS). The capacitance and loss tangent measurements were performed using a Hewlett Packard 4284 A Precision LCR meter. If not stated otherwise, the small signal voltage and frequency were 0.1 V and 900 kHz, respectively. The sweep time for \(C-E\) measurements of different Al\({}_{0.74}\)Sc\({}_{0.26}\)N thickness was kept constant by adjusting the delay time between each step, as well as the step width of the voltage sweep. \(J-E\) measurements were performed using an AixACCT TF 3000 analyzer. A cross-section sample of the partially switched film was extracted and thinned by the focused ion-beam technique using a Helios600 FIB-SEM machine and transferred into a JEOL (JEM200F) NEOARM scanning transmission electron microscope operated at 200 kV (cold-FEG). Atomic scale investigation of the unit-cell polarity within the sub-5 nm Al\({}_{0.74}\)Sc\({}_{0.26}\)N layer was conducted using the annular bright-field scanning transmission electron microscopy (ABF-STEM) mode with 10 - 20 mrad collection angle and a spatial resolution limit of \(\approx\) 70 pm. To minimize the effects of scan distortions and sample drift during image acquisition, the atomic-scale ABF-STEM micrographs were recorded using fast serial recording of multi-frame images followed by post-processing image alignment using rigid and non-rigid registration implemented in the Smart Align algorithm (HREM Research Inc.) on the DigitalMicrograph v.3.5.1 (DM) (Gatan Inc.) software. If not stated otherwise, the non-registered ABF-STEM images were 1) Fourier filtered by a simple radiance difference filter using the DM plug-in HREM-Filters Pro/Lite v.4.2.1 (HREM Research Inc.) to remove high-frequency noise, and 2) the ABF contrast was inverted, a color scheme was applied and the contrast was slightly enhanced within DM, for presentation purposes (inverted-ABF-STEM).
The in-plane and out-of-plane lattice parameters were estimated with \(\pm 2\) pm accuracy by calculating the average atomic distance over a minimum of 8 and 6 unit cells, respectively, and are compared with the as-determined lattice parameters of the GaN substrate. For GaN, the as-determined lattice parameters are \(a\approx 318\) pm and \(c\approx 521\) pm, and for Al\({}_{0.74}\)Sc\({}_{0.26}\)N these are \(a\approx 319\) pm and \(c\approx 505\) pm. Chemical analysis on the capacitor stack was conducted using energy-dispersive spectroscopy (EDS) with a dual silicon drift detector system with 100 mm\({}^{2}\) active area each. Cross-section samples of 10 nm thin Al\({}_{0.74}\)Sc\({}_{0.26}\)N based capacitor structures grown on Pt/GaN/sapphire and grown on Pt/Ti/SiO\({}_{2}\)/Si were examined using a Tecnai F30 G\({}^{2}\) STwin microscope operated at 300 kV.
### Acknowledgements
This work was supported by the project "ForMikro-SALSA" (Grant no. 16ES1053) from the Federal Ministry of Education and Research (BMBF) and the Deutsche Forschungsgemeinschaft (DFG) under the scheme of the collaborative research centers (CRC) 1261 and 1461 as well as grant 458372836. The authors gratefully acknowledge Christin Szillus for the FIB preparation of cross-section samples for TEM analysis.
|
2306.14912 | $f(Q,T)$ gravity: From early to late-time cosmic acceleration | In this article, we explore the comprehensive narrative of cosmic evolution
within a cosmological framework by utilizing a novel form of gravity known as
generalized symmetric teleparallel gravity, denoted as $f(Q,T)$ gravity. Here,
$Q$ represents the non-metricity scalar, while $T$ denotes the trace of the
energy-momentum tensor. We present and analyze two distinct $f(Q,T)$
cosmological models, each characterized by its unique Lagrangian. Our
investigation delves into the cosmological parameters of these models,
scrutinizing various energy conditions, examining the inflationary dynamics of
the early universe through scalar field formulations, and probing the
mysterious nature of dark energy using statefinder diagnostics and
$(\omega-\omega')$ phase space analysis. Ultimately, our findings offer a
comprehensive account of cosmic evolution, spanning from the early universe to
its late-time evolution. | Surajit Das, Sanjay Mandal | 2023-06-19T06:37:21Z | http://arxiv.org/abs/2306.14912v2 | # Aspects of Cosmology in Symmetric Teleparallel \(f(Q,T)\) Gravity
###### Abstract
This article aims to explore the idea of describing the complete evolution process of the universe with a cosmological model. For this purpose, we work within a recently developed generalized symmetric teleparallel gravity called \(f(Q,T)\) gravity, where \(Q\) and \(T\) represent the non-metricity scalar and the trace of the energy-momentum tensor, respectively. We present two \(f(Q,T)\) cosmological models for two different Lagrangian forms of \(f(Q,T)\) and discuss their cosmological parameters. Further, we examine various energy conditions to check the viability of those models and construct a scalar field description of the inflationary scenario at early times. In addition, we probe the dark energy nature of the presumed cosmological solution by employing statefinder diagnostics and \(\omega-\omega^{\prime}\) phase space analysis. In conclusion, we find that our models can present a complete evolution profile from the early to the late times of the universe.
**Keywords:** \(f(Q,T)\) gravity, cosmic kinematic parameters, energy conditions (ECs), scalar field model, statefinder diagnostics.
pacs: 04.50.Kd
## I Introduction
Thanks to remarkable progress in astronomy, astrophysics, cosmology, data science, and space science & technology, recent observations suggest that the expansion of our universe is accelerating, driven by an unknown component, the so-called 'dark energy' [1; 2; 3; 4; 5]. The most intriguing question is: what is dark energy? Cosmologists and particle physicists are still trying to identify its properties. Approximately 75% of the total energy content of our universe consists of this unknown component, so understanding it is a major challenge for physicists in this field. The cosmic history of our universe contains two eras of accelerated expansion [6]. At early times, the universe passed through an accelerated expansion known as inflation, while at present the expansion of the universe is accelerating due to the dominating dark energy.
Historically, a term called the cosmological constant (\(\Lambda\)) was added to Einstein's field equations and describes the observations well at first sight, but its observed magnitude is not motivated by any law of fundamental physics. One may explain both the inflationary epoch and the late-time accelerated expansion of the universe by considering matter in the form of a barotropic fluid with equation of state \(p=f(\rho)\), where \(p\) and \(\rho\) are the pressure and the energy density, respectively. Cosmological regimes beyond the phantom barrier and future singularities have been discussed in [7; 8] as examples of barotropic fluids, and inflationary scenarios in viscous fluid models have been described in [9]. On the other hand, it has been argued in [10] that the ultimate fate of our universe cannot be determined with certainty from the current observational data. In view of these problems, researchers have put forward many proposals to address the issues related to dark energy [11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21].
Modifying the standard general theory of relativity is an appealing way to describe dark energy. Modified theories of gravity attribute the nature of dark energy to a geometrical property of the universe and are obtained by modifying the Einstein-Hilbert action [22]. Many interesting modified theories of gravity exist. For example, replacing the Ricci scalar \(R\) in the Einstein-Hilbert action by an analytic function \(f(R)\) yields \(f(R)\) gravity [23]. Allowing a non-minimal coupling between geometry and matter leads to another interesting modified theory, known as \(f(R,T)\) gravity [24]. A further well-established modification is \(f(T)\) gravity [25; 26; 27; 28], which generalizes the teleparallel equivalent of general relativity (TEGR); there, the action is an arbitrary function of the torsion scalar \(T\), which takes over the role played by the Ricci scalar \(R\).
Besides the two representations of general relativity (the curvature representation and the torsion representation), another interesting and well-established representation, which has attracted the interest of cosmologists, is the so-called symmetric teleparallel gravity [29]. In this formalism, the geometric variable is the non-metricity \(Q\), and in this context \(f(Q)\) gravity has been proposed [30]. In analogy with \(f(R,T)\) gravity, one may extend the \(f(Q)\) theory by considering an action with a non-minimal coupling between the gravitational interaction, produced by the term \(Q\), and the trace of the energy-momentum tensor \(T\), i.e., an action containing an arbitrary function \(f(Q,T)\). In \(f(Q,T)\) gravity, cosmological scenarios [31; 32; 33], constraints on the equation of state [34], cosmological perturbations [35], inflationary models [36], the quintessence universe [37], late-time cosmic acceleration [38], energy conditions [39; 40], wormholes [41], holographic dark energy [42], and the reconstruction of the \(\Lambda\)CDM universe [43] have been studied.
In the present paper, we proceed in a phenomenological way to study the complete cosmic evolutionary history of the universe by adopting an ansatz for the scale factor. From this ansatz, one can easily obtain analytical solutions for the energy density, pressure, and EOS parameter, together with various physical consequences, for different cosmological models in \(f(Q,T)\) gravity. To explain both the early- and late-time cosmic acceleration of the universe from the chosen parametrization, we study all the cosmological consequences in detail in the framework of \(f(Q,T)\) gravity.
This paper is organized as follows: We start with a discussion of STEGR in section II. In section III, we briefly review the mathematical formulation of \(f(Q,T)\) gravity and present the gravitational field equations in a spatially flat FLRW spacetime. In section IV, we discuss kinematic cosmic variables such as the scale factor, Hubble parameter, and deceleration parameter, followed in section V by the construction of cosmological models in \(f(Q,T)\) gravity for two different choices of the function. There we present the expressions and behaviors of the energy density, pressure, and equation of state parameter for the given models. The energy conditions for the chosen models are presented in section VI, and the formulation of the scalar field description of our models is discussed in section VII. In sections VIII & IX, we discuss some geometrical diagnostics that distinguish our models from other dark energy models, and finally, we conclude with a brief discussion in section X.
## II Symmetric Teleparallel Equivalent General Relativity in a Nutshell
Since general relativity (GR) is based on the Riemannian manifold, it is natural to generalize it to a broader geometric theory of gravity, one that describes the gravitational field through a more general geometric structure while remaining valid at the solar system level, so that the various cosmological aspects of the universe, from early- to late-time acceleration, can be explained. In this generalization process, Weyl introduced the notion of non-metricity, a new geometric quantity for which the covariant derivative of the metric tensor is non-zero [44]. In Weyl's theory, there is an extra connection, called the length connection, which carries information about the length of a vector but none about the direction of a parallel transported vector field. After Weyl, further generalizations of gravity theories were formulated [45; 46; 47; 48; 49; 50; 51; 52; 53; 54].
From the previous discussion, it is clear that GR can be presented in at least two equivalent formalisms. One is the well-known curvature representation, in which the non-metricity and the torsion tensor vanish. The other is the teleparallel equivalent of general relativity (TEGR), in which the curvature and the non-metricity vanish but the torsion tensor does not. Fortunately, a relatively unexplored territory consists of a third equivalent representation of GR, namely the symmetric teleparallel equivalent of general relativity (STEGR), or simply symmetric teleparallel gravity [29]. In this theory, the curvature and the torsion vanish, and the non-metricity (\(Q\)) is the basic geometrical variable describing the geometric gravitational interaction; it also describes the length variation of a parallel transported vector. This formulation is completely geometric and covariant. Since curvature and torsion are properties of the affine connection rather than of spacetime itself, the covariant derivatives commute owing to the vanishing curvature. In STEGR, the associated energy-momentum density is essentially the Einstein pseudotensor, which becomes a true tensor in this geometric representation. STEGR was further developed into an arbitrary generic function of \(Q\), i.e., \(f(Q)\) gravity, which is also known as coincident general relativity (CGR) [30]. CGR is described by the Einstein-Hilbert action excluding the boundary term and is underpinned by a spin-2 field theory. This construction also provides a starting point for modified gravity theories and yields early- and late-time cosmological solutions of the universe. It leads to a simpler geometrical structure for the affine connection, fundamentally stripping gravity of any inertial character.
The STEGR can also be represented by a general quadratic, parity-conserving Lagrangian with the help of Lagrange undetermined multipliers enforcing vanishing torsion and curvature [55]. This Lagrangian is equivalent to the Einstein-Hilbert Lagrangian of standard GR for certain choices of the coupling coefficients. It was also shown that the field equations can be written as a system of Proca equations, which may be interesting for studying the propagation of gravitational-electromagnetic waves [56]. Conroy et al. [57] studied an action built entirely from the non-metricity tensor, whose contractions were decomposed into terms involving the metric and a gauge vector field; that work derived the exact propagator for the most general infinite-derivative, even-parity, generally covariant STEGR theory of gravity. Linear perturbations in flat space were analyzed in [57; 58] and [59]. The propagation of gravitational waves and their properties, such as speed and polarization, was studied in [60] for various extensions of symmetric teleparallel gravity. For the classification of all possible quadratic, first-order derivative terms of the non-metricity tensor in the framework of symmetric teleparallel gravity, Dialektopoulos et al. [61] used the Noether symmetry approach: symmetries were employed to reduce the dynamics of the system and find analytical solutions, with the models being invariant under point transformations in a cosmological background.
On the other hand, from the cosmological point of view, it was shown in [62] and [63] that the accelerating expansion of the universe can be an intrinsic property of geometry, without the need for extra fields or exotic dark energy, under the \(f(Q)\) gravity framework. Cosmology and the behavior of cosmological perturbations in \(f(Q)\) gravity were investigated in [64]. Energy conditions, cosmography analysis, Buchdahl quark star formation, and wormhole solutions in \(f(Q)\) gravity can be found in [65], [66], [67], and [68; 69], respectively. Some excellent works on cosmological aspects can be found in [70; 71; 72; 73; 74].
Within the metric-affine formalism, an extension defining a new class of symmetric teleparallel gravity was considered in [75], where the geometric part, i.e., the non-metricity \(Q\), is non-minimally coupled to the matter Lagrangian \(L_{m}\). A Lagrangian of the form \(L=f_{1}(Q)+f_{2}(Q)L_{m}\) leads to the non-conservation of the energy-momentum tensor and to the appearance of an extra force in the geodesic equation. Here \(f_{1}\) and \(f_{2}\) are two generic functions of \(Q\), and \(L_{m}\) is the matter Lagrangian under consideration. Several cosmological applications were considered for specific functional forms, such as power-law and exponential dependencies of the non-minimal couplings, and the cosmological solutions lead to a late-time accelerating universe.
Finally, one may consider yet another extension of symmetric teleparallel gravity by taking an action that contains a non-minimal coupling between the geometry, i.e., \(Q\), and the trace of the energy-momentum tensor \(T\), instead of the matter Lagrangian as in the previous case. With this construction, one can describe gravitational interactions in the presence of geometry-matter coupling, and the cosmological solutions can describe both the accelerating and decelerating evolutionary phases of the universe. Thus \(f(Q,T)\) gravity can provide promising insights for the description of both the early and late phases of our universe.
## III Mathematical formulation of \(f(Q,t)\) gravity
In the symmetric teleparallel \(f(Q,T)\) gravity theory, the connection is taken to be symmetric, and in the coincident gauge the affine connection vanishes. The disformation tensor \(L^{\lambda}_{\mu\nu}\) then reduces to minus the Levi-Civita connection \(\hat{\Gamma}^{\lambda}_{\mu\nu}\),

\[L^{\lambda}_{\mu\nu}=-\hat{\Gamma}^{\lambda}_{\mu\nu}. \tag{1}\]
The non-metricity tensor is given by \(Q_{\alpha\mu\nu}=\nabla_{\alpha}g_{\mu\nu}\), and the associated non-metricity scalar can be written as
\[Q=-g^{\mu\nu}\Big{(}L^{\alpha}_{\beta\mu}L^{\beta}_{\nu\alpha}-L^{\alpha}_{ \beta\alpha}L^{\beta}_{\mu\nu}\Big{)}, \tag{2}\]
where, using (1), the disformation tensor can be written explicitly as
\[L^{\alpha}_{\beta\gamma}=-\frac{1}{2}g^{\alpha\lambda}\Big{(}-\nabla_{\lambda }g_{\beta\gamma}+\nabla_{\beta}g_{\gamma\lambda}+\nabla_{\gamma}g_{\beta \lambda}\Big{)}. \tag{3}\]
The non-metricity conjugate or the superpotential of the model is given by,
\[P^{\alpha}_{\mu\nu}=-\frac{1}{2}L^{\alpha}_{\mu\nu}+\frac{1}{4}(Q^{\alpha}- \bar{Q}^{\alpha})g_{\mu\nu}-\frac{1}{4}\delta^{\alpha}_{\ \ (\mu}Q_{\nu)}, \tag{4}\]
where the two independent traces of the non-metricity tensor are
\[Q_{\alpha}=g^{\mu\nu}Q_{\alpha\mu\nu}\equiv Q_{\alpha\ \mu}^{\ \mu}\ ;\ \ \ \ \bar{Q}_{\beta}=g^{\mu\nu}Q_{\mu\beta\nu}\equiv Q^{\mu}_{\ \beta\mu}. \tag{5}\]
In addition, we define the non-metricity scalar \(Q\) as,
\[Q=-Q_{\alpha\mu\nu}P^{\alpha\mu\nu}=-\frac{1}{4}\Big{(}-Q^{\alpha\nu\lambda}Q _{\alpha\nu\lambda}+2Q^{\alpha\nu\lambda}Q_{\lambda\alpha\nu}-2Q^{\lambda} \bar{Q}_{\lambda}+Q^{\lambda}Q_{\lambda}\Big{)}. \tag{6}\]
Now the corresponding action in \(f(Q,T)\) gravity can be written as,
\[\mathcal{A}=\int\frac{1}{16\pi G}f(Q,T)\sqrt{-g}\ d^{4}x+\int\mathcal{L}_{m} \sqrt{-g}\ d^{4}x, \tag{7}\]
where \(f\) is an arbitrary function of the non-metricity scalar \(Q\) and of the trace \(T\) of the energy-momentum tensor, and \(G\) is Newton's universal gravitational constant. \(\mathcal{L}_{m}\) is the matter Lagrangian and \(g\) is the determinant of the metric tensor \(g_{\mu\nu}\). The energy-momentum tensor is defined as
\[T_{\alpha\beta}=-\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{m})}{ \delta g^{\alpha\beta}}. \tag{8}\]
Taking the variation of the action (7) with respect to the metric tensor \(g_{\mu\nu}\), we obtain the field equations of \(f(Q,T)\) gravity as
\[-\frac{2}{\sqrt{-g}}\nabla_{\lambda}\Big{(}f_{Q}\sqrt{-g}P^{\lambda}{}_{\mu\nu} \Big{)}-\frac{1}{2}fg_{\mu\nu}-f_{Q}\Big{(}P_{\mu\lambda\beta}\ Q_{\nu}^{\ \lambda\beta}-2Q^{\lambda\beta}_{\ \mu}P_{ \lambda\beta\nu}\Big{)}=\big{(}8\pi G-f_{T}\big{)}T_{\mu\nu}-f_{T}\Theta_{\mu\nu}, \tag{9}\]
where \(f_{Q}=\frac{df}{dQ}\) and \(f_{T}=\frac{df}{dT}\), and \(\Theta_{\mu\nu}\) is the variation of the energy-momentum tensor with respect to the metric tensor, such that
\[\Theta_{\mu\nu}\equiv g^{\alpha\beta}\frac{\delta T^{\alpha\beta}}{\delta g^{ \mu\nu}}. \tag{10}\]
It should be mentioned that the field equation (9) is valid only in the coincident gauge [30].
Assuming the universe to be homogeneous and isotropic, we consider a flat FLRW metric of the form
\[ds^{2}=-dt^{2}+a^{2}(t)\sum_{i}(dx^{i})^{2}. \tag{11}\]
Here \(a(t)\) is the scale factor, which describes the expansion history of the universe, and \(i\) runs from \(1\) to \(3\). Moreover, we take the cosmic fluid to be a perfect fluid, with energy-momentum tensor
\[T_{\mu\nu}=(\rho+p)u_{\mu}u_{\nu}+pg_{\mu\nu}, \tag{12}\]
where \(\rho\), \(p\) and \(u^{\mu}\) are the energy density, pressure and the four-velocity of the cosmic fluid, respectively. The non-metricity \(Q\) in this background can be calculated and it is given by \(Q=6H^{2}\).
Using the metric (11) in the field equations (9), we can write the generalized Friedmann equations (taking \(G=1\)), which are
\[8\pi\rho=\frac{f(Q,T)}{2}-6FH^{2}-\frac{2\bar{G}}{1+\bar{G}}\big{(}\dot{F}H+F \dot{H}\big{)}, \tag{13}\]
and
\[8\pi p=-\frac{f(Q,T)}{2}+6FH^{2}+2\big{(}\dot{F}H+F\dot{H}\big{)}. \tag{14}\]
Here an overdot represents a derivative with respect to cosmic time, \(F=\frac{df}{dQ}\), and \(8\pi\bar{G}=\frac{df}{dT}\).
We can also write Einstein's field equations from Friedmann equations as follows
\[3H^{2}\equiv 8\pi\rho_{eff}=\frac{f}{4F}-\frac{4\pi}{F}\big{[}(1+\bar{G}) \rho+\bar{G}p\big{]}, \tag{15}\]
\[2\dot{H}+3H^{2}\equiv-8\pi p_{eff}=\frac{f}{4F}-\frac{2\dot{F}H}{F}+\frac{4 \pi}{F}\big{[}(1+\bar{G})\rho+(2+\bar{G})p\big{]}. \tag{16}\]
Here \(\rho_{eff}\) and \(p_{eff}\) are the effective energy density and effective pressure, respectively.
Using (13) and (14), the equation of state parameter can be written as,
\[\omega=-1+\frac{4(\dot{F}H+\dot{H}F)}{f(1+\bar{G})-12FH^{2}(1+\bar{G})-4\bar{G }(\dot{F}H+\dot{H}F)}. \tag{17}\]
Now using the above setup, one can explore various cosmological scenarios in the framework of \(f(Q,T)\) gravity.
## IV Cosmic parameters
In this section, we discuss various cosmological parameters: the scale factor \(a(t)\), the Hubble parameter \(H(t)\), and the deceleration parameter \(q(t)\). These kinematic variables play a crucial role in physical cosmology: they are the key parameters of most cosmological models in modified gravity theories and describe the evolutionary history of the universe.
In this paper, we choose the scale factor \(a(t)\) having the following form [76],
\[a(t)=e^{ct}[\text{sech}^{d}(n-mt)]. \tag{18}\]
Using (18), one can obtain the Hubble parameter \(H(t)\) as
\[H(t)=\frac{\dot{a}}{a}=c+dm\tanh(n-mt). \tag{19}\]
Also the deceleration parameter \(q(t)\) is given by,
\[q=-1-\frac{\dot{H}}{H^{2}}=-1+\frac{dm^{2}}{[c\cosh(n-mt)+dm\sinh(n-mt)]^{2}}. \tag{20}\]
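As a quick numerical check of these kinematic relations, the following sketch evaluates Eqs. (18)-(20); it assumes Python with NumPy and Matplotlib, and uses the parameter values \(c=0.97\), \(d=1\), \(m=0.735\), \(n=10\) of Figs. 1-3.

```python
# Sketch: evaluate the ansatz of Eqs. (18)-(20) for the parameters of Figs. 1-3.
import numpy as np
import matplotlib.pyplot as plt

c, d, m, n = 0.97, 1.0, 0.735, 10.0
t = np.linspace(0.0, 30.0, 1000)          # cosmic time (arbitrary units)

a = np.exp(c * t) / np.cosh(n - m * t)**d                      # Eq. (18)
H = c + d * m * np.tanh(n - m * t)                             # Eq. (19)
q = -1.0 + d * m**2 / (c * np.cosh(n - m * t)
                       + d * m * np.sinh(n - m * t))**2        # Eq. (20)

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, y, label in zip(axes, [a, H, q], ["a(t)", "H(t)", "q(t)"]):
    ax.plot(t, y)
    ax.set_xlabel("t"); ax.set_ylabel(label)
axes[0].set_yscale("log")                 # a(t) grows roughly exponentially
plt.tight_layout(); plt.show()
```

The deceleration parameter obtained this way should reproduce the behaviour discussed below: it starts near \(-1\), rises above zero in the intermediate phase, and returns to \(-1\) at late times.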
It is worth mentioning, from the analysis of the research articles [1; 2; 3; 4; 5], that our universe is now accelerating. For an accelerated expansion phase, we know from the Friedmann equations that the second-order derivative of the scale
Figure 1: Plot of scale factor \((a)\) as a function of cosmic time \((t)\) for \(c=0.97,d=1,m=0.735\), \(\&\)\(n=10\).
Figure 3: Plot of deceleration parameter \((q)\) as a function of cosmic time \((t)\) for \(c=0.97,d=1,m=0.735\), \(\&\)\(n=10\).
Figure 2: Plot of Hubble parameter \((H)\) as a function of cosmic time \((t)\) for \(c=0.97,d=1,m=0.735\), \(\&\)\(n=10\).
factor must satisfy \(\ddot{a}>0\), i.e., \(\dot{a}\) must be an increasing function of time. In our model, the Hubble parameter \(H(t)\) is a decreasing function of cosmic time: Fig.2 shows that \(H(t)\) was approximately constant in the early stage of the universe, then decreased gradually, and finally settles to a constant value during the late-time acceleration of the universe. The decreasing behavior of the Hubble parameter with time may also save the universe from a phantom-like evolution. Since the Hubble parameter is related to the energy density at late times of the evolution, the Hubble parameter exhibits the expected natural behaviour at late times of the cosmic evolution.
On the other hand, one can observe that the evolution of the deceleration parameter starts at \(q=-1\), which represents a de Sitter expansion phase, then passes through an accelerating power-law expansion phase into a decelerating phase, and finally returns to the de Sitter expansion phase at late times of the cosmic evolution (Fig.3). Thus \(q(t)\) is positive during the decelerating expansion and negative during the accelerating expansion of the universe. Note that \(q<0\) corresponds to an eternally accelerating universe, \(q=0\) marks the transition line, and \(q<-1\) corresponds to super-accelerated expansion. This kind of behavior is also suggested by the observed cosmological evolution of the universe [77].
## V Cosmological models in \(f(Q,T)\) gravity
In this section, we discuss two functional forms of the \(f(Q,T)\) gravity model to analyse the cosmological evolution in the symmetric teleparallel framework. The choices of \(f(Q,T)\) are as follows.
### Case I : \(f(Q,T)=\alpha Q+\beta T\).
The first, simple form of \(f(Q,T)\) is
\[f(Q,T)=\alpha Q+\beta T, \tag{21}\]
where \(\alpha\) and \(\beta\) are constants. So we obtain \(F=f_{Q}=\alpha\) and \(8\pi\bar{G}=f_{T}=\beta\).
Now using the definition of scale factor mentioned in (18) and (21) in (13), (14) and (17), we have the following expressions for the energy density \(\rho\), pressure \(p\), and EOS parameter \(\omega\).
\[\rho =-\Big{(}\frac{1}{8\pi+2\beta}\Big{)}\Big{[}3\alpha\big{(}c+dm \tanh(n-mt)\big{)}^{2}+\frac{\beta dm^{2}\alpha}{8\pi+\beta}\;\text{sech}^{2}( n-mt)\Big{]}, \tag{22}\] \[p =\frac{3\alpha\big{[}c+dm\tanh(n-mt)\big{]}^{2}}{\big{(}8\pi+2 \beta\big{)}}-\frac{\big{(}16\pi+3\beta\big{)}\ dm^{2}\alpha\;\text{sech}^{2}( n-mt)}{\big{(}8\pi+2\beta\big{)}\big{(}8\pi+\beta\big{)}},\] (23) \[\omega =\frac{dm^{2}\big{(}16\pi+3\beta\big{)}\text{sech}^{2}(n-mt)-3 \big{(}8\pi+\beta\big{)}\big{[}c+dm\tanh(n-mt)\big{]}^{2}}{3\big{(}8\pi+\beta \big{)}\big{[}c+dm\tanh(n-mt)\big{]}^{2}+\beta dm^{2}\;\text{sech}^{2}(n-mt)}. \tag{24}\]
The plots of the energy density \(\rho\), pressure \(p\), and equation-of-state parameter \(\omega\) as functions of the cosmic time \(t\) are shown in Figs. 4, 5, and 6, respectively.
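For readers who wish to reproduce these curves, a minimal sketch in Python/NumPy (the choice of tooling and time grid are assumptions of this illustration) evaluating Eqs. (22)-(24) with the parameter values of Figs. 4-6 is:

```python
# Sketch: Model I density, pressure and EOS parameter, Eqs. (22)-(24),
# for c=0.97, d=1, m=0.735, n=10, alpha=0.1, beta=-59.1.
import numpy as np

c, d, m, n = 0.97, 1.0, 0.735, 10.0
alpha, beta = 0.1, -59.1
t = np.linspace(0.0, 30.0, 1000)

th = np.tanh(n - m*t)
sch2 = 1.0 / np.cosh(n - m*t)**2
H2 = (c + d*m*th)**2
pi8 = 8.0 * np.pi

rho = -(3*alpha*H2 + beta*d*m**2*alpha*sch2/(pi8 + beta)) / (pi8 + 2*beta)   # Eq. (22)
p = (3*alpha*H2/(pi8 + 2*beta)
     - (16*np.pi + 3*beta)*d*m**2*alpha*sch2/((pi8 + 2*beta)*(pi8 + beta)))  # Eq. (23)
omega = p / rho                           # equivalent to Eq. (24)

print("omega(early) =", omega[0], " omega(late) =", omega[-1])   # both near -1
```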
### Case II : \(f(Q,T)=uQ^{\epsilon+1}+vT\).
As another example of a cosmological model in the \(f(Q,T)\) gravity, we consider a generic function, given by
\[f(Q,T)=uQ^{\epsilon+1}+vT, \tag{25}\]
where \(u,\ v\) and \(\epsilon\) are constants. In this case, \(F=f_{Q}=(\epsilon+1)uQ^{\epsilon}=(\epsilon+1)u\ 6^{\epsilon}H^{2\epsilon}\) and \(8\pi\bar{G}=v\). The generalised energy density \(\rho\), pressure \(p\) and EOS parameter \(\omega\) in this case are given by
\[\rho=-\Big{(}\frac{1}{8\pi+2v}\Big{)}\big{[}c+dm\tanh(n-mt)\big{]}^{2\epsilon} (1+2\epsilon)u6^{\epsilon}\Bigg{[}3\big{[}c+dm\tanh(n-mt)\big{]}^{2}+\frac{v(1 +\epsilon)}{(8\pi+v)}\ dm^{2}\ \text{sech}^{2}(n-mt)\Bigg{]}, \tag{26}\]
\[p=\frac{3u\ 6^{\epsilon}(1+2\epsilon)\big{[}c+dm\tanh(n-mt)\big{]}^{2}}{ \big{(}8\pi+2v\big{)}}-\frac{6^{\epsilon}(16\pi+3v)(1+\epsilon)(1+2\epsilon) \ udm^{2}\ \text{sech}^{2}(n-mt)}{\big{(}8\pi+2v\big{)}\big{(}8\pi+v\big{)}}\Big{[}c+dm \tanh(n-mt)\Big{]}^{2\epsilon}, \tag{27}\]
\[\omega=-\frac{(8\pi+v)\big{[}c+dm\tanh(n-mt)\big{]}^{-2\epsilon}3 \big{[}c+dm\tanh(n-mt)\big{]}^{2}}{dm^{2}v(1+\epsilon)\ \text{sech}^{2}(n-mt)+3(8\pi+v)\big{[}c+dm\tanh(n-mt)\big{]}^{2\epsilon}}\] \[+\frac{\big{[}c+dm\tanh(n-mt)\big{]}^{-2\epsilon}dm^{2}(16\pi+3v) (1+\epsilon)\ \text{sech}^{2}(n-mt)\big{[}c+dm\tanh(n-mt)\big{]}^{2\epsilon}}{dm^{2}v(1+ \epsilon)\ \text{sech}^{2}(n-mt)+3(8\pi+v)\big{[}c+dm\tanh(n-mt)\big{]}^{2 \epsilon}}. \tag{28}\]
It should be mentioned that for \(\epsilon=0\), the model in (25) reduces to the first example of an \(f(Q,T)\) model: the corresponding energy density, pressure and EOS parameter in (26), (27) and (28) reduce to (22), (23) and (24),
Figure 5: Plot of pressure (\(p\)) as a function of cosmic time (\(t\)) for \(c=0.97,d=1,m=0.735\), & \(n=10\) with \(\alpha=0.1\) and \(\beta=-59.1\).
respectively. The choice \(f(Q,T)=uQ^{\epsilon+1}+vT\) is thus the more general one. The profiles of the energy density, pressure and EOS parameter versus cosmic time for \(\epsilon=0.45\) are shown in Figs. 7, 8 and 9, respectively.
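The stated reduction to Model I at \(\epsilon=0\) can be checked numerically; the short sketch below (Python/NumPy, with the same illustrative parameter values as before) compares Eq. (26) at \(\epsilon=0\) against Eq. (22):

```python
# Sketch: verify that Eq. (26) with epsilon = 0 reproduces Eq. (22),
# identifying u with alpha and v with beta.
import numpy as np

c, d, m, n = 0.97, 1.0, 0.735, 10.0
t = np.linspace(0.0, 30.0, 500)
th = np.tanh(n - m*t)
sch2 = 1.0/np.cosh(n - m*t)**2
H2 = (c + d*m*th)**2
pi8 = 8.0*np.pi

def rho_II(u, v, eps):                      # Eq. (26)
    pref = (c + d*m*th)**(2*eps) * (1 + 2*eps) * u * 6**eps
    return -pref*(3*H2 + v*(1+eps)*d*m**2*sch2/(pi8 + v)) / (pi8 + 2*v)

def rho_I(alpha, beta):                     # Eq. (22)
    return -(3*alpha*H2 + beta*d*m**2*alpha*sch2/(pi8 + beta)) / (pi8 + 2*beta)

print(np.allclose(rho_II(0.1, -59.1, 0.0), rho_I(0.1, -59.1)))   # True
```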
The EOS parameter smoothly rises to its maximum value and then starts decreasing again. From Fig.6, it is observed that \(\omega\) attains a maximum value close to \(1/3\) and then converges to exactly \(-1\), reproducing the recent observations [1; 2; 3; 4; 5]. The same happens for the general case in Fig.9 with the constant \(\epsilon=0.45\), but here \(\omega\) decreases to more negative values, \(\sim-3.8\), accommodating a strong late-time acceleration generated by the negative pressure. Thus, using a simple ansatz for the scale factor, we model a complete cosmic evolutionary history of the universe containing the inflation-, radiation-, matter- and dark-energy-dominated eras in order. Naturally, the ranges of the profiles of the energy density, pressure and equation of state parameter differ between the cases, which motivates working with different model choices in \(f(Q,T)\) gravity to study cosmological scenarios. Finally, from Figs.6 and 9, it is also worth mentioning that the universe starts smoothly with acceleration, goes through a deceleration phase, and finally returns to its late phase of accelerated expansion, produced by dark energy.
## VI Various energy conditions
The energy conditions are linear relationships between the energy density \(\rho\) and the pressure \(p\). They are of crucial significance for understanding the singularity theorems, describing the nature of geodesics, and studying the properties of black holes. The energy conditions are also essential tools for studying the geodesics of the Universe from the cosmological point of view. They are physical constraints on the energy-momentum tensor, obtained by connecting the Raychaudhuri equation [78; 79; 80] with the Einstein field equations.
From the Raychaudhuri equation and the requirement that gravity be attractive, we obtain the following two energy conditions, namely the strong energy condition (SEC) and the null energy condition (NEC), written with a time-like vector field \(v^{\mu}\) and a null vector \(k^{\mu}\), respectively, i.e.,
\[\left(T_{\mu\nu}-\frac{1}{2}Tg_{\mu\nu}\right)v^{\mu}v^{\nu}\geq 0 \implies\rho+3p\geq 0\,\quad\textbf{SEC} \tag{29}\] \[T_{\mu\nu}k^{\mu}k^{\nu}\geq 0\implies\rho+p\geq 0\,\quad \textbf{NEC} \tag{30}\]
On the other hand, the weak energy condition (WEC) is obtained by replacing the null vector field \(k^{\alpha}\) with a time-like vector \(v^{\alpha}\), and the dominant energy condition (DEC) states that matter should follow time-like or null world lines.
\[\rho\geq 0\ \ with\ \ \rho+p\geq 0\,\quad\textbf{WEC} \tag{31}\] \[\rho\geq 0\ \ with\ \ \rho\pm p\geq 0\,\quad\textbf{DEC} \tag{32}\]
where \(T_{\mu\nu}\) is the energy-momentum tensor, \(g_{\mu\nu}\) is the metric tensor, and \(T=T^{\mu\nu}g_{\mu\nu}\) is its trace. Energy conditions in \(f(Q,T)\) gravity have been studied in [39; 40]; here we proceed to construct all the energy conditions on the background of (18).
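A minimal sketch of how the conditions (29)-(32) can be checked numerically on this background, here for Model I with the \(\rho\) and \(p\) of Eqs. (22)-(23) (the grid and parameter values are illustrative assumptions):

```python
# Sketch: numerical check of the energy conditions (29)-(32) for Model I,
# using rho and p from Eqs. (22)-(23); parameters as in Sec. VI A.
import numpy as np

c, d, m, n = 0.97, 1.0, 0.735, 10.0
alpha, beta = 0.1, -59.1
t = np.linspace(0.0, 30.0, 1000)
th = np.tanh(n - m*t)
sch2 = 1.0/np.cosh(n - m*t)**2
H2 = (c + d*m*th)**2
pi8 = 8.0*np.pi

rho = -(3*alpha*H2 + beta*d*m**2*alpha*sch2/(pi8 + beta)) / (pi8 + 2*beta)
p = (3*alpha*H2/(pi8 + 2*beta)
     - (16*np.pi + 3*beta)*d*m**2*alpha*sch2/((pi8 + 2*beta)*(pi8 + beta)))

print("WEC: rho >= 0 everywhere:", bool(np.all(rho >= 0)))
print("NEC: rho + p >= 0 everywhere:", bool(np.all(rho + p >= 0)))
print("DEC: rho - p >= 0 everywhere:", bool(np.all(rho - p >= 0)))
print("SEC: rho + 3p >= 0 everywhere:", bool(np.all(rho + 3*p >= 0)))  # False: SEC violated
```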
### Case-A : \(f(Q,T)=\alpha Q+\beta T\)
For the simplest choice of the \(f(Q,T)=\alpha Q+\beta T\) gravity model (where \(\alpha,\ \beta\) are constants), we already have the expressions for the energy density \(\rho\) and pressure \(p\) ((22), (23)). Since the energy density must be positive, we chose the model parameters \(\alpha=0.1,\ \beta=-59.1\) to ensure a positive energy density and an equation of state parameter consistent with the observations. Given the importance of the energy conditions, we determined the acceptable ranges of the model parameters \(\alpha\) and \(\beta\): \(\alpha\) varies from \(9.1\) to \(29.1\) and \(\beta\) from \(-25.55\) to \(-55.55\).
Among all the energy conditions, it is easy to see from Fig.10 that the strong energy condition (SEC) is violated on cosmological scales, as expected from the recent observations of the accelerating Universe [81; 82]. Varying \(\alpha\) and \(\beta\) changes the SEC behavior. Since the EOS parameter \(\omega\) is sufficiently negative, \(\rho+3p<0\) and thus \(\ddot{a}>0\), which implies a violation of the SEC at present.
On the other hand, both the NEC and the DEC are obeyed in this model (Figs.11 and 12). We have also examined the behavior of the NEC, which is a partial condition of the WEC, having already shown the behavior of the energy density in Fig.4. The validity of the NEC together with a positive energy density thus implies the validity of the WEC.
The complete picture of all the energy conditions is depicted in Fig.13, from which it is observed that the NEC and WEC are satisfied for the present model while the SEC is violated. As discussed for the profile of \(\omega\), the universe starts with acceleration, then smoothly enters the deceleration phase, and again returns to the accelerated phase. Notably, consistent results for the SEC, NEC and DEC are obtained in Figs.10, 11 and 12.
### Case-B : \(f(Q,T)=uQ^{\epsilon+1}+vT\)
For the generic choice of the \(f(Q,T)=uQ^{\epsilon+1}+vT\) gravity model (where \(u,\ v\) are arbitrary constants), we already have the expressions for the energy density \(\rho\) and pressure \(p\) ((26), (27)). As in the previous case, we chose the model parameters \(u=0.1,\ v=-59.1\) with \(\epsilon=0.45\). Given the importance of the energy conditions, we again determined the acceptable ranges of the model parameters: \(u\) varies from \(9.1\) to \(29.1\) and \(v\) from \(-25.55\) to \(-55.55\).
Again, among all the energy conditions, the strong energy condition is violated on cosmological scales (Fig.14). Varying \(u\) and \(v\) changes the SEC behavior. Since the EOS parameter \(\omega\) is sufficiently negative, \(\rho+3p<0\), so the SEC is violated at the present epoch. The negative behavior of the SEC drives the accelerated expansion of the universe.
Figure 12: Variation of DEC with \(\alpha\) and \(\beta\)
On the other hand, neither the NEC nor the DEC is violated in this model; their behaviour is always positive (Figs.15 and 16). Therefore, the validity of the NEC together with a positive energy density results in the
Figure 16: Variation of DEC with model parameters \(u,\ v\) and cosmic time \(t\)
Figure 14: Variation of SEC with model parameters \(u,\ v\) and cosmic time \(t\)
Figure 15: Variation of NEC with model parameters \(u,\ v\) and cosmic time \(t\)
validation of WEC.
Similarly, for the choice of the \(f(Q,T)=uQ^{1.45}+vT\) gravity model with the constants \(u=0.1\) and \(v=-59.1\), the portrait of all the energy conditions is shown in Fig.17. The same result is found here, i.e., a violation of the SEC together with the non-violation of the NEC and DEC, consistent with the recent observational data on the accelerated expansion.
## VII Scalar field description
In this section, we construct a scalar field description for the choice of scale factor (18) in the \(f(Q,T)\) gravity model for the two examples considered above. This description provides a more fundamental formulation for our study. To explain the early acceleration of the universe, i.e., the inflationary framework, the most appropriate and popular approach is to consider a scalar field, known as the inflaton, governed by a specific potential. The action for the dynamics of inflation can be written as
\[\mathcal{A}=\int d^{4}x\sqrt{-g}\Big{[}\frac{R}{2\kappa}+L_{\phi}^{matt}\Big{]}. \tag{33}\]
Here \(g\) is the determinant of the metric tensor \(g_{\mu\nu}\), \(R\) is the Ricci scalar, and \(\kappa=8\pi\) is a constant (\(G=1\)). The matter Lagrangian \(L_{\phi}^{matt}\) describes the inflaton field \(\phi(t)\) minimally coupled to gravity, and is defined as follows.
\[L_{\phi}^{matt}=-\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\partial_{\nu}\phi-V( \phi), \tag{34}\]
where \(V(\phi)\) is the potential of the scalar field \(\phi(t)\), which can depend on one or more free parameters [83].
The energy density \(\rho_{\phi}\) and the pressure \(p_{\phi}\) in the scalar field model description can be described as
\[\rho_{\phi}=\frac{1}{2}\dot{\phi}^{2}(t)+V(\phi),\hskip 28.452756ptp_{\phi}= \frac{1}{2}\dot{\phi}^{2}(t)-V(\phi), \tag{35}\]
where one can assume that the energy-momentum tensor of the inflaton field behaves like that of a perfect fluid with a linear barotropic equation of state \(p_{\phi}=w\rho_{\phi}\). On the other hand, the trace of the energy-momentum tensor in this scalar field description reads
\[T_{\phi}\equiv-\rho_{\phi}+3p_{\phi}=\dot{\phi}^{2}(t)-4V(\phi). \tag{36}\]
### Case-A
For the linear model \(f(Q,T)=\alpha Q+\beta T\) using (35), (15) and (16) can be rewritten as
\[3H^{2}=-\frac{(8\pi+\beta)\dot{\phi}_{1}^{2}(t)+2V(\phi_{1})(8 \pi+2\beta)}{2\alpha} \tag{37}\] \[4\dot{H}+3H^{2}=\frac{3(8\pi+\beta)\dot{\phi}_{1}^{2}(t)-2V(\phi _{1})(8\pi+2\beta)}{2\alpha} \tag{38}\]
Figure 17: Plot of Energy Conditions as a function of cosmic time \(t\) for \(c=0.97\), \(d=1\), \(m=0.735\), \(n=10\), \(u=0.1\), \(\&\)\(v=-59.1\)
From (37) and (38), one can simply find out the relation between the Hubble parameter and the inflaton field \(\phi_{1}\) as
\[\dot{H}=\frac{\dot{\phi}_{1}^{2}}{2\alpha}(8\pi+\beta), \tag{39}\]
and we exclude the case for \(\beta=-8\pi\). Again, using (37) and (39), we obtain the modified Klein-Gordon equation in this scalar field model as
\[\ddot{\phi_{1}}+3H\dot{\phi_{1}}+\frac{(8\pi+2\beta)}{(8\pi+\beta)}\frac{dV}{d \phi_{1}}=0. \tag{40}\]
Solving (39) with \(\alpha=0.1\) and \(\beta=-59.1\) yields the following.
\[\phi_{1}(t)=-0.0767\tan^{-1}\big{[}\sinh(n-mt)\big{]}, \tag{41}\] \[H(\phi_{1})=c-dm\sin(13.0378~{}\phi_{1}),\] (42) \[V(\phi_{1})=0.0005\Big{[}6\big{\{}c-dm\sin(13.0378~{}\phi_{1}) \big{\}}^{2}-2~{}dm^{2}\cos^{2}(13.0378~{}\phi_{1})\Big{]}. \tag{43}\]
This potential \(V(\phi_{1})\) may be seen as a combination of sine and cosine functions. The parametric plot of \(V(\phi_{1})\) vs. \(\phi_{1}\) is given in Fig.18. It is clear from Fig.18 that during early times \(V(\phi_{1})\) varies slowly with the inflaton field \(\phi_{1}\); at the end of inflation the field slowly rolls down, and the potential becomes constant in the dark-energy-dominated era, i.e., at late times. We may conclude that this constant potential implies cosmological-constant-like behavior during the late times of the universe's evolution.
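A short sketch reproducing the parametric curve of Fig. 18 from Eqs. (41)-(43) (Python/Matplotlib assumed; the numerical prefactors are those quoted above):

```python
# Sketch: parametric plot of V(phi_1) vs. phi_1, Eqs. (41)-(43),
# with c=0.97, d=1, m=0.735, n=10.
import numpy as np
import matplotlib.pyplot as plt

c, d, m, n = 0.97, 1.0, 0.735, 10.0
t = np.linspace(0.0, 30.0, 1000)

phi1 = -0.0767 * np.arctan(np.sinh(n - m*t))                     # Eq. (41)
H = c - d*m*np.sin(13.0378*phi1)                                 # Eq. (42)
V = 0.0005*(6*H**2 - 2*d*m**2*np.cos(13.0378*phi1)**2)           # Eq. (43)

plt.plot(phi1, V)
plt.xlabel(r"$\phi_1$"); plt.ylabel(r"$V(\phi_1)$")
plt.show()
```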
### Case-B
Following the same procedure as in Case-A, we obtain the effective energy density and effective pressure for the more generic \(f(Q,T)=uQ^{\epsilon+1}+vT\) as follows.
\[3H^{2}\equiv 8\pi\rho_{eff}=\frac{-(8\pi+v)\dot{\phi_{2}^{2}}(t)-2 V(\phi_{2})(8\pi+2v)}{2u~{}(1+2\epsilon)~{}(\ddot{\phi}_{2}^{2}+2V)^{ \epsilon}}, \tag{44}\] \[2\dot{H}+3H^{2}\equiv-8\pi p_{eff}=\frac{(8\pi+v)\dot{\phi_{2}^{ 2}}(t)-2V(\phi_{2})(8\pi+2v)-\epsilon\big{(}(8\pi+v)\dot{\phi_{2}^{2}}+2V( \phi_{2})(8\pi+2v)\big{)}}{2u~{}(1+\epsilon)(1+2\epsilon)~{}(8\pi)^{\epsilon} ~{}\big{(}\dot{\phi}_{2}^{2}+2V\big{)}^{\epsilon}}, \tag{45}\]
and from (44), (45), one can simply obtain an important relation i.e.,
\[H^{2\epsilon}\dot{H}=\frac{(8\pi+v)\dot{\phi_{2}^{2}}}{2u~{}6^{ \epsilon}(1+2\epsilon)(1+\epsilon)}. \tag{46}\]
By considering a binomial expansion, neglecting the higher-order terms in (46), and excluding the case \(8\pi=-v\), we solved (46) numerically with \(u=0.1,\ v=-59.1\) and \(\epsilon=0.45\). The expression for the scalar field in terms of cosmic time \(t\) is given by
\[\phi_{2}(t)=0.1874\Big{[}0.3421\text{sech}(n-mt)-\tan^{-1}\big{[}\sinh(n-mt) \big{]}\Big{]}. \tag{47}\]
Due to the extra term in the expression for \(\phi_{2}(t)\), it is difficult to express the Hubble parameter in terms of \(\phi_{2}\) alone. To proceed further, we introduce the scalar field \(\phi_{1}(t)\) of the first model into the second model and compare the two models. Using (41), (47) can be written as
\[\phi_{2}(t)=\Big{[}0.0641\text{sech}(n-mt)+2.4432\phi_{1}\Big{]}. \tag{48}\]
In this case, the Hubble parameter and the potential can also be written as,
\[H=c+dm\sqrt{1-\Big{(}\frac{\phi_{2}-2.4432\phi_{1}}{0.0641} \Big{)}^{2}}, \tag{49}\] \[V=-\frac{u(1+2\epsilon)6^{\epsilon}H^{2\epsilon}}{(2+\epsilon)( 8\pi+2v)}\Big{[}(1+\epsilon)(2\dot{H}+3H^{2})+3H^{2}+(1+\epsilon)\epsilon\dot {H}\Big{]}. \tag{50}\]
From the scalar field solution of the second model, one observes that an additional term arises from the change in the Lagrangian \(f(Q,T)\) compared to the first model. Numerical analysis shows that model-1 is able to describe inflationary cosmology better than model-2, owing to the linearity of its Lagrangian. Further, the scalar field solution of the first model dominates that of the second, since the scalar field description of the second model depends on the first. In addition, we find that the potential of the second model is symmetric in nature: it initially increases, attains a maximum value when inflation ends, and later converges towards zero. Overall, we may say that the linear model performs better than the non-linear one.
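As a sketch of the numerical step behind Eq. (47), one can integrate \(\dot{\phi}_{2}\) obtained from Eq. (46) directly; the positive branch of the square root and a zero integration constant are assumptions of this illustration (SciPy is assumed available):

```python
# Sketch: numerical integration behind Eq. (47). From Eq. (46),
# phi_dot^2 = 2 u 6^eps (1+2eps)(1+eps) H^(2eps) Hdot / (8 pi + v).
import numpy as np
from scipy.integrate import cumulative_trapezoid

c, d, m, n = 0.97, 1.0, 0.735, 10.0
u, v, eps = 0.1, -59.1, 0.45
t = np.linspace(0.0, 30.0, 5000)

H = c + d*m*np.tanh(n - m*t)
Hdot = -d*m**2 / np.cosh(n - m*t)**2

phidot2 = 2*u*6**eps*(1 + 2*eps)*(1 + eps) * H**(2*eps) * Hdot / (8*np.pi + v)
phidot = np.sqrt(np.clip(phidot2, 0.0, None))   # Hdot < 0 and 8pi+v < 0: ratio > 0
phi2 = cumulative_trapezoid(phidot, t, initial=0.0)
# phi2 is defined up to an overall sign and an additive constant, cf. Eq. (47).
```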
## VIII Cosmic evolution in \(\omega-\omega^{\prime}\) phase plane
Cosmological evolution can also be presented in the well-known \(\omega-\omega^{\prime}\) phase space representation. In this representation, taking the scale factor as the parameter along the evolutionary path, one may define
\[\omega^{\prime}=\frac{d\omega}{d\ln a}. \tag{51}\]
The evolution in this phase plane may be characterized by two regions, namely the thawing region and the freezing region. The criterion identifying the thawing region is \(\omega^{\prime}>0\) with \(\omega<0\), while for the freezing region it is \(\omega^{\prime}<0\) with \(\omega<0\). In the thawing region, \(\omega\) is an increasing function of time, so that the evolution of the universe terminates in a de Sitter-like stage. In the freezing case, on the other hand, \(\omega\) decreases with time, so its behaviour is asymptotic and depends on the shape of the potential [84; 85; 86]. Fig.19 represents the evolution of the universe in the \(\omega-\omega^{\prime}\) phase space. In Fig.19, the blue curve represents our first model, i.e., \(f(Q,T)=\alpha Q+\beta T\), while the red trajectory corresponds to the choice of the second model, \(f(Q,T)=uQ^{\epsilon+1}+vT\) with \(\epsilon=0.45\).
From Fig.19 (blue trajectory), it is clear that the universe starts smoothly with acceleration in the thawing region and then produces its second phase of accelerated expansion in the freezing region during late times. In between these two regions, the EOS parameter shows some positive behaviour (reaching a maximum value of \(\omega\sim\frac{1}{3}\)) in the decelerating era, where \(\omega^{\prime}\) first increases and then decreases. The same story holds for the red curve, for which the power of \(Q\) in the model is \(1.45\); there the freezing region is larger than the thawing region due to the stronger non-metricity effect in the model. As a whole, one may say that during early times the universe exhibited thawing behavior, while at present it exhibits freezing behavior.
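The trajectory of Fig. 19 can be sketched numerically by combining Eq. (24) with the chain rule \(\omega^{\prime}=\dot{\omega}/H\), which follows from Eq. (51) since \(d\ln a=H\,dt\); a minimal version for Model I (illustrative grid and parameters) is:

```python
# Sketch: omega-omega' trajectory for Model I (blue curve of Fig. 19),
# with omega from Eq. (24) and omega' = (d omega/dt)/H from Eq. (51).
import numpy as np
import matplotlib.pyplot as plt

c, d, m, n = 0.97, 1.0, 0.735, 10.0
alpha, beta = 0.1, -59.1
t = np.linspace(0.0, 30.0, 5000)
th = np.tanh(n - m*t)
sch2 = 1.0/np.cosh(n - m*t)**2
H = c + d*m*th
pi8 = 8.0*np.pi

omega = ((d*m**2*(16*np.pi + 3*beta)*sch2 - 3*(pi8 + beta)*H**2)
         / (3*(pi8 + beta)*H**2 + beta*d*m**2*sch2))          # Eq. (24)
omega_prime = np.gradient(omega, t) / H                        # d/dln a = (1/H) d/dt

plt.plot(omega, omega_prime)
plt.xlabel(r"$\omega$"); plt.ylabel(r"$\omega'$")
plt.show()
```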
## IX Statefinder Diagnostics
As our universe is accelerating due to an unknown component called dark energy, the unknown nature of dark energy raises many problems in modern cosmology, and the question remains: what is dark energy? Many dark energy models, such as \(\Lambda\)CDM, HDE, SCDM, CG, quintessence, and k-essence, have been proposed to understand its intriguing nature, and these models behave differently from one another. The \(\{r,s\}\) parametrization technique is used to distinguish between these kinds of dark energy models [87; 88].
Define \(r\) and \(s\) as follows.
\[r =\frac{\dddot{a}}{aH^{3}}, \tag{52}\] \[s =\frac{2}{3}\frac{r-1}{2q-1}, \tag{53}\]
where \(q\neq\frac{1}{2}\). Different pairs \(\{r,s\}\) represent different kinds of dark energy models.
(a). For the \(\Lambda\)CDM model, the pair is \(\left\{r=1,s=0\right\}\).
(b). For the HDE model, the pair is \(\left\{r=1,s=\frac{2}{3}\right\}\).
(c). For the SCDM model, the pair is \(\left\{r=1,s=1\right\}\).
(d). For the Quintessence model, the pair is \(\left\{r<1,s>0\right\}\).
(e). For the CG model, the pair is \(\left\{r>1,s<0\right\}\).
One studies the convergence and divergence of the \(r-s\) trajectory corresponding to a given cosmological dark energy model; the deviation from \(\left\{1,0\right\}\) quantifies the deviation from the \(\Lambda\)CDM model. Furthermore, the values of \(r\) and \(s\) can be constrained from observations [89; 90]. These diagnostics will evidently be valuable for characterizing various dark energy models in the near future.
Now using (18) in (52), (53) we can rewrite the \(r,s\) as,
\[r=\frac{c^{3}+dm\tanh(n-mt)\left\{3c^{2}+(d+1)m\tanh(n-mt)[3c+(d+2)m\tanh(n-mt)]-(3d+2)m^{2}\right\}-3cdm^{2}}{[c+dm\tanh(n-mt)]^{3}}, \tag{54}\]
\[s=\frac{2dm^{2}\text{sech}^{2}(n-mt)[3c+(3d+2)m\tanh(n-mt)]}{3[c+dm\tanh(n-mt)]\left\{3c^{2}+dm\tanh(n-mt)[6c+(3d+2)m\tanh(n-mt)]-2dm^{2}\right\}}. \tag{55}\]
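A minimal sketch tracing the statefinder trajectory of Eqs. (54)-(55) (with the same illustrative parameter values as before):

```python
# Sketch: statefinder trajectory {r(t), s(t)} from Eqs. (54)-(55).
import numpy as np
import matplotlib.pyplot as plt

c, d, m, n = 0.97, 1.0, 0.735, 10.0
t = np.linspace(0.0, 30.0, 2000)
th = np.tanh(n - m*t)
sch2 = 1.0/np.cosh(n - m*t)**2

r = (c**3 + d*m*th*(3*c**2 + (d+1)*m*th*(3*c + (d+2)*m*th)
     - (3*d+2)*m**2) - 3*c*d*m**2) / (c + d*m*th)**3                      # Eq. (54)
s = (2*d*m**2*sch2*(3*c + (3*d+2)*m*th)
     / (3*(c + d*m*th)*(3*c**2 + d*m*th*(6*c + (3*d+2)*m*th) - 2*d*m**2)))  # Eq. (55)

plt.plot(s, r)
plt.scatter([0], [1], marker="*")   # LCDM fixed point {r=1, s=0}
plt.xlabel("s"); plt.ylabel("r")
plt.show()
```

The trajectory computed this way starts and ends at the \(\Lambda\)CDM point \(\{r=1,s=0\}\), consistent with the description of Fig. 20 below.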
In Fig.20, the parametrization of \(r\) and \(s\) is shown in the \((r,s)\) plane, with the arrow indicating the direction of the trajectory. From Fig.20, it is clearly observed that the trajectory initially diverges from the \(\Lambda\)CDM model and later converges back to it, with the evolution of the trajectory lying entirely in the quintessence era. On the other hand, the parametrization of \(r\) and \(q\) is shown in Fig.21, from which we observe that our model starts from the de Sitter universe, initially moves towards Chaplygin gas behavior, represented by \(r>1\), then passes to quintessence, and finally converges back to the de Sitter universe.
## X Conclusion
In this manuscript, we have aimed to explore a novel cosmological scenario in the context of \(f(Q,T)\) gravity. As we know, our universe has passed through various evolutionary phases from early times to the present day, and many cosmological models have been presented to discuss those scenarios separately within gravitational theories. However, we are still in search of a cosmological model that can explain the complete evolution process of the universe, from early inflation to late-time cosmic acceleration. In this regard, we have presented two \(f(Q,T)\) cosmological models and discussed their matter evolution profiles and various energy conditions.
The equation of state parameter (\(\omega\)) plays a significant role in describing the properties of the fluid in a cosmological model, since different values of \(\omega\) correspond to different matter profiles: radiation-dominated (\(\omega=1/3\)), matter-dominated (\(\omega=0\)), quintessence (\(-1<\omega<-1/3\)), \(\Lambda\)CDM (\(\omega=-1\)), and phantom (\(\omega<-1\)). Therefore, we have examined the equation of state parameter over the cosmic time evolution for both models. It is observed that both models show accelerated expansion during the early- and late-time evolution. We also observed that model-I shows \(\Lambda\)CDM-type expansion at early and late times, whereas model-II shows quintessence-type behavior at early times and phantom-like behavior at late times. A sufficient amount of negative pressure in this model may be responsible for the accelerated expansion during late times. The complete profiles of the EOS parameter suggest that the evolution of the universe started with early-time inflation, then passed through a decelerated phase, and later entered the second phase of accelerated expansion.
Furthermore, we have presented various energy conditions to check the viability of our cosmological models. From all the energy condition profiles, we have seen that the energy densities are positive throughout the evolution, and the WEC, NEC, and DEC are satisfied for both models. We have also seen that the SEC is violated during the early- and late-time evolution, in agreement with accelerated expansion, whereas both models satisfy the SEC during the decelerated phase of the universe. In addition, we have re-examined the dark energy behavior of our cosmological solutions through the \(\omega-\omega^{\prime}\) phase space plane, characterized by thawing and freezing regions, and also through statefinder diagnostics. This behavior is further confirmed by the scalar field description of the models in the background of \(f(Q,T)\) gravity. We also observed that the potential \(V(\phi_{1})\) approaches a constant value as \(\phi_{1}\) varies, which indicates cosmological-constant-like behavior at late times of the cosmic expansion. Besides this, we observed that our first model provides a better explanation of the inflationary epoch than the second one, owing to the strong non-linear effects present in the Lagrangian of the second model.
In conclusion, our models present a complete evolution process of the universe from early to late times in the context of \(f(Q,T)\) gravity. We hope this study sheds some light on the idea of describing the whole evolution process of the universe through a single cosmological model. In the future, it would be interesting to test these types of models against observations, which may also uncover some new physics.
## XI Acknowledgement
S.D is thankful to Raj Kumar Das (senior research fellow, physics and applied mathematics unit, Indian Statistical Institute, Kolkata) for encouraging the aspects of the symmetric teleparallel theory of gravity.
2310.17525 | Measuring Wigner functions of quantum states of light in the undergraduate laboratory | In this work, we present an educational activity aimed at measuring the Wigner distribution functions of quantum states of light in the undergraduate laboratory. This project was conceived by students from various courses within the physics undergraduate curriculum, and its outcomes were used in an introductory Quantum Optics course at the Universidad de los Andes in Bogotá, Colombia. The activity entails a two-hour laboratory practice in which students engage with a pre-aligned experimental setup. They subsequently employ an open-access, custom-made computational graphical user interface to reconstruct the Wigner distribution function for various quantum states of light. Given that the testing phase coincided with the COVID-19 pandemic, we incorporated the capacity to analyze simulated data into the computational user interface. The activity is now part of the course syllabus and its virtual component has proven to be highly valuable for the implementation of distance learning in quantum optics. | Juan-Rafael Álvarez, Andrés Martínez Silva, Alejandra Valencia | 2023-10-26T16:17:54Z | http://arxiv.org/abs/2310.17525v1 |

# Measuring Wigner functions of quantum states of light in the undergraduate laboratory
quantum optics. | Juan-Rafael Álvarez, Andrés Martínez Silva, Alejandra Valencia | 2023-10-26T16:17:54Z | http://arxiv.org/abs/2310.17525v1 | # Measuring Wigner functions of quantum states of light in the undergraduate laboratory
###### Abstract
In this work, we present an educational activity aimed at measuring the Wigner distribution functions of quantum states of light in the undergraduate laboratory. This project was conceived by students from various courses within the physics undergraduate curriculum, and its outcomes were used in an introductory Quantum Optics course at the Universidad de los Andes in Bogota, Colombia. The activity entails a two-hour laboratory practice in which students engage with a pre-aligned experimental setup. They subsequently employ an open-access, custom-made computational graphical user interface to reconstruct the Wigner distribution function for various quantum states of light. Given that the testing phase coincided with the COVID-19 pandemic, we incorporated the capacity to analyze simulated data into the computational user interface. The activity is now part of the course syllabus and its virtual component has proven to be highly valuable for the implementation of distance learning in quantum optics.
## I Introduction
Understanding and characterizing the properties of light beams is a key endeavour in optical physics. Light sources can vary in many ways, having, among others, different amplitudes, frequencies, phases, and polarizations. Such variations can be present within the same physical objects and define the different states that can be assigned to light. In quantum optics, a field of particular relevance, the measurement of amplitude and phase behaviors in different states of light plays a critical role. In this context, the Wigner distribution function (WDF) is a commonly utilized tool for characterizing the latter behaviors in a phase space [1; 2]. WDFs are typically obtained through tomographic reconstruction, a technique that retrieves a three-dimensional image of an object by utilizing partial information from two-dimensional projections. An established method for performing tomographic reconstructions is Computed Axial Tomography (CAT), initially introduced in the realm of medical imaging [3; 4].
In this article, we provide an account of the development and execution of a laboratory practice designed to reconstruct the WDF for various quantum states of light, employing a tomographic reconstruction technique. We introduce an experimental practice that was devised for and by undergraduate physics students. The development spanned an 18-month period during which both the experimental and computational facets of the activity were integrated. Subsequently, the exercise underwent testing by undergraduate and early graduate students participating in an introductory quantum optics course at the Universidad de los Andes (Uniandes) in Bogota, Colombia.
All the computer programming required for this practice was developed in Python 3, featuring a user-friendly graphical user interface (GUI) that simplifies its use. The code is freely available and can be accessed at [https://github.com/amartinez1224/quantum-tomography](https://github.com/amartinez1224/quantum-tomography). Given that the testing phase coincided with the COVID-19 pandemic, the activity was designed to offer a similar experience for both in-person and virtual experimental sessions, enhancing accessibility for students participating in virtual classes. Despite the resumption of in-person classes at the time of writing, this activity remains valuable, as it continues to serve the needs of distance learning.
We anticipate that this activity can be of valuable use to other educators and researchers in multiple ways. Firstly, it can serve as pedagogical material to elucidate the concepts related to the measurement of quadratures, the understanding of Radon transforms, and their pivotal role in the reconstruction of Wigner Distribution Functions (WDFs) in quantum optics, complementing existing references [5; 6; 7; 8]. Secondly, it can function as a guide for an experimental practice suitable for undergraduate and early graduate students in quantum optics classes. Lastly, it can act as a versatile tool for remotely training the quantum workforce through distance learning.
This paper is structured as follows: In Section II, we delineate various tools for characterizing signals, such as quadratures and Wigner distribution functions, and elaborate on the methods for their measurement. Section III offers an overview of diverse quantum states of light, consolidating them in a comprehensive table. Section IV delves into the details of the experimental and computational aspects of the educational activity, while Section V reports on the hands-on and virtual activities developed in collaboration with undergraduate and early graduate students. Finally, in Section VI, we present our conclusions and outline potential perspectives.
Tools for state characterization and their measurement
Several conceptual tools are available for describing the properties of light coming from signals arriving at detectors. This article primarily concentrates on two of these tools: Quadratures, employed to characterize the behavior of an electromagnetic wave in terms of conjugate variables, and Wigner Distribution Functions (WDFs), which can represent signals within a phase space.
### Quadratures
Electric fields associated to electromagnetic waves can be mathematically described as solutions to the wave equation derived from Maxwell's equations [9]. Generally, solutions to the wave equation can be decomposed as the sum of monochromatic waves with polarization vector \(\mathbf{e}\) and frequency \(\omega\). A mathematical approach generally taken to describe one such monochromatic wave is to use a complex amplitude: the electric field is considered to be the real part of a vector of complex numbers that evolves in time, written in the form:
\[\mathbf{E}(t)=\mathrm{Re}\left[\tilde{E}_{0}e^{i\omega t}\right]\mathbf{e}, \tag{1}\]
where \(\tilde{E}_{0}=E_{0}e^{i\theta}=\left|\tilde{E}_{0}\right|e^{i\theta}\) is a complex number known as the _complex amplitude_, or phasor, of the field, and \(\theta\) is its initial phase. Another approach to describe the field is based on the concept of quadratures, where the exponential in Eq. 1 can be expanded to rewrite \(\mathbf{E}(t)\) as:
\[\mathbf{E}(t)=\left[X\cos\omega t-Y\sin\omega t\right]\mathbf{e}, \tag{2}\]
where \(X=\mathrm{Re}\left(\tilde{E}_{0}\right)\) and \(Y=\mathrm{Im}\left(\tilde{E}_{0}\right)\) are two complementary _quadratures of the field_.
The quadratures can be thought of as the natural variables for a harmonic oscillator: In classical mechanics, the Hamiltonian for a harmonic oscillator with mass \(m\) and angular frequency \(\omega\) is given by
\[H=\frac{\left[p(t)\right]^{2}}{2m}+\frac{1}{2}m\omega^{2}[x(t)]^{2}, \tag{3}\]
where \(x(t)\) and \(p(t)\) denote the position and momentum of the oscillator at time \(t\). The equation of motion associated to this Hamiltonian is given by:
\[\ddot{x}=-\omega^{2}x \tag{4}\]
with solutions
\[x\left(t\right)=x\left(0\right)\cos\left(\omega t\right)+\frac{p\left(0\right) }{m\omega}\sin\left(\omega t\right). \tag{5}\]
\(x(t)\) has the same form as Eq. 2. Therefore, the quadratures \(X\) and \(Y\) associated with a monochromatic electric field can be thought of as the position and momentum of a harmonic oscillator. It is in this sense that, in the literature [10], the quadratures of the electromagnetic field are referred to as the generalized position and momentum of a field; they allow the representation of optical fields in a diagram of conjugate variables (i.e., Fourier transforms of each other), \(X\) and \(Y\), called a phase space.
The position and momentum variables that have been defined so far correspond to two conjugate variables that can be used to represent the information conveyed by \(\mathbf{E}(t).\) Nevertheless, said information could also be represented by other pairs of conjugate variables, called generalized quadratures \(\{X_{\alpha},X_{\alpha+\pi/2}\}\), which are defined as
\[X_{\alpha}=X\cos\alpha+Y\sin\alpha. \tag{6}\]
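A small numerical illustration of Eq. (6), sketched in Python (the phasor value is an arbitrary assumption): rotating the quadrature angle \(\alpha\) sweeps continuously between \(X\) and \(Y\).

```python
# Small illustration of Eq. (6): the generalized quadrature X_alpha
# interpolates between X (alpha = 0) and Y (alpha = pi/2).
import numpy as np

E0 = 2.0 * np.exp(1j * np.pi / 3)            # example phasor (assumed value)
X, Y = E0.real, E0.imag

for alpha in (0.0, np.pi/4, np.pi/2):
    X_alpha = X*np.cos(alpha) + Y*np.sin(alpha)       # Eq. (6)
    print(f"alpha = {alpha:.4f} rad  ->  X_alpha = {X_alpha:.4f}")
```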
Quadratures are measured using different experimental approaches that depend on the frequency of the electromagnetic field under study: In the range of frequencies \(f=\omega/2\pi\) between kHz and MHz, the use of a lock-in amplifier for measuring the quadratures of electric fields is a well-established technique, with commercially available devices that are used every day in research laboratories [5; 6; 8].
The working principle of a lock-in amplifier is described as follows: Following Fig. 1(a), an input signal of interest with a known frequency \(\omega_{S}\) but with unknown amplitude \(E_{S}\) and phase \(\theta_{S}\), denoted by \(E_{\mathrm{signal}}\left(t\right)=E_{S}\cos\left(\omega_{S}t+\theta_{S}\right)\), is mixed with a well-defined local oscillator signal \(E_{\mathrm{LO}}\left(t\right)=E_{R}\cos\left(\omega_{R}t+\theta_{R}\right)\), in which the frequency \(\omega_{R}\), amplitude \(E_{R}\) and phase \(\theta_{R}\) are well known. The result of this mixing, which mathematically corresponds to the product of the two signals, can be written as:
\[\begin{split} E_{\mathrm{mix}}(t)=\frac{E_{S}E_{R}}{2}\left[\cos\left(\left(\omega_{S}+\omega_{R}\right)t+\left(\theta_{S}+\theta_{R}\right)\right)+\right.\\ \left.\cos\left(\left(\omega_{S}-\omega_{R}\right)t+\left(\theta_{S}-\theta_{R}\right)\right)\right]\end{split} \tag{7}\]
The lock-in amplifier works in the _homodyne_ detection regime, meaning that the frequencies of the signal and the local oscillator are set to be equal: \(\omega_{S}=\omega_{R}=\omega\). Additionally, the lock-in amplifier uses a low-pass filter that only keeps the DC component of the mixed signal. Therefore, after mixing and filtering, Eq. 7 becomes:
\[E_{\mathrm{mix}}=\frac{E_{S}E_{R}}{2}\cos\left(\theta_{\mathrm{LO}}\right), \tag{8}\]
where \(\theta_{\mathrm{LO}}=\theta_{R}-\theta_{S}\) is the relative phase between the fields \(E_{\mathrm{signal}}\) and \(E_{\mathrm{LO}}\).
Since the quadratures of the signal field are \(X_{S}=E_{S}\cos\theta_{S}\) and \(Y_{S}=E_{S}\sin\theta_{S}\), expanding Eq. 8 with \(\theta_{\mathrm{LO}}=\theta_{R}-\theta_{S}\) gives \(E_{\mathrm{mix}}=\frac{E_{R}}{2}\left[X_{S}\cos\theta_{R}+Y_{S}\sin\theta_{R}\right]\). Choosing the local oscillator phase \(\theta_{R}=0\) and \(\theta_{R}=\pi/2\) then enables the retrieval of the values of \(X_{S}\) and \(Y_{S}\) according to the following equations:

\[E_{\mathrm{mix}}^{\left(\theta_{R}=0\right)} =\frac{1}{2}E_{R}X_{S}, \tag{9}\] \[E_{\mathrm{mix}}^{\left(\theta_{R}=\pi/2\right)} =\frac{1}{2}E_{R}Y_{S}. \tag{10}\]
This shows that, with an appropriate choice of the local oscillator phase, the lock-in amplifier enables the retrieval of the quadratures \(X_{S}\) and \(Y_{S}\).
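The following sketch simulates this ideal lock-in measurement (signal frequency, amplitude and phase are illustrative assumptions; the time average over many periods plays the role of the low-pass filter):

```python
# Sketch of an ideal lock-in measurement: mixing with the LO and time
# averaging (the low-pass filter) recovers X_S and Y_S via Eqs. (9)-(10).
import numpy as np

omega = 2*np.pi*1.0e5                        # 100 kHz signal (assumed)
E_S, theta_S = 0.7, 0.9                      # "unknown" amplitude and phase
E_R = 1.0                                    # local oscillator amplitude

t = np.linspace(0.0, 200*2*np.pi/omega, 200001)     # 200 oscillation periods
signal = E_S * np.cos(omega*t + theta_S)

def lockin(theta_R):
    """Mix with the LO and keep the DC part; the mean emulates the filter."""
    lo = E_R * np.cos(omega*t + theta_R)
    return 2.0 * np.mean(signal * lo) / E_R  # undo the E_R/2 prefactor

print(lockin(0.0), E_S*np.cos(theta_S))      # X_S two ways: should agree
print(lockin(np.pi/2), E_S*np.sin(theta_S))  # Y_S two ways: should agree
```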
When one is interested in measuring electromagnetic fields in the optical domain (i.e., with THz frequencies), there is a limitation in the speed of the electronics, which at the time of writing does not exceed the order of a few GHz. However, there is an optical implementation of homodyne detection that gives access to the quadratures of light. This technique has been used to characterize different quantum states of light [11; 12; 13]. The setup to perform optical homodyne detection is shown in Fig. 1(b). The local oscillator (LO) and the light source to be characterized (S) are combined in the two input ports of a beam splitter (BS) and then detected using photo-detectors (D\({}_{1}\) and D\({}_{2}\)). The relative phase between both fields can be changed by using a phase shifter (PS). The detected currents are subtracted, eliminating classical fluctuations, which are correlated in both outputs of the BS. The complex amplitudes of the fields at D\({}_{1}\) and D\({}_{2}\) can be written as [14]
\[\tilde{E}_{\rm D1/D2}=\frac{1}{\sqrt{2}}(\tilde{E}_{\rm signal}\pm e^{i\theta _{\rm LO}}\tilde{E}_{\rm LO}). \tag{11}\]
Since the current at the detectors is proportional to the intensity of light arriving at them and the detectors have a finite response time compared with the fast oscillations of the optical fields, the time-averaged subtraction of the currents associated to each detector, \(i_{-}\), can be written in terms of the quadratures of the signal field:
\[\begin{split} i_{-}&=\left\langle I_{1}-I_{2} \right\rangle_{t}\\ &\propto\tilde{E}_{\rm LO}\left[X_{S}\cos\left(\theta_{\rm LO} \right)+Y_{S}\sin\left(\theta_{\rm LO}\right)\right].\end{split} \tag{12}\]
Here, \(\left\langle\cdot\right\rangle_{t}\) denotes a time averaging.
The result in Eq. (12) shows that the subtracted current allows the measurement of the quadratures of optical signals. Analogous to the case of the lock-in amplifier, this is achieved by changing the relative phase between the fields: For \(\theta_{\rm LO}=0\) and \(\theta_{\rm LO}=\pi/2\), the values of \(X_{S}\) and \(Y_{S}\) are retrieved, respectively.
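A minimal simulation of Eqs. (11)-(12) (field amplitudes are illustrative assumptions, and overall detector proportionality constants are dropped) confirms that the subtracted signal traces out the quadratures as the LO phase is swept:

```python
# Sketch of balanced homodyne detection following Eqs. (11)-(12): the
# subtracted photocurrent follows the signal quadratures vs. LO phase.
import numpy as np

E_signal = 0.5 * np.exp(1j * 0.8)         # weak signal field (assumed)
E_LO = 10.0                               # strong local oscillator (assumed)
theta_LO = np.linspace(0, 2*np.pi, 100)

E1 = (E_signal + np.exp(1j*theta_LO)*E_LO) / np.sqrt(2)   # Eq. (11), + port
E2 = (E_signal - np.exp(1j*theta_LO)*E_LO) / np.sqrt(2)   # Eq. (11), - port
i_minus = np.abs(E1)**2 - np.abs(E2)**2   # time-averaged I1 - I2, up to constants

X_S, Y_S = E_signal.real, E_signal.imag
expected = 2*E_LO*(X_S*np.cos(theta_LO) + Y_S*np.sin(theta_LO))
print(np.allclose(i_minus, expected))     # True: i_- follows Eq. (12)
```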
### Wigner distribution function (WDF)
One tool that can be used to represent signals in phase space is the Wigner Distribution Function (WDF). Although this tool was introduced in the early 1930s for analyzing quantum states [15], it can be used to represent any signal \(u(x)\) in the phase space spanned by two conjugate variables \(X\) and \(Y\). The WDF of a signal \(u(x)\) is defined as:
\[W_{u}(X,Y)=\\ \int u\left(X-\frac{X^{\prime}}{2}\right)u^{*}\left(X+\frac{X^{ \prime}}{2}\right)e^{-2\pi iX^{\prime}Y}dX^{\prime}. \tag{13}\]
The projection \(\mathrm{pr}_{u}(s,\phi)\) of the WDF along the \(s\)-axis spanned by the unit vector \(\mathbf{e}_{s}=(\cos\phi,\sin\phi)\) is given by:
\[\mathrm{pr}_{u}\left(s,\phi\right)=\int_{-\infty}^{\infty}W_{u}(s\cos\phi-Y\sin\phi,\,s\sin\phi+Y\cos\phi)\,dY. \tag{14}\]
The \(\mathrm{pr}_{u}(s,\phi)\) are the marginal probability distributions of obtaining a value \(s\) for the generalized quadrature. These projections have a physical meaning: They are the _energy_ distributions of the signal \(u(x)\) in the generalized quadrature \(s\). Although the marginal distributions are positive and have a physical meaning, the WDF itself is not necessarily positive. In the context of probability distributions, this makes the WDF a joint quasiprobability distribution between conjugate variables [15].
Unlike \(W_{u}(X,Y)\), the \(\mathrm{pr}_{u}(s,\phi)\) are experimentally accessible. From a set of \(\mathrm{pr}_{u}(s,\phi)\), it is possible to use three-dimensional reconstruction protocols to recover the WDF. Specifically, by means of an inverse Radon transform, it is possible to back-project the set of \(\mathrm{pr}_{u}(s,\phi)\) to recover the WDF of a signal [1; 4]. This corresponds to integrating the different projections weighted by a function \(K(s)\), called the kernel of the Radon transform. Mathematically,
\[\begin{split}& W_{u}(X,Y)=\\ &\frac{1}{2\pi^{2}}\int_{\phi=0}^{\pi}\int_{s=-\infty}^{+\infty} \mathrm{pr}_{u}(s,\phi)K(X\cos\phi+Y\sin\phi-s)dsd\phi,\end{split} \tag{15}\]
with \(K(s)\) given by
\[K\left(s\right)=\frac{1}{2}\int_{-\infty}^{+\infty}|\xi|e^{i\xi s}\ \mathrm{d}\xi. \tag{16}\]
When applying Eq. 15, it is necessary to characterize and limit the behavior of the kernel \(K(s)\), since any measurement relies on a discretization of amplitudes and phases. To avoid ill-behaved reconstructions [16], the kernel is regularized [1], or filtered, by introducing a cutoff value, \(k_{c}\). Changing the integration limits in Eq. 16 to \([-k_{c},k_{c}]\) and performing a Taylor expansion for small values of \(s\), the filtered kernel \(K_{\rm filter}\) becomes
\[\begin{split}& K_{\rm filter}\left(s\right)=\\ &\begin{cases}\frac{1}{s^{2}}\left(\cos\left(k_{c}s\right)+k_{c}s \sin\left(k_{c}s\right)-1\right)&|s|>0.1/k_{c},\\ \frac{k_{c}^{2}}{2}\left(1-\frac{k_{c}^{2}s^{2}}{4}+\frac{k_{c}^{4}s^{4}}{72}-...\right)&\text{otherwise}.\end{cases}\end{split} \tag{17}\]
By substituting Eq. 17 into Eq. 15, the WDF is recovered numerically through a filtered back-projection algorithm.
## III Quantum states of light
Understanding certain optical fields, generated experimentally in the last decades [10], requires a fully
quantum-mechanical picture. In this picture, it is possible to describe the dynamics of a mode of the electromagnetic field using the Hamiltonian of a quantum harmonic oscillator of frequency \(\omega\) as [17]:
\[\hat{H}_{\text{opt}}=\hbar\omega\left(\hat{a}^{\dagger}\hat{a}+\frac{1}{2}\right), \tag{18}\]
where \(\hat{a}^{\dagger}\) and \(\hat{a}\) are creation and annihilation operators, respectively. From these operators, it is possible to define a pair of conjugate variables known as generalized position and momentum:
\[\hat{x} =\frac{1}{2}\left(\hat{a}+\hat{a}^{\dagger}\right), \tag{19}\] \[\hat{p} =-\frac{i}{2}\left(\hat{a}-\hat{a}^{\dagger}\right). \tag{20}\]
The presence of position and momentum observables permits the introduction of a phase space and the definition of quadratures in the same spirit as those mentioned in the previous section.
According to quantum mechanics, a physical system can be described by a density matrix \(\hat{\rho}\). The Wigner function of such a quantum state \(\hat{\rho}\) has the same properties as the WDFs introduced above and is given by
\[W_{\hat{\rho}}(x,p)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{ipq}\langle x-q/2| \hat{\rho}|x+q/2\rangle dq, \tag{21}\]
where \(\langle u|\hat{\rho}|v\rangle\) is the matrix element \((u,v)\) when \(\hat{\rho}\) is represented in the continuous position basis.
Statistically, the measurement of the generalized quadrature rotated by an angle \(\theta\): \(\hat{s}=\hat{x}\cos\theta+\hat{p}\sin\theta\), on a physical system, described by \(\hat{\rho}\), has a mean given by the trace of the operator \(\hat{s}\hat{\rho}\),
\[\langle\hat{s}\rangle_{\hat{\rho}}=\text{Tr}\left[\hat{s}\hat{\rho}\right], \tag{22}\]
and a standard deviation given by
\[\Delta\hat{s}_{\hat{\rho}}=\sqrt{\left\langle\hat{s}^{2}\right\rangle_{\hat{ \rho}}-\left\langle\hat{s}\right\rangle_{\hat{\rho}}^{2}}. \tag{23}\]
The conjugate quadratures \((\hat{s},\hat{s}^{\prime}=-\hat{x}\sin\theta+\hat{p}\cos\theta)\) of a quantum state satisfy the uncertainty principle:
\[\Delta\hat{s}_{\hat{\rho}}\Delta\hat{s}^{\prime}_{\hat{\rho}}\geq\frac{1}{2}. \tag{24}\]
Using the concepts just introduced, this section describes the physical properties of a gallery of different quantum states of light. In particular, we describe their density matrices, quadrature distributions, Wigner functions and phasor diagrams.
### Fock states
One representation suitable for describing eigenstates of \(\hat{H}_{\text{opt}}\) is the Fock basis, constituted as
\[\mathcal{B}=\left\{\left|n\right\rangle\right\}_{n=0,1,2,\dots}, \tag{25}\]
where each of the elements is an eigenstate of the number operator \(\hat{n}=\hat{a}^{\dagger}\hat{a}\): \(\hat{n}\left|n\right\rangle=n\left|n\right\rangle\). In this representation, a state \(\left|n\right\rangle\) has a well-defined number of excitations, i.e., photons. The density matrix of a quantum state \(\hat{\rho}\) in this representation is
\[\rho_{nm}=\left\langle n\right|\hat{\rho}\left|m\right\rangle, \tag{26}\]
which is illustrated in Fig. 2 (1a) for the Fock state \(\left|n=1\right\rangle\).
In a physical system described by the elements of the Fock basis, the probability distribution for measuring the generalized quadrature \(s\) rotated by an angle \(\theta\) is independent of \(\theta\) and is given by [18]
\[\text{pr}_{\left|n\right\rangle}\left(s,\theta\right) =\left|\left\langle s|n\right\rangle\right|^{2}\] \[=\frac{1}{2^{n}n!\sqrt{\pi}}e^{-s^{2}}\left(H_{n}\left(s\right) \right)^{2}. \tag{27}\]
Here, \(\left\langle s|n\right\rangle=\psi_{n}\left(s\right)\), where \(\psi_{n}\left(s\right)\) is the wavefunction associated with the basis state \(\left|n\right\rangle\) in the \(s\) representation.
Figure 1: (a) Schematic representation of a lock-in amplifier. (b) Scheme for homodyne detection. (c) Scheme of the WDF and one of its experimentally accessible marginal distributions, \(\text{pr}_{u}(s,\phi)\).
Additionally, \(H_{n}(s)\) is the Hermite polynomial of \(n\)th order. This is illustrated in Fig. 2 (2a) for \(\left|n=1\right\rangle\). Due to the form of the Hermite polynomial, all the projections exhibit a double-hump shape.
Following Eq. 21 the Wigner function of a Fock state can be written using the Laguerre polynomials \(L_{n}(x)\) as [1]:
\[W_{\left|n\right\rangle}(x,p)=\\ \frac{(-1)^{n}}{\pi}\exp\left(-\left(x^{2}+p^{2}\right)\right)L_{ n}\left(2\left(x^{2}+p^{2}\right)\right). \tag{28}\]
This is illustrated in Fig. 2 (3a) for \(n=1\).
From the Wigner function, it is possible to perform a transverse cut parallel to the \((x,p)\) plane and draw a corresponding phasor diagram, illustrating the means and variances of the quadratures: Fock states have a mean value for the quadrature \(s\) of \(\left\langle\hat{s}\right\rangle_{\left|n\right\rangle}=0\) and a standard deviation of
\[\Delta\hat{s}_{\left|n\right\rangle}=\sqrt{\frac{2n+1}{2}}, \tag{29}\]
both of which are independent of the value of \(\theta\). For all values of \(n\), the uncertainty product satisfies
\[\Delta\hat{s}\Delta\hat{s}^{\prime}=\frac{2n+1}{2}\geq 1/2, \tag{30}\]
as illustrated in Fig. 2 (4a). For the \(\left|n=1\right\rangle\) case, the phasor diagram has a donut shape [10], while its Wigner function attains negative values.
### Coherent states
Coherent states correspond to eigenstates of the annihilation operator \(\hat{a}\), and are written as \(\left|\alpha\right\rangle\), where \(\alpha\) is a complex number that satisfies
\[\hat{a}\left|\alpha\right\rangle=\alpha\left|\alpha\right\rangle. \tag{31}\]
In the Fock basis, coherent states are written as [10]
\[\left|\alpha\right\rangle=e^{-\frac{1}{2}\left|\alpha\right|^{2}}\sum_{n=0}^ {\infty}\frac{\alpha^{n}}{\sqrt{n!}}\left|n\right\rangle. \tag{32}\]
From this expression and using the definition in Eq. (26), the density matrix in the Fock basis for a coherent state is
\[\rho_{nm}=\frac{e^{-\left|\alpha\right|^{2}}\alpha^{n}\alpha^{*m}}{\sqrt{n!m! }}, \tag{33}\]
illustrated in Fig. 2 (1b) for the case \(\alpha=2\).
For a coherent state, the probability distribution for measuring the generalized quadrature \(s\) rotated by an angle \(\theta\) is given by
\[\mathrm{pr}_{\left|\alpha\right\rangle}\left(s,\theta\right)=\sqrt{\frac{1}{ \pi}}e^{-\left(-s+\mathrm{Re}(\alpha)\cos\theta+\mathrm{Im}(\alpha)\sin \theta\right)^{2}}. \tag{34}\]
An illustration of this is shown in Fig. 2 (2b) for \(\alpha=2\).
The Wigner function for a coherent state can be calculated following Eq. 21 to be a 2D Gaussian distribution:
\[W_{\left|\alpha\right\rangle}(x,p)=\frac{1}{\pi}e^{-\left(\left(x-\mathrm{Re} (\alpha)\right)^{2}+\left(p-\mathrm{Im}(\alpha)\right)^{2}\right)}. \tag{35}\]
Graphically, this is illustrated in Fig. 2 (3b).
Statistically, it is possible to find the mean value of the generalized quadrature \(\hat{s}\) for the coherent state. This value depends on \(\theta\) and is given by
\[\left\langle s\right\rangle_{\left|\alpha\right\rangle}=\mathrm{Re}\left( \alpha\right)\cos\theta+\mathrm{Im}\left(\alpha\right)\sin\theta. \tag{36}\]
The uncertainty in any generalized quadrature is
\[\Delta s^{2}=\left\langle\hat{s}^{2}\right\rangle_{\left|\alpha\right\rangle} -\left\langle\hat{s}\right\rangle_{\left|\alpha\right\rangle}^{2}=\frac{1}{2}, \tag{37}\]
for any value of \(\theta\). For this reason, the phasor diagram associated with a coherent state corresponds to a circle displaced from the center by \(\alpha\) with a diameter of \(1/\sqrt{2}\), as shown in Fig. 2 (4b). Coherent states are minimum-uncertainty states, as they attain the minimum value allowed by the uncertainty relation: \(\Delta s\Delta s^{\prime}=1/2\).
### Vacuum
The vacuum state is a particular case of both the coherent states and the Fock states, with amplitude \(\alpha=0\) and photon number \(n=0\). Therefore, the only nonzero element of the density matrix is \(\rho_{00}=1\), as shown in Fig. 2(1c).
The projections of the vacuum state can be found by setting \(n=0\) in Eq. 27 or \(\alpha=0\) in Eq. 34. Therefore,
\[\mathrm{pr}_{\left|0\right\rangle}\left(s,\theta\right)=\sqrt{\frac{1}{\pi}}e^ {-s^{2}}. \tag{38}\]
This is shown in Fig. 2(2c). According to Eq. 21 and setting \(n=0\) in Eq. 28 and \(\alpha=0\) in Eq. 35, the Wigner function of the vacuum state is given by:
\[W_{\left|0\right\rangle}(x,p)=\frac{1}{\pi}e^{-\left(x^{2}+p^{2}\right)}. \tag{39}\]
Here, the displacement is zero for both quadratures, as shown by the Wigner function in Fig. 2 (3c). Just as before, the uncertainties in the quadratures are given by Eq. 37, corresponding to a minimum-uncertainty state, represented in the phasor diagram as a circle centered at the origin with diameter \(1/\sqrt{2}\).
### Thermal states
Thermal states characterize photons at a frequency \(\omega\) radiated from a blackbody at a temperature \(T\) and thermodynamic beta \(\beta=1/k_{B}T\). The density matrix in the
Fock basis for a thermal state is given by [19]:
\[\hat{\rho}_{\rm th}=(1-e^{-\beta\hbar\omega})\sum_{n=0}^{\infty}e^{-n\beta\hbar \omega}|n\rangle\langle n|. \tag{40}\]
Since the thermal distribution has a mean number of photons of the form
\[\langle n\rangle=\frac{1}{e^{\beta\hbar\omega}-1}, \tag{41}\]
the density matrix can be rewritten as
\[\hat{\rho}_{\rm th}=\sum_{n=0}^{\infty}\frac{\langle n\rangle^{n}}{\left( \langle n\rangle+1\right)^{n+1}}|n\rangle\langle n|. \tag{42}\]
As revealed in Fig. 2 (1d), the density matrix of a thermal state is diagonal and therefore represents a statistical mixture of Fock states.
Using the linearity of Eq. 21, it is possible to calculate the Wigner function of a thermal state by writing the weighted sum of the Wigner functions of all the different Fock states:
\[W_{\rm th}\left(x,p\right)=\sum_{n=0}^{\infty}\frac{\langle n\rangle^{n}}{ \left(\langle n\rangle+1\right)^{n+1}}W_{|n\rangle}\left(x,p\right). \tag{43}\]
Using the generating function of the Laguerre polynomials, \(\sum_{n=0}^{\infty}t^{n}L_{n}(u)=\frac{1}{1-t}e^{-tu/(1-t)}\), and setting \(t=-\frac{\langle n\rangle}{\langle n\rangle+1}\) and \(u=2\left(x^{2}+p^{2}\right)\), the Wigner function for the thermal state becomes:
\[W_{\rm th}\left(x,p\right)=\frac{1}{\sqrt{2\pi\sigma_{x}^{2}}}e^{-x^{2}/2 \sigma_{x}^{2}}\frac{1}{\sqrt{2\pi\sigma_{p}^{2}}}e^{-p^{2}/2\sigma_{p}^{2}}, \tag{44}\]
where \(\sigma_{x}=\sigma_{p}=\sigma=\sqrt{\frac{2\langle n\rangle+1}{2}}\). Eq. 44 reveals that the Wigner function for a thermal state is the product of two Gaussian functions in the quadratures \(x\) and \(p\) centered at the origin. The factorization into independent functions of \(x\) and \(p\) reveals a lack of correlations between the quadratures. This lack of correlations implies that the projection onto any generalized quadrature has the same mean (zero) and the same standard deviation \(\sigma\). The projections are shown in Fig. 2(2d) and the Wigner function is shown in Fig. 2(3d). In the phasor diagram (Fig. 2(4d)), a thermal state corresponds to a circle of non-minimum uncertainty centered around zero.
### Squeezed states
Squeezed states have different uncertainties in each of their quadratures. Any general state given by the density matrix \(\hat{\rho}\) can be transformed using the squeezing operator
\[\hat{S}(\zeta)=\exp\left[\frac{1}{2}\left(\zeta^{*}\hat{a}^{2}-\zeta\hat{a}^{\dagger 2}\right)\right], \tag{45}\]
where \(\zeta\) is a complex squeezing parameter. The squeezing operator modifies the density matrix \(\hat{\rho}\) of any state as \(\hat{\rho}_{\rm sq}=\hat{S}\hat{\rho}\hat{S}^{\dagger}\), rescaling the Wigner function of the original state \(\hat{\rho}\)[19]:
\[\begin{split}& W_{\hat{S}(\hat{\rho})}(x,p)\\ &=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{ipq}\left\langle x- \frac{q}{2}\left|\hat{S}\hat{\rho}\hat{S}^{\dagger}\right|x+\frac{q}{2}\right\rangle dq \\ &=\frac{1}{2\pi}\int_{-\infty}^{+\infty}e^{ipq}\mathrm{e}^{ \zeta}\left\langle\mathrm{e}^{\zeta}\left(x-\frac{q}{2}\right)|\hat{\rho}| \mathrm{e}^{\zeta}\left(x+\frac{q}{2}\right)\right\rangle dq\\ &=W_{\hat{\rho}}\left(\mathrm{e}^{\zeta}x,\mathrm{e}^{-\zeta}p \right).\end{split} \tag{46}\]
The rescaling is different in each of the quadratures, compressing one and expanding the other one, as indicated by the different sign in the exponentials in the last line of Eq. 46.
One example of a squeezed state is the squeezed vacuum. Since vacuum is a pure state, the effect of the squeezing operator can be written as \(\hat{S}\left(\zeta\right)|0\rangle\). Writing \(\zeta=re^{2i\delta}\), \(r\) can be seen as the squeezing amplitude, while \(\delta\) corresponds to the angle along which squeezing is performed.
In the Fock basis, the squeezed vacuum can be written as [20]
\[\hat{S}\left(re^{2i\delta}\right)|0\rangle=\frac{1}{\sqrt{\cosh(r) }}\\ \sum_{m=0}^{\infty}(-1)^{m}\frac{\sqrt{(2m)!}}{2^{m}m!}e^{2im \delta}\tanh(r)^{m}\left|2m\right\rangle. \tag{47}\]
From this expression, the density matrix for squeezed vacuum is calculated to be
\[(\rho_{\rm sq})_{nm}=\frac{1}{\cosh(r)}(-1)^{m+n}\\ \frac{\sqrt{(2m)!(2n)!}}{2^{m}m!2^{n}n!}e^{2i(m-n)\delta}\tanh(r) ^{m+n} \tag{48}\]
for even values of \(n\) and \(m\). This implies that the density matrix in the Fock representation for the squeezed vacuum only has values different from zero in even values of \(n\) and \(m\), as shown in Fig. 2 (1e).
Following Eq. 46, the Wigner function of squeezed vacuum corresponds to a Gaussian function with different widths in the two quadratures, rotated by an angle \(\delta\). Just like the vacuum state, squeezed vacuum has a zero mean along any generalized quadrature. However, the variances along the quadratures \(x\) and \(p\) are given by:
\[\Delta x^{2}=\frac{1}{4}\left[\cosh\left(2r\right)-2\sinh r\cosh r \cos 2\delta\right], \tag{49}\] \[\Delta p^{2}=\frac{1}{4}\left[\cosh\left(2r\right)+2\sinh r\cosh r \cos 2\delta\right]. \tag{50}\]
For example, if we perform squeezing along the quadrature \(p\), i.e., \(\delta=\pi/2\), the variances will be reduced to
\[\Delta x^{2}=\frac{1}{4}e^{+2r},\quad\text{ and }\quad\Delta p^{2}=\frac{1}{4}e^{-2r}. \tag{51}\]
Although the squeezing is not equally distributed in both quadratures, \(\Delta x\Delta p\) still attains minimum uncertainty, while interestingly, one of the uncertainties on the quadratures can be made smaller than that of the vacuum state. The marginal distributions of this state are shown in Fig. 2 (2e), while the Wigner function is shown in Fig. 2 (3e). As a consequence, the phasor diagram in Fig. 2 (4e) is an ellipse rotated by an angle \(\delta\) whose major and minor axes have been stretched.
## IV Experimental implementation by students
The following section describes the experimental and computational procedures constituting an educational activity for recovering the Wigner function of different quantum states of light. These procedures were implemented by undergraduate students interested in learning the topic, and were documented in a bachelor's monograph, a mandatory requirement to obtain the physics degree at Uniandes. Additionally, the experimental setup that was created remains in the laboratory and is the one used by students in the quantum optics class.
### Experimental activity
To recover the experimentally accessible quadratures, an optical setup for homodyne detection, such as the one depicted in Fig. 3, was implemented. A stable 633 nm He-Ne laser with a power close to 0.5 mW and linear polarization provides the light source for both S and LO. The laser is split using a beam splitter, labeled BS\({}_{1}\). Both beams are then recombined using another beam splitter, BS\({}_{2}\). The half-wave plates before BS\({}_{2}\) ensure the proper polarization matching between the two beams. Finally, two lenses with a 30 mm focal length are used to focus the light onto two silicon photodiodes (Thorlabs FDS100). To change the LO phase, a Dove prism is mounted on a piezo-electric (PZT) platform, changing the optical path difference between both inputs of the homodyne detector. The subtraction of detector outputs can be implemented electronically either by using a subtracting circuit or an oscilloscope. For the present implementation, the latter is used: An oscilloscope (Tektronix DPO 4054) records each photodiode output and the subtracted data, taking ten thousand data points during an acquisition time of 2 seconds. The process is repeated for each new position of the piezo-electric, changed in steps of \(\Delta x=0.01\)\(\mu\)m over a range of 0.6 \(\mu\)m, corresponding to a stepwise change of the local oscillator phase of \(\Delta\phi_{\rm LO}=\pi/12\).
A pipeline for the tomographic reconstruction of the Wigner function is shown in Fig. 4: First, different values of \(i_{-}\) are sampled (Fig. 4(a)), second, they are tallied as different histograms (Fig. 4(b)), third, histograms are recorded for different values of \(\phi_{\rm LO}\) (Fig. 4(c)), and finally, using a filtered back-projection algorithm, the Wigner function is recovered (Fig. 4(d)).
### Tomographic reconstruction and simulation
The recovery of the Wigner function from experimentally measured data was implemented in Python through a Graphical User Interface (GUI), shown in Fig. 4(e). The GUI receives the set of marginal distributions as a function of the LO phase and reconstructs the Wigner function by performing a filtered back-projection algorithm. This program is openly accessible and can be found at [https://github.com/amartinez1224/quantum-tomography](https://github.com/amartinez1224/quantum-tomography).
The software created for the tomographic reconstruction of the Wigner function can also be used to analyze simulated data. This feature was introduced due to COVID restrictions that forbade student access to the laboratory. These simulations can be used to emulate the measurement of more exotic states of light, which are not experimentally feasible to produce in an educational laboratory with a He-Ne laser, and they constitute an important tool for distance learning.
The GUI uses the information from marginal distributions, simulated or measured. The file to be introduced must be formatted appropriately, containing three sets of data embedded in a JSON file as indicated in the aforementioned website. Two arrays, s and phi, are row vectors containing all the possible values of \(i_{-}\) and the phase \(\theta\), respectively. A third component, pr, is formatted as a two-dimensional array of dimensions size(s)*size(phi) containing the frequencies of the histograms, pr\({}_{u}(s,\theta)\). The GUI requires the user to define the ranges of \(X\) and \(Y\) to display the tomography. This is done by using the inputs Xmin, Xmax, Ymin, Ymax and Density. The filter of the Radon kernel, \(k_{c}\), can be tuned by changing the parameter \(\mathtt{Kc}\). Finally, the Wigner function can be used to obtain the density matrix of a quantum state of light in the photon number basis.
Additionally, the interface enables color map and perspective settings, which can be changed by using the computer mouse, and enables the user to save the figures and the reconstructed tomographic data.
## V Hands-on activity
The different tools introduced in the previous section were condensed into a laboratory practice developed by students and for students. It was subsequently used in the "_Quantum optics: theory and practice_" course at Uniandes. The laboratory activities consisted of attending the laboratory in groups of at most three students, who visited the lab in pre-booked two-hour shifts. The optical setup was already implemented, and students controlled the piezoelectric to change the phase of the local oscillator and perform the different quadrature
measurements. The data was taken home to be processed and presented in a report.
As mentioned before, since this practice was originally implemented during one of the lockdowns of the COVID-19 pandemic, some students could not attend the laboratory but could generate their own data using the simulation component of the program. In the in-person laboratory, students recovered the Wigner function for a vacuum state (in which one of the ports of the homodyne detector was blocked) and for a coherent state (a laser with a power of approximately 0.5 mW). The students working remotely also reconstructed more exotic states, such as single-photon states and squeezed vacuum. Some examples of the Wigner functions recovered by in-person students are shown in Fig. 5 (a) and (b), and one example of a Wigner function recovered by a student following the course remotely is shown in Fig. 5 (c).
## VI Conclusions and perspectives
This paper has presented the development of an undergraduate educational activity introducing the reconstruction of Wigner distribution functions using optical homodyne tomography. Interestingly, this activity was developed by undergraduate students over three semesters in different mandatory courses of the physics undergraduate curriculum at Uniandes. This activity was later used in the "_Quantum Optics: theory and practice_" course, which is open to advanced undergraduate students, as well as early graduate students.
Since the use of the educational activity coincided with the COVID-19 lockdowns, the option of processing real experimental and simulated data was included. This activity is now running in-person, and the virtual part opens the possibility of implementing distance learning in a quantum optics course. Future iterations of this activity could improve the experimental results obtained by the students: Implementing a Maximum Likelihood method [21] would enable the recovery of smoother Wigner functions, and a thorough characterization of detector performance, in particular detector clearance and common mode rejection ratio, would enable students to interpret and correct their experimental data adequately.

Figure 2: Column (a) shows, for a Fock state with \(n=1\), the density matrix (1), marginal distributions (2), Wigner function (3) and phasor diagram representations (4). Likewise, Column (b) shows the same representations for a coherent state with \(\alpha=2\). Column (c) does the same for a vacuum state (\(\alpha=n=0\)), Column (d) for a thermal state with \(\langle n\rangle=10\), and Column (e) for squeezed vacuum with squeezing factor \(\zeta=2e^{i\pi/2}\).
###### Acknowledgements.
J.R.A. acknowledges funding by the European Union Horizon 2020 (MSCA 765075-LIMQUET, FET 899544-PHOQUSING), and from the Plan France 2030 through the project ANR-22-PETQ-0006. A.V. acknowledges financial support from the Facultad de Ciencias at the Universidad de los Andes.
|
2302.00248 | A Nearly-Optimal Bound for Fast Regression with $\ell_\infty$ Guarantee | Given a matrix $A\in \mathbb{R}^{n\times d}$ and a vector $b\in
\mathbb{R}^n$, we consider the regression problem with $\ell_\infty$
guarantees: finding a vector $x'\in \mathbb{R}^d$ such that $ \|x'-x^*\|_\infty
\leq \frac{\epsilon}{\sqrt{d}}\cdot \|Ax^*-b\|_2\cdot \|A^\dagger\|$ where
$x^*=\arg\min_{x\in \mathbb{R}^d}\|Ax-b\|_2$. One popular approach for solving
such $\ell_2$ regression problem is via sketching: picking a structured random
matrix $S\in \mathbb{R}^{m\times n}$ with $m\ll n$ and $SA$ can be quickly
computed, solve the ``sketched'' regression problem $\arg\min_{x\in
\mathbb{R}^d} \|SAx-Sb\|_2$. In this paper, we show that in order to obtain
such $\ell_\infty$ guarantee for $\ell_2$ regression, one has to use sketching
matrices that are dense. To the best of our knowledge, this is the first user
case in which dense sketching matrices are necessary. On the algorithmic side,
we prove that there exists a distribution of dense sketching matrices with
$m=\epsilon^{-2}d\log^3(n/\delta)$ such that solving the sketched regression
problem gives the $\ell_\infty$ guarantee, with probability at least
$1-\delta$. Moreover, the matrix $SA$ can be computed in time $O(nd\log n)$.
Our row count is nearly-optimal up to logarithmic factors, and significantly
improves the result in [Price, Song and Woodruff, ICALP'17], in which a
super-linear in $d$ rows, $m=\Omega(\epsilon^{-2}d^{1+\gamma})$ for
$\gamma=\Theta(\sqrt{\frac{\log\log n}{\log d}})$ is required. We also develop
a novel analytical framework for $\ell_\infty$ guarantee regression that
utilizes the Oblivious Coordinate-wise Embedding (OCE) property introduced in
[Song and Yu, ICML'21]. Our analysis is arguably much simpler and more general
than [Price, Song and Woodruff, ICALP'17], and it extends to dense sketches for
tensor product of vectors. | Zhao Song, Mingquan Ye, Junze Yin, Lichen Zhang | 2023-02-01T05:22:40Z | http://arxiv.org/abs/2302.00248v1 | # A Nearly-Optimal Bound for Fast Regression with \(\ell_{\infty}\) Guarantee
###### Abstract
Given a matrix \(A\in\mathbb{R}^{n\times d}\) and a vector \(b\in\mathbb{R}^{n}\), we consider the regression problem with \(\ell_{\infty}\) guarantees: finding a vector \(x^{\prime}\in\mathbb{R}^{d}\) such that
\[\|x^{\prime}-x^{*}\|_{\infty}\leq\frac{\epsilon}{\sqrt{d}}\cdot\|Ax^{*}-b\|_{ 2}\cdot\|A^{\dagger}\|,\]
where \(x^{*}=\operatorname*{arg\,min}_{x\in\mathbb{R}^{d}}\|Ax-b\|_{2}\). One popular approach for solving such \(\ell_{2}\) regression problem is via sketching: picking a structured random matrix \(S\in\mathbb{R}^{m\times n}\) with \(m\ll n\) and \(SA\) can be quickly computed, solve the "sketched" regression problem \(\operatorname*{arg\,min}_{x\in\mathbb{R}^{d}}\|SAx-Sb\|_{2}\).
In this paper, we show that in order to obtain such \(\ell_{\infty}\) guarantee for \(\ell_{2}\) regression, one has to use sketching matrices that are _dense_. To the best of our knowledge, this is the first use case in which dense sketching matrices are necessary. On the algorithmic side, we prove that there exists a distribution of dense sketching matrices with \(m=\epsilon^{-2}d\log^{3}(n/\delta)\) such that solving the sketched regression problem gives the \(\ell_{\infty}\) guarantee, with probability at least \(1-\delta\). Moreover, the matrix \(SA\) can be computed in time \(O(nd\log n)\). Our row count is nearly-optimal up to logarithmic factors, and significantly improves the result in [10], in which a row count super-linear in \(d\), \(m=\Omega(\epsilon^{-2}d^{1+\gamma})\) for \(\gamma=\Theta(\sqrt{\frac{\log\log n}{\log d}})\), is required.
Moreover, we develop a novel analytical framework for \(\ell_{\infty}\) guarantee regression that utilizes the _Oblivious Coordinate-wise Embedding_ (OCE) property introduced in [11]. Our analysis is much simpler and more general than that of [10]. Leveraging this framework, we extend the \(\ell_{\infty}\) guarantee regression result to dense sketching matrices for computing the fast tensor product of vectors.
###### Contents
* 1 Introduction
* 2 Preliminary
* 2.1 Oblivious subspace embedding and coordinate-wise embedding
* 2.2 Sketching matrices
* 2.3 OSE property of dense sketches
* 2.4 Probability tools
* 3 \(\ell_{\infty}\) guarantee via OCE
* 3.1 High probability bound for OCE
* 3.2 Inner product bound for SRHT and SRCT
* 3.3 Column norm bound for SRHT and SRCT
* 4 Put things together
* 5 Conclusion
* A Tools for matrices and probability
* B Kronecker product regression with \(\ell_{\infty}\) guarantee
* B.1 Main result
* B.2 Oblivious coordinate-wise embedding for TensorSRHT and TensorSRCT
* C SRCT and TensorSRCT: OSE via strong JL moment property
* C.1 Notations
* C.2 Strong JL moment property
* C.3 SRCT and TensorSRCT satisfy strong JL moment property
* D Gaussian and AMS
* D.1 OSE property of random Gaussian and AMS
* D.2 OCE property of random Gaussian and AMS
## 1 Introduction
Linear regression, or \(\ell_{2}\) least-square problem is ubiquitous in numerical linear algebra, scientific computing and machine learning. Given a tall skinny matrix \(A\in\mathbb{R}^{n\times d}\) and a label vector \(b\in\mathbb{R}^{n}\), the goal is to (approximately) compute an optimal solution \(x^{\prime}\) that minimizes the \(\ell_{2}\) loss \(\|Ax-b\|_{2}\). For the regime where \(n\gg d\), sketching is a popular approach to obtain an approximate solution quickly [13, 14]: the idea is to pick a random matrix \(S\in\mathbb{R}^{m\times n}\) from carefully-designed distributions, so that 1). \(S\) can be efficiently applied to \(A\) and 2). the row count \(m\ll n\). Given these two guarantees, one can then solve the "sketched" regression problem:
\[\arg\min_{x\in\mathbb{R}^{d}}\|SAx-Sb\|_{2},\]
and obtain a vector \(x^{\prime}\) such that \(\|Ax^{\prime}-b\|_{2}=(1\pm\epsilon)\cdot\text{OPT}\), where OPT denotes the optimal \(\ell_{2}\) discrepancy between vectors in column space of \(A\) and \(b\). Recent advances in sketching [14] show that one can design matrix \(S\) with \(m=O(\epsilon^{-2}d\log^{2}(n/\delta))\) and the sketched regression can be solved in time \(O(\operatorname{nnz}(A)+d^{\omega}+\epsilon^{-2}d^{2}\operatorname{poly}\log(n,d,1/\epsilon,1/\delta))\) where \(\operatorname{nnz}(A)\) denotes the number of nonzero entries of \(A\) and \(\delta\) is the failure probability.
Unfortunately, modern machine learning emphasizes more and more on large, complex, and nonlinear models such as deep neural networks, thus linear regression becomes less appealing as a _model_. However, it is still a very important _subroutine_ in many deep learning and optimization frameworks, especially second-order method for training neural networks [1, 13] or convex optimization [12, 13, 14, 15]. In these applications, one typically seeks _forward error_ guarantee, i.e., how close is the approximate solution \(x^{\prime}\) to the optimal solution \(x^{*}\). A prominent example is Newton's method: given the (possibly implicit) Hessian matrix \(H\in\mathbb{R}^{d\times d}\) and the gradient \(g\in\mathbb{R}^{d}\), one wants to compute \(H^{-1}g\). A common approach is to solve the regression \(\arg\min_{x\in\mathbb{R}^{d}}\|Hx-g\|_{2}\), in which one wants \(\|x-H^{-1}g\|_{2}\) or even \(\|x-H^{-1}g\|_{\infty}\) to be small. When the matrix \(S\) satisfies the so-called Oblivious Subspace Embedding (OSE) property [15], one can show that the approximate solution \(x^{\prime}\) is close to \(x^{*}\) in the \(\ell_{2}\) sense:
\[\|x^{\prime}-x^{*}\|_{2}\leq O(\epsilon)\cdot\|Ax^{*}-b\|_{2}\cdot\|A^{ \dagger}\|. \tag{1}\]
Unfortunately, \(\ell_{2}\)-closeness cannot characterize how well \(x^{\prime}\) approximates \(x^{*}\), as \(x^{*}\) can have a good spread of \(\ell_{2}\) mass over all coordinates while \(x^{\prime}\) concentrates its mass over a few coordinates. Formally speaking, let \(a\in\mathbb{R}^{d}\) be a fixed vector, then one can measure how far \(\langle a,x^{\prime}\rangle\) deviates from \(\langle a,x^{*}\rangle\) via Eq. (1):
\[|\langle a,x^{*}\rangle-\langle a,x^{\prime}\rangle| =|\langle a,x^{\prime}-x^{*}\rangle|\] \[\leq\|a\|_{2}\|x^{\prime}-x^{*}\|_{2}\] \[\leq O(\epsilon)\cdot\|a\|_{2}\cdot\|Ax^{*}-b\|_{2}\cdot\|A^{ \dagger}\|.\]
This bound is clearly too loose, as one would expect the deviation on a random direction is only \(\frac{1}{\sqrt{d}}\) factor of the \(\ell_{2}\) discrepancy. [14] shows that this intuition is indeed true when \(S\) is picked as the subsampled randomized Hadamard transform (SRHT) [11]: 1
Footnote 1: We will later refer to this property as the \(\ell_{\infty}\) guarantee.
\[|\langle a,x^{*}\rangle-\langle a,x^{\prime}\rangle|\lesssim\frac{\epsilon}{ \sqrt{d}}\|a\|_{2}\|Ax^{*}-b\|_{2}\|A^{\dagger}\|. \tag{2}\]
However, their analysis is not tight as they require a row count \(m=\Omega(\epsilon^{-2}d^{1+\gamma})\) for \(\gamma=\Theta(\sqrt{\frac{\log\log n}{\log d}})\). Such a row count is super-linear in \(d\) as long as \(n\leq\exp(d)\) and therefore is worse than the required row count for \(S\) to be a subspace embedding, in which only \(m=\epsilon^{-2}d\log^{2}n\) rows are required for constant success probability. In contrast, for random Gaussian matrices, the \(\ell_{\infty}\) guarantee only requires nearly linear in \(d\) rows. In addition to their sub-optimal row count, the [10] analysis is also complicated: let \(U\in\mathbb{R}^{n\times d}\) be an orthonormal basis of \(A\), [10] has to analyze the higher moment of matrix \(I_{d}-U^{\top}S^{\top}SU\). This makes their analysis particularly hard to generalize to other dense sketching matrices beyond SRHT.
In this work, we present a novel framework for analyzing the \(\ell_{\infty}\) guarantee induced by SRHT and, more generally, a large class of _dense_ sketching matrices. Our analysis is arguably much simpler than [10], and it exposes the fundamental structure of sketching matrices that provides the \(\ell_{\infty}\) guarantee: if any two columns of the sketching matrix have a small inner product with high probability, then the \(\ell_{\infty}\) guarantee can be preserved. We then prove that the small pairwise column inner product is also closely related to the _Oblivious Coordinate-wise Embedding_ (OCE) property introduced in [10]. More concretely, for any two fixed vectors \(g,h\in\mathbb{R}^{n}\), we say the sketching matrix is \((\beta,\delta,n)\)-OCE if \(|\langle Sg,Sh\rangle-\langle g,h\rangle|\leq\frac{\beta}{\sqrt{m}}\cdot\|g\| _{2}\|h\|_{2}\) holds with probability at least \(1-\delta\). This property has previously been leveraged for approximating matrix-vector products between a dynamically-changing projection matrix and an online sequence of vectors in fast linear programming and empirical risk minimization algorithms [11, 12, 13, 10, 11], as these algorithms need an \(\ell_{\infty}\) bound on the matrix-vector product. One common theme shared by those applications and the \(\ell_{\infty}\) guarantee is the use of _dense_ sketching matrices, such as random Gaussian, the Alon-Matias-Szegedy sketch (AMS, [1]) or SRHT. This is in drastic contrast with the trending direction of using sparse matrices such as Count Sketch [13, 14] and OSNAP [15, 16], as they can be applied in (nearly) input sparsity time.
In recent years, sketches that can be applied to the tensor product of matrices/vectors have gained popularity [1, 12, 13, 14, 15, 16, 17, 18, 19, 1, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24] as they can speed up optimization tasks and large-scale kernel learning. We show that dense sketches for degree-2 tensors also provide \(\ell_{\infty}\) guarantee.
**Theorem 1.1** (Nearly-optimal bound for dense sketching matrices).: _Suppose \(n\leq\exp(d)\) and matrix \(A\in\mathbb{R}^{n\times d}\) and vector \(b\in\mathbb{R}^{n}\) are given. Let \(S\in\mathbb{R}^{m\times n}\) be a subsampled randomized Hadamard transform matrix \(\mathsf{SRHT}\) with \(m=O(\epsilon^{-2}d\log^{3}(n/\delta))\) rows._
_For any fixed vector \(a\in\mathbb{R}^{d}\),_
\[|\langle a,x^{*}\rangle-\langle a,x^{\prime}\rangle|\lesssim\frac{\epsilon}{ \sqrt{d}}\cdot\|a\|_{2}\cdot\|Ax^{*}-b\|_{2}\cdot\|A^{\dagger}\|\]
_with probability \(1-\delta\), where \(x^{\prime}=\arg\min_{x\in\mathbb{R}^{d}}\|SAx-Sb\|_{2}\), \(x^{*}=\arg\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{2}\)._
**Remark 1.2**.: The row count \(m=\epsilon^{-2}d\log^{3}n\) is nearly-optimal up to logarithmic factors, as the row count for \(S\) being an OSE is \(m=\epsilon^{-2}d\log^{2}n\) for constant success probability. In comparison, [10] requires \(m=\epsilon^{-2}d^{1+\gamma}\) rows for \(\gamma=\Theta(\sqrt{\frac{\log\log n}{\log d}})\), which is only nearly-linear in \(d\) if \(n>\exp(d)\). In most applications, we are concerned with \(n=\operatorname{poly}(d)\), meaning that their row count is worse than ours in almost all meaningful scenarios.
The row count and guarantee obtained in Theorem 1.1 extend beyond SRHT to a range of _dense_ sketching matrices, including random Gaussian, the AMS sketch [1], SRHT, and the subsampled randomized circulant transform (\(\mathsf{SRCT}\), see Definition 2.11). This is because our argument is a structural condition that can be satisfied by various dense sketches.
Our result can also be generalized to degree-2 Kronecker product regression, see Theorem B.1.
Roadmap. In Section 2, we introduce the notation that we use and explain the key definitions and properties that support the framework for \(\ell_{\infty}\) guarantee regression. In Section 3, we introduce our framework by presenting a sufficient condition for a sketching matrix to give a good \(\ell_{\infty}\) guarantee. In Section 4, we provide a proof for our main theorem by putting everything together. Finally, in Section 5, we summarize the main findings of this paper and compare them with previous work.
## 2 Preliminary
For a positive integer, we define \([n]:=\{1,2,\cdots,n\}\). For a vector \(x\in\mathbb{R}^{n}\), we define \(\|x\|_{2}:=(\sum_{i=1}^{n}x_{i}^{2})^{1/2}\) and \(\|x\|_{\infty}:=\max_{i\in[n]}|x_{i}|\). For a matrix \(A\), we define \(\|A\|:=\sup_{x}\|Ax\|_{2}/\|x\|_{2}\) to be the spectral norm of \(A\). We use \(\|A\|_{F}:=(\sum_{i,j}A_{i,j}^{2})^{1/2}\) to denote the Frobenius norm of \(A\). In general, the spectral norm is sub-multiplicative: \(\|AB\|\leq\|A\|\cdot\|B\|\). We use \(A^{\dagger}\) to denote the Moore-Penrose pseudoinverse of an \(m\times n\) matrix \(A\), which, if \(A=U\Sigma V^{\top}\) is its SVD (where \(U\in\mathbb{R}^{m\times n}\), \(\Sigma\in\mathbb{R}^{n\times n}\) and \(V\in\mathbb{R}^{n\times n}\) for \(m\geq n\)), is given by \(A^{\dagger}=V\Sigma^{-1}U^{\top}\).
We use \(\mathbb{E}[\cdot]\) to denote the expectation, and \(\Pr[\cdot]\) to denote the probability. For a distribution \(D\) and a random variable \(x\), we use \(x\sim D\) to denote that we draw a random variable from the distribution \(D\). We use \(\mathcal{N}(\mu,\sigma^{2})\) to denote a Gaussian distribution with mean \(\mu\) and variance \(\sigma^{2}\). We say a random variable \(x\) is Rademacher random variables if \(\Pr[x=1]=1/2\) and \(\Pr[x=-1]=1/2\). We also call it a sign random variable.
In addition to \(O(\cdot)\) notation, for two functions \(f,g\), we use the shorthand \(f\lesssim g\) (resp. \(\gtrsim\)) to indicate that \(f\leq Cg\) (resp. \(\geq\)) for an absolute constant \(C\). We use \(f\eqsim g\) to mean \(cf\leq g\leq Cf\) for constants \(c>0\) and \(C>0\). For two matrices \(A\in\mathbb{R}^{n_{1}\times d_{1}}\), \(B\in\mathbb{R}^{n_{2}\times d_{2}}\), we use \(A\otimes B\in\mathbb{R}^{n_{1}n_{2}\times d_{1}d_{2}}\) to denote the Kronecker product, i.e., the entry in the \((i_{1},i_{2})\)-th row and \((j_{1},j_{2})\)-th column of \(A\otimes B\) is \(A_{i_{1},j_{1}}B_{i_{2},j_{2}}\). For two vectors \(x\in\mathbb{R}^{m},y\in\mathbb{R}^{n}\), we use \(x\otimes y\in\mathbb{R}^{mn}\) to denote the tensor product of vectors, in which \(x\otimes y=\mathrm{vec}(xy^{\top})\).
### Oblivious subspace embedding and coordinate-wise embedding
**Definition 2.1** (Oblivious subspace embedding [10]).: We define \((\epsilon,\delta,d,n)\)-Oblivious subspace embedding (OSE) as follows: Suppose \(\Pi\) is a distribution on \(m\times n\) matrices \(S\), where \(m\) is a function of \(n,d,\epsilon\), and \(\delta\). Suppose that with probability at least \(1-\delta\), for any fixed \(n\times d\) orthonormal basis \(U\), a matrix \(S\) drawn from the distribution \(\Pi\) has the property that the singular values of \(SU\) lie in the range \([1-\epsilon,1+\epsilon]\).
The oblivious coordinate-wise embedding (OCE) is implicitly used in [11, 12] and formally introduced in [10].
**Definition 2.2** (\((\alpha,\beta,\delta)\)-coordinate wise embedding [10]).: We say a random matrix \(S\in\mathbb{R}^{m\times n}\) satisfying \((\alpha,\beta,\delta)\)-coordinate wise embedding if
\[1. \operatorname*{\mathbb{E}}_{S\sim\Pi}[g^{\top}S^{\top}Sh]=g^{\top}h,\] \[2. \operatorname*{\mathbb{E}}_{S\sim\Pi}[(g^{\top}S^{\top}Sh)^{2}] \leq(g^{\top}h)^{2}+\frac{\alpha}{m}\|g\|_{2}^{2}\|h\|_{2}^{2},\] \[3. \operatorname*{\Pr}_{S\sim\Pi}[|g^{\top}S^{\top}Sh-g^{\top}h|\geq \frac{\beta}{\sqrt{m}}\|g\|_{2}\|h\|_{2}]\leq\delta.\]
In this paper, we mainly use property 3 of Definition 2.2. For convenience, we restate OCE as follows:
**Definition 2.3** (\(\mathsf{OCE}\)).: Let \(\beta\geq 1\) and \(\delta\in(0,0.1)\). We say a randomized matrix \(S\in\mathbb{R}^{m\times n}\) satisfy \((\beta,\delta,n)\)-\(\mathsf{OCE}\), if
\[\Pr_{S\sim\Pi}[|g^{\top}S^{\top}Sh-g^{\top}h|\geq\frac{\beta}{\sqrt{m}}\|g\|_{2 }\|h\|_{2}]\leq\delta\]
and the distribution \(\Pi\) is oblivious to any fixed vectors \(g\) and \(h\).
### Sketching matrices
In this paper, we concern a list of dense sketching matrices.
**Definition 2.4** (Random Gaussian matrix, folklore).: We say \(S\in\mathbb{R}^{m\times n}\) is a random Gaussian matrix if all entries are sampled from \(\mathcal{N}(0,1/m)\) independently.
**Definition 2.5** (AMS sketch matrix, [1]).: Let \(h_{1},h_{2},\cdots,h_{m}\) be \(m\) random hash functions picked from a \(4\)-wise independent hash family \(\mathcal{H}=\{h:[n]\rightarrow\{-\frac{1}{\sqrt{m}},+\frac{1}{\sqrt{m}}\}\}\). Then \(S\in\mathbb{R}^{m\times n}\) is an AMS sketch matrix if we set \(S_{i,j}=h_{i}(j)\).
The following sketching matrices can utilize fast Fourier Transform (FFT) for efficient application to matrices.
**Definition 2.6** (Subsampled randomized Hadamard transform (\(\mathsf{SRHT}\)) [11, 13]).: The \(\mathsf{SRHT}\) matrix \(S\in\mathbb{R}^{m\times n}\) is defined as \(S:=\frac{1}{\sqrt{m}}PHD\), where each row of matrix \(P\in\{0,1\}^{m\times n}\) contains exactly one \(1\) at a random position, \(H\) is the \(n\times n\) Hadamard matrix, and \(D\) is a \(n\times n\) diagonal matrix with each diagonal entry being a value in \(\{-1,+1\}\) with equal probability.
**Remark 2.7**.: Using the fast Fourier transform (FFT), \(S\) can be applied to a vector in time \(O(n\log n)\).
**Definition 2.8** (Tensor subsampled randomized Hadamard transform (\(\mathsf{TensorSRHT}\)) [13]).: The \(\mathsf{TensorSRHT}\)\(S:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) is defined as \(S:=\frac{1}{\sqrt{m}}P\cdot(HD_{1}\otimes HD_{2})\), where each row of \(P\in\{0,1\}^{m\times n^{2}}\) contains only one \(1\) at a random coordinate and one can view \(P\) as a sampling matrix. \(H\) is a \(n\times n\) Hadamard matrix, and \(D_{1}\), \(D_{2}\) are two \(n\times n\) independent diagonal matrices with diagonals that are each independently set to be a Rademacher random variable (uniform in \(\{-1,1\}\)).
**Remark 2.9**.: By leveraging the FFT algorithm in the sketch space, \(S(x\otimes y)\) can be computed in time \(O(n\log n+m)\).
Since storing and generating a Hadamard matrix is expensive, we consider a cheaper and more space-efficient way to generate an FFT-compatible matrix via a circulant transform.
**Definition 2.10** (Circulant matrix).: A circulant matrix is an \(n\times n\) matrix, where \(n\in\mathbb{N}\), whose row vectors consist of the same elements; compared to the preceding row vector, each row vector is rotated one element to the right.
**Definition 2.11** (Subsampled randomized circulant transform (\(\mathsf{SRCT}\))).: Let \(x\in\mathbb{R}^{n}\) be a random vector whose elements are i.i.d. Rademacher random variables.
Also, let \(P\in\mathbb{R}^{m\times n}\) be a random matrix in which each row contains a \(1\) at a uniformly distributed coordinate and zeros elsewhere.
Let \(G\in\mathbb{R}^{n\times n}\) be a circulant matrix (see Definition 2.10) generated by \(x\) and \(D\in\mathbb{R}^{n\times n}\) be a diagonal matrix whose diagonal elements are i.i.d. Rademacher random variables.
Then, the subsampled randomized circulant transform is defined as follows: \(S:=\ \frac{1}{\sqrt{m}}PGD\).
**Definition 2.12** (Tensor subsampled randomized circulant transform (TensorSRCT)).: Let \(x\in\mathbb{R}^{n}\) be a random vector, whose elements are i.i.d. Rademacher random variables.
Also, let \(P\in\mathbb{R}^{m\times n^{2}}\) be a random matrix in which each row contains a \(1\) at a uniformly distributed coordinate and zeros elsewhere.
Let \(G\in\mathbb{R}^{n\times n}\) be a circulant matrix (see Definition 2.10) generated by \(x\).
Let \(D_{1}\in\mathbb{R}^{n\times n}\) and \(D_{2}\in\mathbb{R}^{n\times n}\) be two independent diagonal matrices whose diagonal elements are i.i.d. Rademacher random variables.
Then, the tensor subsampled randomized circulant transform \(T:\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R}^{m}\) is defined as follows: \(T:=\frac{1}{\sqrt{m}}P\cdot(GD_{1}\otimes GD_{2})\).
**Remark 2.13**.: Similar to SRHT, we can utilize the fast Fourier transform with circulant matrix. SRCT can be applied to a vector of length \(n\) in \(O(n\log n)\) time, and TensorSRCT can be applied to \(x\otimes y\) in \(O(n\log n+m)\) time.
### Ose property of dense sketches
An important condition for sketch-and-solve regressions is OSE. We focus particularly on SRHT, SRCT, and their tensor variants.
**Lemma 2.14** (Lemma 2.11 in [25]).: _Let \(S\) be an_ SRHT _matrix defined in Definition 2.6. If \(m=O(\epsilon^{-2}d\log^{2}(nd/\delta))\), then \(S\) is an \((\epsilon,\delta,d,n)\)-_OSE_._
**Lemma 2.15** (Lemma 2.12 in [25]).: _Let \(S\) be a_ TensorSRHT _matrix defined in Definition 2.8. If \(m=O(\epsilon^{-2}d\log^{3}(nd/\epsilon\delta))\), then \(S\) is an \((\epsilon,\delta,d,n^{2})\)-_OSE _for degree-\(2\) tensors._
SRCT requires a larger row count than SRHT because the Gram matrix \(G^{\top}G\) equals \(I_{n}\) only in expectation.
**Lemma 2.16** (Informal version of Corollary C.7).: _Let \(S\) be an_ SRCT _matrix defined in Definition 2.11. If \(m=O(\epsilon^{-2}d^{2}\log^{2}(nd/\delta))\), then \(S\) is an \((\epsilon,\delta,d,n)\)-_OSE_._
**Lemma 2.17** (Informal version of Corollary C.8).: _Let \(S\) be an_ TensorSRCT _matrix defined in Definition 2.12. If \(m=O(\epsilon^{-2}d^{2}\log^{3}(nd/\delta))\), then \(S\) is an \((\epsilon,\delta,d,n^{2})\)-_OSE_._
### Probability tools
**Lemma 2.18** (Khintchine's inequality [15]).: _Let \(\sigma_{1},\cdots,\sigma_{n}\) be i.i.d. Rademacher random variables and \(z_{1},\cdots,z_{n}\) be real numbers. Then, there exists constants \(C,C^{\prime}>0\) such that_
\[\Pr[|\sum_{i=1}^{n}z_{i}\sigma_{i}|\geq Ct\|z\|_{2}]\leq\exp(-C^{ \prime}t^{2}).\]
**Lemma 2.19** (Hoeffding bound, [14]).: _Let \(Z_{1},\cdots,Z_{n}\) be independent, zero-mean random variables with \(Z_{i}\in[\alpha_{i},\beta_{i}]\). Then,_
\[\Pr[|\sum_{i=1}^{n}Z_{i}|>t]\leq 2\exp(-\frac{t^{2}}{2\sum_{i=1}^{n}( \beta_{i}-\alpha_{i})^{2}}).\]
**Lemma 2.20** (Lemma 1 on page 1325 of Laurent and Massart [14]).: _Let \(X\sim\mathcal{X}_{k}^{2}\) be a chi-squared distributed random variable with \(k\) degrees of freedom, where each underlying Gaussian has zero mean and variance \(\sigma^{2}\). Then,_
\[\Pr[X-k\sigma^{2}\geq(2\sqrt{kt}+2t)\sigma^{2}] \leq\,\exp{(-t)}\] \[\Pr[k\sigma^{2}-X\geq 2\sqrt{kt}\sigma^{2}] \leq\,\exp{(-t)}\]
**Lemma 2.21** (Hanson-Wright inequality [14]).: _Let \(x\in\mathbb{R}^{n}\) denote a random vector with independent entries \(x_{i}\) with \(\mathbb{E}[x_{i}]=0\) and \(|x_{i}|\leq K\). Let \(A\) be an \(n\times n\) matrix. Then, for every \(t\geq 0\),_
\[\Pr[|x^{\top}Ax-\mathbb{E}[x^{\top}Ax]|>t]\] \[\leq 2\cdot\exp\left(-c\min\{t^{2}/(K^{4}\|A\|_{F}^{2}),t/(K^{2}\|A\| )\}\right)\]
**Lemma 2.22** (Matrix Chernoff bound, Theorem 2.2 in [13]).: _Let \(X\) be a finite set of positive-semidefinite matrices with dimension \(d\times d\). Suppose that \(\max_{X\in\mathcal{X}}\lambda_{\max}(X)\leq B\). Sample \(\{X_{1},\cdots,X_{n}\}\) uniformly at random from \(\mathcal{X}\) without replacement. We define \(\mu_{\min}\) and \(\mu_{\max}\) as follows: \(\mu_{\min}:=n\cdot\lambda_{\min}(\mathbb{E}_{X\sim\mathcal{X}}[X])\) and \(\mu_{\max}:=n\cdot\lambda_{\max}(\mathbb{E}_{X\sim\mathcal{X}}[X])\). Then,_
\[\Pr[\lambda_{\min}(\sum_{i=1}^{n}X_{i})\leq(1-\delta)\mu_{\min}]\leq d\cdot \exp\left(-\delta^{2}\mu_{\min}/B\right)\]
_for \(\delta\in[0,1)\),_
\[\Pr[\lambda_{\max}(\sum_{i=1}^{n}X_{i})\geq(1+\delta)\mu_{\max}]\leq d\cdot\exp\left(-\delta^{2}\mu_{\max}/(4B)\right)\]
_for \(\delta\geq 0\)._
## 3 \(\ell_{\infty}\) guarantee via \(\mathsf{OCE}\)
In this section, we present a sufficient condition for a sketching matrix to give a good \(\ell_{\infty}\) guarantee: given a pair of fixed vectors \(g,h\) such that \(g^{\top}h=0\), if the sketching matrix approximately preserves their inner product with high probability, then it gives a good \(\ell_{\infty}\) guarantee for regression.
**Lemma 3.1** (Core lemma).: _Let \(A\in\mathbb{R}^{n\times d}\) be a fixed matrix. Let \(U\in\mathbb{R}^{n\times d}\) denote the orthonormal basis of \(A\). Let \(S\in\mathbb{R}^{m\times n}\) be a sketching matrix that satisfies two properties_
* \(S\) _is an_ \((0.1,\delta,d,n)\)_-_\(\mathsf{OSE}\) _(with_ \(\delta\in(0,0.1)\)_, Definition_ 2.1_)._
* \(S\) _is an_ \((\beta,\delta,n)\)_-_\(\mathsf{OCE}\) _(with_ \(\beta\geq 1\) _and_ \(\delta\in(0,0.1)\)_, Definition_ 2.3_)._
_For any fixed vectors \(a\in\mathbb{R}^{d}\) and \(b\in\mathbb{R}^{n}\) with \(U^{\top}b=0\), we have_
\[|a^{\top}(SA)^{\dagger}Sb|\lesssim\frac{\beta}{\sqrt{m}}\cdot\|a\|_{2}\cdot\|b \|_{2}\cdot\|\Sigma^{-1}\|\]
_holds with probability at least \(1-\delta\)._
Proof.: With probability 1, the matrix \(SA\in\mathbb{R}^{m\times d}\) has linearly independent columns.
Therefore, \((SA)^{\dagger}\in\mathbb{R}^{d\times m}\) is
\[(SA)^{\dagger} = (A^{\top}S^{\top}SA)^{-1}A^{\top}S^{\top}\] \[= (V\Sigma U^{\top}S^{\top}SU\Sigma V^{\top})^{-1}V\Sigma U^{\top}S ^{\top}\] \[= (V^{\top})^{-1}\Sigma^{-1}(U^{\top}S^{\top}SU)^{-1}\Sigma^{-1}V^ {-1}V\Sigma U^{\top}S^{\top}\] \[= V\Sigma^{-1}(U^{\top}S^{\top}SU)^{-1}U^{\top}S^{\top},\]
where the first step follows from \(SA\in\mathbb{R}^{m\times d}\) has full rank, the second step follows from SVD on \(A\in\mathbb{R}^{n\times d}\), the third step follows from \((AB)^{-1}=B^{-1}A^{-1}\), and the last step follows from the fact that \(V\) is orthogonal based on the property of SVD.
For convenience, we define \(x\) as follows:
\[x:=a^{\top}V\Sigma^{-1}(U^{\top}S^{\top}SU)^{-1}U^{\top}S^{\top}Sb.\]
In the next few paragraphs, we will explain how to upper bound \(|x|\) with high probability.
Since \(S\) is a \((0.1,\delta,d,n)\)-\(\mathsf{OSE}\) (Definition 2.1), we know
\[\Pr[\|I-U^{\top}S^{\top}SU\|\leq 0.1]\geq 1-\delta.\]
We condition on this event. It follows that
\[\|V\Sigma^{-1}(U^{\top}S^{\top}SU)^{-1}U^{\top}\|\] \[= \|\Sigma^{-1}(U^{\top}S^{\top}SU)^{-1}U^{\top}\|\] \[\leq \|\Sigma^{-1}\|\|(U^{\top}S^{\top}SU)^{-1}\|\|U^{\top}\|\] \[\leq \|\Sigma^{-1}\|\cdot\frac{1}{1-0.1}\cdot 1\] \[= O(\|\Sigma^{-1}\|),\]
where the first step follows from that \(V\) is a rotation, the second step follows from sub-multiplicativity, and the third step follows from \(\|I-U^{\top}S^{\top}SU\|\leq 0.1\) and that \(U\) is a rotation.
Hence, we have
\[\Pr[\|a^{\top}V\Sigma^{-1}(U^{\top}S^{\top}SU)^{-1}U^{\top}\|_{2}=O(\|\Sigma^{-1}\|\cdot\|a\|_{2})]\geq 1-\delta. \tag{3}\]
Let us define a vector \(u\in\mathbb{R}^{n}\)
\[u:=U(U^{\top}S^{\top}SU)^{-1}\Sigma^{-1}V^{\top}a\]
By the definition of \(\mathsf{OCE}\) (Definition 2.3), we have that
\[\Pr[|u^{\top}S^{\top}Sb-u^{\top}b|\leq\frac{\beta}{\sqrt{m}}\cdot\|u\|_{2}\|b\|_{2}]\geq 1-\delta,\]
where \(U^{\top}b=0\) gives us \(u^{\top}b=0\), and \(u^{\top}S^{\top}Sb=x\) by the definition of \(u\).
Combining this with Eq. (3) via a union bound, the above translates to
\[\Pr[|x|\leq C\cdot\frac{\beta}{\sqrt{m}}\cdot\|\Sigma^{-1}\|\|a\|_{2}\|b\|_{2}]\geq 1-2\delta \tag{4}\]
as desired.
We are now ready to prove the \(\ell_{\infty}\) guarantee given the inner product bound of Lemma 3.1.
**Theorem 3.2**.: _Suppose \(A\in\mathbb{R}^{n\times d}\) has full column rank and \(b\in\mathbb{R}^{n}\). Let \(S\in\mathbb{R}^{m\times n}\) be a sketching matrix satisfying conditions in Lemma 3.1. For any fixed vector \(a\in\mathbb{R}^{d}\), we have_
\[|\langle a,x^{*}\rangle-\langle a,x^{\prime}\rangle|\lesssim\frac{\epsilon}{ \sqrt{d}}\cdot\|a\|_{2}\cdot\|Ax^{*}-b\|_{2}\cdot\|A^{\dagger}\|,\]
_holds with probability at least \(1-\delta\), where \(x^{*}=\arg\min_{x\in\mathbb{R}^{d}}\|Ax-b\|_{2}\) and \(x^{\prime}=\arg\min_{x\in\mathbb{R}^{d}}\|SAx-Sb\|_{2}\)._
Proof.: Since \(A\) has full column rank, we have that \(x^{*}=A^{\dagger}b\). Similarly, \(SA\) has full column rank with probability \(1\), therefore \(x^{\prime}=(SA)^{\dagger}Sb\) and \((SA)^{\dagger}SA=I\). Thus, we have
\[|\langle a,x^{*}\rangle-\langle a,x^{\prime}\rangle| =|\langle a,x^{*}-(SA)^{\dagger}Sb\rangle|\] \[=|\langle a,(SA)^{\dagger}S(Ax^{*}-b)\rangle|\] \[=|\langle a,(SA)^{\dagger}S(AA^{\dagger}b-b)\rangle|\] \[=|\langle((SA)^{\dagger}S)^{\top}a,(I-UU^{\top})b\rangle| \tag{5}\]
where \(U\in\mathbb{R}^{n\times d}\) is an orthonormal basis for \(A\). It is well-known that \(I-UU^{\top}=U_{\perp}U_{\perp}^{\top}\), where \(U_{\perp}\in\mathbb{R}^{n\times(n-d)}\) is the orthonormal basis for the orthogonal complement of \(\operatorname{span}(A)\). Since only the component of \(b\) orthogonal to \(\operatorname{span}(A)\) contributes to Eq. (5), we may assume \(b\in\operatorname{span}(U_{\perp})\), or equivalently, \(U^{\top}b=0\). Thus, bounding Eq. (5) is equivalent to showing
\[|a^{\top}(SA)^{\dagger}Sb|\lesssim\frac{\beta}{\sqrt{m}}\cdot\|a\|_{2}\cdot\|b \|_{2}\cdot\|A^{\dagger}\|,\]
where the inequality holds with probability at least \(1-2\delta\) by Lemma 3.1. Finally, note that since \(U^{\top}b=0\), we have \(x^{*}=A^{\dagger}b=0\), so \(\|Ax^{*}-b\|_{2}=\|b\|_{2}\); combined with \(\beta/\sqrt{m}\lesssim\epsilon/\sqrt{d}\) for our choice of \(m\), we have proved
\[|\langle a,x^{*}\rangle-\langle a,x^{\prime}\rangle|\lesssim\frac{\epsilon}{ \sqrt{d}}\cdot\|a\|_{2}\cdot\|Ax^{*}-b\|_{2}\cdot\|A^{\dagger}\|.\]
Note that we only require the \(\mathsf{OSE}\) with \(\epsilon=O(1)\) and the \(\epsilon\) dependence follows from the row count of \(\mathsf{OCE}\).
### High probability bound for \(\mathsf{OCE}\)
In this section, we provide a unified framework for proving the high probability bound of \(\mathsf{OCE}\). Our analysis applies to the dense sketching matrices above, which can all be generated by first picking a set of fresh random signs and then forming the sketching matrix according to its distribution.
We state the key assumptions on dense sketching matrices that are sufficient for \(\mathsf{OCE}\) property.
**Assumption 3.3**.: _Let \(S\in\mathbb{R}^{m\times n}\) be a dense sketching matrix satisfying the following two assumptions:_
* _Pairwise inner product bound:_ \[\Pr[\max_{i\neq j}|\langle S_{*,i},S_{*,j}\rangle|\leq\frac{\sqrt{\log(n/ \delta)}}{\sqrt{m}}]\geq 1-\delta.\]
* _Column norm bound:_ \[\Pr[|\|S_{*,i}\|_{2}^{2}-1|\leq\frac{\sqrt{\log(n/\delta)}}{\sqrt{m}}]\geq 1 -\delta,\] _for all_ \(i\in[n]\)_._
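Both conditions are straightforward to test empirically. The sketch below (ours; sizes arbitrary) measures them for a dense \(\pm 1/\sqrt{m}\) sign matrix, whose columns have exactly unit norm, so only the pairwise inner product bound is nontrivial:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2048, 512

# Dense sign matrix with entries +/- 1/sqrt(m); columns have unit norm.
S = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

G = S.T @ S                              # Gram matrix of the columns
off = np.abs(G - np.diag(np.diag(G)))    # zero out the diagonal
print("max_{i!=j} |<S_i, S_j>| :", off.max())
print("reference sqrt(log(n)/m):", np.sqrt(np.log(n) / m))
print("max_i | ||S_i||^2 - 1 | :", np.abs(np.diag(G) - 1).max())
```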
**Lemma 3.4**.: _Let \(S\in\mathbb{R}^{m\times n}\) be a dense sketching matrix that meets Assumption 3.3. Let \(h\in\mathbb{R}^{n}\) and \(g\in\mathbb{R}^{n}\) be two fixed vectors. Then, the following property holds:_
\[|(g^{\top}S^{\top}Sh)-(g^{\top}h)|\leq\,\frac{\log^{1.5}(n/\delta)}{\sqrt{m}}\|g \|_{2}\|h\|_{2}\]
_holds with probability at least \(1-\delta\)._
Proof.: We can rewrite \((g^{\top}S^{\top}Sh)-(g^{\top}h)\) as follows:
\[(g^{\top}S^{\top}Sh)-(g^{\top}h)\] \[= \sum_{i=1}^{n}\sum_{j\in[n]\setminus i}^{n}g_{i}h_{j}\langle S_{ *,i},S_{*,j}\rangle+\sum_{i=1}^{n}g_{i}h_{i}(\|S_{*,i}\|_{2}^{2}-1)\] \[= \underbrace{\sum_{i=1}^{n}\sum_{j\in[n]\setminus i}^{n}g_{i}h_{j }\langle\sigma_{i}\overline{S}_{*,i},\sigma_{j}\overline{S}_{*,j}\rangle}_{ \text{off-diag}}\] \[+ \underbrace{\sum_{i=1}^{n}g_{i}h_{i}(\|S_{*,i}\|_{2}^{2}-1)}_{ \text{diag}},\]
where the first step follows from separating the diagonal and off-diagonal terms, and the second step follows from the fact that the \(\sigma_{i}\)'s are independent Rademacher random variables with \(S_{*,i}=\sigma_{i}\overline{S}_{*,i}\), \(\forall i\in[n]\).
We will focus on bounding the quantity off-diag, as diag can be handled in a rather simple fashion.
We define matrix \(A\in\mathbb{R}^{n\times n}\) and \(B\in\mathbb{R}^{n\times n}\) as follows:
\[A_{i,j} :=g_{i}h_{j}\cdot\langle\overline{S}_{*,i},\overline{S}_{*,j}\rangle, \forall i\in[n],j\in[n]\] \[B_{i,j} :=g_{i}h_{j}\cdot\max_{i^{\prime}\neq j^{\prime}}|\langle \overline{S}_{*,i^{\prime}},\overline{S}_{*,j^{\prime}}\rangle|, \forall i\in[n],j\in[n].\]
We define \(A^{\circ}\in\mathbb{R}^{n\times n}\) to be the matrix \(A\in\mathbb{R}^{n\times n}\) with its diagonal entries removed (set to zero).
By applying the Hanson-Wright inequality (Lemma 2.21), we have
\[\Pr[|\sigma^{\top}A^{\circ}\sigma|>\tau]\] \[\leq 2\cdot\exp\left(-c\cdot\min\{\tau^{2}/\|A^{\circ}\|_{F}^{2}, \tau/\|A^{\circ}\|\}\right)\]
We can upper bound \(\|A^{\circ}\|\) and \(\|A^{\circ}\|_{F}\).
\[\|A^{\circ}\| \leq\|A^{\circ}\|_{F}\] \[\leq\|A\|_{F}\] \[\leq\|B\|_{F}\] \[\leq\|g\|_{2}\cdot\|h\|_{2}\cdot\max_{i\neq j}|\langle\overline{S} _{*,i},\overline{S}_{*,j}\rangle|,\]
where the first step follows from \(\|\cdot\|\leq\|\cdot\|_{F}\), the second step follows from the definition of \(A^{\circ}\), the third step follows from the definition of \(A\) and \(B\), and the fourth step follows from the fact that \(B\) has rank \(1\), as \(B=\max_{i\neq j}|\langle\overline{S}_{*,i},\overline{S}_{*,j}\rangle|\cdot gh^{\top}\).
It remains to obtain a bound on \(\max_{i\neq j}|\langle\overline{S}_{*,i},\overline{S}_{*,j}\rangle|\). Note that for any column \(i,j\),
\[|\langle\overline{S}_{*,i},\overline{S}_{*,j}\rangle| =|\langle\sigma_{i}\overline{S}_{*,i},\sigma_{j}\overline{S}_{*,j}\rangle|\] \[=|\langle S_{*,i},S_{*,j}\rangle|,\]
where the first step follows from the fact that random signs do not change the magnitude of the inner product and the second step follows from the definition of \(S_{*,i}\) and \(S_{*,j}\).
Since \(S\) meets Assumption 3.3, we have that with probability at least \(1-\delta\),
\[\max_{i\neq j}|\langle\overline{S}_{*,i},\overline{S}_{*,j}\rangle|\leq\frac {\sqrt{\log(n/\delta)}}{\sqrt{m}}.\]
Conditioning on the above event, we have that
\[\Pr[|\texttt{off-diag}|>\tau]\] \[\leq 2\cdot\exp(-c\cdot\frac{\tau}{\|g\|_{2}\cdot\|h\|_{2}\cdot\frac{\sqrt{\log(n/\delta)}}{\sqrt{m}}}),\]
choosing \(\tau=\|g\|_{2}\cdot\|h\|_{2}\cdot\log^{1.5}(n/\delta)/\sqrt{m}\), we can show that
\[\Pr[|\texttt{off-diag}|\geq\|g\|_{2}\cdot\|h\|_{2}\frac{\log^{1.5}(n/\delta)}{ \sqrt{m}}]\leq\Theta(\delta).\]
To bound the term diag, note that due to Assumption 3.3, we have with probability at least \(1-\delta\), \(|\|S_{*,i}\|_{2}^{2}-1|\leq\frac{\sqrt{\log(n/\delta)}}{\sqrt{m}}\).
Conditioning on this event, we have
\[|\texttt{diag}| \leq\ \max_{i\in[n]}|\|S_{*,i}\|_{2}^{2}-1|\cdot|g^{\top}h|\] \[\leq\ \frac{\sqrt{\log(n/\delta)}}{\sqrt{m}}\cdot\|g\|_{2}\cdot\|h\|_{2},\]
where the last step is by Cauchy-Schwartz. Note that \(|\texttt{diag}|\) is subsumed by \(|\texttt{off-diag}|\).
Union bounding over all events, we have that
\[\Pr[|g^{\top}S^{\top}Sh-g^{\top}h|\geq\frac{\log^{1.5}(n/\delta)} {\sqrt{m}}\cdot\|g\|_{2}\cdot\|h\|_{2}]\] \[\leq\ \Theta(\delta).\qed\]
### Inner product bound for \(\texttt{SRHT}\) and \(\texttt{SRCT}\)
We will show that \(\texttt{SRHT}\) and \(\texttt{SRCT}\) satisfy Assumption 3.3. Before proving the pairwise inner product bound, we state a general property that characterizes these sketching matrices. This key property will be used in the later proof.
**Definition 3.5** (Sign structure).: For any sketching matrix, we say it has "Sign structure" if the following properties hold
* \(S_{k,i}\in\{\pm\frac{1}{\sqrt{m}}\}\), for all \(k\in[m],i\in[n]\).
* \(S_{k,i}\) and \(S_{k,j}\) are independent for any \(i\neq j\).
* \(\mathbb{E}[S_{k,i}]=0\) for all \(k\in[m]\) and \(i\in[n]\).
**Lemma 3.6**.: _Both_ SRHT _and_ SRCT _satisfy Definition 3.5._
Proof.: It follows from the definitions of two sketching matrices directly.
**Lemma 3.7** (SRHT and SRCT).: _Let \(S\in\mathbb{R}^{m\times n}\) be any sketching matrix that satisfies Definition 3.5. Then, we have_
\[\Pr[\max_{i\neq j}|\langle S_{*,i},S_{*,j}\rangle|\geq\frac{\sqrt{\log(n/ \delta)}}{\sqrt{m}}]\leq\Theta(\delta).\]
Proof.: Fix a pair of indices \(i\neq j\) and we define \(X\in\mathbb{R}^{m\times 2}\) as follows:
\[X:=\begin{bmatrix}S_{*,i}&S_{*,j}\end{bmatrix}\]
The Gram matrix is \(X^{\top}X=\sum_{k=1}^{m}G_{k}\), where
\[G_{k} =\begin{bmatrix}S_{k,i}&S_{k,j}\end{bmatrix}^{\top}\begin{bmatrix} S_{k,i}&S_{k,j}\end{bmatrix}\] \[=\begin{bmatrix}S_{k,i}^{2}&S_{k,i}S_{k,j}\\ S_{k,i}S_{k,j}&S_{k,j}^{2}\end{bmatrix}\] \[=\begin{bmatrix}\frac{1}{m}&S_{k,i}S_{k,j}\\ S_{k,i}S_{k,j}&\frac{1}{m}\end{bmatrix}.\]
where the first step follows from the definition of \(G_{k}\), the second step follows from the definition of matrix multiplication, and the last step follows from \(S_{k,i}^{2}=S_{k,j}^{2}=1/m\).
Note that \(G_{k}\) has eigenvalues \(0\) and \(\frac{2}{m}\), i.e.,
\[\lambda_{1}(G_{k})=\ 2/m,\quad\lambda_{2}(G_{k})=\ 0.\]
Since \(S_{k,i}\) and \(S_{k,j}\) are independent Rademacher random variables, we have
\[\mathbb{E}[S_{k,i}S_{k,j}]=\mathbb{E}[S_{k,i}]\cdot\mathbb{E}[S_{k,j}]=0.\]
Thus, we know
\[\mathbb{E}[G_{k}]=\begin{bmatrix}1/m&0\\ 0&1/m\end{bmatrix}. \tag{6}\]
Consequently, we have
\[\mathbb{E}[X^{\top}X] =\ \mathbb{E}[\sum_{k=1}^{m}G_{k}]=\ m\cdot\mathbb{E}[G_{k}]\] \[=\ m\cdot\begin{bmatrix}1/m&0\\ 0&1/m\end{bmatrix}\] \[=\ \begin{bmatrix}1&0\\ 0&1\end{bmatrix},\]
where the first step follows from the definition of \(X^{\top}X\), the second step follows from the fact that \(\mathbb{E}[ca]=c\,\mathbb{E}[a]\) for a constant \(c\), the third step follows from Eq. (6), and the last step follows from simple algebra.
Let \(\lambda_{i}(X^{\top}X)\) be the \(i\)-th eigenvalue of \(X^{\top}X\in\mathbb{R}^{2\times 2}\). By the matrix Chernoff bound (Lemma 2.22 with \(B=2/m\)), for any \(t>0\), we have
\[\Pr[\exists i\in[2],|\lambda_{i}(X^{\top}X)-1|\geq t]\leq 4\exp(-t^{2}m/2)\]
This means with probability at least \(1-4\exp(-t^{2}m/2)\), the eigenvalues of \(X^{\top}X\) are between \([1-t,1+t]\) and consequently, the eigenvalues of \(X^{\top}X-I_{2}\) are between \([-t,t]\). Let us choose \(t=O(\frac{\sqrt{\log(n/\delta)}}{\sqrt{m}})\), we have
\[\Pr[\|X^{\top}X-I_{2}\|\geq C\cdot\frac{\sqrt{\log(n/\delta)}}{ \sqrt{m}}]\leq\frac{\delta}{n^{2}}.\]
The proof can be wrapped up by noting that
\[X^{\top}X-I_{2}=\begin{bmatrix}0&\langle S_{*,i},S_{*,j}\rangle \\ \langle S_{*,i},S_{*,j}\rangle&0\end{bmatrix},\]
the spectral norm of this matrix is \(|\langle S_{*,i},S_{*,j}\rangle|\); union bounding over all \(n^{2}\) pairs of columns, we have
\[\Pr[\max_{i\neq j}|\langle S_{*,i},S_{*,j}\rangle|\geq C\cdot\frac{ \sqrt{\log(n/\delta)}}{\sqrt{m}}]\leq\delta.\]
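To illustrate Lemma 3.7 numerically, the following snippet (our own; it assumes the normalization \(S=\frac{1}{\sqrt{m}}PHD\), samples rows without replacement, and requires \(n\) to be a power of two) builds a small \(\mathsf{SRHT}\) and compares \(\max_{i\neq j}|\langle S_{*,i},S_{*,j}\rangle|\) with the \(\sqrt{\log(n)/m}\) scale:

```python
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(2)
n, m = 1024, 256                           # n must be a power of two

D = rng.choice([-1.0, 1.0], size=n)        # diagonal of random signs
H = hadamard(n)                            # +/-1 Hadamard matrix
rows = rng.choice(n, size=m, replace=False)
S = (H[rows] * D) / np.sqrt(m)             # P H D rescaled: entries +/- 1/sqrt(m)

G = S.T @ S
off = np.abs(G - np.diag(np.diag(G)))
print("max off-diagonal :", off.max())
print("sqrt(log(n)/m)   :", np.sqrt(np.log(n) / m))
print("unit columns     :", np.allclose(np.diag(G), 1.0))
```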
### Column norm bound for \(\mathsf{SRHT}\) and \(\mathsf{SRCT}\)
In this section, we prove the column norm bound for \(\mathsf{SRHT}\) and \(\mathsf{SRCT}\). In particular, their columns are unit vectors. In Appendix D, we prove that for a random Gaussian matrix, the squared column norm is a (scaled) \(\chi_{m}^{2}\) random variable that concentrates around \(1\) with high probability.
**Lemma 3.8** (\(\mathsf{SRHT}\) and \(\mathsf{SRCT}\)).: _Let \(S\in\mathbb{R}^{m\times n}\) be an \(\mathsf{SRHT}\) matrix or \(\mathsf{SRCT}\) matrix._
_Then, for any \(i\in[n]\), we have \(\|S_{*,i}\|_{2}^{2}=1\)._
Proof.: The proof directly follows from the definition.
For \(\mathsf{SRHT}\), recall \(S=PHD\); the column norm of \(H\) is \(\sqrt{n}\), and \(D\) is a diagonal matrix of random signs that does not change the norm. The matrix \(P\) subsamples \(m\) rows and rescales each entry by \(\sqrt{\frac{1}{m}}\), leaving each entry of \(S\) in \(\{\pm\frac{1}{\sqrt{m}}\}\). The (squared) column norm is then \(m\cdot\frac{1}{m}=1\).
For \(\mathsf{SRCT}\), the column norm of \(G\) is \(\sqrt{n}\) as well. Thus, by the same argument, the columns of \(\mathsf{SRCT}\) are unit vectors as well.
## 4 Put things together
Now, we're ready to present the proof for Theorem 1.1.
Proof of Theorem 1.1.: Using Lemma 2.14 (which shows that SRHT gives an OSE), we know that if \(m\geq d\log^{2}(n/\delta)\), then \(S\) is an \((O(1),\delta,n,d)\)-OSE.
Using Lemma 3.7 (which shows that SRHT gives an OCE), we know \(\beta=O(\log^{1.5}(n/\delta))\).
Using Lemma 3.1 (which shows that OSE together with OCE implies our result), we need to choose
\[m\geq\epsilon^{-2}d\beta^{2}\geq\epsilon^{-2}d\log^{3}(n/\delta)\]
Combining the above equation together, we have
\[m\geq\ d\log^{2}(n/\delta)+\epsilon^{-2}d\log^{3}(n/\delta)\geq\ \epsilon^{-2}d\log^{3}(n/\delta).\]
## 5 Conclusion
In this paper, we study the sketching-based regression algorithm with an \(\ell_{\infty}\) guarantee. We show that SRHT with \(m=\epsilon^{-2}d\log^{3}(n/\delta)\) rows provides the desired \(\ell_{\infty}\) guarantee solution, improving upon the \(\epsilon^{-2}d^{1+\gamma}\) rows for \(\gamma=\sqrt{\frac{\log\log n}{\log d}}\) of [13]. This is nearly-optimal up to logarithmic factors. Our proof adapts the oblivious coordinate-wise embedding property introduced in [11] in a novel way. We also greatly extend the reach of the \(\ell_{\infty}\) guarantee to degree-2 Kronecker product regression via the TensorSRHT matrix.
In addition, we introduce the SRCT and TensorSRCT matrices. These matrices can be applied in a fashion similar to SRHT, and they have similar OCE behaviors as SRHT.
Our result provides an elegant way to integrate fast, sketching-based regression solvers into optimization processes, in particular second-order methods. The regression problem per iteration can be solved in time nearly-linear in the input size, and the \(\ell_{\infty}\) guarantee comes in handy when analyzing convergence with approximate steps. It also gives improved generalization bounds on approximate regression via SRHT [13].
## Appendix
Roadmap.In Section A, we introduce the fundamental definitions and properties that we will use in Appendix. In Section B, we analyze and develop the \(\ell_{\infty}\) guarantee of Kronecker product regressions. In Section C, we introduce the Strong JL Moment Property and prove that both Circulant Transform and Tensor Circulant Transform satisfy this. In Section D, we focus on studying AMS, random Gaussian, and SRHT and show that the inner product is bounded on any pair of different columns of AMS, random Gaussian, and SRHT-dense sketching matrices.
## Appendix A Tools for matrices and probability
For matrices \(A_{1}\in\mathbb{R}^{n_{1}\times d_{1}}\) and \(A_{2}\in\mathbb{R}^{n_{2}\times d_{2}}\), we use \(A_{1}\otimes A_{2}\in\mathbb{R}^{n_{1}n_{2}\times d_{1}d_{2}}\) to denote the matrix whose \(((i_{1}-1)\cdot n_{2}+i_{2},(j_{1}-1)\cdot d_{2}+j_{2})\)-th entry is \((A_{1})_{i_{1},j_{1}}\cdot(A_{2})_{i_{2},j_{2}}\).
**Lemma A.1** (Markov's inequality).: _If \(X\) is a non-negative random variable and \(a>0\). Then we have_
\[\Pr[X\geq a]\leq\mathbb{E}[X]/a.\]
**Definition A.2** (Sub-exponential distribution ([11])).: We say \(X\in\mathsf{SubExp}(\sigma^{2},\alpha)\) with parameters \(\sigma>0\), \(\alpha>0\) if:
\[\mathbb{E}[e^{\lambda(X-\mathbb{E}[X])}]\leq\exp{(\lambda^{2}\sigma^{2}/2)},\forall|\lambda|<1/\alpha.\]
**Lemma A.3** (Tail bound for sub-exponential distribution ([11])).: _Let \(X\in\mathsf{SubExp}(\sigma^{2},\alpha)\) and \(\mathbb{E}[X]=\mu\). Then,_
\[\Pr[|X-\mu|\geq t]\leq\exp{(-0.5\min\{t^{2}/\sigma^{2},t/\alpha\})}.\]
**Claim A.4**.: _For every matrix \(A\in\mathbb{R}^{n_{1}\times n_{2}},B\in\mathbb{R}^{n_{2}\times n_{3}},C\in \mathbb{R}^{d_{1}\times d_{2}},D\in\mathbb{R}^{d_{2}\times d_{3}}\)_
\[(A\cdot B)\otimes(C\cdot D)=(A\otimes C)\cdot(B\otimes D).\]
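Claim A.4 (the mixed-product property) is easy to verify numerically; a minimal check, using numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 4)); B = rng.standard_normal((4, 2))
C = rng.standard_normal((5, 3)); D = rng.standard_normal((3, 6))

# (A B) kron (C D) == (A kron C) (B kron D)
print(np.allclose(np.kron(A @ B, C @ D), np.kron(A, C) @ np.kron(B, D)))
```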
## Appendix B Kronecker product regression with \(\ell_{\infty}\) guarantee
In this section, we study the \(\ell_{\infty}\) guarantee of Kronecker product regressions. Given two matrices \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\) and a label vector \(b\in\mathbb{R}^{n^{2}}\), the goal is to solve the regression \(\operatorname*{arg\,min}_{x\in\mathbb{R}^{d^{2}}}\|(A_{1}\otimes A_{2})x-b\|_{ 2}^{2}\). This problem can be easily generalized to product of \(q\) matrices and fast, input-sparsity time algorithms have been studied in a line of works [14, 15, 16, 17, 18].
### Main result
**Theorem B.1** (Tensor version of Theorem 1.1).: _Suppose \(n\leq\exp(d)\) and matrix \(A\in\mathbb{R}^{n^{2}\times d^{2}}\) and vector \(b\in\mathbb{R}^{n^{2}}\) are given, where \(A=A_{1}\otimes A_{2}\) for matrices \(A_{1},A_{2}\in\mathbb{R}^{n\times d}\) and \(b=b_{1}\otimes b_{2}\) for vectors \(b_{1},b_{2}\in\mathbb{R}^{n}\). Let \(S\in\mathbb{R}^{m\times n^{2}}\) be a_
* _tensor subsampled randomized Hadamard transform matrix (_\(\mathsf{TensorSRHT}\)_) with_ \(m=\Theta(\epsilon^{-2}d^{2}\log^{3}(n/\delta))\) _rows or_
* _tensor subsampled randomized circulant transform matrix (_\(\mathsf{TensorSRCT}\)_) with_ \(m=\Theta(\epsilon^{-2}d^{4}\log^{3}(n/\delta))\) _rows._
_For_
\[x^{\prime}=\arg\min_{x\in\mathbb{R}^{d^{2}}}\|SAx-Sb\|_{2}\]
_and_
\[x^{*}=\arg\min_{x\in\mathbb{R}^{d^{2}}}\|Ax-b\|_{2},\]
_and any fixed \(a\in\mathbb{R}^{d^{2}}\),_
\[|\langle a,x^{*}\rangle-\langle a,x^{\prime}\rangle|\leq\frac{ \epsilon}{d}\cdot\|a\|_{2}\cdot\|Ax^{*}-b\|_{2}\cdot\|A^{\dagger}\|\]
_with probability \(1-1/\operatorname{poly}(d)\)._
Proof.: Recall that we require \((O(1),\delta,d,n)\)-\(\mathsf{OSE}\) and \(\beta=O(\log^{1.5}(n/\delta))\)-\(\mathsf{OCE}\) for it to give \(\ell_{\infty}\) guarantee.
For \(\mathsf{OCE}\), it follows from Lemma B.3.
For \(\mathsf{TensorSRHT}\)'s \(\mathsf{OSE}\), it follows from Lemma 2.15 and for \(\mathsf{TensorSRCT}\), it follows from Corollary C.8.
**Remark B.2**.: The slightly different guarantee follows from the fact that the small dimension becomes \(d^{2}\) instead of \(d\). Let us discuss the utility of using these sketching matrices for solving the regression. As discussed in Def. 2.8 and 2.12, the sketch of each column of \(A_{1}\otimes A_{2}\) can be computed in \(O(n\log n+m)\) time instead of \(O(n^{2})\), thus the total running time of applying \(S\) to \(A\) is \(O(nd^{2}\log n+\operatorname{poly}(d))\). Similarly, \(Sb\) can be computed in time \(O(n\log n+\operatorname{poly}(d))\). The regression can then be solved in \(\widetilde{O}(nd^{2}+\operatorname{poly}(d))\) time. Prior works mainly focus on input-sparsity sketches [18], importance sampling [19], iterative methods [17] or more complicated sketches that scale well to \(q\) products and in the dynamic setting [16]. To the best of our knowledge, this is the first \(\ell_{\infty}\) guarantee for Kronecker product regression (with two matrices).
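To make the running-time discussion concrete, the following sketch (entirely ours; it models a \(\mathsf{TensorSRHT}\)-style map that evaluates \(m\) sampled entries of \((HD_{1}x_{1})\otimes(HD_{2}x_{2})\), which we assume matches the spirit of Def. 2.12 up to normalization and the row-sampling scheme) applies the sketch to a Kronecker vector in \(O(n\log n+m)\) time via a fast Walsh-Hadamard transform and checks the result against the naive \(O(n^{2})\) computation:

```python
import numpy as np
from scipy.linalg import hadamard

def fwht(x):
    """Fast Walsh-Hadamard transform (Sylvester ordering, unnormalized)."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a, b = x[i:i + h].copy(), x[i + h:i + 2 * h].copy()
            x[i:i + h], x[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return x

rng = np.random.default_rng(4)
n, m = 512, 64                                        # n a power of two
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
d1, d2 = rng.choice([-1.0, 1.0], n), rng.choice([-1.0, 1.0], n)
I, J = rng.integers(0, n, m), rng.integers(0, n, m)   # sampled row-index pairs

u, v = fwht(d1 * x1), fwht(d2 * x2)                   # O(n log n) each
sketch_fast = u[I] * v[J] / np.sqrt(m)                # O(m)

# Naive check: form the n^2-dimensional vector (H D1 x1) kron (H D2 x2).
H = hadamard(n)
full = np.kron(H @ (d1 * x1), H @ (d2 * x2))
print(np.allclose(sketch_fast, full[I * n + J] / np.sqrt(m)))   # True
```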
### Oblivious coordinate-wise embedding for \(\mathsf{TensorSRHT}\) and \(\mathsf{TensorSRCT}\)
**Lemma B.3** (\(\mathsf{TensorSRHT}\) and \(\mathsf{TensorSRCT}\), Tensor version of Lemma 3.7).: _Let \(S\in\mathbb{R}^{m\times n^{2}}\) be \(\mathsf{TensorSRHT}\) or \(\mathsf{TensorSRCT}\). Then, \(S\) is an \(\mathsf{OCE}\) with parameter \(\beta=\log^{1.5}(n/\delta)\)._
Proof.: To prove this result, we show that \(\mathsf{TensorSRHT}\) and \(\mathsf{TensorSRCT}\) satisfy Definition 3.5.
For \(\mathsf{TensorSRHT}\), recall \(S=\frac{1}{\sqrt{m}}P(HD_{1}\times HD_{2})\); since \(H\) is a Hadamard matrix and \(D_{1},D_{2}\) are just diagonal matrices with random signs, all entries of \(HD_{1}\times HD_{2}\) are also in \(\{\pm 1\}\). As \(P\) is a row sampling matrix and we rescale each entry by \(\frac{1}{\sqrt{m}}\), each entry of \(S\) is in \(\{\pm\frac{1}{\sqrt{m}}\}\). For entries at the same row but two different columns \(i,j\): if \(i\) is generated from two columns disjoint from those of \(j\), then it is clear that they are independent. Otherwise, suppose \(i\) is generated from columns \(a,b\) and \(j\) is generated from columns \(a,c\) with \(b\neq c\). Then they are again independent, as the relative sign is completely determined by the signs of \(b\) and \(c\). Finally, we need to verify \(\mathbb{E}[S_{k,i}]=0\); this is trivially true since the product of two random signs is still a random sign. For \(\mathsf{TensorSRCT}\), the argument is exactly the same.
Now that both of these matrices satisfy Definition 3.5, we can use Lemma 3.7 to give a bound on pairwise inner product. The column norm bound is automatically satisfied by definition. Thus, we can invoke Lemma 3.4 to wrap up the proof.
## Appendix C SRCT and TensorSRCT: OSE via strong JL moment property
In this section, we prove that both SRCT and TensorSRCT are OSE's. We prove this property via the strong JL moment property [1]. This gives a worse row count compared to that of SRHT and TensorSRHT. We believe that these two families of distributions should have similar row counts for an OSE and leave it as a major open problem to close the gap between these two distributions.
### Notations
To make the notation less heavy, we will use \(\|X\|_{L^{t}}\) for the \(t\)-th moment of a random variable \(X\). This is formally defined below.
**Definition C.1** (\(t\)-th moment).: For every integer \(t\geq 1\) and any random variable \(X\in\mathbb{R}\), we write
\[\|X\|_{L^{t}}=\left(\mathbb{E}\left[|X|^{t}\right]\right)^{1/t}\]
Note that
\[\|X+Y\|_{L^{t}}\leq\|X\|_{L^{t}}+\|Y\|_{L^{t}}\]
for any random variables \(X\), \(Y\) by the Minkowski inequality.
### Strong JL moment property
We show that both SRCT (see Definition 2.11) and TensorSRCT (see Definition 2.12) satisfy the so-called _strong JL moment property_. The strong JL moment property is one of the core properties that can be used to show that a sketching matrix has the subspace embedding property [14].
**Definition C.2** (Strong JL moment property [1]).: For every \(\epsilon,\delta\in[0,1]\), we say a distribution over random matrices \(S\in\mathbb{R}^{m\times n}\) has the Strong \((\epsilon,\delta)\)-JL Moment Property when
\[\|\|Sx\|_{2}^{2}-1\|_{L^{t}}\leq\epsilon\sqrt{t/\log(1/\delta)}\]
and
\[\mathbb{E}\left[\|Sx\|_{2}^{2}\right]=1\]
for all \(x\in\mathbb{R}^{n}\), \(\|x\|_{2}=1\) and every integer \(t\leq\log(1/\delta)\).
Given a distribution with strong JL moment property, it is well-known that such distribution provides OSE.
**Lemma C.3** (Lemma 11 of [1]).: _Let \(S\in\mathbb{R}^{m\times n}\) be a random matrix with \((\epsilon/d,\delta)\)-strong JL moment property (Def. C.2). Then, \(S\) is also an \((\epsilon,\delta,d,n)\)-OSE (Def. 2.1)._
To prove that SRCT (see Definition 2.11) and TensorSRCT (see Definition 2.12) satisfy the strong JL moment property, we prove that a more general class of matrices satisfies it.
More precisely, let \(k\in\mathbb{Z}_{>0}\) be a positive integer and
\[(D^{(i)})_{i\in[k]}\in\prod_{i\in[k]}\mathbb{R}^{n_{i}\times n_{i}}\]
be independent matrices, each with diagonal entries given by independent Rademacher variables.
Let \(n=\prod_{i\in[k]}n_{i}\) and \(P\in\{0,1\}^{m\times n}\) be a random sampling matrix in which each row contains exactly one uniformly distributed nonzero element which has value one.
Then, we prove that the matrix
\[S=\frac{1}{\sqrt{m}}PG\cdot(D_{1}\otimes\cdots\otimes D_{k})\]
satisfies the strong JL moment property, where \(G\) is an \(n\times n\) circulant matrix (see Definition 2.10) generated by a random vector whose elements are Rademacher variables.
If \(k=1\), then \(S\) is just an SRCT (see Definition 2.11). If \(k=2\), then \(S\) is a TensorSRCT (see Definition 2.12).
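For concreteness, here is a minimal, unoptimized construction of the \(k=1\) case, i.e., an \(\mathsf{SRCT}\)-style sketch; the normalization and the uniform with-replacement row sampling are our reading of Definition 2.11:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(5)
n, m = 256, 64

g = rng.choice([-1.0, 1.0], size=n)          # Rademacher generator of G
G = circulant(g)                             # n x n circulant matrix
D = np.diag(rng.choice([-1.0, 1.0], size=n)) # diagonal of random signs
rows = rng.integers(0, n, size=m)            # P: one uniform nonzero per row
S = (G @ D)[rows] / np.sqrt(m)               # S = (1/sqrt(m)) P G D

x = rng.standard_normal(n)
x /= np.linalg.norm(x)
print("||Sx||_2^2 =", np.linalg.norm(S @ x) ** 2)   # concentrates around 1
```

In practice one would apply the circulant \(G\) in \(O(n\log n)\) time via the FFT rather than materializing it.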
In order to prove this result we need a couple of lemmas. The first lemma can be seen as a version of Khintchine's Inequality (see Lemma 2.18) for higher order chaos.
**Lemma C.4** (Lemma 19 in [1]).: _Let \(t\geq 1\) and \(k\in\mathbb{Z}_{>0}\). Let \((\sigma^{(i)})_{i\in[k]}\in\prod_{i\in[k]}\mathbb{R}^{n_{i}}\) be independent vectors each satisfying the Khintchine's inequality (see Lemma 2.18):_
\[\|\langle\sigma^{(i)},x\rangle\|_{L^{t}}\leq C_{t}\|x\|_{2}\]
_for \(t\geq 1\) and any vector \(x\in\mathbb{R}^{n_{i}}\)._
_Let \((a_{i_{1},\ldots,i_{k}})_{i_{1}\in[n_{1}],\ldots,i_{k}\in[n_{k}]}\) be a tensor in \(\mathbb{R}^{n_{1}\times\cdots\times n_{k}}\). Then,_
\[\left\|\sum_{i_{1}\in[n_{1}],\ldots,i_{k}\in[n_{k}]}(\prod_{j\in[k]}\sigma^{( j)}_{i_{j}})a_{i_{1},\ldots,i_{k}}\right\|_{L^{t}}\leq C^{k}_{t}(\sum_{i_{1} \in[n_{1}],\ldots,i_{k}\in[n_{k}]}a^{2}_{i_{1},\ldots,i_{k}})^{\frac{1}{2}},\]
_for \(t\geq 1\)._
_Viewing \(a\in\mathbb{R}^{n_{1}\times\ldots\times n_{k}}\) as a vector, then_
\[\|\langle\sigma^{(1)}\otimes\cdots\otimes\sigma^{(k)},a\rangle\|_{L^{t}}\leq C ^{k}_{t}\|a\|_{2},\]
_for \(t\geq 1\)._
Proof.: The proof will be by induction on \(k\).
**Base case:** For \(k=1\), the result is by the assumption that the vectors satisfy Khintchine's inequality.
**Inductive case:** Assume that the result is true for every value up to \(k-1\).
Let
\[B_{i_{1},\ldots,i_{k-1}}=\sum_{i_{k}\in[n_{k}]}\sigma^{(k)}_{i_{k}}a_{i_{1}, \ldots,i_{k}}. \tag{7}\]
We then pull it out of the left hand term in the theorem:
\[\|\sum_{i_{1}\in[n_{1}],\ldots,i_{k}\in[n_{k}]}(\prod_{j\in[k]}\sigma^{(j)}_{i_{j}})a_{i_{1},\ldots,i_{k}}\|_{L^{t}} =\|\sum_{i_{1}\in[n_{1}],\ldots,i_{k-1}\in[n_{k-1}]}(\prod_{j\in[k-1]}\sigma^{(j)}_{i_{j}})B_{i_{1},\ldots,i_{k-1}}\|_{L^{t}}\] \[\leq C^{k-1}_{t}\|(\sum_{i_{1}\in[n_{1}],\ldots,i_{k-1}\in[n_{k-1}]}B^{2}_{i_{1},\ldots,i_{k-1}})^{\frac{1}{2}}\|_{L^{t}}\] \[= C^{k-1}_{t}\|\sum_{i_{1}\in[n_{1}],\ldots,i_{k-1}\in[n_{k-1}]}B^{2}_{i_{1},\ldots,i_{k-1}}\|^{\frac{1}{2}}_{L^{t/2}}\]
\[\leq C_{t}^{k-1}(\sum_{i_{1}\in[n_{1}],\ldots,i_{k-1}\in[n_{k-1}]}\|B_{i_{1},\ldots,i_{k-1}}^{2}\|_{L^{t/2}})^{\frac{1}{2}}\] \[= C_{t}^{k-1}(\sum_{i_{1}\in[n_{1}],\ldots,i_{k-1}\in[n_{k-1}]}\|B_ {i_{1},\ldots,i_{k-1}}\|_{L^{t}}^{2})^{\frac{1}{2}},\]
where the first step follows from Eq. (7), the second step follows from the inductive hypothesis, the third step follows from the definition of \(\|\cdot\|\), the fourth step follows from the triangle inequality, the fifth step follows from the definition of \(\|\cdot\|\).
It remains to bound
\[\|B_{i_{1},\ldots,i_{k-1}}\|_{L^{t}}^{2}\leq C_{t}^{2}\sum_{i_{k}\in[n_{k}]}a_ {i_{1},\ldots,i_{k}}^{2}\]
by Khintchine's inequality, which finishes the induction step and hence the proof.
The next lemma we will be using is a type of Rosenthal inequality, proved from first principles. It mixes large and small moments of random variables in an intricate way. For completeness, we include a proof here.
**Lemma C.5** (Properties of random variables with bounded \(t\)-th moments, Lemma 20 in [1]).: _There exists a universal constant \(L\) such that, for \(t\geq 1\), if \(X_{1},\ldots,X_{k}\) are independent non-negative random variables with bounded \(t\)-th moments, then_
\[\|\sum_{i\in[k]}(X_{i}-\mathbb{E}[X_{i}])\|_{L^{t}}\leq L\cdot\big{(}\sqrt{t} \cdot\|\max_{i\in[k]}X_{i}\|_{L^{t}}^{1/2}\cdot\big{(}\sum_{i\in[k]}\mathbb{E} [X_{i}]\big{)}^{1/2}+t\cdot\|\max_{i\in[k]}X_{i}\|_{L^{t}}\big{)}.\]
Proof.: Throughout these calculations \(L_{1},L_{2}\) and \(L_{3}\) will be universal constants.
\[\|\sum_{i\in[k]}(X_{i}-\mathbb{E}[X_{i}])\|_{L^{t}} \leq L_{1}\|\sum_{i\in[k]}\sigma_{i}X_{i}\|_{L^{t}}\] \[\leq L_{2}\sqrt{t}\cdot\|\sum_{i\in[k]}X_{i}^{2}\|_{L^{t/2}}^{1/2}\] \[\leq L_{2}\sqrt{t}\cdot\|\max_{i\in[k]}X_{i}\cdot\sum_{i\in[k]}X_ {i}\|_{L^{t/2}}^{1/2}\] \[\leq L_{2}\sqrt{t}\cdot\|\max_{i\in[k]}X_{i}\|_{L^{t}}^{1/2}\cdot \|\sum_{i\in[k]}X_{i}\|_{L^{t}}^{1/2}\] \[\leq L_{2}\sqrt{t}\cdot\|\max_{i\in[k]}X_{i}\|_{L^{t}}^{1/2}\cdot \Big{(}(\sum_{i\in[k]}\mathbb{E}[X_{i}])^{1/2}+L_{2}\|\sum_{i\in[k]}(X_{i}- \mathbb{E}[X_{i}])\|_{L^{t}}^{1/2}\Big{)} \tag{8}\]
where the first step follows from symmetrization of \(X_{i}\), the second step follows from Khintchine's inequality (see Lemma 2.18), the third step follows from Non-negativity of \(X_{i}\), the fourth step follows from Cauchy-Schwartz inequality, and the last step follows from the triangle inequality.
Now, let \(A,B,C\) be defined as follows:
\[C:=\|\sum_{i\in[k]}(X_{i}-\mathbb{E}[X_{i}])\|_{L^{t}}^{1/2},\]
\[B:=L_{2}(\sum_{i\in[k]}\mathbb{E}[X_{i}])^{1/2},\]
\[A:=\sqrt{t}\|\max_{i\in[k]}X_{i}\|_{L^{t}}^{1/2}.\]
Then, by rewriting Eq. (8), we have
\[C^{2}\leq A(B+C).\]
This implies that \(C\) is at most the largest root of the quadratic \(x^{2}-Ax-AB=0\), i.e., \(C\leq\frac{1}{2}(A+\sqrt{A^{2}+4AB})\leq A+\sqrt{AB}\).
Squaring this inequality gives
\[C^{2}\leq L_{3}(AB+A^{2}),\]
which completes the proof.
### SRCT and TensorSRCT satisfy strong JL moment property
We can now prove that SRCT (see Definition 2.11) and TensorSRCT (see Definition 2.12) have the strong JL moment property.
**Theorem C.6**.: _There exists a universal constant \(L\), such that, the following holds._
_Let \(k\in\mathbb{Z}_{>0}\). Let \((D^{(i)})_{i\in[k]}\in\prod_{i\in[k]}\mathbb{R}^{n_{i}\times n_{i}}\) be independent diagonal matrices with independent Rademacher variables._
_We define \(n:=\prod_{i\in[k]}n_{i}\), \(D:=D_{1}\otimes D_{2}\otimes\dots\otimes D_{k}\in\mathbb{R}^{n\times n}\) and \(G:=G_{1}\otimes\dots\otimes G_{k}\in\mathbb{R}^{n\times n}\), where each \(G_{i}\in\mathbb{R}^{n_{i}\times n_{i}}\) is a circulant matrix generated by an independent Rademacher random vector. Let \(P\in\mathbb{R}^{m\times n}\) be a row sampling matrix that has exactly one nonzero per row. Let \(S:=PGD\)._
_Let \(x\in\mathbb{R}^{n}\) be any vector with \(\|x\|_{2}=1\) and \(t\geq 1\)._
_Then,_
\[\left\|\frac{1}{m}\|PGDx\|_{2}^{2}-1\right\|_{L^{t}}\leq L(\sqrt{ \frac{tr^{k}}{m}}+\frac{tr^{k}}{m}),\]
_where \(r=\max\{t,\log m\}\)._
_There exists a universal constant \(C_{0}>1\), such that, by setting_
\[m=\Omega(\epsilon^{-2}\log(1/\delta)\cdot(C_{0}\log(1/(\epsilon \delta)))^{k}),\]
_we get that \(\frac{1}{\sqrt{m}}PGD\) has \((\epsilon,\delta)\)-strong JL moment property._
Proof.: Throughout the proof \(C_{1}\), \(C_{2}\) and \(C_{3}\) will denote universal constants.
For every \(i\in[m]\), we let \(P_{i}\) be the random variable that says which coordinate the \(i\)-th row of \(P\) samples.
We define the random variable
\[Z_{i}:=(PGDx)_{i}=G_{P_{i}}Dx,\]
where \(G_{P_{i}}\) denotes the \(P_{i}\)-th row of \(G\).
We note that since the variables \((P_{i})_{i\in[m]}\) are independent, the variables \((Z_{i})_{i\in[m]}\) are conditionally independent given \(D\); that is, if we fix \(D\), then \((Z_{i})_{i\in[m]}\) are independent.
Then, we could get the following inequality:
\[\|\frac{1}{m}\sum_{i\in[m]}Z_{i}^{2}-1\|_{L^{t}}\]
\[=\ \|(\mathbb{E}[|\frac{1}{m}\sum_{i\in[m]}Z_{i}^{2}-1|^{t}\ \big{|}\ D])^{1/t}\|_{L^{t}}\] \[\leq\ C_{1}\|\frac{\sqrt{t}}{m}\cdot(\mathbb{E}[(\max_{i\in[m]}Z_{i}^{2})^{t}\ \big{|}\ D])^{1/(2t)}\cdot(\sum_{i\in[m]}\mathbb{E}[Z_{i}^{2}\ \big{|}\ D])^{1/2}+\frac{t}{m}\cdot(\mathbb{E}[(\max_{i\in[m]}Z_{i}^{2})^{t}\ \big{|}\ D])^{1/t}\|_{L^{t}}\] \[\leq\ C_{1}\frac{\sqrt{t}}{m}\cdot\|(\mathbb{E}[(\max_{i\in[m]}Z_{i}^{2})^{t}\ \big{|}\ D])^{1/(2t)}\cdot(\sum_{i\in[m]}\mathbb{E}[Z_{i}^{2}\ \big{|}\ D])^{1/2}\|_{L^{t}}+C_{1}\frac{t}{m}\cdot\|\max_{i\in[m]}Z_{i}^{2}\|_{L^{t}}\] \[\leq\ C_{1}\frac{\sqrt{t}}{m}\cdot\|\max_{i\in[m]}Z_{i}^{2}\|_{L^{t}}^{1/2}\cdot\|\sum_{i\in[m]}\mathbb{E}[Z_{i}^{2}\ \big{|}\ D]\|_{L^{t}}^{1/2}+C_{1}\frac{t}{m}\cdot\|\max_{i\in[m]}Z_{i}^{2}\|_{L^{t}}\]
where the first step follows from Definition C.1, the second step follows from Lemma C.5, the third step follows from triangle inequality, and the last step follows from Cauchy-Schwartz inequality.
Note that each row of \(G\) is generated by taking the tensor product of independent Rademacher random vectors; we can thus view the row vector itself as a length-\(n\) Rademacher random vector. Thus,
\[\mathbb{E}[Z_{i}^{2}|D] =\sum_{\sigma\in\{-1,+1\}^{n}}p_{\sigma}\cdot(\langle x,\sigma \rangle)^{2}\] \[=\frac{(x_{1}+x_{2}+\cdots+x_{n})^{2}}{2^{n}}+\frac{(x_{1}+x_{2}+ \cdots-x_{n})^{2}}{2^{n}}+\cdots+\frac{(-x_{1}-x_{2}-\cdots-x_{n})^{2}}{2^{n}}\] \[=x_{1}^{2}+x_{2}^{2}+\cdots+x_{n}^{2}\] \[=\|x\|_{2}^{2}, \tag{9}\]
where the first step follows from the definition of the expected value, \(\mathbb{E}[Z_{i}^{2}|D]\), the second step follows from expanding all the \(2^{n}\) possibilities, the third step follows from simple algebra, and the last step follows from the definition of \(\|\cdot\|_{2}^{2}\).
We could get that
\[\sum_{i\in[m]}\mathbb{E}[Z_{i}^{2}\ \big{|}\ D] =\sum_{i\in[m]}\|x\|_{2}^{2}\] \[=m,\]
where the first step follows from Eq. (9) and the second step follows from \(\|x\|_{2}^{2}=1\).
To bound \(\|\max_{i\in[m]}Z_{i}^{2}\|_{L^{t}}\), we could show
\[\|Z_{i}^{2}\|_{L^{r}} =\|G_{P_{i}}Dx\|_{L^{2r}}^{2}\] \[\leq r^{k}\|x\|_{2}^{2},\]
where the first step follows from the definition of \(Z_{i}\), and the second step follows from Lemma C.4, since the \(P_{i}\)-th row of \(G\) is a tensor product of Rademacher vectors and multiplying it entrywise by the independent signs of \(D\) preserves this structure.
We then bound the maximum using a sufficiently high-order sum:
\[\|\max_{i\in[m]}Z_{i}^{2}\|_{L^{t}} \leq\|\max_{i\in[m]}Z_{i}^{2}\|_{L^{r}}\] \[\leq(\sum_{i\in[m]}\|Z_{i}^{2}\|_{L^{r}}^{r})^{1/r}\]
\[\leq m^{1/r}r^{k}\|x\|_{2}^{2}\leq er^{k},\]
where the first step follows from \(t\leq r\) and Definition C.1, the second step follows from the non-negativity of \(Z_{i}^{2}\), and the last step follows from \(\|x\|_{2}=1\) and \(r\geq\log m\), which gives \(m^{1/r}\leq e\).
This gives us that
\[\|\frac{1}{m}\sum_{i\in[m]}Z_{i}^{2}-\|x\|_{2}^{2}\|_{L^{t}}\leq C_{2}\sqrt{ \frac{tr^{k}}{m}}+C_{2}\frac{tr^{k}}{m} \tag{10}\]
which finishes the first part of the proof.
We want to choose \(m\) as follows
\[m=16C_{2}^{2}\epsilon^{-2}\cdot\log(1/\delta)\cdot(C_{3}\log(1/(\delta\epsilon )))^{k}.\]
With the above choice of \(m\), the following condition on \(r\) holds:
\[r\leq C_{3}\log(1/(\delta\epsilon)).\]
Hence,
\[m\geq 16C_{2}^{2}\epsilon^{-2}\cdot\log(1/\delta)\cdot r^{k}.\]
For all \(1\leq t\leq\log(1/\delta)\), we then get that
\[\left\|\frac{1}{m}\|PGDx\|_{2}^{2}-1\right\|_{L^{t}} \leq C_{2}\sqrt{\frac{tr^{k}}{m}}+C_{2}\frac{tr^{k}}{m}\] \[\leq C_{2}(\frac{tr^{k}}{16C_{2}^{2}\epsilon^{-2}\log(1/\delta)r^{k }})^{1/2}+C_{2}\frac{tr^{k}}{16C_{2}^{2}\epsilon^{-2}\log(1/\delta)r^{k}}\] \[\leq 0.5\epsilon\sqrt{t/\log(1/\delta)}+0.5\epsilon^{2}t/\log(1/\delta)\] \[\leq \epsilon\sqrt{t/\log(1/\delta)}.\]
where the first step follows from Eq. (10), the second step follows from the choice of \(m\), the third step follows from simple algebra, and the last step follows from \(\epsilon^{2}\leq\epsilon\) and \(t/\log(1/\delta)\leq\sqrt{t/\log(1/\delta)}\) (since \(t/\log(1/\delta)\in(0,1]\)).
This finishes the proof.
As two corollaries, we have that \(\mathsf{SRCT}\) and \(\mathsf{TensorSRCT}\) are \(\mathsf{OSE}\)'s whose row count scales with \(d^{2}\) instead of \(d\).
**Corollary C.7** (\(\mathsf{SRCT}\) is an \(\mathsf{OSE}\)).: _Let \(S\in\mathbb{R}^{m\times n}\) be an \(\mathsf{SRCT}\) matrix with \(m=\Theta(\epsilon^{-2}d^{2}\log^{2}(n/\epsilon\delta))\), then \(S\) is an \((\epsilon,\delta,d,n)\)-\(\mathsf{OSE}\)._
Proof.: The proof follows from combining Lemma C.3 and Theorem C.6 with \(k=1\).
**Corollary C.8** (\(\mathsf{TensorSRCT}\) is an \(\mathsf{OSE}\)).: _Let \(S\in\mathbb{R}^{m\times n}\) be a \(\mathsf{TensorSRCT}\) matrix with \(m=\Theta(\epsilon^{-2}d^{2}\log^{3}(n/\epsilon\delta))\), then \(S\) is an \((\epsilon,\delta,d,n^{2})\)-\(\mathsf{OSE}\)._
Proof.: The proof follows from combining Lemma C.3 and Theorem C.6 with \(k=2\).
## Appendix D Gaussian and AMS
In this section, we prove that both random Gaussian matrices and AMS matrices satisfy \(\mathsf{OCE}\) with good parameter \(\beta\). Combining with the fact that they are \(\mathsf{OSE}\)'s, one can derive \(\ell_{\infty}\) guarantee for them.
### Ose property of random Gaussian and AMS
The OSE property for these two distributions is folklore. For a proof, see, e.g., [20].
**Lemma D.1**.: _Let \(S\) be a random Gaussian matrix defined in Def. 2.4. If \(m=\Theta(\epsilon^{-2}(d+\log(d/\delta)))\), then \(S\) is an \((\epsilon,\delta,d,n)\)-_OSE_._
**Lemma D.2**.: _Let \(S\) be an AMS matrix defined in Def. 2.5. If \(m=\Theta(\epsilon^{-2}d\log^{2}(n/\delta))\), then \(S\) is an \((\epsilon,\delta,d,n)\)-_OSE_._
### Oce property of random Gaussian and AMS
In this section, we prove the OCE property of random Gaussian and AMS. We start with the pairwise inner product bound for these two distributions. For the column norm bound, AMS has unit columns by construction, and we prove the bound for random Gaussian matrices.
**Lemma D.3** (Gaussian pairwise inner product bound, Lemma B.18 in [10]).: _Let \(S\in\mathbb{R}^{m\times n}\) be a random Gaussian matrix (Definition 2.4)._
_Then, we have:_
\[\Pr[\max_{i\neq j}|\langle S_{*,i},S_{*,j}\rangle|\geq C\cdot\frac{\sqrt{\log \left(n/\delta\right)}}{\sqrt{m}}]\leq\Theta(\delta).\]
Proof.: Note for \(i\neq j\), \(S_{*,i},S_{*,j}\sim\mathcal{N}(0,\frac{1}{m}I_{m})\) are two independent Gaussian vectors. Let \(z_{k}=S_{k,i}S_{k,j}\) and \(z=\langle S_{*,i},S_{*,j}\rangle\).
Then, we have for any \(|\lambda|\leq m/2\),
\[\mathbb{E}[e^{\lambda z_{k}}]=\frac{1}{\sqrt{1-\lambda^{2}/m^{2}}}\leq\exp \left(\lambda^{2}/m^{2}\right),\]
where the first step follows from \(z_{k}=\frac{1}{4}(S_{k,i}+S_{k,j})^{2}-\frac{1}{4}(S_{k,i}-S_{k,j})^{2}=\frac{1}{2m}(Q_{1}-Q_{2})\) where \(Q_{1},Q_{2}\sim\chi_{1}^{2}\), and \(\mathbb{E}[e^{\lambda Q}]=\frac{1}{\sqrt{1-2\lambda}}\) for any \(Q\sim\chi_{1}^{2}\).
This implies \(z_{k}\in\mathsf{SubExp}(2/m^{2},2/m)\) is a sub-exponential random variable. Here \(\mathsf{SubExp}\) is the shorthand of sub-exponential random variable.
Thus, we have
\[z=\sum_{k=1}^{m}z_{k}\in\mathsf{SubExp}(2/m,2/m),\]
by sub-exponential concentration Lemma A.3, we have
\[\Pr[|z|\geq t]\leq 2\exp\left(-mt^{2}/4\right)\]
for \(0<t<1\). Picking \(t=\sqrt{\log\left(n^{2}/\delta\right)/m}\), we have
\[\Pr[|\langle S_{*,i},S_{*,j}\rangle|\geq C\cdot\frac{\sqrt{\log \left(n/\delta\right)}}{\sqrt{m}}]\leq\delta/n^{2}.\]
Taking the union bound over all \((i,j)\in[n]\times[n]\) and \(i\neq j\), we complete the proof.
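A quick Monte Carlo check of Lemma D.3 (our own illustration): for columns with i.i.d. \(N(0,1/m)\) entries, the maximum pairwise inner product should stay within a constant factor of \(\sqrt{\log(n)/m}\):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n, trials = 512, 128, 20

ratios = []
for _ in range(trials):
    S = rng.standard_normal((m, n)) / np.sqrt(m)   # N(0, 1/m) entries
    G = S.T @ S
    off_max = np.abs(G - np.diag(np.diag(G))).max()
    ratios.append(off_max / np.sqrt(np.log(n) / m))
print("max_{i!=j}|<S_i,S_j>| / sqrt(log(n)/m):",
      f"mean={np.mean(ratios):.2f}, max={np.max(ratios):.2f}")  # O(1) ratios
```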
**Lemma D.4** (AMS pairwise inner product bound).: _Let \(S\in\mathbb{R}^{m\times n}\) be an AMS matrix (Definition 2.5). Let \(\{\sigma_{i}\}_{i\in[n]}\) be independent Rademacher random variables and \(\overline{S}\in\mathbb{R}^{m\times n}\) with \(\overline{S}_{*,i}=\sigma_{i}S_{*,i}\), \(\forall i\in[n]\)._
_Then, we have:_
\[\Pr[\max_{i\neq j}|\langle\overline{S}_{*,i},\overline{S}_{*,j} \rangle|\geq\frac{\sqrt{\log{(n/\delta)}}}{\sqrt{m}}]\leq\Theta(\delta)\]
Proof.: Note for any fixed \(i\neq j\), \(\overline{S}_{*,i}\) and \(\overline{S}_{*,j}\) are independent. By Hoeffding inequality (Lemma 2.19), we have
\[\Pr[|\langle\overline{S}_{*,i},\overline{S}_{*,j}\rangle|\geq t]\] \[\leq 2\exp{(-\frac{2t^{2}}{\sum_{i=1}^{m}(\frac{1}{m}-(-\frac{1}{m}) )^{2}})}\] \[\leq 2e^{-t^{2}m/2},\]
where the second step follows from simple algebra \((\sum_{i=1}^{m}(2/m)^{2}=4/m\), so the exponent is \(-2t^{2}/(4/m)=-t^{2}m/2)\).
Choosing \(t=\sqrt{2\log{(2n^{2}/\delta)}}/\sqrt{m}\), we have
\[\Pr[|\langle\overline{S}_{*,i},\overline{S}_{*,j}\rangle|\geq\sqrt{2\log{(2n^{ 2}/\delta)}}/\sqrt{m}]\leq\frac{\delta}{n^{2}},\]
union bound over all \(n^{2}\) pairs of columns gives the desired result.
**Lemma D.5** (Gaussian column norm bound).: _Let \(S\in\mathbb{R}^{m\times n}\) be a random Gaussian matrix._
_Then, for any \(i\in[n]\), we have_
\[\Pr[|\|S_{*,i}\|_{2}^{2}-1|\geq\frac{\sqrt{\log(n/\delta)}}{\sqrt{m}}]\leq \Theta(\delta).\]
Proof.: For any column \(S_{*,i}\), note that \(\|S_{*,i}\|_{2}^{2}\) is a sum of \(m\) squared i.i.d. Gaussian entries, each with zero mean and variance \(\frac{1}{m}\); that is, \(\|S_{*,i}\|_{2}^{2}\sim\frac{1}{m}\chi_{m}^{2}\).
By Lemma 2.20, we have
\[\Pr[|\|S_{*,i}\|_{2}^{2}-1|\geq 2\frac{\sqrt{t}}{\sqrt{m}}]\leq 2\exp(-t).\]
Setting \(t=\log(n/\delta)\), we have
\[\Pr[|\|S_{*,i}\|_{2}^{2}-1|\geq C\cdot\frac{\sqrt{\log(n/\delta)}}{\sqrt{m}}] \leq\delta/n,\]
the proof is concluded by union bounding over all \(n\) columns.
We conclude random Gaussian and AMS are \(\mathsf{OCE}\)'s.
**Lemma D.6** (Gaussian \(\mathsf{OCE}\)).: _Let \(S\in\mathbb{R}^{m\times n}\) be a random Gaussian matrix, then \(S\) is a \((\log^{1.5}(n/\delta),\delta,n)\)-\(\mathsf{OCE}\)._
Proof.: By Lemma D.3 and Lemma D.5, we know both pairwise inner product bound and column norm bound hold and thus, by Lemma 3.4, \(S\) satisfies the desired \(\mathsf{OCE}\) property.
**Lemma D.7** (AMS \(\mathsf{OCE}\)).: _Let \(S\in\mathbb{R}^{m\times n}\) be an AMS matrix, then \(S\) is a \((\log^{1.5}(n/\delta),\delta,n)\)-\(\mathsf{OCE}\)._
Proof.: The proof is similar to Lemma D.6. The column norm bound follows from the definition, since the columns of an AMS matrix are unit vectors.
2308.15178 | Symbolic LTLf Best-Effort Synthesis | We consider an agent acting to fulfil tasks in a nondeterministic
environment. When a strategy that fulfills the task regardless of how the
environment acts does not exist, the agent should at least avoid adopting
strategies that prevent from fulfilling its task. Best-effort synthesis
captures this intuition. In this paper, we devise and compare various symbolic
approaches for best-effort synthesis in Linear Temporal Logic on finite traces
(LTLf). These approaches are based on the same basic components, however they
change in how these components are combined, and this has a significant impact
on the performance of the approaches as confirmed by our empirical evaluations. | Giuseppe De Giacomo, Gianmarco Parretti, Shufang Zhu | 2023-08-29T10:00:33Z | http://arxiv.org/abs/2308.15178v1 | # Symbolic \(\mathtt{ltl}_{\boldsymbol{f}}\) Best-Effort Synthesis
###### Abstract
We consider an agent acting to fulfil tasks in a nondeterministic environment. When a strategy that fulfills the task regardless of how the environment acts does not exist, the agent should at least avoid adopting strategies that prevent from fulfilling its task. Best-effort synthesis captures this intuition. In this paper, we devise and compare various symbolic approaches for best-effort synthesis in Linear Temporal Logic on finite traces (\(\mathtt{ltl}_{f}\)). These approaches are based on the same basic components, however they change in how these components are combined, and this has a significant impact on the performance of the approaches as confirmed by our empirical evaluations.
## 1 Introduction
We consider an agent acting to fulfill tasks in a nondeterministic environment, as considered in Planning in nondeterministic (adversarial) domains [8, 15], except that we specify both the environment and the task in Linear Temporal Logic (\(\mathtt{ltl}\)) [3], the formalism typically used for specifying complex dynamic properties in Formal Methods [5].
In fact, we consider Linear Temporal Logic on finite traces (\(\mathtt{ltl}_{f}\)) [11, 12], which maintains the syntax of \(\mathtt{ltl}\)[18] but is interpreted on finite traces. In this setting, we study synthesis [17, 13, 12, 3]. In particular, we look at how to synthesize a strategy that is guaranteed to satisfy the task against all environment behaviors that conform to the environment specification.
When a winning strategy that fulfills the agent's task, regardless of how the environment acts, does not exist, the agent should at least avoid adopting strategies that prevent it from fulfilling its task. Best-effort synthesis captures this intuition. More precisely, best-effort synthesis captures the game-theoretic rationality principle that a player would not use a strategy that is "dominated" by another of its strategies (i.e. if the other strategy would fulfill the task against more environment behaviors than the one chosen by the player). Best-effort strategies have been studied in [4] and proven to have some notable properties:
(_i_) they always exist, (_ii_) if a winning strategy exists, then best-effort strategies are exactly the winning strategies, (_iii_) best-effort strategies can be computed in 2EXPTIME as computing winning strategies (best-effort synthesis is indeed 2EXPTIME-complete).
The algorithms for best-effort synthesis in ltl and ltl\({}_{f}\) have been presented in [4]. These algorithms are based on creating, solving, and combining the solutions of three distinct games played on the same game arena. The arena is obtained from the automata corresponding to the formulas \(\mathcal{E}\) and \(\varphi\) constituting the environment and the task specifications, respectively.
In particular, the algorithm for ltl\({}_{f}\) best-effort synthesis appears to be quite promising in practice since well-performing techniques for each component of the algorithm are available in the literature. These components are: (_i_) transformation of the ltl\({}_{f}\) formulas \(\mathcal{E}\) and \(\varphi\) into deterministic finite automata (dfa), which can be double-exponential in the worst case, but for which various good techniques have been developed [16, 22, 6, 10]; (_ii_) Cartesian product of dfas, which is polynomial; (_iii_) minimization of dfas, which is also polynomial; (_iv_) fixpoint computation over dfa to compute adversarial and cooperative winning strategies for reaching the final states, which is again polynomial.
In this paper, we refine the ltl\({}_{f}\) best-effort synthesis techniques presented in [4] by using symbolic techniques [7, 5, 22]. In particular, we show three different symbolic approaches that combine the above operations in different ways (and in fact allow for different levels of minimization). We then compare the three approaches through empirical evaluations. From this comparison, a clear winner emerges. Interestingly, the winner does not fully exploit dfa minimization to minimize the dfa whenever it is possible. Instead, this approach uses uniformly the same arena for all three games (hence giving up on minimization at some level). Finally, it turns out that the winner performs better in computing best-effort solutions even than state-of-the-art tools that compute only adversarial solutions. These findings confirm that ltl\({}_{f}\) best-effort synthesis is indeed well suited for efficient and scalable implementations.
The rest of the paper is organized as follows. In Section 2, we recall the main notions of ltl\({}_{f}\) synthesis. In Section 3, we discuss ltl\({}_{f}\) best-effort synthesis, and the algorithm presented in [4]. In Section 4, we introduce three distinct symbolic approaches for ltl\({}_{f}\) best-effort synthesis: the first (c.f., Subsection 4.2) is a direct symbolic implementation of the algorithm presented in [4]; the second one (c.f., Subsection 4.3) favors maximally conducting dfa minimization, thus getting the smallest possible arenas for the three games; and the third one (c.f., Subsection 4.4) gives up dfa minimization at some level, and creates a single arena for the three games. In Section 5, we perform an empirical evaluation of the three algorithms. We conclude the paper in Section 6.
## 2 Preliminaries
ltl\({}_{f}\) Basics. Linear Temporal Logic on finite traces (ltl\({}_{f}\)) is a specification language to express temporal properties on finite traces [11]. In particular, ltl\({}_{f}\)
has the same syntax as ltl, which is instead interpreted over infinite traces [18]. Given a set of propositions \(\Sigma\), \(\textsc{ltl}_{f}\) formulas are generated as follows:
\[\varphi::=a\mid(\varphi_{1}\land\varphi_{2})\mid(\neg\varphi)\mid(\mathsf{O} \varphi)\mid(\varphi_{1}\,\mathcal{U}\,\varphi_{2})\]
where \(a\in\Sigma\) is an _atom_, \(\mathsf{O}\) (_Next_), and \(\mathcal{U}\) (_Until_) are temporal operators. We make use of standard Boolean abbreviations such as \(\lor\) (or) and \(\rightarrow\) (implies), _true_ and _false_. In addition, we define the following abbreviations _Weak Next_\(\,\mathsf{O}\varphi\equiv\neg\mathsf{O}\neg\varphi\), _Eventually_\(\,\Diamond\varphi\equiv\mathit{true}\,\mathcal{U}\,\varphi\) and _Always_\(\Box\varphi\equiv\neg\Diamond\neg\varphi\). The length/size of \(\varphi\), written \(|\varphi|\), is the number of operators in \(\varphi\).
A _finite_ (resp. _infinite_) _trace_ is a sequence of propositional interpretations \(\pi\in(2^{\Sigma})^{*}\) (resp. \(\pi\in(2^{\Sigma})^{\omega}\)). For every \(i\geq 0\), \(\pi_{i}\in 2^{\Sigma}\) is the \(i\)-th interpretation of \(\pi\). Given a finite trace \(\pi\), we denote its last instant (i.e., index) by \(\mathsf{lst}(\pi)\). \(\textsc{ltl}_{f}\) formulas are interpreted over finite, nonempty traces. Given a finite, non-empty trace \(\pi\in(2^{\Sigma})^{+}\), we define when an ltl\({}_{f}\) formula \(\varphi\)_holds_ at instant \(i\), \(0\leq i\leq\mathsf{lst}(\pi)\), written \(\pi,i\models\varphi\), inductively on the structure of \(\varphi\), as:
* \(\pi,i\models a\) iff \(a\in\pi_{i}\) (for \(a\in\Sigma\));
* \(\pi,i\models\neg\varphi\) iff \(\pi,i\not\models\varphi\);
* \(\pi,i\models\varphi_{1}\land\varphi_{2}\) iff \(\pi,i\models\varphi_{1}\) and \(\pi,i\models\varphi_{2}\);
* \(\pi,i\models\mathsf{O}\varphi\) iff \(i<\mathsf{lst}(\pi)\) and \(\pi,i+1\models\varphi\);
* \(\pi,i\models\varphi_{1}\,\mathcal{U}\,\varphi_{2}\) iff \(\exists j\) such that \(i\leq j\leq\mathsf{lst}(\pi)\) and \(\pi,j\models\varphi_{2}\), and \(\forall k,i\leq k<j\) we have that \(\pi,k\models\varphi_{1}\).
We say \(\pi\)_satisfies_\(\varphi\), written as \(\pi\models\varphi\), if \(\pi,0\models\varphi\).
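The inductive semantics above translates directly into a recursive evaluator over finite traces. The following sketch is our own minimal illustration; the tuple-based formula encoding is an assumption for the demo, not the paper's notation:

```python
def holds(phi, trace, i=0):
    """Evaluate an LTLf formula at instant i of a finite, non-empty trace.

    trace: list of sets of atoms, e.g. [{'a'}, {'a'}, {'b'}].
    phi:   nested tuples, e.g. ('U', ('atom', 'a'), ('atom', 'b')).
    """
    op = phi[0]
    if op == 'atom':                       # pi, i |= a  iff  a in pi_i
        return phi[1] in trace[i]
    if op == 'not':
        return not holds(phi[1], trace, i)
    if op == 'and':
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == 'X':                          # Next: requires a successor instant
        return i < len(trace) - 1 and holds(phi[1], trace, i + 1)
    if op == 'U':                          # Until: phi2 at some j, phi1 before
        return any(holds(phi[2], trace, j) and
                   all(holds(phi[1], trace, k) for k in range(i, j))
                   for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# The trace {a}{a}{b} satisfies a U b (and hence Eventually b = true U b).
print(holds(('U', ('atom', 'a'), ('atom', 'b')), [{'a'}, {'a'}, {'b'}]))  # True
```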
Reactive Synthesis Under Environment Specifications.Reactive synthesis concerns computing a strategy that allows the agent to achieve its goal in an adversarial environment. In many AI applications, the agent has a model describing possible environment behaviors, which we call here an _environment specification_[2, 3]. In this work, we specify both environment specifications and agent goals as \(\textsc{ltl}_{f}\) formulas defined over \(\Sigma=\mathcal{X}\cup\mathcal{Y}\), where \(\mathcal{X}\) and \(\mathcal{Y}\) are disjoint sets of variables under the control of the environment and the agent, respectively.
An _agent strategy_ is a function \(\sigma_{ag}:(2^{\mathcal{X}})^{*}\to 2^{\mathcal{Y}}\) that maps a sequence of environment choices to an agent choice. Similarly, an _environment strategy_ is a function \(\sigma_{env}:(2^{\mathcal{Y}})^{+}\to 2^{\mathcal{X}}\) mapping non-empty sequences of agent choices to an environment choice. A trace \(\pi=(X_{0}\cup Y_{0})(X_{1}\cup Y_{1})\ldots\in(2^{\mathcal{X}\cup \mathcal{Y}})^{\omega}\) is \(\sigma_{ag}\)-consistent if \(Y_{0}=\sigma_{ag}(\epsilon)\), where \(\epsilon\) denotes the empty sequence, and \(Y_{i}=\sigma_{ag}(X_{0},\ldots,X_{i-1})\) for every \(i>0\). Analogously, \(\pi\) is \(\sigma_{env}\)-consistent if \(X_{i}=\sigma_{env}(Y_{0},\ldots,Y_{i})\) for every \(i\geq 0\). We define \(\pi(\sigma_{ag},\sigma_{env})\) to be the unique infinite trace that is consistent with both \(\sigma_{ag}\) and \(\sigma_{env}\).
Let \(\psi\) be an ltl\({}_{f}\) formula over \(\mathcal{X}\cup\mathcal{Y}\). We say that agent strategy \(\sigma_{ag}\)_enforces_\(\psi\), written \(\sigma_{ag}\triangleright\psi\), if for every environment strategy \(\sigma_{env}\), there exists a _finite_ prefix of \(\pi(\sigma_{ag},\sigma_{env})\) that satisfies \(\psi\). Conversely, we say that an environment strategy \(\sigma_{env}\)_enforces_\(\psi\), written \(\sigma_{env}\triangleright\psi\), if for every agent strategy \(\sigma_{ag}\), every finite prefix of \(\pi(\sigma_{ag},\sigma_{env})\) satisfies \(\psi\). \(\psi\) is _agent enforceable_ (resp. _environment enforceable_) if there exists an agent (resp. environment) strategy that enforces it. An _environment specification_ is an ltl\({}_{f}\) formula \(\mathcal{E}\) that is environment enforceable.
The problem of \(\textsc{ltl}_{f}\) reactive synthesis under environment specifications is defined as follows.
Definition 1: The \(\textsc{ltl}_{f}\) reactive synthesis under environment specifications problem is defined as a pair \(\mathcal{P}=(\mathcal{E},\varphi)\), where \(\textsc{ltl}_{f}\) formulas \(\mathcal{E}\) and \(\varphi\) correspond to an environment specification and an agent goal, respectively. Realizability of \(\mathcal{P}\) checks whether there exists an agent strategy \(\sigma_{ag}\) that enforces \(\varphi\) under \(\mathcal{E}\), i.e.,
\[\forall\sigma_{env}\triangleright\mathcal{E},\pi(\sigma_{ag},\sigma_{env})\models\varphi\]
Synthesis of \(\mathcal{P}\) computes such a strategy if it exists.
A naive approach to this problem is a reduction to standard reactive synthesis of \(\textsc{ltl}_{f}\) formula \(\mathcal{E}\rightarrow\varphi\)[3]. Moreover, it has been shown that the problem of \(\textsc{ltl}_{f}\) reactive synthesis under environment specifications is 2EXPTIME-complete [3].
## 3 Best-effort Synthesis Under Environment Specifications
In reactive synthesis, the agent aims at computing a strategy that enforces the goal regardless of environment behaviors. If such a strategy does not exist, the agent just gives up when the synthesis procedure declares the problem _unrealizable_, although the environment can be possibly "over-approximated". In this work, we synthesize a strategy ensuring that the agent will do nothing that would needlessly prevent it from achieving its goal - which we call a _best-effort strategy_. _Best-effort synthesis_ is the problem of finding such a strategy [4]. We start by reviewing what it means for an agent strategy to make more effort with respect to another.
Definition 2: Let \(\mathcal{E}\) and \(\varphi\) be \(\textsc{ltl}_{f}\) formulas denoting an environment specification and an agent goal, respectively, and \(\sigma_{1}\) and \(\sigma_{2}\) be two agent strategies. \(\sigma_{1}\) dominates \(\sigma_{2}\) for \(\varphi\) under \(\mathcal{E}\), written \(\sigma_{1}\geq_{\varphi|\mathcal{E}}\sigma_{2}\), if for every \(\sigma_{env}\triangleright\mathcal{E},\pi(\sigma_{2},\sigma_{env})\models\varphi\) implies \(\pi(\sigma_{1},\sigma_{env})\models\varphi\).
Furthermore, we say that \(\sigma_{1}\)_strictly dominates_\(\sigma_{2}\), written \(\sigma_{1}>_{\varphi|\mathcal{E}}\sigma_{2}\), if \(\sigma_{1}\geq_{\varphi|\mathcal{E}}\sigma_{2}\) and \(\sigma_{2}\not\geq_{\varphi|\mathcal{E}}\sigma_{1}\). Intuitively, \(\sigma_{1}>_{\varphi|\mathcal{E}}\sigma_{2}\) means that \(\sigma_{1}\) does at least as well as \(\sigma_{2}\) against every environment strategy enforcing \(\mathcal{E}\) and strictly better against one such strategy. If \(\sigma_{1}\) strictly dominates \(\sigma_{2}\), then \(\sigma_{1}\) makes more effort than \(\sigma_{2}\) to satisfy the goal. In other words, if \(\sigma_{2}\) is strictly dominated by \(\sigma_{1}\), then an agent that uses \(\sigma_{2}\) does not do its best to achieve the goal: if it used \(\sigma_{1}\) instead, it could achieve the goal against a strictly larger set of environment behaviors. Within this framework, a best-effort strategy is one that is not strictly dominated by any other strategy.
Definition 3: An agent strategy \(\sigma\) is best-effort for \(\varphi\) under \(\mathcal{E}\), if there is no agent strategy \(\sigma^{\prime}\) such that \(\sigma^{\prime}>_{\varphi|\mathcal{E}}\sigma\).
It follows immediately from Definition 3 that if a goal \(\varphi\) is agent enforceable under \(\mathcal{E}\), then best-effort strategies enforce \(\varphi\) under \(\mathcal{E}\). Best-effort synthesis concerns computing a best-effort strategy.
Definition 4 ([4]): The \(\textsc{ltl}_{f}\) best-effort synthesis problem is defined as a pair \(\mathcal{P}=(\mathcal{E},\varphi)\), where \(\textsc{ltl}_{f}\) formulas \(\mathcal{E}\) and \(\varphi\) are the environment specification and the agent goal, respectively. Best-effort synthesis of \(\mathcal{P}\) computes an agent strategy that is best-effort for \(\varphi\) under \(\mathcal{E}\).
While classical synthesis settings first require checking the realizability of the problem, i.e., the existence of a strategy that enforces the agent goal under environment specification [12, 17], deciding whether a best-effort strategy exists is trivial, as they always exist.
Theorem 3.1 ([4]): _Let \(\mathcal{P}=(\mathcal{E},\varphi)\) be an \(\textsc{ltl}_{f}\) best-effort synthesis problem. There exists a best-effort strategy for \(\varphi\) under \(\mathcal{E}\)._
\(\textsc{ltl}_{f}\) best-effort synthesis can be solved by a reduction to suitable dfa games and is 2EXPTIME-complete [4].
dfa _Game._ A dfa _game_ is a two-player game played on a deterministic finite automaton (dfa). Formally, a dfa is defined as a pair \(\mathcal{A}=(\mathcal{D},F)\), where \(\mathcal{D}\) is a deterministic transition system such that \(\mathcal{D}=(2^{\mathcal{X}\cup\mathcal{Y}},S,s_{0},\delta)\), where \(2^{\mathcal{X}\cup\mathcal{Y}}\) is the alphabet, \(S\) is the state set, \(s_{0}\in S\) is the initial state and \(\delta\colon S\times 2^{\mathcal{X}\cup\mathcal{Y}}\to S\) is the deterministic _transition function_, and \(F\subseteq S\) is a set of final states. We call \(|S|\) the _size_ of \(\mathcal{D}\). Given a finite word \(\pi=(X_{0}\cup Y_{0})\ldots(X_{n}\cup Y_{n})\in(2^{\mathcal{X}\cup\mathcal{Y} })^{+}\), running \(\pi\) in \(\mathcal{D}\) yields the sequence \(\rho=s_{0}\ldots s_{n+1}\) such that \(s_{0}\) is the initial state of \(\mathcal{D}\) and \(s_{i+1}=\delta(s_{i},X_{i}\cup Y_{i})\) for all \(i\). Since the transitions in \(\mathcal{D}\) are all deterministic, we denote by \(\rho=\mathsf{Run}(\pi,\mathcal{D})\) the unique sequence induced by running \(\pi\) on \(\mathcal{D}\). We define the _product_ of transition systems as follows.
Definition 5: The product of transition systems \(\mathcal{D}_{i}=(\Sigma,S_{i},s_{(0,i)},\delta_{i})\) (with \(i=1,2\)) over the same alphabet is the transition system \(\mathcal{D}_{1}\times\mathcal{D}_{2}=(\Sigma,S,s_{0},\delta)\) with: \(S=S_{1}\times S_{2}\); \(s_{0}=(s_{(0,1)},s_{(0,2)})\); and \(\delta((s_{1},s_{2}),x)=(\delta_{1}(s_{1},x),\delta_{2}(s_{2},x))\). The product \(\mathcal{D}_{1}\times\ldots\times\mathcal{D}_{n}\) is defined analogously for any finite sequence \(\mathcal{D}_{1},\ldots,\mathcal{D}_{n}\) of transition systems over the same alphabet.
A finite word \(\pi\) is _accepted_ by \(\mathcal{A}=(\mathcal{D},F)\) if the last state of the run it induces is a final state, i.e., \(\mathsf{lst}(\rho)\in F\), where \(\rho=\mathsf{Run}(\pi,\mathcal{D})\). The language of \(\mathcal{A}\), denoted as \(\mathcal{L}(\mathcal{A})\), consists of all words accepted by the automaton. Every \(\textsc{ltl}_{f}\) formula \(\varphi\) can be transformed into a dfa \(\mathcal{A}_{\varphi}\) that accepts exactly the traces that satisfy the formula; in other words, \(\mathcal{A}_{\varphi}\)_recognizes_\(\varphi\).
Theorem 3.2 ([11]): _Given an \(\textsc{ltl}_{f}\) formula \(\varphi\) over \(\Sigma\), we can build a dfa \(\mathcal{A}_{\varphi}=(\mathcal{D}_{\varphi},F_{\varphi})\) whose size is at most double-exponential in \(|\varphi|\) such that \(\pi\models\varphi\) iff \(\pi\in\mathcal{L}(\mathcal{A}_{\varphi})\)._
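The following sketch (ours; explicit-state Python data structures, not the symbolic representation developed later in the paper) illustrates a transition system with its \(\mathsf{Run}\) function, dfa acceptance, and the product of Definition 5:

```python
class TS:
    """Deterministic transition system (Sigma, S, s0, delta); a dfa adds F."""
    def __init__(self, alphabet, states, init, delta):
        self.alphabet, self.states = alphabet, states
        self.init, self.delta = init, delta

    def run(self, word):
        """Run(word, D): the unique state sequence induced by the word."""
        rho = [self.init]
        for x in word:
            rho.append(self.delta[(rho[-1], x)])
        return rho

def accepts(ts, final, word):
    """A word is accepted iff the last state of its run is a final state."""
    return ts.run(word)[-1] in final

def ts_product(t1, t2):
    """Definition 5: pair the states, move component-wise on a shared alphabet."""
    states = [(s1, s2) for s1 in t1.states for s2 in t2.states]
    delta = {((s1, s2), x): (t1.delta[(s1, x)], t2.delta[(s2, x)])
             for (s1, s2) in states for x in t1.alphabet}
    return TS(t1.alphabet, states, (t1.init, t2.init), delta)
```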
In a dfa game \((\mathcal{D},F)\), the transition system \(\mathcal{D}\) is also called the _game arena_. Given \(\sigma_{ag}\) and \(\sigma_{env}\) denoting an agent strategy and an environment strategy, respectively, the trace \(\pi(\sigma_{ag},\sigma_{env})\) is called a _play_. Specifically, a play is _winning_ if it contains a finite prefix that is accepted by the dfa. Intuitively, dfa games require \(F\) to be visited at least once. An agent strategy \(\sigma_{ag}\) is _winning_
in \((\mathcal{D},F)\) if, for every environment strategy \(\sigma_{env}\), the play \(\pi(\sigma_{ag},\sigma_{env})\) is winning. Conversely, an environment strategy \(\sigma_{env}\) is _winning_ in the game \((\mathcal{D},F)\) if, for every agent strategy \(\sigma_{ag}\), the play \(\pi(\sigma_{ag},\sigma_{env})\) is not winning. In dfa _games_, \(s\in S\) is a _winning_ state for the agent (resp. environment) if the agent (resp. the environment) has a winning strategy in the game \((\mathcal{D}^{\prime},F)\), where \(\mathcal{D}^{\prime}=(2^{\mathcal{X}\cup\mathcal{Y}},S,s,\delta)\), i.e., the same arena \(\mathcal{D}\) but with the new initial state \(s\). By \(\mathsf{W}_{ag}(\mathcal{D},F)\) (resp. \(\mathsf{W}_{env}(\mathcal{D},F)\)) we denote the set of all agent (resp. environment) winning states. Intuitively, \(\mathsf{W}_{ag}\) represents the "agent winning region", from which the agent is able to win the game, no matter how the environment behaves.
We also define cooperatively winning strategies for dfa games. An agent strategy \(\sigma_{ag}\) is _cooperatively winning_ in game \((\mathcal{D},F)\) if there exists an environment strategy \(\sigma_{env}\) such that \(\pi(\sigma_{ag},\sigma_{env})\) is winning. Hence, \(s\in S\) is a _cooperatively winning state_ if the agent has a cooperatively winning strategy in the game \((\mathcal{D}^{\prime},F)\), where \(\mathcal{D}^{\prime}=(2^{\mathcal{X}\cup\mathcal{Y}},S,s,\delta)\). By \(\mathsf{W}^{\prime}_{ag}(\mathcal{D},F)\) we denote the set of all agent cooperatively winning states.
When the agent makes its choices based only on the current state of the game, we say that it uses a _positional strategy_. Formally, we define an _agent positional strategy_ (a.k.a. _memory-less strategy_) as a function \(\tau_{ag}\colon S\to 2^{\mathcal{Y}}\). An agent positional strategy \(\tau_{ag}\)_induces_ an agent strategy \(\sigma_{ag}\colon(2^{\mathcal{X}})^{*}\to 2^{\mathcal{Y}}\) as follows: \(\sigma_{ag}(\epsilon)=\tau_{ag}(s_{0})\) and, for \(i\geq 0\), \(\sigma_{ag}(X_{0}\ldots X_{i})=\tau_{ag}(s_{i+1})\), where \(s_{i+1}\) is the last state in the sequence \(\rho=\mathsf{Run}(\pi,\mathcal{D})\), with \(\pi\) being the finite sequence played so far, i.e., \(\pi=(\sigma_{ag}(\epsilon)\cup X_{0})(\sigma_{ag}(X_{0})\cup X_{1})\ldots(\sigma_{ag}(X_{0}\ldots X_{i-1})\cup X_{i})\). Similarly, we can define an _environment positional strategy_ as a function \(\tau_{env}\colon S\times 2^{\mathcal{Y}}\to 2^{\mathcal{X}}\). A positional strategy for a player that is winning (resp. cooperatively winning) from every state in its winning region is called _uniform winning_ (resp. _uniform cooperatively winning_).
The solution to ltl\({}_{f}\) best-effort synthesis presented in [4] can be summarized as follows.
**Algorithm 0** ([4]). Given an ltl\({}_{f}\) best-effort synthesis problem \(\mathcal{P}=(\mathcal{E},\varphi)\), proceed as follows:
1. For every \(\xi\in\{\neg\mathcal{E},\mathcal{E}\to\varphi,\mathcal{E}\land\varphi\}\) compute the dfa \(\mathcal{A}_{\xi}=(\mathcal{D}_{\xi},F_{\xi})\).
2. Form the product \(\mathcal{D}=\mathcal{D}_{\neg\mathcal{E}}\times\mathcal{D}_{\mathcal{E}\to \varphi}\times\mathcal{D}_{\mathcal{E}\land\varphi}\). Lift the final states of each component to the product, i.e., if \(\mathcal{A}_{\xi}=(\mathcal{D}_{\xi},F_{\xi})\) is the dfa for \(\xi\), then the lifted condition \(G_{\xi}\) consists of all states \((s_{\neg\mathcal{E}},s_{\mathcal{E}\to\varphi},s_{\mathcal{E}\land\varphi})\) s.t. \(s_{\xi}\in F_{\xi}\).
3. In dfa game \((\mathcal{D},G_{\mathcal{E}\to\varphi})\) compute a uniform positional winning strategy \(f_{ag}\). Let \(W_{ag}\subseteq S\) be the agent's winning region.
4. In dfa game \((\mathcal{D},G_{\neg\mathcal{E}})\) compute the environment's winning region \(V\subseteq S\).
5. Compute the environment restriction \(\mathcal{D}^{\prime}\) of \(\mathcal{D}\) to \(V\).
6. In dfa game \((\mathcal{D}^{\prime},G_{\mathcal{E}\land\varphi})\) find a uniform positional cooperatively winning strategy \(g_{ag}\).
7. **Return** the agent strategy \(\sigma_{ag}\) induced by the positional strategy \(k_{ag}\), which is defined as follows: \(k_{ag}(s)=\begin{cases}f_{ag}(s)&\text{ if }s\in W_{ag},\\ g_{ag}(s)&\text{ otherwise.}\end{cases}\)
## 4 Symbolic ltl\({}_{f}\) Best-effort Synthesis
We present in this section three different symbolic approaches to ltl\({}_{f}\) best-effort synthesis, namely monolithic, explicit-compositional, and symbolic-compositional, as depicted in Figure 1. In particular, we build on the symbolic techniques for dfa games presented in [22], which we briefly review below.
### Symbolic dfa Games
We consider the dfa representation described in Section 3 as an explicit-state representation. Instead, we are able to represent a dfa more compactly in a symbolic way by using a logarithmic number of propositions to encode the
state space. More specifically, the _symbolic_ representation of \(\mathcal{D}\) is a tuple \(\mathcal{D}^{s}=(\mathcal{X},\mathcal{Y},\mathcal{Z},Z_{0},\eta)\), where \(\mathcal{Z}\) is a set of state variables such that \(|\mathcal{Z}|=\lceil\log|S|\rceil\), and every state \(s\in S\) corresponds to an interpretation \(Z\in 2^{\mathcal{Z}}\) over \(\mathcal{Z}\); \(Z_{0}\in 2^{\mathcal{Z}}\) is the interpretation corresponding to the initial state \(s_{0}\); \(\eta\colon 2^{\mathcal{X}}\times 2^{\mathcal{Y}}\times 2^{\mathcal{Z}}\to 2^{\mathcal{Z}}\) is a Boolean function such that \(\eta(X,Y,Z)=Z^{\prime}\) if and only if \(Z\) is the interpretation of a state \(s\) and \(Z^{\prime}\) is the interpretation of the state \(\delta(s,X\cup Y)\). The set of goal states is represented by a Boolean function \(f\) over \(\mathcal{Z}\) that is satisfied exactly by the interpretations of states in \(F\). In the following, we denote symbolic dfas as pairs \((\mathcal{D}^{s},f)\).
Given a symbolic dfa game \((\mathcal{D}^{s},f)\), we can compute a positional uniform winning agent strategy through a least fixpoint computation over two Boolean formulas: \(w\) over \(\mathcal{Z}\), representing the agent winning region, and \(t\) over \(\mathcal{Z}\cup\mathcal{Y}\), representing winning states paired with agent actions such that, regardless of how the environment behaves, the agent reaches the final states. Specifically, \(w\) and \(t\) are initialized as \(w_{0}(\mathcal{Z})=f(\mathcal{Z})\) and \(t_{0}(\mathcal{Z},\mathcal{Y})=f(\mathcal{Z})\), since every goal state is an agent winning state. Note that \(t_{0}\) is independent of the propositions from \(\mathcal{Y}\), since once the play reaches goal states, the agent can do whatever it wants. \(t_{i+1}\) and \(w_{i+1}\) are constructed as follows:
\[t_{i+1}(Z,Y)=t_{i}(Z,Y)\vee(\neg w_{i}(Z)\wedge\forall X.w_{i}( \eta(X,Y,Z)))\] \[w_{i+1}(Z)=\exists Y.t_{i+1}(Z,Y)\]
The computation reaches a fixpoint when \(w_{i+1}\equiv w_{i}\). To see why a fixpoint is eventually reached, note that function \(w_{i+1}\) is _monotonic_. That is, at each step, a state \(Z\) is added to the winning region \(w_{i+1}\) only if it has not been already detected as a winning state, written \(\neg w_{i}(Z)\) in function \(t_{i+1}(Z,Y)\) above, _and_ there exists an agent choice \(Y\) such that, for every environment choice \(X\), the agent moves in \(w_{i}\), written \(\forall X.w_{i}(\eta(X,Y,Z))\).
When the fixpoint is reached, no more states will be added, and so all agent winning states have been collected. By evaluating \(Z_{0}\) on \(w_{i+1}\) we can determine whether a winning strategy exists. If that is the case, \(t_{i+1}\) can be used to compute a uniform positional winning strategy through the mechanism of Boolean synthesis [14]. More specifically, by passing \(t_{i+1}\) to a Boolean synthesis procedure, setting \(\mathcal{Z}\) as input variables and \(\mathcal{Y}\) as output variables, we obtain a uniform positional winning strategy \(\tau:2^{\mathcal{Z}}\to 2^{\mathcal{Y}}\) that can be used to induce an agent winning strategy.
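As an illustration, the following Python sketch is an explicit-state analogue of this symbolic computation, with a set standing in for the BDD of \(w\) and a dict of witnessing moves standing in for \(t\); the encoding is ours, for exposition only. Setting `cooperative=True` replaces the universal quantification over environment moves with an existential one, yielding the cooperative variant described next.

```python
def winning_region(states, agent_moves, env_moves, delta, final,
                   cooperative=False):
    """Explicit-state analogue of the symbolic fixpoint: a state enters the
    region if some agent move Y leads into the current region for all
    environment moves X (adversarial) or for some X (cooperative).
    Returns the region and a uniform positional strategy: a witnessing move
    per non-goal winning state (the counterpart of applying Boolean
    synthesis to t); on goal states any move will do."""
    quantifier = any if cooperative else all
    region, strategy = set(final), {}
    changed = True
    while changed:                      # iterate to the least fixpoint
        changed = False
        for s in states:
            if s in region:
                continue                # already detected as winning
            for y in agent_moves:
                if quantifier(delta(s, x, y) in region for x in env_moves):
                    region.add(s)
                    strategy[s] = y     # record the witnessing agent move
                    changed = True
                    break
    return region, strategy
```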
Computing a uniform positional cooperatively winning strategy can be performed through an analogous least-fixpoint computation. To do this, we define again Boolean functions \(\hat{w}\) over \(\mathcal{Z}\) and \(\hat{t}\) over \(\mathcal{Z}\cup\mathcal{Y}\), now representing the agent cooperatively winning region and cooperatively winning states with agent actions such that, if the environment behaves cooperatively, the agent reaches the final states. Analogously, we initialize \(\hat{w}_{0}(\mathcal{Z})=f(\mathcal{Z})\) and \(\hat{t}_{0}(\mathcal{Z},\mathcal{Y})=f(\mathcal{Z})\). Then, we construct \(\hat{t}_{i+1}\) and \(\hat{w}_{i+1}\) as follows:
\[\hat{t}_{i+1}(Z,Y)=\hat{t}_{i}(Z,Y)\vee(\neg\hat{w}_{i}(Z)\wedge \exists X.\hat{w}_{i}(\eta(X,Y,Z)))\] \[\hat{w}_{i+1}(Z)=\exists Y.\hat{t}_{i+1}(Z,Y);\]
Once the computation reaches the fixpoint, checking the existence and computing a uniform cooperatively winning positional strategy can be done similarly.
Sometimes, the state space of a symbolic transition system must be restricted so that a given set of invalid states, represented as a Boolean function, is never reached. To do so, we redirect all transitions from states in that set to a _sink_ state. Formally:
Definition 6: Let \(\mathcal{D}^{s}=(\mathcal{X},\mathcal{Y},\mathcal{Z},Z_{0},\eta)\) be a symbolic transition system and \(g\) a Boolean formula over \(\mathcal{Z}\) representing a set of states. The restriction of \(\mathcal{D}^{s}\) to \(g\) is a new symbolic transition system \(\mathcal{D}^{\prime s}=(\mathcal{X},\mathcal{Y},\mathcal{Z},Z_{0},\eta^{\prime})\), where \(\eta^{\prime}\) agrees with \(\eta\) only if \(Z\models g\), i.e., \(\eta^{\prime}=\eta\wedge g\).
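A minimal explicit-state counterpart of Definition 6, over the same illustrative dict encoding used above (the symbolic implementation instead conjoins \(g\) onto \(\eta\)):

```python
def restrict(ts, valid):
    """Transitions leaving states that violate `valid` (the predicate
    standing in for the Boolean function g) are redirected to a fresh
    absorbing sink state."""
    SINK = 'sink'
    delta = {}
    for (s, a), t in ts['delta'].items():
        delta[(s, a)] = t if valid(s) else SINK
    for a in ts['alphabet']:
        delta[(SINK, a)] = SINK      # the sink absorbs every symbol
    return {**ts, 'delta': delta}
```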
### Monolithic Approach
The monolithic approach is a direct implementation of the best-effort synthesis approach presented in [4] (i.e., of Algorithm 0), utilizing the symbolic synthesis framework introduced in [22]. Given a best-effort synthesis problem \(\mathcal{P}=(\mathcal{E},\varphi)\), we first construct the dfas following the synthesis algorithm described in Section 3, and convert them into a symbolic representation. Then, we solve suitable games on the symbolic dfas and obtain a best-effort strategy. The workflow of the monolithic approach, i.e., **Algorithm 1**, is shown in Figure 1(_a_). We elaborate on the algorithm as follows.
**Algorithm 1.** Given an ltl\({}_{f}\) best-effort synthesis problem \(\mathcal{P}=(\mathcal{E},\varphi)\), proceed as follows:
1. For the ltl\({}_{f}\) formulas \(\mathcal{E}\rightarrow\varphi\), \(\neg\mathcal{E}\) and \(\mathcal{E}\wedge\varphi\), compute the corresponding minimal explicit-state dfas \(\mathcal{A}_{\mathcal{E}\rightarrow\varphi}=(\mathcal{D}_{\mathcal{E}\rightarrow\varphi},F_{\mathcal{E}\rightarrow\varphi})\), \(\mathcal{A}_{\neg\mathcal{E}}=(\mathcal{D}_{\neg\mathcal{E}},F_{\neg\mathcal{E}})\) and \(\mathcal{A}_{\mathcal{E}\wedge\varphi}=(\mathcal{D}_{\mathcal{E}\wedge\varphi},F_{\mathcal{E}\wedge\varphi})\).
2. Convert the dfas to a symbolic representation to obtain \(\mathcal{A}^{s}_{\mathcal{E}\rightarrow\varphi}=(\mathcal{D}^{s}_{\mathcal{E }\rightarrow\varphi},f_{\mathcal{E}\rightarrow\varphi})\), \(\mathcal{A}^{s}_{\neg\mathcal{E}}=(\mathcal{D}^{s}_{\neg\mathcal{E}},f_{ \neg\mathcal{E}})\) and \(\mathcal{A}^{s}_{\mathcal{E}\wedge\varphi}=(\mathcal{D}^{s}_{\mathcal{E}\wedge \varphi},f_{\mathcal{E}\wedge\varphi})\).
3. Construct the product \(\mathcal{D}^{s}=\mathcal{D}^{s}_{\mathcal{E}\rightarrow\varphi}\times \mathcal{D}^{s}_{\neg\mathcal{E}}\times\mathcal{D}^{s}_{\mathcal{E}\wedge \varphi}\).
4. In dfa game \((\mathcal{D}^{s},f_{\mathcal{E}\rightarrow\varphi})\), compute a uniform positional winning strategy \(\tau_{ag}\) and the agent's winning region \(\mathsf{W}_{ag}(\mathcal{D}^{s},f_{\mathcal{E}\rightarrow\varphi})\).
5. In dfa game \((\mathcal{D}^{s},f_{\neg\mathcal{E}})\), compute the environment's winning region \(\mathsf{W}_{env}(\mathcal{D}^{s},f_{\neg\mathcal{E}})\).
6. Compute the symbolic restriction \(\mathcal{D}^{\prime s}\) of \(\mathcal{D}^{s}\) to \(\mathsf{W}_{env}(\mathcal{D}^{s},f_{\neg\mathcal{E}})\), so that only the states in \(\mathsf{W}_{env}(\mathcal{D}^{s},f_{\neg\mathcal{E}})\) are considered.
7. In dfa game \((\mathcal{D}^{\prime s},f_{\mathcal{E}\wedge\varphi})\), compute a uniform positional cooperatively winning strategy \(\gamma_{ag}\).
8. **Return** the best-effort strategy \(\sigma_{ag}\)_induced_ by the positional strategy \(\kappa_{ag}\) constructed as follows: \(\kappa_{ag}(Z)=\begin{cases}\tau_{ag}(Z)&\text{ if }Z\models\mathsf{W}_{ag}( \mathcal{D}^{s},f_{\mathcal{E}\rightarrow\varphi})\\ \gamma_{ag}(Z)&\text{ otherwise.}\end{cases}\)
The main challenge in the monolithic approach comes from the ltl\({}_{f}\)-to-dfa conversion, which can take, in the worst case, double-exponential time [11], and is thus also considered the bottleneck of ltl\({}_{f}\) synthesis [22]. To this end, we propose an explicit-compositional approach that mitigates this difficulty by reducing the number of ltl\({}_{f}\)-to-dfa conversions.
### Explicit-Compositional Approach
As described in Section 4.2, the monolithic approach to a best-effort synthesis problem \(\mathcal{P}=(\mathcal{E},\varphi)\) involves three rounds of \(\textsc{ltl}_{f}\)-to-dfa conversions corresponding to \(\textsc{ltl}_{f}\) formulas \(\mathcal{E}\rightarrow\varphi\), \(\neg\mathcal{E}\) and \(\mathcal{E}\wedge\varphi\). However, observe that \(\textsc{dfas}\ \mathcal{A}_{\mathcal{E}\rightarrow\varphi}\), \(\mathcal{A}_{\neg\mathcal{E}}\) and \(\mathcal{A}_{\mathcal{E}\wedge\varphi}\) can, in fact, be constructed by manipulating the two \(\textsc{dfas}\ \mathcal{A}_{\mathcal{E}}\) and \(\mathcal{A}_{\varphi}\) of \(\textsc{ltl}_{f}\) formulas \(\mathcal{E}\) and \(\varphi\), respectively. Specifically, given the explicit-state \(\textsc{dfas}\ \mathcal{A}_{\varphi}\) and \(\mathcal{A}_{\mathcal{E}}\), we obtain \(\mathcal{A}_{\mathcal{E}\rightarrow\varphi}\), \(\mathcal{A}_{\neg\mathcal{E}}\) and \(\mathcal{A}_{\mathcal{E}\wedge\varphi}\) as follows:
* \(\mathcal{A}_{\mathcal{E}\rightarrow\varphi}=\mathsf{Comp}(\mathsf{Inter}(\mathcal{A}_{\mathcal{E}},\mathsf{Comp}(\mathcal{A}_{\varphi})))\);
* \(\mathcal{A}_{\neg\mathcal{E}}=\mathsf{Comp}(\mathcal{A}_{\mathcal{E}})\);
* \(\mathcal{A}_{\mathcal{E}\wedge\varphi}=\mathsf{Inter}(\mathcal{A}_{\mathcal{E}},\mathcal{A}_{\varphi})\);
where \(\mathsf{Comp}\) and \(\mathsf{Inter}\) denote complement and intersection on explicit-state \(\textsc{dfas}\), respectively. Note that transforming \(\textsc{ltl}_{f}\) formulas into \(\textsc{dfas}\) takes double-exponential time in the size of the formula, while the complement and intersection of \(\textsc{dfas}\) take polynomial time in the size of the \(\textsc{dfas}\).
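For illustration, Comp and Inter over the explicit-state dict encoding used in the earlier sketches, reusing `ts_product` from the sketch after Definition 5; completeness of the dfas (which holds since \(\delta\) is total) is what makes complementation a matter of flipping final states.

```python
def dfa_complement(dfa):
    """Comp: for a complete dfa, flip the set of final states."""
    states = {s for (s, _) in dfa['delta']}
    return {**dfa, 'final': states - dfa['final']}

def dfa_intersection(d1, d2):
    """Inter: product arena; a product state is final iff both components
    are final."""
    prod = ts_product(d1, d2)
    prod['final'] = {(s1, s2) for s1 in d1['final'] for s2 in d2['final']}
    return prod

# The first bullet above: A_{E -> phi} = Comp(Inter(A_E, Comp(A_phi))).
def dfa_implication(a_e, a_phi):
    return dfa_complement(dfa_intersection(a_e, dfa_complement(a_phi)))
```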
The workflow of the explicit-compositional approach, i.e., **Algorithm 2**, is shown in Figure 1(_b_). As in the monolithic approach, we first translate the formulas \(\mathcal{E}\) and \(\varphi\) into minimal explicit-state \(\textsc{dfas}\ \mathcal{A}_{\mathcal{E}}\) and \(\mathcal{A}_{\varphi}\), respectively. Then, \(\textsc{dfas}\ \mathcal{A}_{\mathcal{E}\rightarrow\varphi}\), \(\mathcal{A}_{\neg\mathcal{E}}\) and \(\mathcal{A}_{\mathcal{E}\wedge\varphi}\) are constructed by manipulating \(\mathcal{A}_{\mathcal{E}}\) and \(\mathcal{A}_{\varphi}\) through complement and intersection. Indeed, the constructed explicit-state \(\textsc{dfas}\) are also minimized. The remaining steps of computing suitable \(\textsc{dfa}\) games are the same as in the monolithic approach.
### Symbolic-Compositional Approach
The monolithic and explicit-compositional approaches are based on playing three games over the symbolic product of transition systems \(\mathcal{D}_{\mathcal{E}\rightarrow\varphi}\), \(\mathcal{D}_{\neg\mathcal{E}}\), and \(\mathcal{D}_{\mathcal{E}\wedge\varphi}\). We observe that given \(\textsc{dfas}\ \mathcal{A}_{\mathcal{E}}=(\mathcal{D}_{\mathcal{E}},F_{\mathcal{E}})\) and \(\mathcal{A}_{\varphi}=(\mathcal{D}_{\varphi},F_{\varphi})\) recognizing \(\mathcal{E}\) and \(\varphi\), respectively, the \(\textsc{dfa}\) recognizing any Boolean combination of \(\mathcal{E}\) and \(\varphi\) can be constructed by taking the product of \(\mathcal{D}_{\mathcal{E}}\) and \(\mathcal{D}_{\varphi}\) and properly defining the set of final states over the resulting transition system.
Lemma 1: _Let \(\mathcal{A}_{\psi_{1}}=(\mathcal{D}_{\psi_{1}},F_{\psi_{1}})\) and \(\mathcal{A}_{\psi_{2}}=(\mathcal{D}_{\psi_{2}},F_{\psi_{2}})\) be the automata recognizing \(\textsc{ltl}_{f}\) formulas \(\psi_{1}\) and \(\psi_{2}\), respectively, and \(\psi=\psi_{1}\ op\ \psi_{2}\) denoting an arbitrary Boolean combination of \(\psi_{1}\) and \(\psi_{2}\), i.e., \(op\in\{\wedge,\vee,\rightarrow,\leftrightarrow\}\). The \(\textsc{dfa}\ \hat{\mathcal{A}}_{\psi}=(\hat{\mathcal{D}}_{\psi},\hat{F}_{\psi})\) with \(\hat{\mathcal{D}}_{\psi}=\mathcal{D}_{\psi_{1}}\times\mathcal{D}_{\psi_{2}}\) and \(\hat{F}_{\psi}=\{(s_{\psi_{1}},s_{\psi_{2}})\ |\ s_{\psi_{1}}\in F_{\psi_{1}}\ op\ s_{\psi_{2}}\in F_{\psi_{2}}\}\) recognizes \(\psi\)._
Proof: (\(\rightarrow\)) Assume \(\pi\models\psi\). We will prove that \(\pi\in\mathcal{L}(\hat{\mathcal{A}}_{\psi})\). To see this, observe that \(\pi\models\psi\) implies \(\pi\models\psi_{1}\ op\ \pi\models\psi_{2}\). It follows by [11] that \(\pi\in\mathcal{L}(\mathcal{A}_{\psi_{1}})\ op\ \pi\in\mathcal{L}(\mathcal{A}_{\psi_{2}})\), meaning that running \(\pi\) in \(\mathcal{D}_{\psi_{1}}\) and \(\mathcal{D}_{\psi_{2}}\) yields the sequences of states \((s_{0}^{\psi_{1}},\ldots,s_{n}^{\psi_{1}})\) and \((s_{0}^{\psi_{2}},\ldots,s_{n}^{\psi_{2}})\) such that \(s_{n}^{\psi_{1}}\in F_{\psi_{1}}\ op\ s_{n}^{\psi_{2}}\in F_{\psi_{2}}\). Since \(\hat{\mathcal{D}}_{\psi}\) is obtained through the synchronous product of \(\mathcal{D}_{\psi_{1}}\) and \(\mathcal{D}_{\psi_{2}}\), running \(\pi\) in \(\hat{\mathcal{A}}_{\psi}\) yields the sequence of states \(((s_{0}^{\psi_{1}},s_{0}^{\psi_{2}}),\ldots,(s_{n}^{\psi_{1}},s_{n}^{\psi_{2}}))\), such that \((s_{n}^{\psi_{1}},s_{n}^{\psi_{2}})\in\hat{F}_{\psi}\). Hence, we have that \(\pi\in\mathcal{L}(\hat{\mathcal{A}}_{\psi})\).
(\(\leftarrow\)) Assume \(\pi\in\mathcal{L}(\hat{\mathcal{A}}_{\psi})\). We prove that \(\pi\models\psi\). To see this, observe that \(\pi\in\mathcal{L}(\hat{\mathcal{A}}_{\psi})\) means that the run \(\rho=(s_{0}^{\psi_{1}},s_{0}^{\psi_{2}})\ldots(s_{n}^{\psi_{1}},s_{n}^{\psi_{2}})\) induced by \(\pi\) on \(\hat{\mathcal{D}}_{\psi}\) is such that \((s_{n}^{\psi_{1}},s_{n}^{\psi_{2}})\in\hat{F}_{\psi}\). This means, by construction of \(\hat{F}_{\psi}\), that \(s_{n}^{\psi_{1}}\in F_{\psi_{1}}\ op\ s_{n}^{\psi_{2}}\in F_{\psi_{2}}\). Since \(\hat{\mathcal{D}}_{\psi}\) is obtained through the synchronous product of \(\mathcal{D}_{\psi_{1}}\) and \(\mathcal{D}_{\psi_{2}}\), it follows that \(\pi\in\mathcal{L}(\mathcal{A}_{\psi_{1}})\ op\ \pi\in\mathcal{L}(\mathcal{A}_{\psi_{2}})\). By [11] we have that \(\pi\models\psi_{1}\ op\ \pi\models\psi_{2}\), and hence \(\pi\models\psi\).
Notably, Lemma 1 tells us that the dfas \(\mathcal{A}_{\mathcal{E}\to\varphi}\), \(\mathcal{A}_{\neg\mathcal{E}}\), and \(\mathcal{A}_{\mathcal{E}\wedge\varphi}\) can be constructed from the same transition system by defining proper sets of final states. Specifically, given the dfas \(\mathcal{A}_{\mathcal{E}}=(\mathcal{D}_{\mathcal{E}},F_{\mathcal{E}})\) and \(\mathcal{A}_{\varphi}=(\mathcal{D}_{\varphi},F_{\varphi})\) recognizing \(\mathcal{E}\) and \(\varphi\), respectively, the dfas recognizing \(\mathcal{E}\to\varphi\), \(\neg\mathcal{E}\), and \(\mathcal{E}\wedge\varphi\) can be constructed as \(\mathcal{A}_{\mathcal{E}\to\varphi}=(\mathcal{D},F_{\mathcal{E}\to\varphi})\), \(\mathcal{A}_{\neg\mathcal{E}}=(\mathcal{D},F_{\neg\mathcal{E}})\), and \(\mathcal{A}_{\mathcal{E}\wedge\varphi}=(\mathcal{D},F_{\mathcal{E}\wedge\varphi})\), respectively, where \(\mathcal{D}=\mathcal{D}_{\mathcal{E}}\times\mathcal{D}_{\varphi}\) and the final-state sets, made concrete in the sketch after the list, are:
* \(F_{\mathcal{E}\to\varphi}=\{(s_{\mathcal{E}},s_{\varphi})\mid s_{\mathcal{E}} \in F_{\mathcal{E}}\to s_{\varphi}\in F_{\varphi}\}\).
* \(F_{\neg\mathcal{E}}=\{(s_{\mathcal{E}},s_{\varphi})\mid s_{\mathcal{E}}\not \in F_{\mathcal{E}}\}\).
* \(F_{\mathcal{E}\wedge\varphi}=\{(s_{\mathcal{E}},s_{\varphi})\mid s_{\mathcal{E }}\in F_{\mathcal{E}}\wedge s_{\varphi}\in F_{\varphi}\}\).
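A small explicit-state sketch of this construction, again over the illustrative dict encoding and reusing `ts_product`: the arena is built once, and the three games differ only in their final-state sets.

```python
def shared_arena_games(a_e, a_phi):
    """One shared arena D = D_E x D_phi, three acceptance conditions."""
    arena = ts_product(a_e, a_phi)            # built once, shared by all games
    states = {s for (s, _) in arena['delta']}
    in_e = lambda s: s[0] in a_e['final']     # product state s = (s_E, s_phi)
    in_phi = lambda s: s[1] in a_phi['final']
    f_impl = {s for s in states if (not in_e(s)) or in_phi(s)}   # E -> phi
    f_neg_e = {s for s in states if not in_e(s)}                 # not E
    f_conj = {s for s in states if in_e(s) and in_phi(s)}        # E and phi
    return arena, f_impl, f_neg_e, f_conj
```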
The symbolic-compositional approach is based precisely on this observation. As shown in Figure 1(_c_), we first transform the ltl\({}_{f}\) formulas \(\mathcal{E}\) and \(\varphi\) into minimal explicit-state dfas \(\mathcal{A}_{\mathcal{E}}\) and \(\mathcal{A}_{\varphi}\), respectively, and then construct their symbolic representations \(\mathcal{A}_{\mathcal{E}}^{s}\) and \(\mathcal{A}_{\varphi}^{s}\). Subsequently, we construct the symbolic product \(\mathcal{D}^{s}=\mathcal{D}_{\mathcal{E}}^{s}\times\mathcal{D}_{\varphi}^{s}\), once and for all, and get the three dfa games by defining the final states (which are Boolean functions) from \(f_{\mathcal{E}}\) and \(f_{\varphi}\) as follows:
* \(f_{\mathcal{E}\to\varphi}=f_{\mathcal{E}}\to f_{\varphi}\).
* \(f_{\neg\mathcal{E}}=\neg f_{\mathcal{E}}\).
* \(f_{\mathcal{E}\wedge\varphi}=f_{\mathcal{E}}\wedge f_{\varphi}\).
From now on, the remaining steps are the same as in the monolithic and explicit-compositional approaches.
**Algorithm 3.** Given a best-effort synthesis problem \(\mathcal{P}=(\mathcal{E},\varphi)\), proceed as follows:
1. Compute the minimal explicit-state dfas\(\mathcal{A}_{\mathcal{E}}=(\mathcal{D}_{\mathcal{E}},F_{\mathcal{E}})\) and \(\mathcal{A}_{\varphi}=(\mathcal{D}_{\varphi},F_{\varphi})\).
2. Convert the dfas to a symbolic representation to obtain \(\mathcal{A}_{\mathcal{E}}^{s}=(\mathcal{D}_{\mathcal{E}}^{s},f_{\mathcal{E}})\) and \(\mathcal{A}_{\varphi}^{s}=(\mathcal{D}_{\varphi}^{s},f_{\varphi})\).
3. Construct the symbolic product \(\mathcal{D}^{s}=\mathcal{D}_{\mathcal{E}}^{s}\times\mathcal{D}_{\varphi}^{s}\).
4. In dfa game \(\mathcal{G}_{\mathcal{E}\to\varphi}^{s}=(\mathcal{D}^{s},f_{\mathcal{E}}\to f_ {\varphi})\) compute a positional uniform winning strategy \(\tau_{ag}\) and the agent winning region \(\mathsf{W}_{ag}(\mathcal{D}^{s},f_{\mathcal{E}}\to f_{\varphi})\).
5. In the dfa game \((\mathcal{D}^{s},\neg f_{\mathcal{E}})\) compute the environment's winning region \(\mathsf{W}_{env}(\mathcal{D}^{s},\neg f_{\mathcal{E}})\).
6. Compute the symbolic restriction \(\mathcal{D}^{\prime s}\) of \(\mathcal{D}^{s}\) to \(\mathsf{W}_{env}(\mathcal{D}^{s},\neg f_{\mathcal{E}})\), so that only the states in \(\mathsf{W}_{env}(\mathcal{D}^{s},\neg f_{\mathcal{E}})\) are considered.
7. In the dfa game \((\mathcal{D}^{\prime s},f_{\mathcal{E}}\wedge f_{\varphi})\) find a uniform positional cooperatively winning strategy \(\gamma_{ag}\).
8. **Return** the best-effort strategy \(\sigma_{ag}\)_induced_ by the positional strategy \(\kappa_{ag}\) constructed as follows: \(\kappa_{ag}(Z)=\begin{cases}\tau_{ag}(Z)&\text{ if }Z\models\mathsf{W}_{ag}(\mathcal{D}^{s},f_{\mathcal{E}}\to f_{\varphi})\\ \gamma_{ag}(Z)&\text{ otherwise.}\end{cases}\)
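A minimal sketch of the strategy combination in step 8, assuming `tau_ag` and `w_ag` come from the adversarial fixpoint sketch given earlier (`cooperative=False`) and `gamma_ag` from the cooperative one on the restricted arena:

```python
def best_effort_positional(tau_ag, gamma_ag, w_ag, default_move=frozenset()):
    """kappa_ag: follow the winning strategy tau_ag inside the winning
    region w_ag, otherwise fall back to the cooperatively winning strategy
    gamma_ag; outside both regions no choice helps, so an arbitrary
    default move is returned."""
    def kappa(state):
        if state in w_ag:
            return tau_ag[state]
        return gamma_ag.get(state, default_move)
    return kappa
```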
## 5 Empirical Evaluations
In this section, we first describe how we implemented our symbolic ltl\({}_{f}\) best-effort synthesis approaches described in Section 4. Then, by empirical evaluation, we show that Algorithm 3, i.e., the symbolic-compositional approach, achieves the best overall performance. In particular, we show that performing best-effort synthesis brings only a minimal overhead with respect to standard synthesis and may even show better performance on certain instances.
### Implementation
We implemented the three symbolic approaches to ltl\({}_{f}\) best-effort synthesis described in Section 4 in a tool called _BeSyft_, by extending the symbolic synthesis framework [22, 20] integrated in state-of-the-art synthesis tools [6, 9]. In particular, we built on Lydia3, the overall best-performing ltl\({}_{f}\)-to-dfa conversion tool, to construct the minimal explicit-state dfas of ltl\({}_{f}\) formulas. Moreover, _BeSyft_ borrows the rich APIs of Lydia to perform the relevant explicit-state dfa manipulations required by both Algorithm 1, i.e., the monolithic approach (c.f., Subsection 4.2), and Algorithm 2, i.e., the explicit-compositional approach (c.f., Subsection 4.3), such as complement, intersection, and minimization. As in [22, 20], the symbolic dfa games are represented as Binary Decision Diagrams (BDDs) [7], utilizing CUDD-3.0.0 [19] as the BDD library. Thereby, _BeSyft_ constructs and solves symbolic dfa games using Boolean operations provided by CUDD-3.0.0, such as negation, conjunction, and quantification. The uniform positional winning strategy \(\tau_{ag}\) and the uniform positional cooperatively winning strategy \(\gamma_{ag}\) are computed utilizing Boolean synthesis [14]. The positional best-effort strategy is obtained by applying suitable Boolean operations on \(\tau_{ag}\) and \(\gamma_{ag}\). As a result, we have three derivations of _BeSyft_, namely _BeSyft_-Alg-1, _BeSyft_-Alg-2, and _BeSyft_-Alg-3, corresponding to the monolithic, explicit-compositional, and symbolic-compositional approach, respectively.
Footnote 3: [https://github.com/whitemech/lydia](https://github.com/whitemech/lydia)
### Experiment Methodology
Experiment Setup. All experiments were run on a laptop running 64-bit Ubuntu 20.04, with a 3.6 GHz CPU and 12 GB of memory. The timeout was set to 1000 seconds.
Benchmarks. We devised a _counter-game_ benchmark, based on the one proposed in [21]. More specifically, there is an \(n\)-bit binary counter and, at each round, the environment chooses whether to issue an increment request for the counter or not. The agent can choose to grant the request or ignore it, and its goal is to get the counter to have all bits set to 1. The increment requests only come from the environment, and occur in accordance with the environment specification.
The size of the minimal dfa of a counter-game specification grows exponentially as \(n\) increases.
In the experiments, environment specifications ensure that the environment eventually issues a minimum number \(K\) of increment requests in sequence, which can be represented as \(\textsc{ltl}_{f}\) formulas \(\mathcal{E}_{K}=\Diamond(add\land\bigcirc(add\land\bigcirc(\ldots\bigcirc(add)\ldots)))\), where \(K\) is the number of conjuncts. Counter-game instances may be realizable depending on the parameter \(K\) and the number of bits \(n\). In the case of a realizable instance, a strategy for the agent to enforce the goal is to grant all increment requests coming from the environment. Otherwise, the agent can achieve the goal only if the environment behaves cooperatively, e.g., by issuing more increment requests than specified in the environment specification. That is, the agent needs a best-effort strategy. In our experiments, we considered counter-game instances with at most \(n=10\) bits and \(K=10\) sequential increment requests. As a result, our benchmarks consist of a total of 100 instances.
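For concreteness, a small generator for \(\mathcal{E}_{K}\) in a generic ltl\({}_{f}\) ASCII syntax (`F` for eventually, `X` for next); the operator spelling is an assumption and may need adapting to the input syntax of the tool at hand.

```python
def counter_env_spec(k):
    """Build E_K = F(add & X(add & X(... X(add) ...))) with k conjuncts."""
    f = "add"
    for _ in range(k - 1):
        f = f"add & X({f})"
    return f"F({f})"

# e.g. counter_env_spec(3) == "F(add & X(add & X(add)))"
```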
### Experimental Results and Analysis
In our experiments, all _BeSyft_ implementations are only able to solve counter-game instances with up to \(n=8\) bits. Figure 2 shows the comparison (in log scale) of the three symbolic implementations of best-effort synthesis on counter-game instances with \(n=8\) and \(1\leq K\leq 10\). First, we observe that _BeSyft_-Alg-1 (monolithic) and _BeSyft_-Alg-2 (explicit-compositional) reach timeout when \(K\geq 8\), whereas _BeSyft_-Alg-3 (symbolic-compositional) is able to solve all 8-bit counter-game instances. We can also see that _BeSyft_-Alg-1 performs worse than the other two derivations since it requires three rounds of \(\textsc{ltl}_{f}\)-to-dfa conversions, which, in the worst case, can lead to a double-exponential blowup. Finally, we note that _BeSyft_-Alg-3, which implements the symbolic-compositional approach, achieves orders of magnitude better performance than the other two implementations, although it does not fully exploit the power of dfa minimization. Nevertheless, it is not the case that automata minimization always leads to an improvement. Instead, there is a trade-off in performing automata minimization. As shown in Figure 2, _BeSyft_-Alg-3 performs better than _BeSyft_-Alg-2, though the former does not minimize the game arena after the symbolic product, while the latter minimizes the game arena as much as possible.

Figure 2: Comparison (in log scale) of _BeSyft_ implementations on counter-game instances with \(n=8\) and \(1\leq K\leq 10\).
On closer inspection, we evaluated the time cost of each major operation of _BeSyft_-Alg-3, and present the results on counter-game instances with \(n=8\) and \(1\leq K\leq 10\) in Figure 3. First, the results show that ltl\({}_{f}\)-to-dfa conversion is the bottleneck of ltl\({}_{f}\) best-effort synthesis, the cost of which dominates the total running time. Furthermore, we can see that solving the cooperative dfa game accounts for less than 10% of the total time cost. As a result, we conclude that performing best-effort synthesis brings only a minimal overhead with respect to standard reactive synthesis, which consists of constructing the dfa of the input ltl\({}_{f}\) formula and solving its corresponding adversarial game. Also, we observe that solving the cooperative game takes longer than solving the adversarial game. Indeed, this is because the fixpoint computation in the cooperative game often requires more iterations than that in the adversarial game.
Finally, we also compared the time cost of symbolic-compositional best-effort synthesis with that of standard reactive synthesis on counter-game instances. More specifically, we considered a symbolic implementation of reactive synthesis that computes an agent strategy enforcing the ltl\({}_{f}\) formula \(\mathcal{E}\rightarrow\varphi\)[10, 22], which can be used to find an agent strategy enforcing \(\varphi\) under \(\mathcal{E}\), if it exists [3]. Interestingly, Figure 4 shows that for certain counter-game instances, symbolic-compositional best-effort synthesis takes even less time than standard reactive synthesis. It should be noted that symbolic-compositional best-effort synthesis performs ltl\({}_{f}\)-to-dfa conversions of the ltl\({}_{f}\) formulas \(\varphi\) and \(\mathcal{E}\) separately and combines them to obtain the final game arena without performing automata minimization, whereas reactive synthesis performs the ltl\({}_{f}\)-to-dfa conversion of the formula \(\mathcal{E}\rightarrow\varphi\) and minimizes its corresponding dfa. These results confirm the practical feasibility of best-effort synthesis and that automata minimization does not always guarantee a performance improvement.

Figure 3: Relative time cost of _BeSyft_-Alg-3 major operations on counter-game instances with \(n=8\) and \(1\leq K\leq 10\).
## 6 Conclusion
We presented three different symbolic ltl\({}_{f}\) best-effort synthesis approaches: monolithic, explicit-compositional, and symbolic-compositional. Empirical evaluations showed that the symbolic-compositional approach performs best. An interesting observation is that, although previous studies suggest taking maximal advantage of automata minimization [20, 21], in the case of ltl\({}_{f}\) best-effort synthesis there can be a trade-off in doing so. Another significant finding is that the best-performing ltl\({}_{f}\) best-effort synthesis approach brings only a minimal overhead compared to standard synthesis. Given this nice computational result, a natural future direction would be to look into ltl\({}_{f}\) best-effort synthesis with multiple environment assumptions [1].
## Acknowledgments
This work has been partially supported by the ERC-ADG WhiteMech (No. 834228), the EU ICT-48 2020 project TAILOR (No. 952215), the PRIN project RIPER (No. 20203FFYLK), and the PNRR MUR project FAIR (No. PE0000013).
Figure 4: Comparison (in log scale) of _BeSyft_-Alg-3 and implementations of symbolic ltl\({}_{f}\) reactive synthesis on counter-game instances with \(n=8\) and \(1\leq K\leq 10\). |
2301.06369 | A rapid optical and X-ray timing study of the neutron star X-ray binary
Swift J1858.6-0814 | We present a rapid timing analysis of optical (HiPERCAM and ULTRACAM) and
X-ray (NICER) observations of the X-ray transient Swift J1858.6-0814 during
2018 and 2019. The optical light curves show relatively slow, large amplitude
(~1 mag in g$_s$) `blue' flares (i.e. stronger at shorter wavelengths) on
time-scales of ~minutes as well as fast, small amplitude (~0.1 mag in g$_s$)
`red' flares (i.e. stronger at longer wavelengths) on time-scales of ~seconds.
The `blue' and `red' flares are consistent with X-ray reprocessing and
optically thin synchrotron emission, respectively, similar to what is observed
in other X-ray binaries. The simultaneous optical versus soft- and hard-band
X-ray light curves show time- and energy dependent correlations.
The 2019 March 4 and parts of the June data show nearly symmetric positive
cross-correlations (CCFs) at positive lags, consistent with simple X-ray disc
reprocessing. The soft- and hard-band CCFs are similar and can be reproduced if
disc reprocessing dominates in the optical and one component (disc or
synchrotron Comptonization) dominates both the soft and hard X-rays. A part of
the 2019 June data shows a very different CCFs. The observed positive
correlation at negative lag in the soft-band can be reproduced if the optical
synchrotron emission is correlated with the hot flow X-ray emission.
The observed timing properties are in qualitative agreement with the hybrid
inner hot accretion flow model, where the relative role of the different X-ray
and optical components that vary during the course of the outburst, as well as
on shorter time-scales, govern the shape of the optical/X-ray CCFs. | T. Shahbaz, J. A. Paice, K. M. Rajwade, A. Veledina, P. Gandhi, V. S. Dhillon, T. R. Marsh, S. Littlefair, M. R. Kennedy, R. P. Breton, C. J. Clark | 2023-01-16T11:34:57Z | http://arxiv.org/abs/2301.06369v1 | # A rapid optical and X-ray timing study of the neutron star X-ray binary Swift J1858.6-0814
###### Abstract
We present a rapid timing analysis of optical (HiPERCAM and ULTRACAM) and X-ray (NICER) observations of the X-ray transient Swift J1858.6-0814 during 2018 and 2019. The optical light curves show relatively slow, large amplitude (\(\sim\)1 mag in \(g_{s}\)) 'blue' flares (i.e. stronger at shorter wavelengths) on time-scales of \(\sim\)minutes as well as fast, small amplitude (\(\sim\)0.1 mag in \(g_{s}\)) 'red' flares (i.e. stronger at longer wavelengths) on time-scales of \(\sim\)seconds. The 'blue' and 'red' flares are consistent with X-ray reprocessing and optically thin synchrotron emission, respectively, similar to what is observed in other X-ray binaries. The simultaneous optical versus soft- and hard-band X-ray light curves show time- and energy-dependent correlations. The 2019 March 4 and parts of the June data show nearly symmetric positive cross-correlations (CCFs) at positive lags, consistent with simple X-ray disc reprocessing. The soft- and hard-band CCFs are similar and can be reproduced if disc reprocessing dominates in the optical and one component (disc or synchrotron Comptonization) dominates both the soft and hard X-rays. A part of the 2019 June data shows a very different CCF. The observed positive correlation at negative lag in the soft band can be reproduced if the optical synchrotron emission is correlated with the hot flow X-ray emission. The observed timing properties are in qualitative agreement with the hybrid inner hot accretion flow model, where the relative roles of the different X-ray and optical components, which vary during the course of the outburst as well as on shorter time-scales, govern the shape of the optical/X-ray CCFs.
keywords: accretion, accretion discs - X-rays: binaries - X-rays: individual: Swift J1858.6-0814 - stars: neutron
## 1 Introduction
The low-mass X-ray binary Swift J1858.6-0814 was discovered as an X-ray transient in 2018 October (Krimm et al., 2018) with the Burst Alert Telescope (BAT) aboard the Neil Gehrels _Swift_ Observatory (Gehrels et al., 2004). Subsequent multi-wavelength observations detected the source at longer wavelengths. The Ultraviolet and Optical Telescope (_Swift-UVOT_) on-board _Swift_ detected a variable UV source which was coincident with a previously detected UKIRT Infrared Deep Sky Survey (UKIDSS) and Pan-STARRS source (Kennea and Krimm, 2018). Optical follow-up observations revealed that the source had brightened by \(\sim\)2.5 magnitudes (Vasilopoulos et al., 2018). The source was also detected in the radio by the Arcminute Microkelvin Imager Large Array, having a variable flux density of 300-600 \(\mu\)Jy at 15.5 GHz (Bright et al., 2018). At X-ray wavelengths, the outburst was relatively faint, with a flux of \(\sim\)10\({}^{-11}\) erg s\({}^{-1}\) cm\({}^{-2}\) at 0.5-10 keV and a hard spectrum with a photon index of \(\Gamma\) = 2 (Reynolds et al., 2018).
Superimposed on the outburst were bright, short X-ray flare events (Ludlam et al., 2018; Hare et al., 2019) where the observed flux increased by more than an order of magnitude in a few seconds (Hare et al., 2020). Optical flares were also identified (Vasilopoulos et al., 2018; Baglio et al., 2018; Rajwade et al., 2018, 2019; Paice et al., 2018) with wavelength-dependent optical variability on time-scales of minutes, and sporadic, fast 'red' flares on time-scales of seconds (Paice et al., 2018). The timing characteristics were reminiscent of
those seen in the black hole X-ray binary V404 Cyg, which showed long-term 'blue' flaring and short-term sporadic 'red' flaring during its 2015 outburst (Kimura et al., 2016; Gandhi et al., 2016). The radio emission from Swift J1858.6-0814 showed variability by up to a factor of \(\sim\)8 on time-scales of minutes due to mass accretion rate fluctuations, consistent with a compact jet (Bright et al., 2018; van den Eijnden et al., 2020). The X-ray spectrum showed evidence for significant intrinsic local absorption (Reynolds et al., 2018; Hare et al., 2020), and the P-Cygni profile observed in the optical spectrum (Munoz-Darias et al., 2020) suggested that a significant amount of mass was ejected from the inner accretion flow.
Although Swift J1858.6-0814 entered the Sun constraint for most X-ray telescopes in 2019 November, it was detected again with the Monitor of the All-sky X-ray Imager (MAXI) in 2020 February in a previously unobserved X-ray state, with significantly less variability and enhanced soft X-ray emission, implying a transition to a soft state (Negoro et al., 2020; Buisson et al., 2020). During 2020 March several Type I X-ray bursts were detected with the Neutron star Interior Composition Explorer (NICER) and the Nuclear Spectroscopic Telescope Array (NuSTAR), identifying Swift J1858.6-0814 as a neutron star binary system despite the fact that pulsations were not detected (Buisson et al., 2020). These bursts exhibited photospheric radius expansion, allowing a distance estimate of \(\sim\)12.8 kpc. Strong periodic drops in X-ray flux were also detected, consistent with eclipses by the secondary star and variable obscuration due to the thickness of the disc/accretion stream, which is also responsible for the strong variability (Buisson et al., 2021).
Here we report on high time-resolution HiPERCAM and ULTRACAM optical observations of Swift J1858.6-0814, some of which are simultaneous with NICER observations, taken in 2018 and 2019. We comment on the observed optical flaring and on the optical/X-ray flux correlations and timing properties of the light curves.
## 2 Observations
In Fig. 1 we show the long-term X-ray light curve of Swift J1858.6-0814 during its 2018 and 2019 outburst and mark the optical and X-ray observations presented in this paper.
Figure 1: The long-term X-ray light curves and the radio and X-ray spectral index light curves of Swift J1858.6–0814 during its 2018 and 2019 outburst. The top panel shows the Swift/XRT PC mode X-ray data (black squares), where the vertical lines mark the times of our ULTRACAM (blue) and HiPERCAM (red) optical observations. The dashed lines show the times when the optical observations were simultaneous with NICER. The bottom panel shows the radio (red stars; 4.5 GHz) and X-ray (black squares; 0.2–10 keV) spectral indices (\(F_{\nu}\propto\nu^{\alpha}\)) taken from van den Eijnden et al. (2020). The blue squares show the spectral index of the optical flares observed with ULTRACAM and HiPERCAM determined in this paper (see Section 4).
| UT date | UT start | UT end | Instrument | Filters | Cadence | Comments |
| --- | --- | --- | --- | --- | --- | --- |
| 2018/11/14 | 19:24:22 | 19:50:15 | HiPERCAM | \(u_{s}\), \(g_{s}\), \(r_{s}\), \(i_{s}\), \(z_{s}\) | 46.6 ms | |
| 2018/11/19 | 00:42:59 | 01:18:33 | ULTRACAM | \(u^{\prime}\), \(g^{\prime}\), \(i^{\prime}\) | 0.93 (4.63) s | |
| 2019/03/01 | 09:14:15 | 09:49:19 | ULTRACAM | \(u_{s}\), \(g_{s}\), \(i_{s}\) | 1.00 (3.01) s | |
| 2019/03/02 | 09:07:00 | 09:39:20 | NICER | 0.2–12 keV | 40 ns | ObsId 2200400101 |
| 2019/03/02 | 09:05:22 | 09:45:14 | ULTRACAM | \(u_{s}\), \(g_{s}\), \(i_{s}\) | 0.28 (0.29) s | Simultaneous with NICER |
| 2019/03/04 | 08:54:22 | 09:48:59 | ULTRACAM | \(u_{s}\), \(g_{s}\), \(i_{s}\) | 0.50 (4.01) s | Simultaneous with NICER |
| 2019/03/04 | 09:05:25 | 09:36:20 | NICER | 0.2–12 keV | 40 ns | ObsId 2200400103 |
| 2019/03/05 | 09:06:26 | 09:26:49 | ULTRACAM | \(u_{s}\), \(g_{s}\), \(i_{s}\) | 0.58 (1.17) s | |
| 2019/05/09 | 08:13:38 | 10:23:17 | ULTRACAM | \(u_{s}\), \(g_{s}\), \(i_{s}\) | 0.25 (1.26) s | |
| 2019/06/07 | 01:52:25 | 02:39:45 | HiPERCAM | \(u_{s}\), \(g_{s}\), \(r_{s}\), \(i_{s}\), \(z_{s}\) | 47.9 ms | Simultaneous with NICER |
| 2019/06/07 | 01:53:20 | 02:33:43 | NICER | 0.2–12 keV | 40 ns | ObsId 2541030101 |
| 2019/06/07 | 03:23:09 | 04:17:27 | HiPERCAM | \(u_{s}\), \(g_{s}\), \(r_{s}\), \(i_{s}\), \(z_{s}\) | 47.9 ms | |

Table 1: Log of ULTRACAM, HiPERCAM & NICER observations for Swift J1858.6–0814. For ULTRACAM, the cadence in parentheses is the \(u\)-band cadence, which is longer owing to on-chip co-adding (see Section 2.2).
### NICER - X-rays
Swift J1858.6-0814 was observed with NICER in an intensive monitoring program during its 2018 and 2019 X-ray outburst. NICER is an X-ray instrument on board the International Space Station (ISS) where individual photons with energies in the range 0.2-12 keV can be detected with a time resolution of 40 ns (Gendreau et al., 2016). The data reduction was carried out using the collection of NICER-specific tools nicerdas, which is part of HEASARC 1. Full Level2 calibration and screening was conducted with nicerl2, which calibrated, checked for good time intervals, merged, and cleaned the data. The barycentric correction was carried out using barycorr, and finally the photon events were binned to the times of the optical light curves as described in the following sections. We produced a light curve in the 0.2-12 keV energy band for each data segment using xselect and then applied the background correction. In order to calculate the hardness ratio, we extracted light curves in the 0.5-3.0 keV and 3-10 keV bands. For these light curves, we normalised each incoming photon with respect to the effective area of the telescope at that energy. We define the hardness ratio of the X-rays as (hard-soft)/(hard+soft), where the hard and soft X-ray rates are in the 3-10 keV and 0.5-3.0 keV range, respectively. The errors on the hardness ratio were calculated by using 1-\(\sigma\) Poisson errors (following the example of Gehrels, 1986) to simulate maximum and minimum values of the individual X-ray bands, and then calculating the hardness ratio at each extreme. We note that these errors are an approximation only and may underestimate any outliers.
Footnote 1: [https://heassarc.gsfc.nasa.gov](https://heassarc.gsfc.nasa.gov)
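A sketch of this hardness-ratio calculation; for brevity, plain \(\sqrt{N}\) excursions stand in for the Gehrels (1986) prescription used in the paper.

```python
import numpy as np

def hardness_ratio(soft, hard):
    """(hard - soft)/(hard + soft) per time bin, with an approximate error
    band from 1-sigma Poisson excursions of each band (sqrt(N) here)."""
    soft, hard = np.asarray(soft, float), np.asarray(hard, float)
    hr = (hard - soft) / (hard + soft)
    soft_lo, soft_hi = soft - np.sqrt(soft), soft + np.sqrt(soft)
    hard_lo, hard_hi = hard - np.sqrt(hard), hard + np.sqrt(hard)
    hr_max = (hard_hi - soft_lo) / (hard_hi + soft_lo)  # hardest extreme
    hr_min = (hard_lo - soft_hi) / (hard_lo + soft_hi)  # softest extreme
    return hr, hr_min, hr_max
```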
### ULTRACAM/NTT - Optical
High-speed multi-colour photometry of Swift J1858.6-0814 was carried out using the ULTRACAM instrument (Dhillon et al., 2007) on the 3.5 m New Technology Telescope (NTT) in La Silla, Chile. ULTRACAM uses dichroic beamsplitters to simultaneously image three custom-made Sloan Digital Sky Survey (SDSS) filters, and can observe at frame-rates well above 100 Hz due to the frame-transfer CCDs and the lack of a physical shutter (Dhillon et al., 2007). We used ULTRACAM to observe Swift J1858.6-0814 during 2018 November, 2019 March and 2019 May. The 2018 observations were carried out simultaneously with the \(u^{\prime}\), \(g^{\prime}\), and \(i^{\prime}\) SDSS filters (Doi et al., 2010), whereas the 2019 observations were performed using the higher throughput \(u_{s}\), \(g_{s}\), and \(i_{s}\) Super-SDSS filters (Dhillon et al., 2021), which use multi-layer coatings rather than coloured glass to define the filter bandpasses, with the cut-on/off wavelengths designed to match those of the original SDSS filters.
We used the HiPERCAM pipeline software2 to debias, flat-field and extract the target count rates using aperture photometry with a seeing-dependent circular aperture tracking the centroid of the source. The sky background was computed using the clipped mean of an annular region around the target and relative photometry of Swift J1858.6-0814 was carried out with respect to the local standard star (PSO J185832.982-081400.913). For the \(r_{s}\)-band and \(g_{s}\)-band, the field is covered by the Pan-STARRS survey and so the calibrated \(r_{s}\)-band and \(g_{s}\)-band magnitudes are listed in DR1 catalog (Magnier et al., 2020). These were transformed to SDSS magnitudes (Finkbeiner et al., 2016) and then used to calibrate the target light curves. Since the field is not covered by any archival optical survey in the \(u_{s}\)-band, calibrating these data was less straightforward. Flux standards were observed on various nights during the ULTRACAM observations in 2019 March. These flux standards were used to determine the \(u_{s}\)-band instrument zero-point. The local standards were then calibrated which in turn were used to calibrate the target light curve. For the nights when no flux standard was observed, we assume that the \(u_{s}\)-band zero-point measured during the March observing runs was still valid. The difference between the ULTRACAM Super-SDSS and SDSS filters leads to an uncertainty in the flux calibration of \(<\) 3 per cent (Wild et al., 2022). The observed ULTRACAM light curves are shown in Fig. 2a.
Footnote 2: [https://github.com/HiPERCAM/hipercam](https://github.com/HiPERCAM/hipercam)
### HiPERCAM/GTC - Optical
Sub-second optical imaging was carried out in 2018 November and 2019 June using HiPERCAM on the 10.4 m Gran Telescopio Canarias (GTC) in La Palma, Spain. HiPERCAM uses dichroic beamsplitters to simultaneously image the custom-made Super-SDSS \(u_{s}\), \(g_{s}\), \(r_{s}\), \(i_{s}\) and \(z_{s}\) filters. Similar to ULTRACAM, HiPERCAM can observe at frame-rates well above 1000 Hz, which is achieved by the lack of a physical shutter and the frame-transfer CCDs that can rapidly shift charge into a storage area for reading out, freeing up the original pixels for observation and thereby achieving low (7.8 ms) dead-times (Dhillon et al., 2021). The CCDs were binned by a factor of 4 and drift mode was used with four windows (336\(\times\)200 pixels each) for all the observations. The instrument was orientated so that one window was centered on Swift J1858.6-0814 and another window on a local standard star. We used exposure times of 43.6 ms and 44.9 ms, which resulted in cadences of 46.6 ms and 47.9 ms for the 2018 and 2019 observations, respectively (see Table 1 for details). Observations were obtained on two nights, 2018 November 14 and 2019 June 7. The observations taken in 2019 were coordinated with the X-ray instrument NICER. A log of the observations is given in Table 1. Similar to the ULTRACAM data, we used the HiPERCAM pipeline software to debias, flat-field and extract the photon counts for the target and local standard using aperture photometry with a seeing-dependent circular aperture. The local standard stars used are listed in the Pan-STARRS survey DR1 catalog (Magnier et al., 2020) and have \(g^{\prime}\), \(r^{\prime}\), \(i^{\prime}\) and \(z^{\prime}\) magnitudes which were transformed to SDSS magnitudes (Finkbeiner et al., 2016) and then used for the photometric calibration of Swift J1858.6-0814. For the 2018 data the \(u^{\prime}\)-band calibration was determined using the local standard star PSO J185827.968-081329.815 and the full-frame acquisition images, which were calibrated by determining the instrument zero-point. As a check we also determined the \(g_{s}\), \(r_{s}\), \(i_{s}\) and \(z_{s}\) magnitudes and found that they agreed with the Pan-STARRS magnitudes at the \(<\)10 per cent level. Unfortunately, the local standard star PSO J185826.795-081357.216 used in the 2019 observations was not detected in the \(u_{s}\)-band images, and so it could not be flux calibrated. The difference between the HiPERCAM Super-SDSS and SDSS filters leads to an uncertainty in the flux calibration of \(<\) 3 per cent (Brown et al., 2022). Finally we convert from SDSS magnitudes to flux density, where we propagate the uncertainty in the local standard. The observed HiPERCAM light curves are shown in Fig. 2b.

Figure 2: The observed ULTRACAM (top) and HiPERCAM (bottom) light curves of Swift J1858.6–0814. The black dotted horizontal line shows the time of NICER observations. The mean magnitude of Swift J1858.6–0814 is shown in each panel. A MJD time offset of \(T_{0}\) + 58000.0 (\(T_{0}\) is in days) is applied and we use the orbital ephemeris given in Buisson et al. (2021).
## 3 Reddening
Swift J1858.6-0814's position in the sky allows us to estimate the line-of-sight interstellar reddening. The Galactic neutral atomic hydrogen (H I) column density towards the target is \(N_{\rm H}\sim 1.84\times 10^{21}\) cm\({}^{-2}\) (HI4PI Collaboration et al., 2016). Using the relation between the Galactic hydrogen absorption column density and optical extinction (Foight et al., 2016) along with the Galactic extinction law (Cardelli et al., 1989), we determine a colour excess of \(E\) (\(B-V\))=0.21 mag. We can also estimate \(N_{\rm H}\) from spectral fits to the NICER data. Using the XSPEC (Arnaud, 1996) software package, a blackbody and power-law model fit to the 2018 November data gives \(N_{\rm H}\sim 2.0-2.5\times 10^{21}\) cm\({}^{-2}\), whereas fits to the 2019 June data give \(N_{\rm H}\sim 1.6-1.7\times 10^{21}\) cm\({}^{-2}\). The value of \(N_{\rm H}\) determined from the NICER data is consistent with the value determined from the H I maps and we assume a colour excess of 0.21 mag for the rest of this paper.
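A sketch of the conversion quoted above, assuming the Foight et al. (2016) calibration \(N_{\rm H}=2.87\times 10^{21}A_{V}\) cm\({}^{-2}\) and a standard Galactic \(R_{V}=3.1\):

```python
N_H = 1.84e21           # cm^-2, HI4PI column density towards the source
A_V = N_H / 2.87e21     # optical extinction (Foight et al. 2016)
E_BV = A_V / 3.1        # colour excess E(B-V) for R_V = 3.1
print(f"A_V = {A_V:.2f} mag, E(B-V) = {E_BV:.2f} mag")  # E(B-V) ~ 0.21 mag
```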
## 4 Optical Flares
In Figs. 2a and 2b we show the observed ULTRACAM and HiPERCAM light curves, respectively, where wavelength-dependent flaring activity is clearly seen. Flaring is superimposed on a sinusoidal modulation, which is due to a combination of the secondary star's ellipsoidal modulation, X-ray heating and other possible sources of light in the system (see Fig. 3). To determine the properties of the flares, we first use the colour excess of \(E\) (\(B-V\))=0.21 mag determined in Section 3 with the interstellar extinction law (Cardelli et al., 1989) to deredden the observed fluxes. We identify and isolate the flare events by determining the start and end of the same flare event in each waveband. We then subtract the interpolated flux underneath the flare event, which in effect subtracts the contribution of the non-variable component. We assume that during the actual flare event the other components that contribute to the observed flux do not vary. We define small and large flares as events with \(g_{s}\)-band amplitudes of \(\sim\)0.1 mag and \(\sim\)1 mag, respectively. A total of 102 large and 5 small flare events were isolated. Fig. A1 of the Appendix shows some examples of the isolated flare events, where flares of different time-scales, amplitudes and colours are clearly seen. For the flare events we also determine the peak flare flux in each waveband and the flux ratios.
In Fig. 3 we show the observed \(g_{s}\)-band light curve of Swift J1858.6-0814 as a function of orbital phase, using the orbital ephemeris given in Buisson et al. (2021), where phase 0.0 is defined as superior conjunction of the compact object. Although our orbital phase coverage is relatively poor (\(\sim\)33 per cent), we observe flares at all orbital phases. Buisson et al. (2021) find that the bright flares occur preferentially in the post-eclipse phase of the orbit, around orbital phase \(\sim\)0.3, most likely due to increased thickness at the disc-accretion stream. We do not find any evidence for this in our optical data, but note our poor phase coverage. We find that the mean flux and the intrinsic source fractional RMS variability, defined through \(\sigma_{\rm source}^{2}\) = \(\sigma_{\rm total}^{2}\) - \(\sigma_{\rm noise}^{2}\) (Vaughan et al., 2003), are strongly linearly correlated, with a Pearson's correlation coefficient of 0.84. The low RMS observed at phase 0.0 (2019 March 01), which has the lowest flux of our observations and very little flaring, is consistent with a system at a high binary inclination angle (Buisson et al., 2021; Knight et al., 2022).
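A sketch of the fractional RMS used here, built from the excess variance of Vaughan et al. (2003) with the noise term estimated as the mean squared measurement error:

```python
import numpy as np

def fractional_rms(flux, flux_err):
    """sqrt(sigma_total^2 - sigma_noise^2) / mean flux."""
    flux = np.asarray(flux, float)
    var_total = np.var(flux, ddof=1)                     # sigma_total^2
    var_noise = np.mean(np.asarray(flux_err, float)**2)  # sigma_noise^2
    return np.sqrt(max(var_total - var_noise, 0.0)) / np.mean(flux)
```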
### Time-scales
We determine the rise, decay and duration of the dereddened flares, which are shown in Fig. 4. As one can see, the 'red' flares (more flux at longer wavelengths) have a much shorter time-scale and amplitude compared to the 'blue' flares (more flux at shorter wavelengths). The 'red' and 'blue' flares have median \(g_{s}\)-band amplitudes of \(\sim\)0.1 mag and \(\sim\)1 mag, respectively. In Fig. 5 we show the flare durations versus colour. The flares are separated into two regions: short-duration 'red' flares and long-duration 'blue' flares. Small amplitude 'red' flares are observed on 2018 November 14 (HiPERCAM) and 2019 February 2 (ULTRACAM), whereas large 'blue' flares are present in all observations, except on 2019 March 2 where no flares are observed. The different time-scales and amplitudes of the flares indicate that they arise from different emission processes.

Figure 3: ULTRACAM and HiPERCAM \(g_{s}\)-band light curves of Swift J1858.6-0814 as a function of orbital phase using the orbital ephemeris given in Buisson et al. (2021), where phase 0.0 is defined as superior conjunction of the compact object. For clarity two orbital phases are plotted and the light curves have been rebinned to a time resolution of 10 s.

Figure 4: The rise and decay time-scales, duration and flux histograms of the dereddened flare events.

Figure 5: The flare duration versus colour of the dereddened small (red points) and large (blue points) flare events.
### Spectral energy distribution
In an attempt to interpret the broad-band spectral properties of the flares, we compare the observed fluxes with the predictions for different emission mechanisms, namely synchrotron and blackbody emission. The latter has an approximately power-law form on the Rayleigh-Jeans tail and so we characterise both the synchrotron and the blackbody emission with a power-law form \(F_{\nu}\propto\nu^{\alpha}\), where \(\nu\) is the frequency and \(\alpha\) is the spectral index. We compute the given emission spectrum and then calculate the expected flux density ratios in the relevant filters using the synthetic photometry package synphot in iraf/stSDAS. For the blackbody emission, given the intrinsic model flux, we then determine the corresponding radius of the region that produces the observed dereddened flux at a given distance.
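As a simplified stand-in for the synphot-based procedure (it ignores the filter bandpasses and treats each band as monochromatic at its effective frequency), the spectral index can be estimated from the dereddened band fluxes by a weighted log-log fit:

```python
import numpy as np

def powerlaw_index(nu, f_nu, f_err):
    """Fit log10 F_nu = alpha * log10 nu + const; nu in Hz, f_nu in mJy."""
    nu, f_nu, f_err = (np.asarray(a, float) for a in (nu, f_nu, f_err))
    sigma_logf = f_err / (f_nu * np.log(10))   # error propagated to log10
    alpha, _const = np.polyfit(np.log10(nu), np.log10(f_nu), 1,
                               w=1.0 / sigma_logf)
    return alpha
```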
In Fig. 6 we show the HiPERCAM and ULTRACAM individual peak flare flux ratios and the expected results for different emission models. We show the \(g_{s}\), \(r_{s}\) and \(z_{s}\) fluxes common to the HiPERCAM 2018, 2019 and ULTRACAM 2019 data sets and the \(u^{\prime}\), \(g^{\prime}\) and \(i^{\prime}\) fluxes for the ULTRACAM 2018 data set. Fig. A2 of the Appendix shows some example fits to the individual dereddened flare events observed on 2018 November 14 (HiPERCAM) and 2019 May 9 (ULTRACAM). The power-law indices obtained by fitting the broad-band spectral energy distributions of the individual large and small flare events are in the range \(\alpha\sim\)-1.0 to -2.0 (with a mean of \(\alpha\sim\) -1.5) for the 'red' flares. In contrast, the 'blue' flares can be represented with a power-law of \(\alpha\sim\) 1.0 (range of \(\alpha\sim\)0.6 to 1.2) or a \(\sim\)14,000 \(\pm\) 2000 K blackbody which, with a mean \(g_{s}\) peak flare flux of \(\sim\)0.45 mJy (out of eclipse), corresponds to a radius of \(\sim\)1.0 \(\pm\) 0.2 R\({}_{\sun}\), assuming a distance of 12.8 kpc (Buisson et al., 2020). Although a single-temperature blackbody has limited physical significance and is likely a very poor description of a flare event, it is useful for comparison with other works. The 2019 data were taken at orbital phase \(\sim\) 0.9, which is outside the start of eclipse ingress (Buisson et al., 2021), and so we can rule out a decrease in \(N_{\rm H}\) due to absorption in the atmosphere of the secondary star. However, Castro Segura et al. (2022) have detected disc winds in the hard state, and the associated variable obscuring columns that contribute to \(N_{\rm H}\) might explain the differences we observe.
## 5 Timing and correlation analysis
The auto-correlation function (ACF) analysis of the individual optical and X-ray light curves and the cross-correlation function (CCF) of the simultaneous optical and X-ray light curves can also be used to constrain the emission processes and location, respectively. We perform such a timing analysis on the simultaneous optical and X-ray data using the same methods outlined in Paice et al. (2019). We use the NICER X-ray light curves and the dereddened ULTRACAM and HiPERCAM optical light curves determined in Sections 2.1 and 4, respectively. To create the simultaneous light curves we first corrected the times of both datasets to the solar system barycentre and then binned the X-ray photons directly to the optical time bins. Since the optical light curves have a constant dead-time, the X-ray photons observed during these times are not used. For the 2019 June 7 HiPERCAM data, we show the four different simultaneous sections, whereas for the ULTRACAM dataset we show the two simultaneous sections taken on 2019 March 2 and 4.
Figure 6: Colour–colour diagram for the ULTRACAM and HiPERCAM large and small dereddened flare events. The dashed line shows a power-law model of the form \(F_{\nu}\propto\nu^{\alpha}\), where the black squares mark the value of \(\alpha\) ranging from -2.0 to +2.0 in units of 0.5. The solid black line is a blackbody model where the crosses show the temperature in units of 1000 K. In the top panel the red circles show the HiPERCAM small flares, whereas the blue (2018 November 14) and green (2019 June 7) circles show the HiPERCAM large flare events. In the bottom panel the red and blue circles show the ULTRACAM small and large flare events, respectively.
### Optical/X-ray correlations
Figure 7: The simultaneous optical and X-ray light curves of Swift J1858.6–0814. From top to bottom: the optical light curves, the hard (3–10 keV; blue) and soft (0.5–3.0 keV; red) X-ray light curves, and the X-ray hardness ratio, defined as the ratio of the rates (hard-soft)/(hard+soft). The X-ray and hardness-ratio light curves have been binned with a moving average of 100 points for readability (except for 2019 March 4, where a 20-point moving average was used due to the much higher count rates). A barycentered MJD time offset of 58544.37641329, 58546.36898883 and 58641.08379497 has been applied to the 2019 March 2, March 4 and June 7 data, respectively.

In Fig. 7 we show the simultaneous optical and X-ray light curves taken on 2019 March 2, 4 and June 7. For the X-ray data we also show the hardness ratio of the X-ray count rates. The CCF shows the response of the optical light curves to variations in the X-ray light curve as a function of time lag. Positive time lags indicate a net correlation in which the optical flux lags the X-ray flux. The CCF is produced by splitting and detrending the simultaneous light curves into segments of equal length. We determine the CCF for each segment and calculate the mean CCF and standard error in each bin. We also compute the auto-correlation functions (ACFs) of the X-ray and optical light curves. The Poisson noise dominating the X-ray ACFs at zero lag is corrected by making use of the Wiener-Khinchin theorem, which states that the power spectrum of a random process and its ACF are Fourier pairs. We can therefore subtract the white noise from the X-ray power spectrum and then compute the inverse Fourier transform to determine the ACF. In Fig. 8 we plot the corresponding ACFs and CCFs for all our simultaneous optical/X-ray light curves. To determine the confidence levels in the CCFs we simulate 1000 similar (yet uncorrelated) optical light curves, compute their cross-correlation functions with respect to the X-ray light curve, and then determine the 5 and 95 percent boundaries in each bin of the CCF lag. We create each simulated light curve by computing the Fourier transform of the optical light curve, randomising the phases, and then performing the inverse Fourier transform, which yields a light curve with an identical power spectrum. In the following, for each simultaneous dataset we summarise the observed characteristics of the light curves and the average ACFs and CCFs.
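A minimal Python sketch of this procedure is given below. All function names and the toy light curves are ours, not from the paper; the fake optical series has a built-in 12-bin delay purely for illustration.

```python
import numpy as np

def ccf(xray, opt, max_lag):
    """Normalised CCF; a positive lag (in bins) means the optical lags the X-rays."""
    x = (xray - xray.mean()) / xray.std()
    o = (opt - opt.mean()) / opt.std()
    lags = np.arange(-max_lag, max_lag + 1)
    vals = [np.mean(x[:len(x) - k] * o[k:]) if k >= 0
            else np.mean(x[-k:] * o[:len(o) + k]) for k in lags]
    return lags, np.array(vals)

def surrogate(lc, rng):
    """Phase-randomised copy of lc with an identical power spectrum."""
    ft = np.fft.rfft(lc)
    ph = rng.uniform(0.0, 2.0 * np.pi, ft.size)
    ph[0] = 0.0                        # keep the DC term (the mean) real
    return np.fft.irfft(np.abs(ft) * np.exp(1j * ph), n=lc.size)

rng = np.random.default_rng(0)
# Placeholder detrended, evenly sampled segments on shared time bins;
# substitute the real NICER/optical segments here.
xray = rng.poisson(5.0, 2048).astype(float)
opt = np.roll(xray, 12) + rng.normal(0.0, 1.0, 2048)   # fake 12-bin optical lag

lags, c = ccf(xray, opt, max_lag=100)
sims = np.array([ccf(xray, surrogate(opt, rng), 100)[1] for _ in range(1000)])
lo, hi = np.percentile(sims, [5, 95], axis=0)          # confidence boundaries
print(lags[np.argmax(c)])                              # -> ~12 bins
```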
* For the 2019 March 2 data the mean X-ray count rate is 2.6 counts s\({}^{-1}\) over the length of the simultaneous ULTRACAM observation. Low optical and X-ray variability is observed, with no significant flaring behaviour compared to what is observed on other nights. In general the X-ray light curve has a strong hard component. The optical ACF is broader than the X-ray ACF, which is what one expects if the optical flux arises from X-ray reprocessing. No significant features are observed in the CCFs.
* For the 2019 March 4 data, the mean count rate is relatively high at 7.9 counts s\({}^{-1}\) over the length of the simultaneous ULTRACAM observation. A few relatively strong X-ray flare events are observed which have a strong hard component. The optical ACF is broader than the X-ray ACF, consistent with X-ray reprocessing. One can clearly see that the optical and X-ray fluxes are correlated, which provides a visual confirmation of the observed CCF. The CCF of this observation shows the strongest positive correlation of any of our epochs, with a peak at a time lag of \(\sim\)5 s in every band (a coefficient of \(\sim\)0.3 is a significant value in fast-timing studies of X-ray binaries; see e.g. Gandhi et al., 2010, 2017; Paice et al., 2019). A weak negative correlation is also seen at a lag of \(\sim-5\) s. Furthermore, there appears to be a repeated phenomenon in the light curves: the hard X-rays increase first and then give way to softer X-rays. This is seen most clearly in the flare at time \(\sim\)1100 s. In the CCFs, there appears to be a correlation between the optical delay and wavelength, in which the \(u_{s}\)-band delay is shorter than the \(g_{s}\)-band delay, which in turn is shorter than the \(r_{s}\)-band delay. This implies that reprocessing is dominant.

Figure 8: The ACF (left plot) and CCF (right plot) of the simultaneous optical and X-ray light curves of the 2019 March 2 (a), 2019 March 4 (b) and 2019 June 7 (c to f) data. A positive lag implies that the optical flux lags the X-ray flux. For the 2019 June 7 data (c to f) we show the corresponding ACFs and CCFs of the data split into four sections, corresponding to the sections when the data were simultaneous. In the left panel, the ACF of the X-ray data is shown in black and the ACFs of the \(g_{s}\), \(r_{s}\), \(i_{s}\) and \(z_{s}\) data are shown in blue, green, orange and red, respectively. In the right panel the CCFs of the X-ray data with respect to the \(g_{s}\), \(r_{s}\), \(i_{s}\) and \(z_{s}\) data are shown in blue, green, orange and red, respectively. The black dashed lines represent the 5 and 95 percent confidence intervals.
* Finally, the 2019 June 7 data have a relatively low mean count rate of 0.65 counts s\({}^{-1}\) coincident with the HiPERCAM observations. Although the X-ray variability is much lower, several optical peaks do have slight increases in X-ray count rates, where the increase seems to be slightly greater in the hard X-rays. In general the X-ray light curve has a hard component, but slightly softer than in the other epochs, and is dominated by a large flare event in part 4 at 7700 s, which has a strong soft component as noted by the change in the X-ray hardness ratio. This is in contrast to the other short-term X-ray flare events, which seem to have a hard component. The ACF and CCF properties in parts 1 to 3 are very similar, and in parts 1 to 4 the optical and X-ray ACFs are similar in shape. A relatively strong positive correlation in the CCF, with a peak at a time lag of \(\sim\)5 s, is observed in every band for parts 1, 2 and 4, and at a time lag of \(\sim\)0 s in the part 3 data. A weak negative correlation at negative lags is also observed between \(\sim-20\) s and \(-10\) s.
### Optical/X-ray correlations of flare events
In order to further investigate the flaring events, we determine the ACFs and CCFs for three clearly defined flare events on 2019 March 4. We compute the optical and X-ray ACFs as well as the optical/X-ray CCF using a 100 s window (see Fig. 9). As one can see, the CCFs of the flare events share many characteristics, including a high CCF correlation (0.4-0.8) with lags between 0-5 s and a 'precognition' dip. The 2019 March 4 and 2019 June 7 data were taken at orbital phase \(\sim\)0.35 and \(\sim\)0.93, respectively. Indeed, the flare events taken at different orbital phases have time delays consistent with arising from reprocessing in the secondary star. One expects the longest time delay to arise at orbital phase quadrature (phase 0.25) and the shortest at superior conjunction of the secondary star (phase 0.0). Indeed, if one had sufficient flare events across the binary orbit one could perform echo-mapping in order to extract the fundamental binary parameters (O'Brien et al., 2002).
### Fourier analysis
In order to understand the nature of the different components contributing to the CCF, we decomposed the observed variability into different time-scales using Fourier techniques. We performed a Fourier analysis of the light curves using the X-ray spectral-timing software package stingray\({}^{3}\) (Huppenkothen et al., 2019). The coherence and corresponding errors were determined using the method described in Vaughan and Nowak (1997). We computed the Fourier transform of the light curves and then analysed them at each frequency. The power spectra represent the amplitude of the variability at each Fourier frequency, the coherence measures how the variability in the power of the correlated signal is distributed over the Fourier frequencies, and the phase lags measure the delay between the bands at each Fourier frequency, expressed as a phase. Sometimes the time lags are a more intuitive representation of the delays; they are connected to the phase lags through \(\Delta t=\Delta\phi/(2\pi f)\), where \(f\) is the frequency of the bin and \(\Delta\phi\) is the phase lag. Positive phase lags correspond to a delay of the optical light curve with respect to the X-rays.
Footnote 3: [https://github.com/StingraySoftware/stingray](https://github.com/StingraySoftware/stingray)
Good Time Intervals (GTIs) are selected based on the individual epochs of the X-ray observations, and the average cross-spectrum is computed over independent light curve segments of 2048 bins in length. We use 2, 1, and 6 segment(s) for the 2019 March 2, 2019 March 4 and 2019 June 7 data, respectively, where a white-noise level is fitted to each power spectrum and removed prior to the calculation of the coherence. The standard root-mean-squared (RMS) normalisation is applied (Belloni and Hasinger, 1990). In Fig. 10 we show the frequency-dependent products binned logarithmically in frequency. The HiPERCAM data were binned by a factor of 8, and then all data were averaged over segments of 2048 bins, or \(\sim\)572, 1028, and 784 s, respectively (except for the \(u_{s}\)-band data on March 4, which were co-added and thus sampled differently; these were averaged over segments of 1024 bins, or 1028 s).
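The cross-spectral products can be illustrated with a short numpy sketch. This is a simplified stand-in for the stingray analysis: it returns the raw coherence, without the Poisson-noise corrections of Vaughan and Nowak (1997), and all names below are ours.

```python
import numpy as np

def cross_spectral_products(xray, opt, dt, nseg=2048):
    """Segment-averaged power spectra, raw coherence and optical lags."""
    nwin = min(len(xray), len(opt)) // nseg
    freq = np.fft.rfftfreq(nseg, d=dt)[1:]          # drop the DC bin
    px = po = cxo = 0.0
    for i in range(nwin):
        xs = xray[i * nseg:(i + 1) * nseg]
        ys = opt[i * nseg:(i + 1) * nseg]
        X = np.fft.rfft(xs - xs.mean())[1:]
        O = np.fft.rfft(ys - ys.mean())[1:]
        px, po = px + np.abs(X) ** 2, po + np.abs(O) ** 2
        cxo = cxo + X * np.conj(O)                  # positive phase: optical lags
    px, po, cxo = px / nwin, po / nwin, cxo / nwin
    coherence = np.abs(cxo) ** 2 / (px * po)        # raw, not noise-corrected
    phase = np.angle(cxo)
    time_lag = phase / (2.0 * np.pi * freq)         # dt = dphi / (2 pi f)
    return freq, px, po, coherence, phase, time_lag
```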
Figure 9: Same as Fig. 8 but for the clear flare events on 2019 March 4 and 2019 June 7.
For Swift J1858.6-0814, on the nights where there is significant optical and X-ray variability (2019 March 4 and June 7), the power spectra of the optical and X-ray light curves are very similar. However, there is consistently higher power in the X-ray variability compared to the optical, which suggests that the optical variability is the result of reprocessing, with the faster X-ray variations at frequencies above the optical power-spectrum peak being smoothed out.
In the 2019 March 4 and June 7 data, the coherence function shows a linear decline with increasing frequency. The declining absolute value of the optical/X-ray coherence means that a single component is not a good representation of the broad-band variability. In the 2019 March 4 and June 7 data there is a plateau in the coherence between 0.01-0.1 Hz at \(\sim\)10 s (most notable in the March 4 data), during which a rise in the phase lags is observed. Beyond \(\sim\)0.1 Hz, the data become white-noise dominated and it is not possible to obtain meaningful results. Frequency-dependent time lags are also observed. Below \(\sim\)0.01 Hz, the time lags rise towards low Fourier frequencies, and a plateau is observed between 0.01-0.1 Hz. Beyond \(\sim\)0.1 Hz the time lags are observed to decrease with frequency, a natural consequence of the large scatter and randomly distributed phase lags. The time lag observed at \(\sim\)0.1 Hz on March 4 is longer than what is observed in the CCFs; this is likely because the lower frequency lags contribute more to the CCF than the high frequency lags, as evidenced by the higher coherence below 0.05 Hz and the sharp drop thereafter, with time lags between 3-10 s. The combination of many frequencies then results in the \(\sim\)5 s time lag observed in the CCFs.
## 6 Discussion
### Flare spectra
Generally, in the optical/near-IR region, a negative power-law index is expected if there is an optically-thin synchrotron spectrum from a flow/jet, whereas a positive power-law index (with spectral index \(\sim\) 1) is expected if the optical emission is dominated by blackbody emission from regions in the accretion disc (Hynes, 2005). The fast 'red' optical flares observed in the ULTRACAM and HiPERCAM data have a power-law index of \(\alpha\sim-1.3\), steeper than what is typically observed in XRBs, \(\alpha\sim-0.7\) (Hynes et al., 2003; Gandhi et al., 2011; Russell et al., 2013). However, it should be noted that flares with similarly steep spectral properties have been observed before, with power-law indices in the range -1.3 to -1.5 (Russell et al., 2010, 2013; Shahbaz et al., 2013; Gandhi et al., 2016), and indeed the fast 'red' flares are reminiscent of the 'red' flares observed during the outburst of V404 Cyg (Gandhi et al., 2016). Indeed, for V404 Cyg, based on the cooling timescales of the flaring events, the emission has been attributed to synchrotron processes (Dallilar et al., 2017). For optically thin synchrotron emission, the only parameter which changes the spectral index is the particle energy distribution (\(p\)) of the emitting electrons, which is related to the observed spectral slope in the optically thin plasma: \(\alpha_{\rm thin}=(1-p)/2\). If the observed quiescent power-law index of \(\alpha\sim-1.3\) is interpreted as optically thin synchrotron, then \(p=3.6\), which is steeper than the \(p\sim 2.4\) (or \(\alpha\sim-0.7\)) typical for optically thin synchrotron in XRBs. A mixture of thermal and non-thermal particle energies could potentially explain such a steep slope observed in Swift J1858.6\(-\)0814. In contrast, slow, large 'blue' flares are observed with a power-law index \(\sim\) 1.0, consistent with blackbody emission from an irradiated accretion disc (Hynes, 2005); the spectrum from an irradiated accretion disc has a power-law index of 1.2 in the \(g_{\rm s}\) to \(i_{\rm s}\) bands.

Figure 10: Results of the Fourier analysis of the simultaneous optical and X-ray light curves of Swift J1858.6-0814. From top to bottom: the X-ray and optical power spectra where the white noise has been removed, the coherence spectrum, the phase lags and the time lags. For the bottom two panels, a positive lag means that the optical lags the X-rays. We use a logarithmic rebinning of a factor of 1.4 to display the data. In each plot, the X-ray data are shown in black, whereas the purple, blue, green, orange and red show the \(u_{s}\)-, \(g_{s}\)-, \(r_{s}\)-, \(i_{s}\)- and \(z_{s}\)-band data, respectively.
We estimate the binary separation to be \(\sim\)5.1 R\({}_{\sun}\) (\(M_{1}\)=2.0 M\({}_{\sun}\), \(M_{2}\)=0.25 M\({}_{\sun}\), \(P_{\rm orb}\)=0.883 d). Assuming that the accretion disc extends to its tidal truncation radius (\(R_{\rm d}\) = 0.9 \(R_{\rm L1}\), where \(R_{\rm L1}\) is the equivalent radius of the Roche lobe, i.e. of a sphere with the same volume), we find \(R_{\rm d}<2.5\pm 0.2\) R\({}_{\sun}\). The large flare events can be represented by a \(\sim\)14,000\(\pm\)2,000 K blackbody with an equivalent blackbody radius of \(\sim\)1.0\(\pm\)0.2 R\({}_{\sun}\) (see Section 4.2), which is consistent with arising from regions in the accretion disc or from an extended disc atmosphere or wind (Buisson et al., 2021). The optical multi-wavelength spectral properties are reminiscent of those observed in the black hole X-ray binary V404 Cyg, where slow 'blue' as well as fast 'red' flares were observed during its 2015 outburst (Kimura et al., 2016; Gandhi et al., 2016). From the strongest observed flare on 2019 June 6 we estimate the optical (\(u_{s}\)\(-\)\(z_{\rm s}\)) and X-ray (0.5\(-\)10 keV) unabsorbed flare power to be \(\sim\)0.1% and \(\sim\)0.33% of the Eddington luminosity, respectively, assuming a 1.8 M\({}_{\sun}\) neutron star and a distance of 12.8 kpc (Buisson et al., 2020). The optical flare in Swift J1858.6\(-\)0814 is a factor \(\sim\)5 less powerful compared to the optical flares in GX 339-4 (Gandhi et al., 2010) and V404 Cyg (Gandhi et al., 2016).
### Spectral energy distribution
The radio to X-ray spectral energy distributions of the X-ray binary systems GX 339-4 (Gandhi et al., 2010), MAXI J1820+070 (Rodi et al., 2021) and GRS 1716-249 (Bassi et al., 2020) can be described by a combination of non-thermal emission from electrons accelerated in the jet by internal shocks (Malzac, 2013, 2014) and emission from the irradiated disc and hot corona (Gierlinski et al., 2009). In Fig. 11 we show the absorption-corrected spectral energy distribution of Swift J1858.6\(-\)0814 observed on 2019 June 7, using \(N_{\rm H}=1.84\times 10^{21}\) cm\({}^{-2}\). The absorption-corrected NICER soft- and hard-band fluxes were determined using the XSPEC software package (Arnaud, 1996) with the tbabs(diskbb+bbody) model with \(\Gamma=1.6\). The mean absorption-corrected optical (\(g_{s},r_{s},i_{s},z_{s}\)) data were determined using the light curves in Section 2.3. There are not many radio measurements in 2019, so we interpolate the radio flux values at 1.4, 4.5 and 15.5 GHz given in Rhodes et al. (2022), van den Eijnden et al. (2020) and Bright et al. (2018), respectively. From the mean optical (\(u_{s}\)\(-\)\(z_{s}\)) and X-ray (0.5\(-\)10 keV) unabsorbed fluxes on 2019 June 7 we estimate luminosities of \(\sim 2.5\times 10^{35}\) erg s\({}^{-1}\) and \(\sim 4.5\times 10^{35}\) erg s\({}^{-1}\), respectively, assuming a distance of 12.8 kpc (Buisson et al., 2020). The optical to X-ray luminosity ratio \(L_{\rm opt}/L_{\rm X}\) is \(\sim\)0.6, which is much higher than what is typical of X-ray binaries in outburst. In neutron star X-ray binaries the optical and X-ray luminosities are related by \(L_{\rm opt}\propto L_{\rm X}^{0.63}\), where the optical luminosity is dominated by X-ray reprocessing with additional contributions from a jet and the viscously heated accretion disc (Russell et al., 2006). We find that either the optical luminosity in Swift J1858.6\(-\)0814 is a factor of \(\sim\)140 more than what is expected, or the X-ray luminosity is a factor of \(\sim\)2530 under-luminous. Note that the 2019 June 7 observations were taken at orbital phase \(\sim\)0.93, and given the high binary inclination angle (Buisson et al., 2021; Knight et al., 2022) the low X-ray luminosity can be explained by optically thick material in the outer regions of the accretion disc or the secondary star blocking most of the direct X-ray emission. In this case what we observe are scattered X-rays, and the intrinsic X-ray luminosity is much higher. If we scale the X-rays using the \(L_{\rm opt}\propto L_{\rm X}^{0.63}\) relation (Russell et al., 2006), we find that the radio-X-ray spectral energy distribution can be described with a power-law of the form \(F_{\nu}\propto\nu^{\alpha}\) with an index of \(\alpha\sim 0.16\). Indeed, this is similar to what is observed in the mean spectrum of GX 339-4 (Gandhi et al., 2010) and XTE J1118+480 (Hynes et al., 2003), where the spectral energy distribution is attributed to a mixture of optically thin synchrotron emission from a jet and the irradiated accretion disc/corona.
### Optical/X-ray correlations
In the optical waveband, many components can potentially contribute to the optical emission, e.g. the irradiated secondary star, the cold optically-thick accretion disc, the hot optically-thin X-ray emitting medium and the hot flow/jet (Poutanen & Veledina, 2014). In the X-rays, two separate components are present: a soft component arising from Comptonization of disc photons and a harder component arising from synchrotron Comptonization in the hot flow (Veledina, 2016). Indeed, this results in optical/X-ray correlations that show complex patterns, with both positive and negative correlations. The CCFs show a variety of shapes: some show positive correlations with optical photons lagging the X-rays, consistent with simple reprocessing (O'Brien et al., 2002; Hynes et al., 2009; Paice et al., 2018; Kajava et al., 2019); some show a very broad and nearly symmetric positive cross-correlation (Casella et al., 2010); some show a more complex structure containing a narrow 'precognition' dip at negative lags (optical photons leading X-rays) superimposed on a very broad positive cross-correlation (Kanbach et al., 2001; Gandhi et al., 2008; Durant et al., 2008, 2011; Lasso-Cabrera & Eikenberry, 2013); and some show only a strong broad anti-correlation (Motch et al., 1983; Pahari et al., 2017) or a narrow positive correlation superimposed on a very broad positive cross-correlation (Hynes et al., 2019). Cyclo-synchrotron optical photons undergoing Compton upscattering to X-rays in a hot flow can also reproduce both the observed optical/X-ray anti-correlation and QPOs (Veledina et al., 2011, 2013, 2015). In some cases the observed features can be explained by synchrotron emission from internal shocks within a relativistic compact jet (Malzac, 2013; Hynes et al., 2019; Paice et al., 2019). Finally, in some sources a fast optical delay component at \(\sim 100\) ms is observed, which is associated with the base of the optically-emitting jet close to the compact object (Gandhi et al., 2008, 2017; Paice et al., 2019).

Figure 11: The absorption-corrected spectral energy distribution of Swift J1858.6\(-\)0814 on 2019 June 7 (black points). The NICER (0.5\(-\)3.0 keV and 3.0\(-\)10.0 keV) and optical (\(g_{s},r_{s},i_{s},z_{s}\)) data are simultaneous, whereas the radio data (1.4, 4.5, and 15.5 GHz) are interpolated values taken within \(\sim\)2 months (Bright et al., 2018; van den Eijnden et al., 2020; Rhodes et al., 2022). We assume a distance of 12.8 kpc (Buisson et al., 2021). The blue points are the scaled X-ray luminosities according to the \(L_{\rm opt}\propto L_{\rm X}^{0.63}\) relation of Russell et al. (2006). The radio–X-ray spectral energy distribution can be described with a power-law of the form \(F_{\nu}\propto\nu^{\alpha}\) with an index of \(\alpha=-0.84\pm 0.02\) (dashed line).
Although there are some strong similarities in the timing behaviour of Swift J1858.6-0814 with well-studied XRBs, one notable difference is the lack of a \(\sim 100\) ms positive optical time lag with respect to the X-rays. This feature has now been seen in the cross-correlated timing behaviour of three sources: GX 339-4 (Gandhi et al., 2008), V404 Cyg (Gandhi et al., 2017) and MAXI J1820+070 (Paice et al., 2019). Both the timing and multi-wavelength spectral properties support an origin of this feature in the inner jets of hard-state binaries, in a compact region no larger than a few thousand Schwarzschild radii. Malzac (2014) has shown that flicker-noise variations of the plasma Lorentz factor within a compact jet can naturally produce such timing lags. The fact that Swift J1858.6-0814 does _not_ show this feature then implies some difference in its internal jet structure with respect to the other systems. Whether this is related to a difference in jet plasma Lorentz factors during the state in which it was observed, or perhaps even to a difference in compact object type (all three systems named above host black holes, whereas Swift J1858.6-0814 does not), remains to be investigated.
Figure 12: The HiPERCAM/NICER CCFs. The left plot shows the CCF of the optical bands versus soft X-rays (0.5–3.0 keV) and the right plot shows the CCF of the optical bands versus hard X-rays (3.0–10.0 keV). The black dashed lines represent the 5 and 95 percent confidence intervals.

In the standard reprocessing model, X-rays arising from the inner accretion disc photoionize and heat the surrounding regions, which later recombine and cool, producing lower-energy (optical/near-IR) photons. The observed optical/near-IR flux is thus delayed relative to the X-rays due to the light travel time between the X-ray source and the reprocessing region. The corresponding CCF arising from X-ray reprocessing has a characteristic orbital-phase-dependent shape, where the CCF rises from negative lags, peaks, and subsequently falls off (Hynes et al., 1998; O'Brien et al., 2002). Depending on the orbital phase the CCF can be very symmetric, but sometimes an extended positive delay is observed, especially near quadrature (O'Brien et al., 2002; Hynes et al., 2009). The shapes of the CCFs observed in Swift J1858.6-0814 are more consistent with the shapes of the CCFs in Sco X-1 and Cyg X-2 (Durant et al., 2011) than with those of other XRBs such as XTE J1118+480 (Kanbach et al., 2001), Swift J1753.5-0127 (Durant et al., 2008), GX 339-4 (Gandhi et al., 2008) and MAXI J1820+070 (Paice et al., 2019), where 'precognition' dips are observed and X-ray reprocessing is not thought to be dominant. The time delay between the optical/near-IR and X-ray flux can be up to twice the light-travel time across the binary separation (\(a\)), which can be obtained from Kepler's third law: \(a/c=9.77\,M^{1/3}\,P_{\rm d}^{2/3}\) s (where \(c\) is the speed of light, \(M\) is the sum of the binary masses in solar units and \(P_{\rm d}\) is the orbital period in days). Although the binary parameters for Swift J1858.6-0814 are not fully known, the orbital period of 21.2 hr together with estimates of the binary masses allows one to estimate the binary separation to be \(a/c\sim 12\) s. Indeed, we observe CCFs with time delays of \(\sim\)5-15 s, which suggests that the delays are consistent with arising from regions in the accretion disc.
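As a quick check of the quoted \(a/c\sim 12\) s, here is a one-line Python sketch using the component masses assumed earlier (\(M_{1}=2.0\) M\({}_{\sun}\), \(M_{2}=0.25\) M\({}_{\sun}\)):

```python
# Light-crossing time of the binary separation, a/c = 9.77 M^(1/3) P_d^(2/3) s.
M = 2.0 + 0.25              # total mass in solar units (values assumed above)
P_d = 21.2 / 24.0           # 21.2 hr orbital period in days
print(9.77 * M ** (1 / 3) * P_d ** (2 / 3))   # ~11.8 s, i.e. a/c ~ 12 s
```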
As mentioned earlier, in the hybrid hot inner flow model of Veledina (2016), two X-ray components, one arising from disc Comptonization and the other from synchrotron Comptonization, as well as two optical components, due to synchrotron self-Compton emission from the hot inner accretion flow and disc reprocessing, are present. In the X-rays, the seed photons for Comptonization are provided by the accretion disc (disc Comptonization), which dominates in the hard state. However, the hot flow itself also produces synchrotron radiation that can contribute to, or even dominate, the seed photon flux for Comptonization (synchrotron Comptonization). In the optical, the flux can arise from X-ray reprocessing or from synchrotron emission in the hot inner accretion flow. An anti-correlation and negative lags between the optical and X-ray flux are expected because an increase in the mass accretion rate leads to an increased X-ray flux and a higher level of synchrotron self-absorption, leading to a drop in the optical emission (Veledina et al., 2011). Furthermore, the optical is expected to have a stronger anti-correlation with the hard X-rays than with the soft X-rays, characteristics that are expected if the source transitions from a hard to a soft state. During the initial stages of the outburst of Swift J1858.6-0814 (in the hard state) we observe CCFs with a positive peak at a time delay of \(\sim\)5-15 s and optical ACFs which are broader than the X-ray ACFs (see Fig. 8a,b). This implies some underlying connection between the optical and X-ray fluxes and is consistent with the optical flux arising from X-ray reprocessing in the outer regions of the accretion disc. For example, the 2019 March 4 CCF shows a nearly symmetric positive correlation at positive lags, which is consistent with X-ray reprocessing, supported by the wavelength-dependent optical/X-ray delays in the CCFs, in which the longest wavelength has the longest delay. On the other hand, the 2019 June 7 data, taken during a softer state, cannot be described within the simple reprocessing scenario. The narrow optical ACF (comparable with the X-ray ACF) and the negative correlation in the optical/X-ray CCFs (see Fig. 8c-f) are characteristics of the synchrotron self-Compton mechanism operating in a hot accretion flow (Veledina et al., 2011). The presence of both synchrotron and reprocessed X-ray emission in the optical is in line with the spectral energy distributions of the observed fast 'red' flares (see Section 4.2).
The CCFs of the 2019 June 7 parts 1 and 4 data have similar shapes, with anti-correlations at negative lags and positive correlation at positive lags. The shape can be explained by the presence of two emission components in the optical, with the X-rays being dominated by the synchrotron Comptonization continuum (Veledina et al., 2017). The CCF of the 2019 June 7 part 3 data shows a hint of positive correlation at negative lags. It looks very similar to the CCF observed in MAXI J1820+070 (see epoch 6 in Paice et al., 2021). To explain this shape, one requires an additional source of X-ray photons arising from the disc Comptonization. Indeed, the hard-to-soft spectral state transition involves the motion of the cold accretion disc towards the compact object. As the role of the disc increases with the overall increase in the mass accretion rate, the power dissipated in the hot accretion flow increases, so the whole spectrum of this component increases (similar to ADAFs) resulting in the enhancement of the synchrotron emission. The simultaneous presence of two X-ray components, synchrotron Comptonization and disc Comptonization, leads to the complex shape of the optical/X-ray CCF and manifests itself through the different correlations with the soft and hard X-ray bands.
To investigate this possibility further, we separate the X-ray range into soft (0.5-3.0 keV) and hard (3.0-10.0 keV) energy bands and show the CCFs with respect to only one optical band (\(g_{s}\)) for clarity (see Fig. 12). We systematically observe different correlations between the optical and the soft/hard X-rays, supporting the assumption of two X-ray components. In Appendix B we attempt to reproduce the timing and correlation properties observed in Swift J1858.6-0814 in the context of the hot inner flow-disc Comptonization and reprocessing model (Veledina et al., 2011; Veledina, 2018). The low absolute value of the optical/X-ray coherence of \(\sim\)0.1-0.2 means that multiple components are required to explain the broad-band variability. We clearly observe correlations between some optical and X-ray flares, which shows that they are indeed related; other flare events have weak correlations and so may not be related. In general, we find good qualitative agreement between the data and the multi-component hot inner flow-disc Comptonization and reprocessing model, and find that the relative roles of the different X-ray and optical components vary during the course of the outburst as well as on shorter time-scales.
## 7 Conclusions
We present a rapid timing analysis of simultaneous optical (HiPERCAM and ULTRACAM) and X-ray (NICER) observations of the X-ray transient Swift J1858.6-0814 during 2018 and 2019. The optical light curves show rapid, small-amplitude (\(\sim\)0.1 mag in \(g_{s}\)) 'red' flares (i.e. stronger at longer wavelengths) on time-scales of \(\sim\)seconds, which have a power-law index consistent with optically thin synchrotron emission. The optical light curves also show relatively slow, large-amplitude (\(\sim\)1 mag in \(g_{s}\)) 'blue' flares (i.e. stronger at shorter wavelengths) on time-scales of \(\sim\)minutes, with a spectral energy distribution consistent with X-ray reprocessing in the accretion disc.
We present a Fourier time- and energy-dependent timing analysis of the simultaneous optical/X-ray light curves. The simultaneous optical and X-ray data show correlated variability that has a strong hard-energy component on 2019 March 2 and 4, and a strong soft-energy X-ray component on 2019 June 7, suggesting a spectral state change. We find that the optical ACF is broader than the X-ray ACF during the initial outburst stages, which can be explained by simple X-ray reprocessing. The coherence function shows a linear decline with increasing frequency. There is also a plateau in the time lags between 0.01-0.1 Hz at \(\sim\)10 s. These characteristics can be attributed to thermal reprocessing of the X-ray emission in the outer regions of the accretion disc.
We find that the relative roles of the different X-ray and optical components govern the shape of the optical/X-ray CCFs and vary on short time-scales. The CCFs of the simultaneous optical versus soft- and hard-band X-ray light curves show time- and energy-dependent correlations. The 2019 March 4 and 2019 June 7 parts 1 and 4 CCFs show nearly symmetric positive correlations at positive lags, consistent with simple X-ray disc reprocessing. The soft- and hard-band CCFs are similar and can be reproduced if disc reprocessing dominates in the optical and one component (disc or synchrotron Comptonization) dominates both the soft and hard X-rays. The 2019 June 7 part 3 data, obtained between parts 1 and 4, show a very different CCF. The observed positive correlation at negative lags in the soft X-ray band can be reproduced if the optical synchrotron emission is correlated with the hot flow X-ray emission. The observed timing properties are in qualitative agreement with the inner hot accretion flow model, where the X-rays are produced by both synchrotron and disc Comptonization and the optical emission arises from the hot flow synchrotron and irradiated disc components.
## Acknowledgements
TS and VSD acknowledge financial support from the Spanish Ministry of Science, Innovation and Universities (MICIU) under grant PID2020-114822GB-I00. KMR acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 694745). PG and JAP acknowledge support from the Science and Technology Facilities Council (STFC) and a UGC-UKIERI Thematic Partnership. TRM acknowledges support from STFC, grant ST/T000406/1. M.R.K. acknowledges support from the Irish Research Council in the form of a Government of Ireland Postdoctoral Fellowship (GOPID/2021/670: Invisible Monsters). M.R.K., R.P.B., and C.J.C. acknowledge support from the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 715051; Spiders). The design and construction of HiPERCAM was funded by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) under ERC-2013-ADG Grant Agreement no. 340040 (HiPERCAM). HiPERCAM operations and VSD are supported by STFC grant ST/V000853/1.
Based on observations made with the Gran Telescopio Canarias, installed at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias, on the island of La Palma. Based on observations made with ESO Telescopes at the La Silla Paranal Observatory under ESO programme 096.D-0808. We gratefully acknowledge the use of the python packages matplotlib (Hunter, 2007) and numpy (van der Walt et al., 2011). We acknowledge the use of Aladin (Bonnarel et al., 2000). This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory. The Pan-STARRS1 Surveys (PS1) have been made possible through contributions of the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation under Grant No. AST-1238877, the University of Maryland, and Eotvos Lorand University (ELTE).
_Facilities:_ GTC (HiPERCAM), NTT (ULTRACAM), NICER
## Data Availability
The ULTRACAM and HiPERCAM data can be obtained by contacting the ULTRACAM team. The NICER data are available in the HEASARC Data Archive ([https://heasarc.gsfc.nasa.gov/docs/archive.html](https://heasarc.gsfc.nasa.gov/docs/archive.html)). The data used in this paper will be shared on reasonable request to the corresponding author.
|
2307.04828 | Advanced Radiation Panel design for applications in National Security and Food Safety | We describe a new concept for a basic radiation detection panel based on conventional scintillator technology and commercially available solid-state photo-detectors. The panels are simple in construction, robust, very efficient and cost-effective, and are easily scalable in size, from tens of cm$^2$ to tens of m$^2$. We describe two possible applications: flagging radioactive food contamination and detection of illicit radionuclides, such as those potentially used in a terrorist attack with a dirty bomb. | A. Bross, E. C. Dukes, S. Hansen, A. Pla-Dalmau, P. Rubinov | 2023-06-27T15:57:33Z | http://arxiv.org/abs/2307.04828v4 |

# Advanced Radiation Panel Design for Applications in Food Safety and National Security
###### Abstract
We describe a new concept for a radiation detection panel based on extruded scintillator technology and commercially available solid-state photo-detectors. The panels are simple in construction, robust, very efficient, cost-effective, and easily scalable in size from tens of cm\({}^{2}\) to tens of m\({}^{2}\). We describe two possible applications: flagging radioactive food contamination and detection of illicit radioactive materials, such as those potentially used in a dirty bomb.
Radiation monitoring, scintillators, search for radioactive and fissile materials, x-ray detectors†
Footnote †: preprint: Prepared for submission to JINST
ArXiv ePrint: 2307.04828
## 1 Overview
Plastic scintillator detectors have been used in high-energy physics experiments for decades, and with the development of extruded plastic scintillator their use has expanded considerably. A recent example of an extruded scintillator detector is the one that has been developed for the Mu2e experiment at Fermilab [1]. This experiment requires approximately 1200 m\({}^{2}\) of a very efficient detector for cosmic-ray muons. We believe that this concept can be effectively extended to the radiation detection applications described in this paper. The active element is the "Di-counter" unit (Figure 1), which consists of two extruded polystyrene-based scintillator strips, each with two holes for wavelength-shifting (WLS) fibers. The mechanical specifications for the Di-counter are given in Figure 2. A basic detector panel consists of 3 Di-counters, each up to 60 cm long.
When a charged particle, such as a muon, passes through the scintillator, it loses energy, and this energy is converted into blue light by the plastic scintillator. Similarly, if an X-ray Compton scatters in the plastic, the Compton electron produces light in the scintillator. Some of this blue light is absorbed by the WLS fiber and is shifted into the green, where roughly 5% (in each direction) of the green-shifted light is piped along the fiber to a photo-detector. By using the WLS fibers to guide the light to the photo-detectors, it is possible to make large detectors with good light collection efficiency even when the particle hits far from the photo-detector. Figure 3 (Left) gives a schematic of this operating principle. The extrusions used for this work were fabricated in the FNAL-NICADD Extrusion Line Facility at Fermilab following the methodologies we have developed over the past 20 years. Figure 3 (Right) shows a photograph of a sample of the raw extrusion, while Figure 4 shows a schematic of the full Di-counter assembly. The photo-detectors we use are silicon photomultipliers (SiPMs) [2]. Similar detectors are being used in numerous major particle physics experiments at high-energy physics laboratories throughout the world. A novel mechanical system was designed to align the fibers with the SiPMs, which are carried on a consumer-grade PC board.
The photo-detector system is modular in nature. An exploded view of the components is given in Figure 5. There are 4 SiPMs in each photo-detector module, with a module mounted on each end of the Di-counter, so that both ends of the WLS fibers are read out.
The SiPM module carries 4 SiPMs that are 2 \(\times\) 2 mm square. Figure 6 gives two photos of the module: a bottom view on the left showing the 4 SiPMs and a top view on the right showing the HDMI cable used for readout. The SiPM modules can easily be removed from the scintillator extrusions, so that they can be characterized separately. This allows the user to accurately set thresholds to remove the SiPM noise. A readout system using commercial off-the-shelf parts, also designed for the Mu2e experiment, was used in the tests reported in this note [3]. We see two immediate applications for this technology. The first application is for food safety monitors. In areas where radioactive contamination of food products (primarily seafood) is a problem, we shall demonstrate that this technology can provide a high-sensitivity system that can flag unsafe seafood. It will provide an "in situ" background-subtracted counting environment that can quickly (within 10 sec.) flag unsafe food. Cosmic-ray interactions are easily rejected due to their very large charge deposition and subsequent large electronic signal. In addition, a new approach using triangular extrusions [4] adds the capability to reject cosmic-ray muons using event topology, further improving cosmic-ray rejection. The second application is for radiation portal or urban-area monitors. These require a high sensitivity to a number of radionuclides such as \({}^{109}\)Cd, \({}^{57}\)Co, and \({}^{137}\)Cs. Neutron sensitivity can be provided by modifying the basic plate structure (see Section 4).

Figure 4: Schematic of the final Di-counter assembly

Figure 5: Exploded view of the Di-counter readout module
## 2 Test results
Section 2.1 describes the calibration of the detectors, Section 2.2 describes the detector's performance for the detection of radioactive isotopes, and Section 2.3 presents data on the system's capabilities for flagging food contamination.
### Calibration - Cesium 137
Because the detector is modular (the photo-detector module is separate from the scintillator detector), determining a threshold cut in ADC counts to remove the SiPM noise is straightforward. The SiPM modules are removed from the scintillator, placed in a light-tight enclosure, and data are then taken. An integration time of 10 seconds is used to produce histograms of the SiPM noise distribution. Figure 7 shows one such histogram for one counter in the Di-counter (the sum of four SiPM signals: 2 WLS fibers with double-ended readout). In order to effectively eliminate a large contribution from the SiPM noise, a cut in ADC counts is chosen to produce a summed SiPM noise rate of \(\simeq\) 2 Hz.
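To illustrate the threshold-setting step, the Python sketch below picks the lowest ADC cut whose above-threshold rate stays below 2 Hz for a 10 s integration; the exponential toy spectrum is hypothetical and merely stands in for a measured SiPM dark-count histogram.

```python
import numpy as np

def noise_cut(hist, t_int=10.0, target_hz=2.0):
    """Lowest ADC threshold whose above-threshold noise rate is <= target_hz."""
    above = np.cumsum(hist[::-1])[::-1]          # counts at or above each bin
    ok = np.flatnonzero(above / t_int <= target_hz)
    return int(ok[0]) if ok.size else len(hist)

# Toy stand-in for a measured 10 s SiPM dark-count histogram (hypothetical).
rng = np.random.default_rng(1)
hist = np.histogram(rng.exponential(25.0, 5000), bins=np.arange(1025))[0]
print(noise_cut(hist))                           # ADC cut giving ~2 Hz of noise
```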
The energy calibration is determined from the position of the \({}^{137}\)Cs Compton edge in ADC counts. A 0.6 \(\mu\)Ci \({}^{137}\)Cs source is placed in direct contact with one of the counters in the Di-counter. A 10 second integration is performed and the sum of the outputs from the 4 SiPMs is histogrammed. The system gain is set so that the Compton edge is at \(\simeq\) 500 ADC counts. A typical distribution is shown in Figure 8. The Compton edge gives us the equivalent energy deposition per ADC count, since we know that the energy of the Compton edge is determined from:
\[E_{Edge}=E(1-\frac{1}{1+\frac{2E}{m_{e}c^{2}}}) \tag{1}\]
where \(E\) is the \({}^{137}\)Cs photo-peak energy (662 keV). The Compton edge is thus at \(\simeq\) 480 keV. From Figure 8 we see that the Compton edge is at \(\simeq\) bin 515. The pedestal is at bin 15 (see Figure 7), so the energy per bin is equal to 480/(515\(-\)15) \(\simeq\) 1 keV. Note: the overflow bin shown in Figure 8 will also capture cosmic-ray muon events.

Figure 6: Left: SiPM module showing the 4 square SiPMs. Right: SiPM module, top view

Figure 7: Noise histogram for the sum of 4 SiPMs on one Di-counter

Figure 8: \({}^{137}\)Cs signal (source distance = 0) from the sum of 4 SiPMs on one Di-counter
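The calibration arithmetic above can be reproduced in a few lines of Python; the edge and pedestal bin positions (515 and 15) are the values read off Figures 8 and 7.

```python
# Compton-edge energy of the 662 keV 137Cs line and the implied gain,
# using the edge (bin ~515) and pedestal (bin ~15) read off Figs. 8 and 7.
E, me_c2 = 662.0, 511.0                          # keV
E_edge = E * (1.0 - 1.0 / (1.0 + 2.0 * E / me_c2))
print(E_edge)                                    # ~478 keV (the ~480 keV quoted)
print(E_edge / (515.0 - 15.0))                   # ~0.96, i.e. ~1 keV per ADC bin
```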
### Data with Cesium 137 source
The data described in this section used the same 0.6 \(\mu\)Ci \({}^{137}\)Cs source that was used for the calibration described in Section 2.1. First, data are taken without the source in place; then data are taken with the source positioned 30 cm above one of the counters in the Di-counter. The data taken without the source give us the terrestrial background rate in our laboratory.
#### 2.2.1 Di-counter with 1.4 mm WLS fiber
These tests used a Di-counter that was assembled with 1.4 mm WLS fiber. In addition, for this sample, glue (BC600) was used to fill the channel in the extrusion that holds the WLS fiber. This improves the optical coupling between the bulk scintillation light in the extrusion and the WLS fiber. The background (no source) data are shown in Figure 9. To measure the flux from the 0.6 \(\mu\)Ci \({}^{137}\)Cs source, an ADC cut at 77 yielded the best results with respect to S:N. With this ADC cut, the count rate was 391 counts for a 10 second integration with no source in place. Figure 10 shows data with the \({}^{137}\)Cs source positioned 30 cm above the counter. In this case, the count rate is 696 counts for a 10 second integration. Note: the overflow bin (1024) is not included in the hit sum.
Figure 9: 10 second integration of signal from 1 counter, no source present

The differential count rate (with source \(-\) without source) was \(\simeq\) 31 Hz. Using the source strength (0.6 \(\mu\)Ci), the source distance (30 cm), the counter dimensions (30 cm \(\times\) 5 cm \(\times\) 2 cm thick) and the stopping power of polystyrene at 662 keV, for full efficiency we would expect a count rate of 43 Hz. Therefore, this counter, as configured, is \(\simeq 70\%\) efficient for the detection of \({}^{137}\)Cs gammas that interact in the scintillator. As a cross check, for the data with no source, we integrated the flux above the ADC cut of 77, weighted by 1 keV per bin, to obtain a total integrated dose for the 10 second exposure. Assuming a 70% efficiency for the background gammas as well, and extrapolating to a 1 year exposure, we obtain \(\simeq\) 25 mrem, which is consistent with terrestrial background rates. Remember, cosmic-ray counts (overflow bin) are not included; they account for \(\simeq\) half the total yearly dose rate at sea level.
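The expected-rate estimate can be approximately reproduced with a point-source, flat-panel calculation, sketched below in Python. The mass attenuation coefficient for polystyrene at 662 keV is an assumed textbook-style value, not one quoted in the text, so the result (\(\sim\)44 Hz) only approximately matches the quoted 43 Hz.

```python
import numpy as np

activity = 0.6e-6 * 3.7e10                  # 0.6 uCi in Bq
area, dist, thick = 30.0 * 5.0, 30.0, 2.0   # counter face (cm^2), cm, cm
mu = 0.078 * 1.05                           # 1/cm: assumed mu/rho (cm^2/g) x density
geom = area / (4.0 * np.pi * dist**2)       # point-source, flat-panel approximation
p_int = 1.0 - np.exp(-mu * thick)           # 662 keV interaction probability
print(activity * geom * p_int)              # ~44 Hz, close to the 43 Hz quoted
```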
We have tested this counter's performance against a commercial standard, a Bicron Analyst. Table 1 gives the results. In each case, the system's sensitivity to our \({}^{137}\)Cs source was measured; \(\sigma\) is \(\sqrt{Bkg}\). As can be seen in Table 1, our panel outperforms the commercial standard, which uses an expensive NaI crystal for its detector.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Device & T(s) & Dist.(cm) & S & Bkg & S-Bkg(Hz) & \(\sigma\) & No. \(\sigma\) above Bkg \\ \hline \(\frac{1}{2}\) Di-Counter & 10 & 30 & 696 & 391 & 30.5 & 19.8 & 16 \\ \hline Bicron Analyst & 10 & 30 & 327 & 217 & 11 & 15 & 7 \\ \hline \end{tabular}
\end{table}
Table 1: Commercial device comparison
Figure 10: 10 second integration of signal from 1 counter, source 30 cm from counter
### Food safety monitoring test
In order to study food safety applications for our detector, we have used our prototype panel to measure the count rate for 100 grams of Brazil nuts, which have a nominal activity of 10 Bq. Food safety standards in the US use a limit of 1000 Bq/kg, but in the Far East, in part in reaction to illegal fishing after the Fukushima accident, the standards are stricter: food is deemed safe only if the activity is less than 100 Bq/kg. In our tests, we evenly spread the 100 grams of Brazil nuts directly on top of one counter of a Di-counter. The measured signal due to the activity of the nuts was \(\simeq 3\) Hz. In this study, we found that the optimal result (best S:N ratio) was achieved with an ADC cut of 190, which is significantly higher than that needed to reduce the SiPM noise to 2 Hz. The Di-counter data in Table 2 were obtained using this ADC cut and rejection of the overflow bin.
We also used the Bicron Analyst to detect the radiation from the nuts. In this case, our panel performs much better than the NaI-based detector. This is due to the large surface area of our counter (relative to the Bicron Analyst's NaI crystal), which results in a much better effective stopping efficiency for the radiation. In a practical installation at point-of-purchase, the sensitive area of the radiation panel would be roughly \(30\ \mathrm{cm}\times 30\ \mathrm{cm}\), or roughly 6 times the area of the single counter of a Di-counter used in the test described above. The background rate would therefore increase by a factor of 6, reaching \(\simeq 420\) counts in our 10 sec integration window. One sigma above background would be \(\simeq\)20 counts. For a 1 kg sample with 100 Bq of radioactive contamination placed on this 30 cm \(\times\) 30 cm counter, and based on the results given above, we would have a count above background of 300 counts (\(30\ \mathrm{Hz}\times 10\ \mathrm{sec.}\)), or \(\simeq 15\sigma\) above background. This clearly demonstrates the efficacy of our technology for flagging, with very high efficiency and a very low false-positive rate, food-borne radioactive contamination at the 100 Bq/kg level.
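The quoted significance follows from simple counting statistics, as the short Python check below shows (all inputs are numbers from the text).

```python
import numpy as np

# Numbers from the text: 70 counts / 10 s background per counter, a 6x larger
# panel, and a measured 3 Hz signal from a 10 Bq sample scaled to 100 Bq.
bkg_counts = 70.0 * 6.0                        # expected background in 10 s
signal_counts = 3.0 * (100.0 / 10.0) * 10.0    # 30 Hz for 10 s
print(signal_counts / np.sqrt(bkg_counts))     # ~15 sigma above background
```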
## 3 National security application
Given the performance shown in Section 2, we have also evaluated how well our technology would work for detecting illicit nuclear material. The US Department of Homeland Security radiation portal specification for \({}^{137}\)Cs is 100 cps/\(\mu\)Ci for 47k cm\({}^{3}\) of plastic detector at a distance of 2 m from the source. If we extrapolate our panel performance described in Section 2 to these parameters (1 \(\mu\)Ci and 47k cm\({}^{3}\) of plastic detector at a distance of 2 m), we would have an equivalent count rate of \(\simeq\) 1200 cps above background, **more than 12 times** the portal specification.
### City-wide radiation monitoring system
We can also consider the effectiveness of this technology to protect large areas from illicit radiation sources, such as might be used in a "dirty bomb" attack by terrorists. Here we give an idea of how an array of counters _(Radiation Cell Towers)_ with the type of performance we demonstrated above for \({}^{137}\)Cs could be used to cover a city. As an example, we use the NY borough of Manhattan (see Figure 11). Manhattan has a total area of \(\simeq\) 60 km\({}^{2}\). We assume the performance level indicated above and extend the area of each panel to 60 cm \(\times\) 30 cm (still small and easily deployable on light poles, for example). The metric we use here is the detection of the equivalent of an unshielded 10 mCi \({}^{137}\)Cs source. This would be equivalent to approximately 6 cm of lead surrounding a 10 kCi source. Table 3 summarizes the performance of this 60 cm \(\times\) 30 cm detector panel extrapolated from the measured detector performance described in Section 2.2.

\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline Device & T(sec) & Rate, “food” on & Rate, “food” off & Delta (Hz) & No. \(\sigma\) \\ \hline \(\frac{1}{2}\) Di-counter & 10 & 102 & 70 & 3 & 3.8 \\ \hline Bicron Analyst & 10 & 137 & 131 & 0.6 & 0.5 \\ \hline \end{tabular}
\end{table}
Table 2: Sensitivity to food radioactivity
A single panel can detect the equivalent of an unshielded 10 mCi \({}^{137}\)Cs source at a distance of 163 m, assuming a ten second integration window and a 3\(\sigma\)-above-background threshold. Based on this number, a single panel can cover an area of approximately 0.084 km\({}^{2}\). In addition, we are only considering half the solid angle here, in order to be conservative. (Note: in a real deployment, the panels would obviously be sensitive to radiation coming from all angles.) Therefore, \(\simeq\) 720 such panels could monitor the entire borough of Manhattan.
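The 163 m range, coverage area and panel count can be reproduced by inverse-square scaling of the bench measurement from Section 2.2; the Python sketch below uses only numbers taken from the text.

```python
import numpy as np

# Bench measurement (Section 2.2): 30.5 Hz above background for a 0.6 uCi
# 137Cs source 30 cm from a 150 cm^2 counter, with 391 background counts/10 s.
bkg = 391.0 * 12.0                        # 60x30 cm panel = 12x the counter area
thresh_hz = 3.0 * np.sqrt(bkg) / 10.0     # rate needed for 3 sigma in 10 s
rate_at_1m = 30.5 * (10e-3 / 0.6e-6) * 12.0 * 0.3**2   # scale activity, area, distance
d_max = np.sqrt(rate_at_1m / thresh_hz)   # metres, inverse-square law
cell_km2 = np.pi * d_max**2 / 1e6
print(d_max, cell_km2, 60.0 / cell_km2)   # ~163 m, ~0.084 km^2, ~720 panels
```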
#### 3.1.1 Cost estimate
A rough cost estimate for such a system is given here. Fully engineering an environmentally robust detector would require \(\simeq\) $10M in non-recurring engineering (NRE) costs, which is a major fraction of the total cost for a system of this size. Once in production, we estimate a unit cost of \(\simeq\) $2000, with installation and infrastructure costs of $3000 per unit, averaged over the entire system. The total investment is then on the order of $14M including NRE. A more detailed cost estimate is given in Table 4.
## 4 Neutron sensitivity enhancement
Although scintillator detectors have sensitivity to fast neutrons, it is difficult to distinguish a neutron event from ordinary ionizing radiation. In addition, plastic scintillator has very little sensitivity to thermal neutrons. The extruded plastic scintillator technology that has been developed at Fermilab can be extended to produce a neutron-sensitive plastic and, if there is an argument for including neutron detection capability, a second panel could be added to our system to include this function.

\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Area & T(s) & Bkg & \(\sigma\) = \(\sqrt{Bkg}\) & Dist. (m) when signal is 3\(\sigma\) above Bkg \\ \hline
60\(\times\)30 cm\({}^{2}\) & 10 & 4700 & 69 & 163 \\ \hline \end{tabular}
\end{table}
Table 3: Radiation Cell Towers counter performance

Figure 11: Map of Manhattan.
An extruded scintillator that is only sensitive to thermal neutrons, based on our existing technology, could be developed and put into the field. This would be of particular benefit in aiding the detection of weapons-grade plutonium (WGPu). The basic idea is to prepare polystyrene-based scintillator with \({}^{6}\)LiF nano-particles and detect neutrons via the process: n + \({}^{6}\)Li \(\rightarrow\) \({}^{7}\)Li\({}^{*}\) \(\rightarrow\) \({}^{4}\)He + \({}^{3}\)H + 4.79 MeV. The daughter particles in the reaction (\(\alpha\) + triton) deposit all their energy in a very thin layer (\(\simeq 50\) \(\mu\)m). In this way, a thin active layer can efficiently detect neutrons, while gammas and minimum-ionizing particles would deposit very little energy and their signal would fall below threshold. The thin layer would be coupled to a non-scintillating plate which would wavelength-shift the light from the neutron detection reaction. This light would be trapped in the plate and read out as described above. Our extrusion process lends itself in a very natural way to producing the needed plates.
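For reference, the energy split between the capture products follows from non-relativistic momentum conservation; below is a small Python check using integer mass numbers (an approximation we introduce, not a value from the text).

```python
# Non-relativistic two-body kinematics for n + 6Li -> 4He + 3H + Q:
# the products share Q inversely to their masses (integer mass numbers assumed).
Q, m_alpha, m_triton = 4.79, 4.0, 3.0        # MeV, amu (approximate)
print(Q * m_triton / (m_alpha + m_triton))   # alpha:  ~2.05 MeV
print(Q * m_alpha / (m_alpha + m_triton))    # triton: ~2.74 MeV
```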
## 5 Discussion
We have shown in this paper that modern extruded plastic-scintillator technology using solid-state light detectors can find far-reaching use in food safety and national security applications. There is still room for increased performance through detector optimization studies, but most of the underlying technology base is firmly in place. The cost-performance envelope for these types of systems is very attractive.
## 6 Conclusions
Our demo radiation detection panel, when extrapolated to the needs of the two applications described above, meets or exceeds their requirements. For food safety, we have shown that our technology can meet the 100 Bq/kg specification used in Asia, which is 10X lower than that used in the US. For radiation detection security applications, our system has a sensitivity 12X the DHS minimum specification. The technology we have developed for HEP applications, when properly configured, is likely to be able to significantly surpass industry specifications, while being both cost effective and mechanically robust. Directed R&D can improve performance even further.

\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Component** & **Number** & **Unit cost (\$)** & **Total (k\$)** \\ \hline Mechanical engineering (NRE) & Lot & - & 3000 \\ \hline Electrical engineering (NRE) & Lot & - & 5000 \\ \hline Software engineering (NRE) & Lot & - & 2000 \\ \hline Scintillator & 720 & 100 & 72 \\ \hline Fiber & 17280 & 3 & 52 \\ \hline Photodetector module & 17280 & 5 & 86 \\ \hline Electronics & 17280 & 20 & 346 \\ \hline Enclosure & 720 & 500 & 360 \\ \hline Assembly and test & 720 & 1000 & 720 \\ \hline Installation & 720 & 3000 & 2160 \\ \hline \hline
**TOTAL** & & & **13796** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Cost Model
|
2310.07736 | Observatory: Characterizing Embeddings of Relational Tables | Language models and specialized table embedding models have recently
demonstrated strong performance on many tasks over tabular data. Researchers
and practitioners are keen to leverage these models in many new application
contexts; but limited understanding of the strengths and weaknesses of these
models, and the table representations they generate, makes the process of
finding a suitable model for a given task reliant on trial and error. There is
an urgent need to gain a comprehensive understanding of these models to
minimize inefficiency and failures in downstream usage.
To address this need, we propose Observatory, a formal framework to
systematically analyze embedding representations of relational tables.
Motivated both by invariants of the relational data model and by statistical
considerations regarding data distributions, we define eight primitive
properties, and corresponding measures to quantitatively characterize table
embeddings for these properties. Based on these properties, we define an
extensible framework to evaluate language and table embedding models. We
collect and synthesize a suite of datasets and use Observatory to analyze nine
such models. Our analysis provides insights into the strengths and weaknesses
of learned representations over tables. We find, for example, that some models
are sensitive to table structure such as column order, that functional
dependencies are rarely reflected in embeddings, and that specialized table
embedding models have relatively lower sample fidelity. Such insights help
researchers and practitioners better anticipate model behaviors and select
appropriate models for their downstream tasks, while guiding researchers in the
development of new models. | Tianji Cong, Madelon Hulsebos, Zhenjie Sun, Paul Groth, H. V. Jagadish | 2023-10-05T00:58:45Z | http://arxiv.org/abs/2310.07736v3 | # Observatory: Characterizing Embeddings of Relational Tables
###### Abstract.
Language models and specialized table embedding models have recently demonstrated strong performance on many tasks over tabular data. Researchers and practitioners are keen to leverage these models in many new application contexts; but limited understanding of the strengths and weaknesses of these models, and the table representations they generate, makes the process of finding a suitable model for a given task reliant on trial and error. There is an urgent need to gain a comprehensive understanding of these models to minimize inefficiency and failures in downstream usage.
To address this need, we propose Observatory, a formal framework to systematically analyze embedding representations of relational tables. Motivated both by invariants of the relational data model and by statistical considerations regarding data distributions, we define eight primitive properties, and corresponding measures to quantitatively characterize table embeddings for these properties. Based on these properties, we define an extensible framework to evaluate language and table embedding models. We collect and synthesize a suite of datasets and use Observatory to analyze nine such models. Our analysis provides insights into the strengths and weaknesses of learned representations over tables. We find, for example, that some models are sensitive to table structure such as column order, that functional dependencies are rarely reflected in embeddings, and that specialized table embedding models have relatively lower sample fidelity. Such insights help researchers and practitioners better anticipate model behaviors and select appropriate models for their downstream tasks, while guiding researchers in the development of new models.
Observatory thus surfaces the strengths and limitations of nine popular models through their learned representations over tabular data, which can inform researchers and practitioners in model selection and novel model design.
Along with the implementation of Observatory, we collect and synthesize a suite of datasets for evaluation purposes, and present a comprehensive analysis of nine commonly used language and specialized table embedding models. Some key insights we surface in our analysis are that the embeddings of some models are sensitive to the order of rows and, in particular, the order of columns, while embeddings of some models are robust to uniform sampling. Moreover, we find that none of the models reflect functional dependencies among columns in tables. Although we do not aim to, and cannot, analyze all existing models, our implementation of Observatory is extensible such that researchers and practitioners can use Observatory for analysis of new models by specifying the procedure of embedding inference following the implemented interface. In summary, we make the following contributions:
* We propose Observatory, a framework including eight primitive properties and corresponding measures for systematically analyzing embedding representations over relational tables.
* We implement and open-source a prototype of Observatory, which covers nine popular table embedding models while also being extensible for evaluation of new models.
* We present a comprehensive analysis with Observatory and provide novel insights into the strengths and limitations of evaluated models and their learned table representations.
## 2. Related Work
### Language and Table Embedding Models
**Language Models.** One of the first pretrained language models (LMs) based on the transformer architecture is BERT (Han et al., 2017), which obtains contextual embeddings of natural language by predicting masked tokens. Subsequently, LMs advance rapidly in terms of their capability. Optimizations such as RoBERTa (Rosenbaum et al., 2017) and expansions in model size and tasks (e.g., T5 (Rosenbaum et al., 2017)) have contributed to this progress. These models soon evolve from being trained using predictive tasks to generating sequences from given language inputs, as demonstrated by the series of GPT-models (Rosenbaum et al., 2017). In addition to advances in unstructured language tasks, there have been investigations into the capabilities of language models for structured inputs, such as tabular data. Narayan et al. (Narayan et al., 2018), for example, explore the potential of LMs for data wrangling using T5. More recently, the latest GPT-based conversational models have been directly leveraged for table understanding tasks (Narayan et al., 2019; Narayan et al., 2020).
**Table Embedding Models.** TaBERT (Rosenbaum et al., 2019) is one of the early works extending the capabilities of pretrained language models to tabular data. Table embedding models capture embeddings at the token level and add additional positional embeddings for aggregating embeddings to the row or column level. TaBERT adapts several language modeling techniques for the tabular structure, such as vertical attention to capture information across rows and the pretraining objective of masked column name prediction (which is inspired by the masked language modeling objective of BERT). Thereafter a line of work has emerged including TURL (Turlan et al., 2019), TAPAS (Narayan et al., 2019), and TaPEx (Narayan et al., 2020). These models have enabled applications such as table question answering, table understanding, and data preparation. For a comprehensive overview of table embedding models and their applications, we refer readers to surveys by (Bedarao et al., 2019; Narayan et al., 2020). Badaro et al. (Bedarao et al., 2019) highlight the need for intrinsic analysis of table embedding models, which we take a first step towards addressing with Observatory.
### Analysis of Embedding Models
**Analysis of Language Embedding Models.** Significant effort has been made to understand and assess LMs. A straightforward source of understanding is the performance of a given model across different tasks (Narayan et al., 2019). In addition to performance analysis on downstream tasks, task-agnostic analyses have been conducted to study the internal behavior and capacities of LMs. An example of such an analysis is CheckList (Rosenbaum et al., 2019), which explores capabilities of LMs (e.g., handling negation) and probes models with unit-test-like assessments. In a similar spirit, Observatory proposes properties and measures motivated by the relational data model and practical factors of data distributions considered in downstream applications. Lately, Sui et al. (Sui et al., 2019) introduce a benchmark to examine Structural Understanding Capabilities of LMs. This benchmark evaluates the performance of LMs on seven table tasks (e.g., cell lookup) while varying, among others, prompt designs and table input formatting. This analysis does not investigate how fundamental properties of relational tables and data distributions are reflected by the models, and it excludes specialized table embedding models.
Figure 1. Overview of Observatory and how it solicits understanding of opaque table embedding models by measuring properties motivated by the relational data model and data distributions. We illustrate the framework for two out of eight properties: 1) row order insignificance, and 2) sample fidelity.
**Analysis of Table Embedding Models.** Few analyses have been conducted regarding table embedding models. Wang et al. (Wang et al., 2018) inspect the effectiveness of explicitly modeling table structure in transformer architectures for table retrieval, revealing the limited contribution of table-specific model design. This evaluation is strictly focused on retrieval tasks and does not provide an understanding of the intrinsic limitations of models that affect downstream performance. Dr.Spider (Wang et al., 2018) benchmarks the perturbation robustness of text-to-SQL models through synonyms, abbreviations, and other variations. Observatory incorporates perturbation robustness and introduces seven additional properties that have not been studied before. Recent work (Wang et al., 2019) proposes LakeBench, a collection of benchmarks for data discovery over data lakes. While LakeBench identifies performance gaps across specialized table embedding models, Observatory evaluates embedding representations over table-specific properties that relate to a wide range of downstream tasks. Koleva et al. (Koleva et al., 2019) examine patterns of various table-specific attention mechanisms by inspecting aggregate attention over different inputs. While it is task-agnostic, unlike Observatory, it does not connect the analysis of models with relational and data distribution properties of tables.
## 3. Observatory
In this section, we present Observatory, our methodology for characterizing embedding representations over relational tables. Observatory features two sets of properties that are agnostic to downstream tasks and motivated by the relational model (Koleva et al., 2019; Wang et al., 2018) and data distributions. For each property, Observatory proposes a measure to quantify how well embedding representations align with the property specification. This allows users to gain insights into the strengths and weaknesses of different models and to even compare models through a consistent lens.
### Problem Statement
Various downstream applications may need different kinds of embeddings. For example, semantic column type detection is based on column embeddings whereas entity matching requires entity embeddings. Given that these embeddings look at different _levels of aggregation_ of the table structure, we refer to these kinds of embeddings as levels of embeddings.
**Definition 1** (Table Embedding Characterization).: Given a pre-trained model \(f\), a corpus of tables \(T\in\mathcal{T}\), and a property \(\mathcal{P}\) that characterizes a certain level of embeddings \(\mathbf{E}_{\mathcal{P}}\) with a measure \(\mathcal{M}\), table embedding characterization infers \(\mathbf{E}_{\mathcal{P}}\) with \(f\) over each \(T\in\mathcal{T}\) and computes \(\mathcal{M}\) over the distribution of \(\mathbf{E}_{\mathcal{P}}\).
A property \(\mathcal{P}\) can characterize one or more levels of embeddings (e.g., it can apply to both row- and column-level embeddings). Properties in Observatory span five levels of embeddings: table, column, row, cell, and entity (collectively referred to as _table embeddings_), while many of them are relevant to column-level embeddings. Observatory also focuses on Transformer-based embedding models. Technically, any pretrained model \(f\), regardless of the architecture (encoder, encoder-decoder, or decoder-only), can be integrated into and evaluated with Observatory, as long as \(f\) either natively exposes the level of embeddings \(\mathbf{E}_{\mathcal{P}}\) specified by \(\mathcal{P}\) or exposes token-level embeddings that can be further aggregated to the level of \(\mathbf{E}_{\mathcal{P}}\).
### Relational Properties
The relational data model specifies both structural invariants and semantics. We first introduce two properties from structural invariants (namely, Row- and Column Order Insignificance), followed by two properties from structural semantics (namely, Join Relationship and Functional Dependencies).
**Property 1** (Row Order Insignificance).: A relational table can be viewed as a set of rows of which, in principle, the order is insignificant (Koleva et al., 2019). Tables may be stored in an ordered way, that is, rows may be ordered by dates, or ascending/descending values of a given column. Models that explicitly encode the table structure with position embeddings might reflect this order in the output embeddings. Awareness of the influence of row order on table embeddings is key to using them in a context of unordered tables. We consider column/row/table-level embeddings in this property.
**Measure 1**.: Given a table \(T\), let \(\mathbf{E}(D^{(i)})\) denote the embedding of column/row/table \(D\) in the \(i\)-th row-wise shuffle of \(T\) for \(1\leq i\leq n\) (i.e., there are \(n\) row-wise permutations). We define the row order sensitivity as a high-dimensional dispersion measure \(\mathcal{M}\) of \(n\) samples drawn from the embedding distribution, i.e.,
\[\mathcal{M}(\,\mathbf{E}(D^{(1)}),\mathbf{E}(D^{(2)}),\dots,\mathbf{E}(D^{(n)} )\,)\,.\]
The coefficient of variation (CV), the ratio of the standard deviation to the mean in the univariate setting, is a well-known measure of variability relative to the mean of a population. It has the merit of allowing for the comparison of random variables with different units or different means. Thus, we consider multivariate extensions of CV (MCV) that summarize relative variation of a random vector (instead of a random variable) into a scalar quantity. In particular, we use Albert and Zhang's MCV (Albert and Zhang, 2018) to compare row order sensitivity across models for the reasons that it takes into account correlations between variables and does not require the covariance matrix to have an inverse (Albert and Zhang, 2018; Albert and Zhang, 2018), which is especially convenient when the number of observations (number of embeddings) is smaller than the number of variables (embedding dimensionality). Albert and Zhang's MCV of embeddings \(\{\mathbf{E}(D^{(i)})\}_{i=1}^{n}\) is computed as
\[\gamma_{AZ}=\sqrt{\frac{\mu^{t}\Sigma\mu}{(\mu^{t}\mu)^{2}}} \tag{1}\]
where \(\mu\) is the mean vector and \(\Sigma\) is the covariance matrix.
In practice, the number of possible permutations can be large (i.e., factorial of the number of rows) for tables with high cardinality. For computational efficiency in the experiments, we use at most 1000 randomly generated permutations of each table.
**Example.** Figure 2 gives an example of row permutations. Given 6 data rows, there are in total \(6!=720\) possible permutations. Then for each column, we have 720 observations of embeddings, which is smaller than typical embedding dimensionalities (e.g., 768 for BERT). In this case, the covariance matrix derived from the observations is singular. Nevertheless, Albert and Zhang's MCV can be calculated whereas the other MCVs surveyed in (Albert and Zhang, 2018) cannot.
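As a concrete illustration of Measure 1, the following sketch computes Albert and Zhang's MCV (Equation 1) over row-shuffled embeddings. The `embed_column` function is a hypothetical stand-in for model inference, not part of Observatory's published interface:

```python
import numpy as np

def albert_zhang_mcv(E):
    """Albert and Zhang's MCV of n embedding observations (Equation 1).

    E: array of shape (n, d), one row per shuffled-table embedding.
    No matrix inverse is required, so a singular covariance (n < d) is fine.
    """
    mu = E.mean(axis=0)                  # mean vector, shape (d,)
    Sigma = np.cov(E, rowvar=False)      # covariance matrix, shape (d, d)
    return float(np.sqrt(mu @ Sigma @ mu / (mu @ mu) ** 2))

def row_order_sensitivity(rows, embed_column, n_perm=1000, seed=0):
    """MCV of a column embedding across random row-wise permutations."""
    rng = np.random.default_rng(seed)
    obs = []
    for _ in range(n_perm):
        perm = rng.permutation(len(rows))
        # embed_column is a hypothetical helper mapping a list of rows
        # to one column embedding vector for the column of interest.
        obs.append(embed_column([rows[i] for i in perm]))
    return albert_zhang_mcv(np.stack(obs))
```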
**Property 2** (Column Order Insignificance).: Besides row order, some models exploit neighboring columns as context when learning representations, based on the intuition that neighboring columns can provide local context [53, 55]. Analogous to row order insignificance, relational tables usually store data without preserving a particular column order. The (in)sensitivity of embeddings regarding the column order informs their suitability for tasks such as join discovery and table understanding in relational databases with unordered tables, versus views on the Web and other media that may present data with related attributes next to each other. As in Property 1, we assess column/row/table embeddings.
**Measure 2**.: Given a table \(T\), let \(\mathbf{E}(D^{(i)})\) be the embedding of column/row/table \(D\) in the \(i\)-th column-wise shuffle of \(T\). Similarly, we measure the embedding variance using MCV in equation 1.
**Property 3** (Join Relationship).: The join operation, combining tuples from two or more relational tables, is one of the essential operations for data analysis. Thus the problem of finding join candidates in a table repository has been extensively studied [15, 21, 54, 58, 59]. Join candidates are typically identified by some notion of value overlap similarity such as Jaccard and containment [58, 59, 21], while the embedding approach has also been explored [15]. Their findings indicate that columns with significant value overlap are also close to each other in the embedding space. We investigate this postulate by assessing if there is a monotonic relationship between value overlap and embedding similarity.
**Measure 3**.: Consider pairs of query and candidate columns (\(C_{q}\), \(C_{c}\)) and their corresponding embeddings (\(\mathbf{E}(C_{q}),\mathbf{E}(C_{c})\)). Two random variables can be derived, the embedding similarity measure \(\mathcal{M}(\mathbf{E}(C_{q}),\mathbf{E}(C_{c}))\) and the value overlap measure \(\mathcal{R}(C_{q},C_{c})\). In experiments, we use cosine similarity for \(\mathcal{M}\) and containment for \(\mathcal{R}\), where \(\mathcal{R}=\frac{|C_{q}\cap C_{c}|}{|C_{q}|}\) and is not biased towards small sets [58, 59]. For completeness, we also experiment with Jaccard similarity (i.e., \(\frac{|C_{q}\cap C_{c}|}{|C_{q}\cup C_{c}|}\)) and multiset Jaccard similarity (i.e., \(\frac{|C_{q}\cap C_{c}|}{|C_{q}|+|C_{c}|}\) with columns treated as multisets, so its maximum value is 0.5) for measuring overlap.
With embedding similarity measure \(\mathcal{M}\) and value overlap measure \(\mathcal{R}\) calculated over \(n\) pairs of query and candidate columns \(\{(M_{1},R_{1}),(M_{2},R_{2}),\ldots,(M_{n},R_{n})\}\), we compute the Spearman's rank correlation coefficient between \(\mathcal{M}\) and \(\mathcal{R}\) as
\[\rho=\frac{\text{cov}(R(\mathcal{M}),R(\mathcal{R}))}{\sigma_{R(\mathcal{M})} \sigma_{R(\mathcal{R})}} \tag{2}\]
where \(R(\cdot)\) denotes the rank of a sample, \(\text{cov}(\cdot,\cdot)\) is the covariance of the rank variables, and \(\sigma_{(\cdot)}\) denotes the standard deviation.
Note that the Spearman coefficient ranges between -1 and 1, and considers the ranking values of two variables instead of raw variable values. A coefficient of 1 means the rankings of each variable match up for all data pairs and indicates there is a very strong positive monotonic relationship between two variables. We adopt the Spearman coefficient since it does not make any assumption of the underlying variable distributions.
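A minimal sketch of Measure 3 follows, assuming column values and precomputed embeddings are available as plain Python lists and NumPy arrays; the multiset Jaccard normalization matches the definition above (maximum 0.5):

```python
from collections import Counter
import numpy as np
from scipy.stats import spearmanr

def containment(cq, cc):
    return len(set(cq) & set(cc)) / len(set(cq))

def multiset_jaccard(cq, cc):
    q, c = Counter(cq), Counter(cc)
    inter = sum(min(q[v], c[v]) for v in q)  # multiset intersection size
    return inter / (len(cq) + len(cc))       # maximum possible value is 0.5

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def rank_correlation(pairs, overlap=multiset_jaccard):
    """pairs: (query_values, cand_values, query_emb, cand_emb) tuples."""
    M = [cosine(eq, ec) for _, _, eq, ec in pairs]
    R = [overlap(cq, cc) for cq, cc, _, _ in pairs]
    rho, p = spearmanr(M, R)  # Spearman's rank correlation (Equation 2)
    return rho, p
```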
**Property 4** (Functional Dependencies).: Let \(T\) be a relation with a set of attributes \(U\). Relation \(T\) over \(U\) is said to satisfy a functional dependency, denoted \(T\models X\to Y\) where \(X,Y\subset U\), if for each pair \(s,t\) of tuples in \(T\), \(\pi_{X}(s)=\pi_{X}(t)\) implies \(\pi_{Y}(s)=\pi_{Y}(t)\)[1]. Functional dependencies between columns provide a formal mechanism to express semantic constraints to the stored data, which is useful in many applications such as improving schema design, data imputation, and query optimization.
This property surfaces whether models implicitly capture the relationship of functional dependencies in their representations (we are not aware of any model that explicitly takes functional dependencies into consideration in pretraining). Analogous to relationships between words [31] and entities in knowledge bases [9], the functional dependency relationship can be interpreted as a translation in the embedding space. Consider the relation triple (\(\pi_{X}(s)\), \(r\), \(\pi_{Y}(s)\)), where \(r\) is the functional dependency relationship between the value pair \(\pi_{X}(s)\), \(\pi_{Y}(s)\). As demonstrated in [9], such a relationship is reflected as a _translation_ between the embeddings \(\mathbf{E}(\pi_{X}(s))\) and \(\mathbf{E}(\pi_{Y}(s))\). The translation vector represents relationship \(r\), which can be expected to remain equal in direction and magnitude across tuples if the relationship is preserved [9, 31]. More precisely, consider any pair \(s,t\) of tuples in \(T\) with a functional dependency \(X\to Y\). We say that this functional dependency is preserved in an embedding space determined by a model \(f\) if
\[d(\ \mathbf{E}(\pi_{X}(s)),\ \mathbf{E}(\pi_{Y}(s))\ )=d(\ \mathbf{E}(\pi_{X}(t)),\ \mathbf{E}(\pi_{Y}(t))\ )\]
given \(\pi_{X}(s)=\pi_{X}(t)\) where \(\mathbf{E}(\cdot)\) is the embedding inferred with \(f\) and \(d\) denotes a distance metric preserving direction and magnitude.
**Example.** Consider a table \(T\) containing four columns in Figure 3. There exists a functional dependency between non-key attributes country and continent, i.e., country \(\to\) continent. \(T\) satisfies this functional dependency because every instance of a specific value in column country, _Netherlands_ for example, corresponds to the same value, i.e. _Europe_, in the corresponding tuples under column continent. By our definition, if an embedding space preserves functional dependencies, the squared Euclidean distances between embeddings generated for these specific value pairs will be (approximately) equal, despite influence of context on the embeddings.
Figure 3. Table with a functional dependency country \(\to\) continent. The colors illustrate different FD groups determined by the unique values in the country column.
Figure 2. Illustration of row permutations.
**Measure 4**.: Given a table \(T\) with functional dependency \(X\to Y\), we refer to the group of tuples \(\pi_{X\cup Y}\) with the same value \(v_{X}\) of determinant \(X\) as FD-group \(\mathcal{G}_{\text{\tiny RX}}\), to the value associated with \(v_{X}\) in the dependent attribute set \(Y\) as \(v_{Y}\), and to the embeddings of these values of the \(i\)-th entry in the group as \(\mathbf{E}(v_{X,i})\) and \(\mathbf{E}(v_{Y,i})\), respectively. For instance, there are three FD-groups under the functional dependency \(\mathsf{country}\to\mathsf{continent}\) in the table shown in Figure 3, i.e., (Netherlands, Europe), (Canada, North America), (USA, North America) where the FD-group (Netherlands, Europe) has three entries.
Within each FD-group \(\mathcal{G}_{j}\) of size \(m_{\mathcal{G}_{j}}\), we calculate distance metric \(d\) for each embedding pair \((\mathbf{E}(v_{X,i}),\mathbf{E}(v_{Y,i}))\), denoted as \(d_{ji}\). The average group-wise variance over all \(n\) FD-groups is calculated as:
\[\overline{S^{2}}=\frac{1}{n}\sum_{j=1}^{n}\frac{\sum_{i=1}^{m_{\mathcal{G}_{j} }}||d_{ji}-\overline{d}_{j}||_{2}^{2}}{m_{\mathcal{G}_{j}}-1}\]
In our experiments, we take as distance metric \(d\) the \(L_{1}\)- or \(L_{2}\)-norm following (Becker et al., 2017), while other distance metrics preserving norm direction and magnitude are valid too. \(\overline{S^{2}}\) approaches \(0\) if the _translation_ between the group-wise FD value pairs in \(X\) (\(\mathsf{country}\)) and \(Y\) (\(\mathsf{continent}\)) remains approximately equal for each FD group. We note that this does not require a strictly injective model. That is, the same value across different table contexts is not necessarily mapped to exactly the same vector in the embedding space in order for this measure to approach \(0\).
In addition, it is expected that this measure shows higher value ranges over column sets without functional dependencies. We collect a set of functional dependencies over tables \(\mathcal{T}_{FD}\) and a set of tables \(\mathcal{T}_{\neg FD}\) in which no table contains functional dependent columns. We calculate the measure for all tables in the sets \(\mathcal{T}_{FD}\) and \(\mathcal{T}_{\neg FD}\). This yields two distributions of \(\overline{S^{2}}\) values. If the embeddings preserve functional dependencies, \(\overline{S^{2}}\) values over \(\mathcal{T}_{FD}\) will be close to \(0\) and in general smaller than those over \(\mathcal{T}_{\neg FD}\).
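A sketch of Measure 4, under the assumptions that cell embeddings are obtainable per row (here via a hypothetical `emb(value, row_id)` helper) and that \(d\) is the translation vector between determinant and dependent cells:

```python
from collections import defaultdict
import numpy as np

def fd_group_variance(cell_pairs, emb):
    """Average group-wise variance of FD translation vectors.

    cell_pairs: (v_x, v_y, row_id) triples under a candidate FD X -> Y;
    emb(value, row_id) is a stand-in returning a contextual cell embedding.
    """
    groups = defaultdict(list)
    for v_x, v_y, rid in cell_pairs:
        # translation between determinant and dependent cell embeddings
        groups[v_x].append(emb(v_y, rid) - emb(v_x, rid))
    variances = []
    for vecs in groups.values():
        if len(vecs) < 2:
            continue  # variance undefined for singleton FD-groups
        D = np.stack(vecs)
        d_bar = D.mean(axis=0)
        variances.append(((D - d_bar) ** 2).sum() / (len(vecs) - 1))
    return float(np.mean(variances))
```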
### Data Distribution Properties
In practice, many aspects need to be considered when using embeddings including but not limited to the sample size, domain generalizability, robust representations of semantically similar values, and context. We introduce four properties involving data distributions that concern these four aspects.
**Property 5** (Sample Fidelity).: Large relational tables can easily have millions or even billions of rows. Embedding an entire table or even a single large column with a model is often infeasible due to constraints on the input length of models or memory constraints of computing resources. On the other hand, it may not be necessary to embed the full table for a downstream task (Han et al., 2017; Wang et al., 2018; Wang et al., 2018). In practice, existing work resorts to sampling, either up to the input limit or based on content relevance, as a straightforward workaround. While sampling provides a feasible solution, it also introduces a trade-off between computational cost and the fidelity of the embedding inferred from a smaller sample compared to the embedding that would have been obtained if the entire dataset were used. It is then essential to understand the fidelity of sample embeddings from a model by evaluating the extent to which sample embeddings deviate from the embeddings of full values.
**Measure 5**.: Given a full column \(C\) and a sample \(C_{\mathcal{S}}\), we define sample fidelity as a similarity measure \(\mathcal{M}\) between the embedding of the full column \(\mathbf{E}(C)\) and the sample embedding \(\mathbf{E}(C_{\mathcal{S}})\) where \(\mathcal{M}\) can be cosine similarity for instance. Similar to (Wang et al., 2018), we split a full column into chunks with the shared header and obtain the full embedding by aggregating the chunk embeddings. This is because a full column may not fit into a single sequence for model ingestion.
For each column \(C\), we perform uniform random sampling to get \(n\) distinct samples \(\{C_{1},C_{2},\ldots,C_{n}\}\) from \(C\) and report the average column sample fidelity
\[\frac{1}{n}\sum_{i=1}^{n}\mathcal{M}(\mathbf{E}(C),\mathbf{E}(C_{i}))\]
as well as the multivariate coefficient of variation over the embedding set \(\{\mathbf{E}(C),\mathbf{E}(C_{1}),...,\mathbf{E}(C_{n})\}\). Since tables in a corpus may have various sizes, we experiment with different sampling fractions (e.g., \(0.25\), \(0.5\), and \(0.75\)) instead of varying the absolute number of samples in evaluations.
This simple measure gives a good indication of computing efficiency and monetary cost. For example, provided that cloud vendors take a pay-as-you-go model, users do not need to pull out all their data to infer embeddings and pay the full scanning cost.
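A minimal sketch of Measure 5, again treating model inference as a black-box `embed_column` helper (an assumption, not Observatory's API):

```python
import numpy as np

def sample_fidelity(values, embed_column, ratio=0.5, n_samples=10, seed=0):
    """Average cosine similarity between full-column and sample embeddings."""
    rng = np.random.default_rng(seed)
    full = embed_column(values)  # in practice: chunked and then aggregated
    sims = []
    for _ in range(n_samples):
        k = max(1, int(ratio * len(values)))
        idx = rng.choice(len(values), size=k, replace=False)
        sample = embed_column([values[i] for i in idx])
        sims.append(full @ sample /
                    (np.linalg.norm(full) * np.linalg.norm(sample)))
    return float(np.mean(sims))
```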
**Property 6** (Entity Stability).: Stability is a notion in NLP (Han et al., 2017; Wang et al., 2018) that indicates the variability of word embeddings relative to training data, training algorithms, and other factors in embedding model training. The idea is to use the overlap between \(K\) nearest neighbors of queries (i.e., words) found in different embedding spaces1 as a proxy of agreement between embedding spaces. We borrow this notion to explore the (in)stability of entity embeddings.
Footnote 1: An embedding space refers to a vector space that represents an original space of inputs (e.g., words or table columns).
Given \(n\) embedding spaces determined by embedding models \(\mathbf{f}_{1},\mathbf{f}_{2},\ldots,\mathbf{f}_{n}\), consider an entity cell \(\mathbf{e}=(e_{m},e_{md})\) in a relational table where \(e_{m}\) is the entity mention and \(e_{md}\) is associated metadata, if present (such as the entity linked to the cell from a knowledge base, the column name, and the table caption). Retrieve the \(K\) nearest neighbor entities of \(\mathbf{e}\) in each embedding space. The stability of entity \(\mathbf{e}\) across \(n\) embedding spaces is defined as the average over all pairwise percent overlaps between two embedding spaces.
**Example**.: Take the entity column _competition_ in Figure 2 for example. _World Championships_ is an entity mention that links to a Wikipedia entity _1997 World Championships in Athletics - Men's Decathlon_. Depending on the context, the same entity mention may link to another distinct entity, for instance, _BWF World Championships_.
**Measure 6**.: We consider the case when \(n=2\) (i.e., two embedding models \(\mathbf{f}_{1}\) and \(\mathbf{f}_{2}\)). We randomly sample \(m\) entities, and for each entity \(\mathbf{e}_{i}\), let \(s_{1}^{i}\) and \(s_{2}^{i}\) be the sets of \(K\) nearest neighbors of \(\mathbf{e}_{i}\) in the two embedding spaces, respectively. We compute the average entity stability as
\[\frac{1}{m}\sum_{i=1}^{m}\frac{|s_{1}^{i}\cap s_{2}^{i}|}{K}\]
which ranges between 0 and 1. A value of 1 indicates a perfect agreement between two embedding spaces while 0 indicates a complete disagreement.
For entity-centric downstream tasks, one can run this experiment over a model \(\mathbf{f_{1}}\) to first see if the retrieved sets of \(K\) nearest neighbors to entities of interest fit their task domains. If not, one may want to try a different model \(\mathbf{f_{2}}\) with a low entity stability relative to \(\mathbf{f_{1}}\). This is because a model with high entity stability relative to \(\mathbf{f_{1}}\) will be more likely to retrieve a set of entities similar to that of \(\mathbf{f_{1}}\) and fail to fit task domains as well.
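A sketch of Measure 6 for two embedding spaces, assuming the same \(N\) candidate entities are embedded by both models and stored row-aligned in two matrices:

```python
import numpy as np

def knn(query, pool, k):
    """Indices of the k nearest pool embeddings by cosine similarity."""
    q = query / np.linalg.norm(query)
    P = pool / np.linalg.norm(pool, axis=1, keepdims=True)
    return set(np.argsort(-(P @ q))[:k].tolist())

def entity_stability(queries1, pool1, queries2, pool2, k=10):
    """Average K-NN overlap of the same m query entities in two spaces.

    queries1/queries2: (m, d1)/(m, d2) query embeddings from models f1/f2;
    pool1/pool2: (N, d1)/(N, d2) embeddings of the same N entities, in the
    same order, so neighbor indices are comparable across spaces.
    """
    overlaps = [len(knn(q1, pool1, k) & knn(q2, pool2, k)) / k
                for q1, q2 in zip(queries1, queries2)]
    return float(np.mean(overlaps))
```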
**Property 7** (Perturbation Robustness).: Neural model performance has been found to be vulnerable to input perturbations. For example, state-of-the-art text-to-SQL models are shown to suffer from nuanced perturbations to database tables, natural language questions, and SQL queries (Krishnan et al., 2017). Such perturbations are designed to preserve semantics and can reveal a model's capacity to capture semantics. We hypothesize that preserving semantic similarities in the embedding space is key, especially for downstream tasks such as retrieval, text-to-SQL, and question answering. We therefore inspect the impact of input perturbations in the embedding space by measuring the robustness of column-level embeddings with respect to semantics-preserving perturbations.
**Example.** Three database perturbations curated by (Krishnan et al., 2017) include schema-synonym, schema-abbreviation, and column-equivalence. schema-synonym and schema-abbreviation replace the name of a column with its synonym ("country" \(\rightarrow\) "nation") and abbreviation ("CountryName" \(\rightarrow\) "cntry_name"), respectively. column-equivalence further perturbs both column names and contents, and may replace numerical columns with semantically equivalent ones ("age" \(\rightarrow\) "birthyear").
**Measure 7**.: Given a set of original columns \(\{C_{i}\}_{i=1}^{n}\), we consider a set of perturbed variants \(\{C^{\prime}_{i}\}_{j=1}^{m_{i}}\) for each \(C_{i}\). The perturbations are semantics-preserving and can be at the schema level or data level or both. We compute the embedding cosine similarity of \((\mathbf{E}(C_{i}),\mathbf{E}(C^{\prime}_{i,j}))\) and average over all \(m_{i}\) pairs for each \(C_{i}\). We draw a distribution plot of average cosine similarity over \(\{C_{i}\}_{i=1}^{n}\) across models and also report a single number of cosine similarity averaged over all \(\sum_{i=1}^{n}m_{i}\) pairs for each model.
**Property 8** (Heterogeneous Context).: Unlike coherent natural language sequences, tables are typically more heterogeneous, comprising various types of data such as numeric, categorical, and date-time. As table embedding models mostly extend the architecture of language models and by default take context into consideration, it is less clear how much influence context has on embedding representations, especially for numeric data (Krishnan et al., 2017; Wang et al., 2018). Without context (e.g., subject columns2 or neighboring columns), non-textual types of data, especially numerical columns, are typically hard to discriminate. Thus, it is important to understand the impact of context for many downstream tasks like semantic type prediction and relation extraction. In this property, we probe the difference between contextual column embeddings and single-column embeddings for both textual and non-textual types of data.
Footnote 2: The subject column of a table, if exists, contains the entities the table pertains to.
**Example.** Figure 4 shows a table from the SOTAB benchmark (Krishnan et al., 2017). The table does not have a header and consists of both textual and non-textual data columns. Without context, column 4 is hard to interpret on its own, which could be percentages, prices or any metric numbers. However, the neighbor column to the right, namely column 5, which refers to the currency of Romania, can provide clues to the semantic meanings of column 4. In this context, it is more likely that column 4 contains price values.
**Measure 8**.: To measure the effect of context, we consider four different input settings to get column embeddings, as specified below; a minimal sketch of assembling these input variants follows the list. We compare embeddings of single columns with contextual column embeddings using their cosine similarity.
1. Only the column itself;
2. Subject column as context (if not exist, use the first textual column from the left of a table as the proxy);
3. Immediate neighboring columns on both sides as context;
4. The entire table as context.
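The sketch below assembles the four input settings for a given column; the table layout and helper names are illustrative assumptions, with each variant then serialized and embedded as described in Section 4.3:

```python
def column_input_variants(table, col_idx, subject_idx=0):
    """Build the four context settings for column `col_idx`.

    table: list of columns, each a list of cell values; subject_idx points
    to the subject column (or the first textual column as its proxy).
    """
    col = table[col_idx]
    left = table[col_idx - 1] if col_idx > 0 else None
    right = table[col_idx + 1] if col_idx + 1 < len(table) else None
    return {
        "single": [col],                                            # setting 1
        "subject": [table[subject_idx], col],                       # setting 2
        "neighbors": [c for c in (left, col, right) if c is not None],  # 3
        "table": table,                                             # setting 4
    }
```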
## 4. Experiment Setup
### Embedding Models
We consider well-established models and their variants that have been adopted for data management problems and open-sourced for public access. In particular, we select representative models from two categories: LMs and specialized table embedding models. Vanilla LMs are those designed for modeling natural language sequences and thus do not take into account the structure of tables or tabular data distributions. We include them in Observatory for comparison as many table embedding models share very similar architectures with weights initialized from LMs.
**Language Models.** We include BERT (Krishnan et al., 2017), RoBERTa (Wang et al., 2018), and T5 (Krishnan et al., 2017). BERT is a pioneer transformer-based model that learns contextual representations from unlabeled text. RoBERTa builds on top of BERT and systematically studies the impact of key hyperparameters and training data size. Both models are go-to options for a wide range of NLP tasks and are bases for many tabular language models. T5 is a representative of large language models whose largest variant has 11 billion parameters. We use base versions of all three models from the HuggingFace library (HuggingFace, 2019) in the experiments.
**Table Embedding Models.** We include TURL (Krishnan et al., 2017), DODUO (Wang et al., 2018), TAPAS (Krishnan et al., 2017), TaBERT (Wang et al., 2018), TaPEx (Wang et al., 2018), and TapTap (Wang et al., 2019). TURL, TAPAS, TaBERT, TaPEx, and TapTap first pretrain models over tables in an unsupervised manner by, for example, predicting masked column names or query execution results. The pretrained models are then fine-tuned for particular downstream tasks. We use pretrained
Figure 4. A table (without header) comprising textual and non-textual data columns.
models in the experiments as prescribed in our problem statement. DODUO directly fine-tunes a BERT-based model with labeled data from downstream tasks. See Table 1 for an overview of model specifications. The models we assess in experiments cover all levels of output embeddings, i.e., column, row, cell, and table embeddings.
### Datasets
We use both relational database tables and web tables for evaluation.

**WikiTables**. The WikiTables (Han et al., 2017) corpus contains 1.6M HTML tables of relational data extracted from Wikipedia pages. TURL pre-processes WikiTables and obtains an entity-rich dataset of 670,171 tables. We use the test partition released by TURL (Han et al., 2017).
**Spider**. Spider (Srivastava et al., 2016), a widely-used semantic parsing and text-to-SQL dataset, includes 5,693 SQL queries over 200 databases across domains. We use the development set (Srivastava et al., 2016) and run HyFD (Srivastava et al., 2016), a functional dependency discovery algorithm, to create a dataset with annotated functional dependencies. To avoid mining a massive number of functional dependencies, we set the size of the determinant to 1 and found 713 functional dependencies. We also collect an equal number of random pairs of columns without the relationship of functional dependencies for our experiments.
**Dr.Spider**. Dr.Spider (D'Srivastava et al., 2016) designs perturbations to databases, natural language questions, and SQL queries in Spider to test the robustness of text-to-SQL models. We take advantage of database perturbation tests in Dr.Spider (D'Srivastava et al., 2016) to evaluate the property of perturbation robustness.
**NextiaJD**. Flores et al. (Floers et al., 2016) collected 139 datasets from open repositories such as Kaggle and OpenML for predicting joinable columns. They also divided datasets into four testbeds based on dataset file size. For example, NextiaJD-XS includes datasets smaller than 1 MB while NextiaJD-L consists of datasets larger than 1 GB. Candidate pairs of columns are labeled with the join quality using a measure that takes account of both containment and cardinality proportion with empirically determined thresholds. For our evaluation, we use all pairs with join quality greater than 0.
**SOTAB**. The Schema.org Table Annotation Benchmark (Snim et al., 2016) provides about 50,000 annotated tables collected from the WDC Schema.org Table Corpus for both column type and column property annotation tasks. We extract a subset that contains 5,000 tables for 20 semantic data types. The subset is balanced in terms of the number of non-textual and textual data types. Non-textual types include DATE, ISBN, POSTAL CODES, MONEY (monetary values), and QUANTITY (measurements such as weight). We use this subset for measuring the property of Heterogeneous Context.
Note that a dataset may not accommodate all the properties. For example, WikiTables does not have information about which two columns can be joined, so we do not measure the property of Join Relationship over WikiTables. On the other hand, properties such as Functional Dependencies and Heterogeneous Context require synthesized datasets for evaluation purposes. Table 2 summarizes the datasets and assessed models for each property. Also note that TURL, TaBERT, and TapTap are excluded from certain experiments. This is because TURL is designed and implemented to output embeddings from entity-rich tables like those in WikiTables; TaBERT yields only column embeddings after the fusion of the vertical attention mechanism; and TapTap encodes single rows independently using a text template serialization strategy and only gives row embeddings.
### Implementation
In general, we follow the original papers and their implementations in our evaluation. However, there are subtleties where extra consideration is needed, such as aligning the input and output across models for fair comparison. We make (minimal) design decisions in our implementation as discussed below.
**Table Serialization.** As Transformer-based models expect to take sequence inputs, a key input processing step is to serialize two-dimensional tabular data into flattened sequences of tokens. Table embedding models considered in this analysis generally follow two common types of serialization methods.
1. Row-wise serialization. Tables are parsed by rows, which are further concatenated with optional insertions of special tokens as delimiters. TURL, TAPAS, and TaBERT fall under this category despite the difference that TAPAS uses dedicated positional embeddings to indicate the row and column in which a token appears while TaBERT explicitly adds [SEP] tokens to mark boundaries of cells in the sequences.
2. Column-wise serialization. Alternatively, tables can be serialized by column. For DODUO, [CLS] tokens (as many as the number of columns) are inserted to separate values from different columns and are effectively used as column representations.
For each table embedding model, we adopt the serialization method as proposed in the original papers. Since vanilla language models do not have a default serialization method for tabular data, we experimentally apply row/column-wise serialization as applicable. In practice, models also enforce a length limit to token sequences (e.g. 512 is a common maximum). To ensure that all models take in (almost) the same inputs regardless of serialization methods, we keep all the columns for each table, if possible, and preserve as many rows as the length limit permits. We use binary search to find the maximum number of rows that can fit into the input limit.
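The sketch below illustrates row-wise serialization and the binary search over row counts; the delimiter scheme and the `tokenize` helper are assumptions standing in for a particular model's tokenizer:

```python
def serialize_rows(header, rows, sep=" [SEP] "):
    """Row-wise serialization: header then row strings, [SEP]-delimited."""
    parts = [" | ".join(header)] + [" | ".join(map(str, r)) for r in rows]
    return sep.join(parts)

def max_rows_within_limit(header, rows, tokenize, limit=512):
    """Largest row prefix whose serialization fits the token limit."""
    lo, hi = 0, len(rows)
    while lo < hi:  # binary search over the (monotone) feasible prefix size
        mid = (lo + hi + 1) // 2
        if len(tokenize(serialize_rows(header, rows[:mid]))) <= limit:
            lo = mid
        else:
            hi = mid - 1
    return lo
```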
**Embedding Retrieval.** We use the embeddings provided by a model, if they are available. However, due to designs for particular downstream tasks, a model may not readily expose certain levels of embeddings needed for measuring a property.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Model** & **Input** & **Output Embedding** & **Downstream Task** \\ \hline
TURL & Table + metadata & Entity / Col. / Col. pair & Table interpretation/augmentation \\
DODUO & Table & Col. / Col. pair & Column type/relation prediction \\
TAPAS & NL question + table & Question / Table & Semantic parsing \\
TaBERT & NL question + table & Col. / Table & Semantic parsing \\
TaPEx & SQL query + table & Row / Table & Table question answering \\
TapTap & Table & Row & Data augmentation/imputation \\ \hline \hline \end{tabular}
\end{table}
Table 1. Overview of table embedding models and their design specifications (Column is abbreviated to Col.).
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Property** & **Dataset** & **Models in Scope** \\ \hline
Row order insignificance & WikiTables & Except TapTap \\
Column order insignificance & WikiTables & All \\
Join relationship & NextiaJD & Except TURL and TapTap \\
Functional dependencies & Spider & Except TURL, TaBERT, and TapTap \\
Sample fidelity & WikiTables & Except TapTap \\
Entity stability & WikiTables & Except TaBERT and TapTap \\
Perturbation robustness & Dr.Spider & Except TURL and TapTap \\
Heterogeneous context & SOTAB & Except TURL and TapTap \\ \hline \hline \end{tabular}
\end{table}
Table 2. Overview of datasets and models for each property.
For instance, TAPAS does not give row or column embeddings out of the box. We circumvent this obstacle by observing that all the models can output token-level embeddings and some table embedding models have additional mask embeddings or positional embeddings that indicate to which row and column a token belongs. Therefore, we can aggregate token embeddings (by averaging them, for example) to embeddings at a given level (e.g., row or column) as needed. In particular, we take advantage of different serialization methods and use special tokens to retrieve row or column or table embeddings. As to cell embeddings needed for the property Functional Dependencies and entity embeddings needed for the Entity Stability property, we keep track of token positions in the table and aggregate them accordingly. We take this alternative since inserting special tokens for each cell quickly uses up the input limit.
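A sketch of this aggregation step, assuming a token-to-column map has been tracked during serialization (the variable names are illustrative):

```python
import numpy as np

def aggregate_column_embeddings(token_embs, column_ids, n_cols):
    """Mean-pool token-level embeddings into column-level embeddings.

    token_embs: (T, d) token outputs of the model for one serialized table;
    column_ids: length-T integer array mapping each token to its column
    index (-1 for special tokens), recorded while serializing the table.
    """
    column_ids = np.asarray(column_ids)
    cols = []
    for c in range(n_cols):  # assumes every column has at least one token
        cols.append(token_embs[column_ids == c].mean(axis=0))
    return np.stack(cols)
```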
The practice of inserting special tokens and aggregating lower levels of embeddings is common in the literature.
and MCV measures. In the interest of space, we only show column and row embeddings in Figure 7.
Considering, for example, the column embeddings in Figure 7, the median cosine similarity of RoBERTa embeddings drops by more than 5% and the same statistic of DODUO embeddings drops by more than 15%. The median MCV of both RoBERTa and T5 also increases by four times. To verify such large variations, we again visualize the PCA projections of T5 embeddings in Figure 8 for the same table as used in Figure 6. This figure confirms that the first principal component of T5 embeddings manifests larger spread, and illustrates the spread along the horizontal axis across all columns (instead of merely 3, as when rows are shuffled), indicating a higher sensitivity to column order than row order.
### Join Relationship
**Table 3: Spearman coefficients between a value overlap measure and embedding cosine similarity on the NextiaJD-XS dataset. Multiset Jaccard is most positively correlated with embedding cosine similarity across all models. All coefficient numbers are statistically significant (p-value \(<0.01\)).**
Table 3 presents the Spearman coefficients between a value overlap measure and embedding cosine similarity over joinable pairs of columns from the NextiaJD-XS dataset. We find that, among the considered value overlap measures (containment, Jaccard, and multiset Jaccard), multiset Jaccard similarity is most positively correlated with embedding cosine similarity. For all models, the coefficient value between multiset Jaccard and embedding cosine similarity is above 0.5, which indicates a moderate positive correlation (TaBERT has a coefficient value of 0.72, which indicates a high positive correlation), and is significantly higher (by \(0.08-0.43\)) than that of the other two measures. This difference can be attributed to the fact that containment and Jaccard similarity do not take duplicate values into account while we use all values for embedding inference. In Figure 9, we also show scatter plots of embedding cosine similarity versus multiset Jaccard over pairs of joinable columns from NextiaJD-XS for each model, which demonstrate the moderate positive correlation between the two variables. Note that the maximum possible value of multiset Jaccard similarity is 0.5.
Both syntactic and semantic approaches have been employed for data discovery [(8; 16; 33)].
Figure 8: PCA visualization of high-dimensional column embeddings from the same table as used in Figure 6. Each subplot draws \(6!=720\) variants of a column from column order shuffling. The embeddings exhibit similar patterns as in row order shuffling but show larger spread across all columns.
Figure 6. PCA visualization of high-dimensional column embeddings from a table of six columns, for BERT and T5. Each subplot draws \(6!=720\) row-wise permutation variants of a column. While BERT embeddings are centered around the origin with some variation, the T5 embeddings are more stretched along the horizontal axis, resulting in the relatively high cosine similarity as well as high MCV value.
Figure 7. Cosine similarity and MCV distributions of column (top) and row (bottom) embeddings from column shuffling. Both column and row embeddings manifest similar patterns as in row shuffling.
It is valuable to know whether a syntactic measure is highly correlated with a semantic measure based on embeddings so that one can ensemble less correlated syntactic and embedding measures if they want to find more diverse candidates. For instance, consider the task of join discovery over NextiaJD-XS. Based on Table 3, it is recommended to use containment as the syntactic similarity measure when BERT embeddings are used to measure semantic similarities because these two measures show the least correlation. Similarly, it is recommended to use the Jaccard similarity when TAPAS embeddings are used.
### Functional Dependencies
Table 4 gives the average variance of the L2 norm of translation embeddings over column pairs with and without the relationship of functional dependencies. For vanilla language models, the average variance of columns with functional dependencies is not smaller than that of columns without functional dependencies. This suggests that vanilla language models do not preserve functional dependencies, which is expected as these models do not consider the table structure that is essential to functional dependencies in pretraining. Although we observe the opposite ordering for table embedding models, the magnitude of the average variance of DODUO is not close to 0. Although TAPAS reflects the expected patterns in terms of the mean of the variances and the magnitude, the variance distribution plots in Figure 10 illustrate that none of the models manifests clear separation between the two variance distributions (i.e., the column pairs with and without functional dependencies). This provides evidence that none of the models captures the relationship of functional dependencies in their representations.
### Sample Fidelity
Figure 11 shows the sample fidelity distributions of models over various sample ratios. Overall, as the sample ratio increases, sample embeddings tend to become closer to embeddings obtained from full values in terms of cosine similarity. This can be seen from the rising first quartile, median, and third quartile values of embedding cosine similarity of each model in the box plots.
Vanilla language models exhibit high sample fidelity, with a median over 0.9 even when the sample ratio is at 0.25 and a median over 0.95 when the sample ratio is at 0.75. Among them, T5 is the most robust to sampling, with at least 75% of tested pairs having cosine similarity over 0.95 when half of the values are sampled. In the meantime, tabular language models except TaBERT show larger distribution spread, especially when the sample ratio is as low as 0.25. TaBERT appears to be the most sample-robust model, with even tested pairs 1.5 interquartile ranges below the first quartile having cosine similarity over 0.95 across all sample ratios. This is mainly attributable to TaBERT internally always taking the first three rows (Wang et al., 2017). In other words, it is much more likely that TaBERT actually takes the same inputs, or overlapping inputs, despite sampling. Although TaBERT makes a lucky hit on sample fidelity, there might be a negative effect of TaBERT only considering the first three rows. The next most sample-robust model is TAPAS, which achieves high sample fidelity comparable to vanilla language models when the sample ratio reaches 0.5. Similar to the results of row and column shuffling, DODUO lags behind and is more sensitive to sampling in all settings of sample ratios.
### Entity Stability
We select query entities from five domains and compare their \(K\)-nearest neighbors between two embedding spaces: ten greatest men's tennis players (Tennis Players), ten most popular movies (Movies), ten most essential nutrients for the body (Biochemistry), ten most valuable technology companies in the U.S., and ten largest countries in the world by area. We plot pairwise average entity stability using heatmaps in Figure 12. Due to space limits, we only show heatmaps of Tennis Players, Movies, and Biochemistry with \(K\)=10. We observe that domain is a key factor in entity stability. In other words, for different domains, different pairs of models show high entity stability. For instance, BERT and TURL have the highest entity stability for movie entities while TAPAS and DODUO have the highest entity stability for biochemistry entities. This suggests that, for domain-specific tasks, if one finds model A is not feasible, they may want to try model B with relatively lower entity stability with respect to A.
### Perturbation Robustness
Figure 13 shows distributions of embedding cosine similarities between pairs of an original column and a corresponding perturbed column. Even though both types of perturbations are at the schema level and data values remain unchanged, models exhibit different degrees of robustness, especially in terms of the spread and skewness of the distribution.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & **BERT** & **RoBERTa** & **T5** & **TAPAS** & **DODUO** \\ \hline Columns w/ FD & 0.87 & 0.39 & 1.80 & 0.88 & 83.34 \\ Columns w/o FD & 0.78 & 0.34 & 1.13 & 1.12 & 229.77 \\ \hline \hline \end{tabular}
\end{table}
Table 4. Average group-wise variances of embedding translations over columns with and without functional dependencies across five models. Only TAPAS yields \(\overline{S^{2}}_{FD}<\overline{S^{2}}_{\neg FD}\) with \(\overline{S^{2}}_{FD}\) close to 0, while language models and other table embedding models do not follow this pattern.
Figure 9. Scatter plots of embedding cosine similarity vs. multiset Jaccard similarity derived from pairs of joinable columns in the NextiaJD-XS dataset, which illustrate a positive correlation between the two measures.
Vanilla language models BERT and T5 are the most robust to schema-level perturbations, with first quartile above 0.97 and entire distributions above 0.90. Despite being a language model, RoBERTa surprisingly shows a larger spread, with outliers down to 0.75 in synonym perturbations and to 0.65 in abbreviation perturbations. On the table model side, TaBERT is least robust to perturbations, with the lowest median and first quartile among all models. In contrast, TAPAS is more robust, with first percentile near 0.95 for both perturbations, while it shows relatively large variance as well. DODUO does not show any variance because DODUO only takes in data values for representation inference and simply ignores changes to the schema. Overall, the table embedding models in comparison are more sensitive to schema perturbations as they explicitly model the header component of tables and distinguish between headers and data values in representation learning.
### Heterogeneous Context
For both non-textual and textual data types, we infer from each model column embeddings using only the columns themselves, and adding 1) subject columns; 2) immediate neighbor columns; and 3) the entire tables as context, respectively. We compute the cosine similarity between corresponding pairs of single-column embeddings and contextual embeddings and show their three-number summary in Table 5. Unsurprisingly, adding different contexts to the inputs changes the embeddings to various degrees. For non-textual columns, among the three context settings, models except DODUO preserve high cosine similarity when having subject columns as context (e.g., the median number of TaBERT is above 0.96 and that of BERT is close to 0.9) while they (except TaBERT) preserve relatively low cosine similarity when having the entire tables as context (e.g., the median number of TAPAS is below 0.65). We observe that TaBERT embeddings are insensitive to context (the median number is above 0.95 in all three settings) whereas DODUO embeddings are more sensitive to context (the median number is below 0.5 when having the entire tables as context and around 0.6 in the other two settings). We see a consistent trend for textual data. This can have the implication that TaBERT may not be a good choice for context-sensitive downstream tasks and a user may want to try both single-column embeddings and contextual embeddings when using DODUO.
Figure 11. Distributions of sample fidelity of column embeddings under three sample ratios. Overall, vanilla LMs exhibit higher sample fidelity compared to table embedding models.
Figure 12. Pairwise top-10 entity stability with query entities from three distinct domains. Different pairs of models show high entity stability for different domains.
Figure 10. Distributions of the group-wise variances over embedding translations across column pairs with and without the relationship of functional dependencies. None of the models show clear separation between the two variance distributions.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Model** & **Subject Column** & **Neighboring Columns** & **Entire Table** \\ \hline BERT & 0.72 / 0.89 / 0.99 & 0.62 / 0.86 / 0.99 & **0.57** / 0.78 / 0.96 \\ & 0.72 / 0.93 / 1.00 & 0.64 / 0.88 / 1.00 & **0.51** / 0.79 / 0.99 \\ \hline RoBERTa & 0.76 / 0.83 / 0.89 & 0.71 / 0.82 / 0.93 & 0.75 / 0.84 / 0.92 \\ & 0.76 / 0.83 / 0.90 & 0.74 / 0.83 / 0.92 & 0.76 / 0.85 / 0.93 \\ \hline T5 & 0.77 / 0.85 / 0.93 & 0.75 / 0.88 / 0.97 & 0.74 / 0.83 / 0.92 \\ & 0.75 / 0.83 / 0.92 & 0.75 / 0.88 / 0.98 & 0.75 / 0.83 / 0.98 \\ \hline TAPAS & 0.68 / 0.84 / 0.95 & 0.58 / 0.80 / 0.97 & **0.35** / 0.64 / 0.92 \\ & 0.52 / 0.83 / 0.98 & 0.50 / 0.80 / 0.98 & **0.31** / 0.67 / 0.92 \\ \hline TaBERT & 0.94 / 0.97 / 1.00 & 0.93 / 0.97 / 1.00 & 0.89 / 0.95 / 0.99 \\ & 0.90 / 0.98 / 1.00 & 0.89 / 0.97 / 1.00 & 0.83 / 0.96 / 0.99 \\ \hline DODUO & 0.25 / 0.62 / 0.99 & 0.14 / 0.59 / 0.99 & **0.06** / 0.45 / 0.87 \\ & 0.34 / 0.80 / 0.99 & 0.26 / 0.78 / 0.98 & **0.01** / 0.61 / 0.98 \\ \hline \hline \end{tabular}
\end{table}
Table 5. Summary statistics (min, median, and max) of cosine similarities between single column embeddings and contextual embeddings for non-textual and textual data types, on the first and second row, respectively. Incorporating context, especially the entire table, can change column embeddings significantly w.r.t cosine similarity (highlighted in bold).
## 6. Connection to Downstream Tasks
From the model characterization through their embedding representations as per the eight properties P1-8, we deduce below the model behaviors on downstream tasks. We illustrate three connections with experimental findings.
**Column Type Prediction (P1/P2).** In the experiments, DODUO is found to be sensitive to row/column shuffling and sampling, which indicates unstable predictions by DODUO over shuffled data in downstream tasks. To investigate this hypothesis, we randomly sample 1,000 tables from the WikiTables dataset used in the experiments and employ DODUO to predict semantic column types for all columns. For each table, we consider at most 1,000 distinct row-wise permutations for computational efficiency and keep track of how many predictions change per permutation relative to the original order. We find that, over this subset of tables with 5.8 columns on average, 34.0% of the permuted tables yield at least 1 changed column type prediction (averaged over all permutations), 12.8% of the tables have at least 2 changed type predictions, and 5.4% of the tables have at least 3 changed type predictions.
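A sketch of this permutation experiment, where `predict_types` stands in for DODUO's column type predictor (hypothetical interface; a table is a list of rows):

```python
import random

def permutation_instability(table, predict_types, n_perms=1000):
    """Fraction of row permutations whose predicted column types differ
    from the predictions obtained on the original row order."""
    base = predict_types(table)
    rows, changed = list(table), 0
    for _ in range(n_perms):
        random.shuffle(rows)
        if any(p != b for p, b in zip(predict_types(rows), base)):
            changed += 1
    return changed / n_perms
```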
**Join Discovery (P5).** T5 exhibits high sample fidelity even when the sample ratio is low, leading us to anticipate that T5 will be sample-efficient in downstream tasks. We implement T5 in the task of join discovery following the approach and setup in (Kumar et al., 2019). Over the NextiaJD testbeds, sampled T5 embeddings obtain precision and recall comparable to those from full values, while the indexing time and lookup time are significantly faster. For instance, on NextiaJD-XS with a sample size of 100 (about 5% of the average number of rows in NextiaJD-XS), there is less than \(\pm\)3% variation in precision and recall between sampled T5 embeddings and full-value T5 embeddings, but indexing with sampled values is more than 7x faster and lookup is more than 2x faster.
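A sketch of sample-then-embed join candidate ranking in the spirit of this experiment; `embed` is a hypothetical column-to-vector function (e.g., mean-pooled T5 token embeddings):

```python
import numpy as np

def topk_join_candidates(query_col, candidate_cols, embed, k=10, sample=100):
    """Rank candidate columns by cosine similarity of embeddings computed
    from at most `sample` values per column."""
    rng = np.random.default_rng(0)

    def emb(col):
        vals = list(col)
        if len(vals) > sample:
            vals = list(rng.choice(vals, size=sample, replace=False))
        v = embed(vals)
        return v / np.linalg.norm(v)

    q = emb(query_col)
    scores = [(name, float(emb(col) @ q)) for name, col in candidate_cols.items()]
    return sorted(scores, key=lambda s: -s[1])[:k]
```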
**Table Question Answering (P7).** The task of table question answering (TableQA) refers to answering natural language questions based on information from given tables. In our experiments on the Perturbation Robustness property (Section 5.7), we found that TAPAS, among other models, was sensitive to semantics-preserving perturbations of the table schema. Based on this observation, we hypothesize that TAPAS may suffer performance degradation on perturbed tables in downstream tasks such as TableQA, for which it is designed. As anticipated, the TableQA accuracy of TAPAS under synonym and abbreviation perturbations drops by 6.2 and 8.3 points respectively on WikiTableQuestions (Zhu et al., 2019), and by 19.0 and 22.2 points respectively on WikiSQL (Zhu et al., 2019) (see Tables 2 and 7 in (Zhu et al., 2019)).
We emphasize that, although we focus on the characterization of pretrained models (e.g., the pretrained version of TAPAS), our hypotheses predicated on this characterization carry over to fine-tuned models (in this case, TAPAS models fine-tuned for TableQA).
**Additional Connections.** Beyond the three empirically supported anticipations of model behaviors on downstream tasks, we also deduce informed expectations listed below as a result of the characterizations obtained with Observatory for the other properties. This list is not exhaustive as the connection between model characteristics and downstream tasks is not a one-to-one relationship.
**P3**: Low Spearman's coefficient between containment and embedding cosine similarity (e.g., BERT) \(\rightarrow\) Join discovery: the containment-based method will complement the embedding-based method in finding join candidates.
**P4**: Not preserving functional dependencies \(\rightarrow\) Data imputation: imputed values may not maintain functional dependencies between attributes.
**P6**: Relative to model A, model B has a lower entity stability than model C \(\rightarrow\) Entity retrieval: model B will return fewer entities in common with model A than with model C.
**P8**: Insensitive to context change (e.g., RoBERTa) \(\rightarrow\) Join discovery: candidates found by single-column and contextual embeddings will largely overlap.
## 7. Discussion
**Impact of Tables with Large Dimensionality.** To understand the influence of table dimensions on the findings, we analyze BERT and TAPAS on row- and column-order insignificance on the NextiaJD-S dataset, which has more than 209k rows and 56 columns on average. We observe no significant differences for these two models relative to tables from WikiTables. This is mainly because large tables are partitioned into small tables and the embeddings are aggregated accordingly, which is no different from our practice for smaller tables.
**Limitations.** Observatory presents a framework with a wide range of properties of relational tables and data distributions that are important for many applications. However, not all properties are included, as measures for some would be infeasible to implement. For example, a table may have a latent topic, which can be important for table retrieval tasks, yet measures for this property have not been discussed in the literature and suitable evaluation datasets are lacking. Moreover, we note that measuring the capacity of models to capture signals across different data types, from numeric to textual, remains an open challenge. We also limit our analysis to a selection of representative models. This selection has been driven by the availability of code and pretrained model weights, and the ability to construct embeddings from the token embeddings using, for example, positional embeddings. But Observatory is open-sourced for extension and available for analyzing other models. We also acknowledge that a detailed analysis of relationships between property metrics is worth investigating and leave it to future work. Finally, the general limitations of empirical work apply to our analysis as well: we measure model behaviors on specific datasets, and the findings may therefore not always generalize. Nevertheless, we take a first step towards characterizing and understanding embedding representations over relational tables.
## 8. Conclusion
We present Observatory, a downstream-task agnostic analysis framework of table embeddings that measures if, and to what extent, pretrained table embeddings reflect core properties of the relational data model and data distributions. Our analysis with Observatory of nine language and table embedding models surfaces the diverse capabilities of different models. We identify that not all properties of the relational model and data distributions are reflected in table embeddings. Observatory will serve as a valuable framework for guiding practitioners and researchers in selecting appropriate models for various applications, facilitating researchers in assessing novel models, and informing future research on new model architectures for tabular data. |
2310.08363 | Localization of min-max minimal hypersurfaces | We show that min-max minimal hypersurfaces can be localized. As a
consequence, we obtain the sharp generalization to complete manifolds of the
famous Almgren-Pitts min-max theorem in closed manifolds. We use this result to
prove the existence of a complete embedded finite area minimal hypersurface of
index at most one in every balanced complete manifold. | Douglas Stryker | 2023-10-12T14:35:00Z | http://arxiv.org/abs/2310.08363v1 | # Localization of min-max minimal hypersurfaces
###### Abstract.
We show that min-max minimal hypersurfaces can be localized. As a consequence, we obtain the sharp generalization to complete manifolds of the famous Almgren-Pitts min-max theorem in closed manifolds. We use this result to prove the existence of a complete embedded finite area minimal hypersurface of index at most one in every balanced complete manifold.
## 1. Introduction
The existence of an abundance of closed minimal hypersurfaces in any closed Riemannian manifold has been established as the culmination of a Morse theory framework for the area functional, called _min-max_ (we refer the reader to [16, 17, 18, 19, 20, 21]). This framework additionally provides bounds on the area and the Morse index of the produced minimal hypersurfaces. However, it is difficult to deduce any additional information about these hypersurfaces.
One interesting feature of min-max minimal hypersurfaces that has received substantial attention is their topology. For surfaces in 3-manifolds, a modified Morse theory framework was developed in [23] (see also [1, 10, 11]), which yields an upper bound on the genus of the resulting minimal surface. We refer the reader to [11, 20] for exciting recent developments in this direction. Unfortunately, this modified framework does not work in higher dimensions, due to limitations of the required regularity theory.
In this paper, we focus on a different and somewhat overlooked feature of min-max minimal hypersurfaces: their location in the ambient space. While potentially useful in a variety of problems, we emphasize that localization of min-max minimal hypersurfaces is _essential_ to prove the existence of minimal hypersurfaces in _non-compact_ ambient spaces. Indeed, min-max sequences may run off to infinity in general without some form of localization.
### Previous work
There have been three particularly relevant results towards localization of min-max minimal hypersurfaces in general ambient spaces1.
Footnote 1: [23] also studies the localization of minimal hypersurfaces in a stronger sense (i.e. constraining the hypersurface to lie fully within a given set) in manifolds with cylindrical ends. However, this result crucially uses the rigid structure of these manifolds.
* [24]: If \((M,g)\) admits a bounded open subset \(U\) with strictly mean concave boundary (and \((M,g)\) satisfies some additional hypotheses about its asymptotic geometry), then [24] uses min-max to produce a closed embedded minimal hypersurface intersecting \(U\).
* [10]: If \((M,g)\) admits a bounded open subset \(U\) with the property that a certain notion of min-max width (for 1-parameter sweepouts) is at least ten times the area of the boundary of \(U\), then [10] uses min-max to produce an embedded complete minimal hypersurface of finite area intersecting any open neighborhood of \(\overline{U}\). Notably,
this result suffices to prove the existence of a complete minimal hypersurface of finite area in any complete manifold of finite volume, generalizing to higher dimensions the analogous result for geodesics due to [14, 15].
* [16]: If \((M,g)\) admits a compact domain \(K\) that has no singular strictly mean-convex foliation, then [16] uses min-max and the mean curvature flow to produce an embedded complete minimal hypersurface of finite area and index at most one intersecting \(K\)2. This result suffices to prove the existence of a complete minimal hypersurface of finite area and _index at most one_ in any complete manifold of finite volume, improving on the result of [10].
From the perspective of general existence problems, criteria related to the nontriviality of some min-max invariant seems to be the most natural and useful. One reason is that criteria related to the curvature of certain submanifolds can be hard to verify. Moreover, the _triviality_ of certain min-max invariants can be a useful hypothesis in proving existence by other means (i.e. Lyusternik-Schnirelmann theory, see [11]), enabling proofs of existence under weaker hypotheses by a dichotomy argument (for example, see the dichotomy set up in our proof of Theorem 5.1).
In the "min-max invariant" framework, several questions remain regarding the localization of min-max minimal hypersurfaces (including several questions posed in [10]).
* _Does the same result hold if the min-max width is only assumed to be greater than the area of the boundary of \(U\)?_ ([10, SS2.5 Question 1].)
* _Does the same result hold for a minimal hypersurface intersecting \(\overline{U}\)?_ ([10, SS2.5 Question 3].)
* _Does the same result hold for a minimal hypersurface with index at most 1?_ ([10, SS2.5 Question 2].)
* _Does an analogous result hold for min-max using higher parameter sweepouts?_
We emphasize that none of the previous work in this area deals with higher parameter min-max. While the modest applications we consider in this paper only use 1-parameter min-max constructions, we expect that localization for higher parameter sweepouts will be useful to prove the existence of more than one minimal hypersurface in some non-compact settings.
### Min-max localization
The following result is the sharp analogue in complete manifolds of the Almgren-Pitts min-max theorem (see [12]). Moreover, it answers all of the previous questions affirmatively3. We refer the reader to SS2 for the precise min-max definitions.
Footnote 2: See also [11] for an earlier proof of a similar result.
Footnote 3: We note that we use a different formulation of min-max from [10]. For a comparison of the min-max frameworks considered in [10] versus here, see §4.1. We emphasize that our proof will work in any min-max framework for which the multiplicity one result of [15] applies–we only use this formulation for simplicity, as it is precisely the formulation used in [15].
**Theorem 1.1**.: _Let \((M^{n+1},g)\) be complete of dimension \(3\leq n+1\leq 7\). Let \(X^{k}\subset I^{m}\) be a \(k\)-dimensional cubical complex, and let \(Z\subset X\) be a cubical subcomplex. Let \(\Phi_{0}:X\to\mathcal{C}(M)\) be an \(\mathbf{F}\)-continuous map. If_
\[\mathbf{L}(\Phi_{0},M)>\sup_{z\in Z}\mathcal{H}^{n}(\partial\Phi_{0}(z)),\]
_then there exists a smooth complete (possibly non-compact) embedded minimal hypersurface \(\Sigma\subset M\) satisfying_
* \(\mathcal{H}^{n}(\Sigma)\leq\mathbf{L}(\Phi_{0},\operatorname{span}(\Phi_{0}))\)_,_
* \(\operatorname{index}(\Sigma)\leq k\)_,_
* \(\Sigma\cap\operatorname{span}(\Phi_{0})\neq\varnothing\)_, where_ \(\operatorname{span}(\Phi_{0}):=\overline{\bigcup_{x\in X}\partial\Phi_{0}(x)}\)_._
The localization aspect of this result (i.e. nonempty intersection with the span) is new even for closed ambient spaces, and we expect that there may be interesting applications in that setting alone. Moreover, we prove the analogous statement in compact manifolds with strictly mean convex boundary in the course of the proof of Theorem 1.1.
### Balanced manifolds
As a modest application of this theory, we find that Theorem 1.1 applies to a large class of complete manifolds defined only by their asymptotic behavior. Roughly, we say that a complete manifold is _balanced_ if we can decompose \(M=E\sqcup K\sqcup F\) for \(K\) compact so that the asymptotic minimal area of cross sections of \(E\) and \(F\) agree; namely
\[\lim_{r\to\infty}\inf\{\mathcal{H}^{n}(\Sigma)\mid\Sigma\text{ is homologous to }\partial E\text{ in }E,\ \Sigma\cap B_{r}(K)=\varnothing\}\] \[\qquad=\lim_{r\to\infty}\inf\{\mathcal{H}^{n}(\Sigma)\mid\Sigma \text{ is homologous to }\partial F\text{ in }F,\ \Sigma\cap B_{r}(K)=\varnothing\}.\]
We refer to SS5 for the precise definition.
**Theorem 1.2**.: _Every complete balanced \((M^{n+1},g)\) of dimension \(3\leq n+1\leq 7\) admits a smooth complete embedded minimal hypersurface of finite area and index at most 1._
In SS5, we present an example to show the necessity of the balanced condition.
We emphasize that Theorem 1.2 applies to manifolds exhausted by compact subsets with smooth boundaries whose area tends to zero, which includes finite volume manifolds. Hence, we provide a simpler proof of [10, Corollary 1.3] and [11, Corollary 2.2 (1)]:
**Corollary 1.3**.: _Every complete finite volume \((M^{n+1},g)\) of dimension \(3\leq n+1\leq 7\) admits a smooth complete embedded minimal hypersurface of finite area and index at most 1._
In most cases (see Remark 5.3), Theorem 1.2 is completely new, as it requires the sharpness of our localization result.
### Sketch of the proof of Theorem 1.1
As compared to the careful cut-and-paste constructions of [10], our proof is extremely geometrically simple. The ideas here somewhat resemble the "cutting along hypersurfaces" arguments in [11], but our proof involves many fewer complications and tools. For example, we do not require the use of the level set flow.
We first reduce to the case of compact manifolds with smooth, compact, strictly mean convex boundary by modifying the metric near the boundary of a compact exhaustion of \(M\) and using the compactness theory of [14].
We further reduce to the case where the metric is bumpy, again using the compactness theory of [14].
By the resolution of the multiplicity one conjecture due to [13] in bumpy metrics and the quantitative minimality of strictly stable minimal hypersurfaces due to [12] (see also [15]), min-max will produce a connected _unstable_ 2-sided smooth closed embedded minimal hypersurface \(\Sigma\) with bounded area and bounded index. Suppose that \(\Sigma\) does not
intersect \(\operatorname{span}(\Phi_{0})\). Since each side of a small tubular neighborhood of a connected 2-sided unstable minimal hypersurface has a strictly mean convex foliation, there is a subdomain \(W\subset M\) disjoint from a component of \(\Sigma\) and containing \(\operatorname{span}(\Phi_{0})\) that has strictly mean convex boundary, so we can now restart the argument in \(W\). Since there are only finitely many smooth closed embedded minimal hypersurfaces satisfying the area and index bound (by bumpiness), we produce a hypersurface that intersects \(\operatorname{span}(\Phi_{0})\) after only finitely many iterations of this argument. We refer the reader to Figure 1 for an illustration of the argument.
**Acknowledgements**.: I am grateful to my advisor Fernando Coda Marques for his support and for suggesting the topic of min-max in non-compact spaces. I am also grateful to Lorenzo Sarnataro and Otis Chodosh for providing feedback on earlier versions of this paper.
I was supported by an NDSEG Fellowship.
## 2. Setup and Notation
Let \((M^{n+1},g)\) be a smooth Riemannian manifold, either closed, complete non-compact, or compact with smooth compact boundary.
**Definition 2.1** (\(\mathcal{C}(M)\)).: We define the appropriate class of finite perimeter sets in each case:
* If \((M,g)\) is closed, we define \(\mathcal{C}(M)\) to be the set of finite perimeter sets \(\Omega\subset M\).
* If \((M,g)\) is complete non-compact, we define \(\mathcal{C}(M)\) to be the set of finite perimeter sets \(\Omega\subset M\) so that \(\partial\Omega\) is bounded and nonempty. We emphasize that \(\Omega\in\mathcal{C}(M)\) need not be bounded; we only require that \(\partial\Omega\) is bounded.
* If \((M,g)\) is compact with smooth compact boundary, we define \(\mathcal{C}(M)\) to be the set of finite perimeter sets \(\Omega\subset M\) so that \(\partial\Omega\cap\operatorname{Int}(M)\neq\varnothing\) and \(\overline{\partial\Omega\cap\operatorname{Int}(M)}\cap\partial M=\varnothing\). In other words, \(\partial\Omega\cap\operatorname{Int}(M)\) is nonempty and contained in a compact subset of \(\operatorname{Int}(M)\).
We note that in any case, if \(W\subset M\) is a bounded open domain with smooth boundary, \(\Omega\in\mathcal{C}(M)\), and \(\partial\Omega\subset W\), then \(\Omega\cap W\in\mathcal{C}(W)\).
Henceforth, we follow the setup and notation of [10].
**Definition 2.2** (\(\mathbf{F}\)-metric on \(\mathcal{C}(M)\)).: For \(\Omega\in\mathcal{C}(M)\), we let \([\Omega]\) denote the integral \((n+1)\)-current associated to \(\Omega\), and we let \(|\partial\Omega|\) denote the varifold associated to \(\partial\Omega\). For
Figure 1. An illustration of the proof of Theorem 1.1.
\(\mathcal{C}(M)\), we define
\[\mathbf{F}(\Omega_{1},\Omega_{2}):=\mathcal{F}([\Omega_{1}]-[\Omega_{2}])+\mathbf{ F}(|\partial\Omega_{1}|,|\partial\Omega_{2}|),\]
where \(\mathcal{F}\) is the flat norm on currents and \(\mathbf{F}\) is the usual \(\mathbf{F}\)-metric on varifolds.
Let \(X^{k}\subset I^{m}\) be a \(k\)-dimensional cubical complex, and let \(Z\subset X\) be a cubical subcomplex. Let \(\Phi_{0}:X\to\mathcal{C}(M)\) be a continuous map in the \(\mathbf{F}\) topology.
**Definition 2.3** (Span).: The _span_ of \(\Phi_{0}\) is the set
\[\operatorname{span}(\Phi_{0}):=\overline{\bigcup_{x\in X}\partial\Phi_{0}(x)}.\]
By definition, \(\operatorname{span}(\Phi_{0})\subset\operatorname{Int}(M)\) is compact.
**Definition 2.4** (Homotopy class).: Let \(\Pi(\Phi_{0},M)\) be the set of all sequences of \(\mathbf{F}\)-continuous maps \(\{\Phi_{i}:X\to\mathcal{C}(M)\}_{i\in\mathbb{N}}\) with the property that there are flat-continuous homotopies \(\{\Psi_{i}:[0,1]\times X\to\mathcal{C}(M)\}_{i\in\mathbb{N}}\) satisfying \(\Psi_{i}(0,\cdot)=\Phi_{0}\), \(\Psi_{i}(1,\cdot)=\Phi_{i}\), and
\[\limsup_{i\to\infty}\sup\{\mathbf{F}(\Psi_{i}(s,z),\Phi_{0}(z))\mid s\in[0,1],\ z\in Z\}=0.\]
\(\Pi(\Phi_{0},M)\) is called the _homotopy class of \(\Phi_{0}\) in \(M\)_.
**Definition 2.5** (Homotopy class on subdomain).: If \(W\subset M\) is a bounded open domain with smooth boundary and \(\operatorname{span}(\Phi_{0})\subset W\), then we define
\[\Pi(\Phi_{0},W):=\Pi(\Phi_{0}\cap W,W),\]
where \((\Phi_{0}\cap W)(x):=\Phi_{0}(x)\cap W\). \(\Pi(\Phi_{0},W)\) is called the _homotopy class of \(\Phi_{0}\) in \(W\)_.
**Definition 2.6** (Width).: The _width_ of \(\Pi(\Phi_{0},W)\) is
\[\mathbf{L}(\Phi_{0},W):=\inf_{\{\Phi_{i}\}\in\Pi(\Phi_{0},W)}\limsup_{i\to \infty}\sup_{x\in X}\mathcal{H}^{n}(\partial\Phi_{i}(x)).\]
**Remark 2.7**.: We emphasize the ambient space in the definition because we must consider restrictions to subdomains. It follows from a straightforward extension argument that if \(\operatorname{span}(\Phi_{0})\subset W_{1}\subset W_{2}\subset M\) are bounded open domains with smooth boundary, then
\[\mathbf{L}(\Phi_{0},W_{1})\geq\mathbf{L}(\Phi_{0},W_{2}).\]
**Definition 2.8** (Width on subset).: If \(\operatorname{span}(\Phi_{0})\subset C\subset M\) is an arbitrary subset, we define
\[\mathbf{L}(\Phi_{0},C):=\sup\{\mathbf{L}(\Phi_{0},W)\mid W\supset C\text{ bounded open domain with smooth boundary}\},\]
and the supremum is finite because \(\mathbf{L}(\Phi_{0},W)\leq\sup_{x\in X}\mathcal{H}^{n}(\partial\Phi_{0}(x))<\infty\) for any bounded open domain with smooth boundary \(W\) containing \(\operatorname{span}(\Phi_{0})\).
In particular, this definition allows us to make sense of \(\mathbf{L}(\Phi_{0},\operatorname{span}(\Phi_{0}))\).
## 3. Localization in Compact Manifolds
Suppose \((M^{n+1},g)\) is a smooth compact Riemannian manifold of dimension \(3\leq n+1\leq 7\) with smooth compact (possibly empty) boundary.
It will be convenient to approximate arbitrary metrics by _bumpy_ metrics.
**Definition 3.1**.: The metric \(g\) is _weakly bumpy_ if every smooth closed embedded minimal hypersurface in \(M\) is non-degenerate.
**Lemma 3.2**.: _The set of weakly bumpy metrics on \(M\) is \(C^{\infty}\) dense._
Proof.: Let \(g\) be any metric on \(M\). By a standard extension argument, there is a smooth closed Riemannian manifold \((N,\overline{g})\) that has \((M,g)\) as a subset. By White's bumpy metric theorem [14, Theorem 2.2], there is a sequence of bumpy metrics on \(N\) converging smoothly to \(\overline{g}\). The restrictions of these metrics to \(M\) are weakly bumpy by definition, and they converge smoothly to \(g\).
We collect facts about min-max from many foundational works in the following lemma.
**Lemma 3.3**.: _Suppose the metric \(g\) is weakly bumpy and \(\partial M\) is strictly mean convex. If_
\[\mathbf{L}(\Phi_{0},M)>\sup_{z\in Z}\mathcal{H}^{n}(\partial\Phi_{0}(z)),\]
_then there is a 2-sided smooth closed embedded minimal hypersurface \(\Sigma\subset M\) satisfying_
* \(\mathcal{H}^{n}(\Sigma)=\mathbf{L}(\Phi_{0},M)\)_,_
* \(1\leq\operatorname{index}(\Sigma)\leq k\)_._
Proof.: We first review the proof in the closed case (i.e. \(\partial M=\varnothing\)). By [13, Theorem 4.1], there is a 2-sided smooth closed embedded minimal hypersurface \(\Sigma\subset M\) satisfying \(\mathcal{H}^{n}(\Sigma)=\mathbf{L}(\Phi_{0},M)\) and \(\operatorname{index}(\Sigma)\leq k\). Since \(\Sigma\) is nondegenerate, [12, Theorem 1.1] (see also [16, Theorem 6.1]) implies that \(\Sigma\) cannot be stable, so \(\operatorname{index}(\Sigma)\geq 1\).
In the case of strictly mean convex boundary, we observe as in [16] (noting that by definition \(\partial\Omega\cap\operatorname{Int}(M)\) for \(\Omega\in\mathcal{C}(M)\) is a cycle in \(\operatorname{Int}(M)\)) that we can perform the pull-tight procedure in the min-max construction using only isotopies generated by vector fields pointing into \(M\) to produce constrained stationary varifolds, which are strictly contained in the interior of \(M\) by the strict mean convexity of \(\partial M\) and the strong maximum principle (see [14, Theorem 7]). The regularity theory, index lower bound, index upper bound, and approximation by PMC hypersurfaces are all local arguments and still apply in this setting4. We note that we must choose the approximating prescribing function to be small relative to the positive lower bound on the mean curvature of \(\partial M\) so that we can constrain the PMC solutions to the interior of \(M\). There is no problem because we ultimately take the prescribing function to zero uniformly in the proof of [13, Theorem 4.1].
Footnote 4: The observation that these local arguments extend to manifolds with strictly mean convex boundary can be seen in the proof of [14, Theorem 6.6].
The essential point of Lemma 3.3 is that the min-max hypersurface is _unstable_. We can now remove the bumpiness assumption and argue by approximation.
**Theorem 3.4**.: _Suppose \(\partial M\) is strictly mean convex. If_
\[\mathbf{L}(\Phi_{0},M)>\sup_{z\in Z}\mathcal{H}^{n}(\partial\Phi_{0}(z)),\]
_then there exists a smooth closed embedded minimal hypersurface \(\Sigma\subset M\) satisfying_
* \(\mathcal{H}^{n}(\Sigma)\leq\mathbf{L}(\Phi_{0},\operatorname{span}(\Phi_{0}))\)_,_
* \(\operatorname{index}(\Sigma)\leq k\)_,_
* \(\Sigma\cap\operatorname{span}(\Phi_{0})\neq\varnothing\)_._
Proof.: We begin by assuming that the metric \(g\) is weakly bumpy.
Let \(\mathcal{M}\) denote the set of connected 2-sided smooth closed embedded minimal hypersurfaces \(\Sigma\subset M\) satisfying
* \(\mathcal{H}^{n}(\Sigma)\leq\mathbf{L}(\Phi_{0},\operatorname{span}(\Phi_{0}))\),
* \(1\leq\operatorname{index}(\Sigma)\leq k\).
By Sharp's compactness theorem [20, Theorem 2.3], \(\mathcal{M}\) is a finite set.
By Lemma 3.3 applied to \(\mathbf{L}(\Phi_{0},M)\), \(\mathcal{M}\neq\varnothing\).
Suppose for contradiction that \(\Sigma\cap\operatorname{span}(\Phi_{0})=\varnothing\) for all \(\Sigma\in\mathcal{M}\).
Let \(\mathcal{D}\subset\mathcal{M}\) be a maximal disjoint subset; namely, (1) if \(\Sigma,\Sigma^{\prime}\in\mathcal{D}\) and \(\Sigma\cap\Sigma^{\prime}\neq\varnothing\), then \(\Sigma=\Sigma^{\prime}\), and (2) every \(\Sigma\in\mathcal{M}\) satisfies \(\Sigma\cap\Sigma^{\prime}\neq\varnothing\) for some \(\Sigma^{\prime}\in\mathcal{D}\). Such a set \(\mathcal{D}\) exists by induction.
Since each \(\Sigma\in\mathcal{D}\) is 2-sided and has index at least 1, there is a mean convex foliation (with curvature pointing away from \(\Sigma\)) in a neighborhood of \(\Sigma\) obtained by flowing \(\Sigma\) in the normal direction according to the positive first eigenfunction of the Jacobi operator (see [13, Proof of Lemma 3.2]). Hence, by deleting a small neighborhood of each \(\Sigma\in\mathcal{D}\) (and then deleting any resulting components that do not contain \(\operatorname{span}(\Phi_{0})\)), there is an open set \(W\subset M\) satisfying
* \(\operatorname{span}(\Phi_{0})\subset W\),
* \(W\) is disjoint from \(\Sigma\) for all \(\Sigma\in\mathcal{D}\),
* \(W\) has smooth, strictly mean convex boundary.
By Lemma 3.3 for \(\mathbf{L}(\Phi_{0},W)\), there is a smooth closed embedded minimal hypersurface \(\Sigma^{*}\subset W\) so that \(\Sigma^{*}\in\mathcal{M}\), contradicting the defining property of \(\mathcal{D}\).
By Sharp's compactness theorem [20, Theorem A.6] for changing ambient metrics and Lemma 3.2, the conclusion now follows for arbitrary metrics.
## 4. Localization in Complete Non-Compact Manifolds
Suppose \((M^{n+1},g)\) is a smooth complete non-compact Riemannian manifold of dimension \(3\leq n+1\leq 7\).
**Theorem 4.1**.: _If_
\[\mathbf{L}(\Phi_{0},M)>\sup_{z\in Z}\mathcal{H}^{n}(\partial\Phi_{0}(z)),\]
_then there exists a smooth complete (possibly non-compact) embedded minimal hypersurface \(\Sigma\subset M\) satisfying_
* \(\mathcal{H}^{n}(\Sigma)\leq\mathbf{L}(\Phi_{0},\operatorname{span}(\Phi_{0}))\)_,_
* \(\operatorname{index}(\Sigma)\leq k\)_,_
* \(\Sigma\cap\operatorname{span}(\Phi_{0})\neq\varnothing\)_._
Proof.: Let \(\{M_{i}\}_{i\in\mathbb{N}}\) be an exhaustion of \(M\) by nested precompact open subsets with smooth boundary satisfying \(\operatorname{span}(\Phi_{0})\subset M_{1}\).
We construct a modified metric \(\tilde{g}_{i}\) on \(M_{i}\) so that
* \(\tilde{g}_{i}\) agrees with \(g\) outside \(B_{1}(\partial M_{i})\setminus B_{1}(\operatorname{span}(\Phi_{0}))\),
* \(\tilde{g}_{i}\geq g\) in the sense of bilinear forms,
* \(\partial M_{i}\) has strictly mean convex boundary in \(\tilde{g}_{i}\).
Namely, using Fermi coordinates in a small tubular neighborhood of \(\partial M_{i}\), we can smoothly transition from the metric \(g\) to a metric on the cylinder \((0,1)\times\partial M_{i}\) that is pointwise larger than \(g\) and so that the boundary slice \(\{0\}\times\partial M_{i}\) is strictly mean convex.
Since \(\tilde{g}_{i}\geq g\), we have \(\mathbf{L}(\Phi_{0},M_{i},\tilde{g}_{i})\geq\mathbf{L}(\Phi_{0},M_{i},g)\geq \mathbf{L}(\Phi_{0},M,g)\). Moreover, since \(\tilde{g}_{i}=g\) on \(B_{1}(\operatorname{span}(\Phi_{0}))\), we have \(\mathbf{L}(\Phi_{0},M_{i},\tilde{g}_{i})\leq\mathbf{L}(\Phi_{0},\operatorname {span}(\Phi_{0}),\tilde{g}_{i})=\mathbf{L}(\Phi_{0},\operatorname{span}( \Phi_{0}),g)\).
By Theorem 3.4, there is a smooth closed embedded minimal hypersurface \(\Sigma_{i}\subset M_{i}\) (with respect to \(\tilde{g}_{i}\)) satisfying
* \(\mathcal{H}^{n}(\Sigma_{i})\leq\mathbf{L}(\Phi_{0},\operatorname{span}(\Phi_{0}),g)\),
* \(\operatorname{index}_{\tilde{g}_{i}}(\Sigma_{i})\leq k\),
* \(\Sigma_{i}\cap\operatorname{span}(\Phi_{0})\neq\varnothing\).
By Sharp's compactness theorem [15, Theorem 2.3]5, the sequence \(\{\Sigma_{i}\}\) converges locally smoothly away from at most \(k\) points to a smooth complete (not necessarily compact) embedded minimal hypersurface \(\Sigma\subset M\) satisfying the conclusions of the theorem.
Footnote 5: While the statement of [15, Theorem 2.3] concerns closed hypersurfaces in closed manifolds, the arguments are local and apply on compact subsets of complete manifolds.
### Comparison to [14]
We begin by observing that we use the Almgren-Pitts min-max framework as in [16] (for example), whereas [14] uses the constructions of [13]. Since the equivalence of these frameworks is beside the point, we make comparisons by analogy only.
Let \((M^{n+1},g)\) be a smooth complete Riemannian manifold, and let \(U\subset M\) be a bounded open set with smooth boundary.
In [14], the _width_ of \(U\), denoted \(W(U)\), is defined to be the min-max width over families of hypersurfaces \(\{\Sigma_{t}=\partial\Omega_{t}\}_{t\in[0,1]}\) as in [13] so that \(U\cap\Omega_{0}=\varnothing\) and \(U\subset\Omega_{1}\). The _relative width_ of \(U\), denoted \(W_{\partial}(U)\), is defined to be the min-max width over families as above that are furthermore nested, and where we only compute the area of each hypersurface inside \(U\). Ultimately, [14, Theorem 8.2] shows that for any \(\delta>0\), there is a smooth complete (not necessarily compact) embedded minimal hypersurface of area at most \(W_{\partial}(U)+\mathcal{H}^{n}(\partial U)\) intersecting the \(\delta\)-neighborhood of \(U\) so long as
\[W_{\partial}(U)\geq 10\mathcal{H}^{n}(\partial U).\]
In our framework, let \(\{\Sigma_{t}=\partial\Omega_{t}\}\) be a nested family of hypersurfaces satisfying \(\Sigma_{0}\sqcup\Sigma_{1}=\partial U\). We can view this family as a map \(\Phi_{0}:[0,1]\to\mathcal{C}(M)\), which satisfies \(\operatorname{span}(\Phi_{0})=\overline{U}\). We show that there is a smooth complete (not necessarily compact) embedded minimal hypersurface of area at most \(\mathbf{L}(\Phi_{0},\overline{U})\) and index at most \(1\) intersecting \(\overline{U}\) so long as
\[\mathbf{L}(\Phi_{0},M)>\max\{\mathcal{H}^{n}(\Sigma_{0}),\mathcal{H}^{n}( \Sigma_{1})\}.\]
We draw the analogy between the two frameworks.
* \(\mathbf{L}(\Phi_{0},M)\) is analogous to \(W(U)\), and \(W(U)\geq W_{\partial}(U)\).
* \(\mathbf{L}(\Phi_{0},\overline{U})\) is analogous to a quantity bounded from above by \(W_{\partial}(U)+\mathcal{H}^{n}(\partial U)\).
* \(\max\{\mathcal{H}^{n}(\Sigma_{0}),\mathcal{H}^{n}(\Sigma_{1})\}\leq\mathcal{ H}^{n}(\partial U)\).
Hence, our result has weaker hypotheses and stronger conclusions than [14].
**Remark 4.2**.: The result of [14] applies when the dimension is \(n+1\geq 3\), with the usual optimal non-smooth regularity when \(n+1>7\). We emphasize that our proof only works when \(3\leq n+1\leq 7\), which are the dimensions where [14] produces a _smooth_ minimal hypersurface.
## 5. Existence in Balanced Manifolds
Let \((M^{n+1},g)\) be a smooth complete non-compact Riemannian manifold of dimension \(3\leq n+1\leq 7\).
Let \(K\subset M\) be a compact set with smooth compact boundary. Let \(E\subset M\setminus K\) be the disjoint union of some of the components of \(M\setminus K\) (where we allow \(E=\varnothing\) and \(E=M\setminus K\)). We let \(F:=(M\setminus K)\setminus E\).
Let \(\alpha\in H_{n}(M,\mathbb{Z}/2\mathbb{Z})\) be the homology class of \(\partial E\) in \(M\) (which is the same as the homology class of \(\partial F\) in \(M\)). Let \(\alpha_{E}\in H_{n}(E,\mathbb{Z}/2\mathbb{Z})\) be the homology class of \(\partial E\) in \(E\). Let \(\alpha_{F}\in H_{n}(F,\mathbb{Z}/2\mathbb{Z})\) be the homology class of \(\partial F\) in \(F\).
Fix some \(x_{0}\in K\).
We define
\[\mathcal{A}_{E} :=\lim_{r\to\infty}\inf\{\mathbb{M}(T)\mid T\in\alpha_{E}\text{ and }\operatorname{spt}(T)\subset E\setminus B_{r}(x_{0})\},\] \[\mathcal{A}_{F} :=\lim_{r\to\infty}\inf\{\mathbb{M}(T)\mid T\in\alpha_{F}\text{ and }\operatorname{spt}(T)\subset F\setminus B_{r}(x_{0})\}.\]
For convenience, we let \(\mathcal{A}_{E}\) (resp. \(\mathcal{A}_{F}\)) equal \(0\) if \(E\) (resp. \(F\)) is bounded.
We say \((M,g)\) is _balanced_ if there is a choice of \(K\), \(x_{0}\), \(E\), and \(F\) so that \(\mathcal{A}_{E}=\mathcal{A}_{F}\).
**Theorem 5.1**.: _Every complete balanced \((M^{n+1},g)\) of dimension \(3\leq n+1\leq 7\) admits a smooth complete (not necessarily compact) embedded minimal hypersurface of finite area and index at most 1._
**Remark 5.2**.: If \((M,g)\) is furthermore _thick at infinity_, meaning that any connected finite area complete minimal hypersurface in \((M,g)\) is closed (see [10]), then the minimal embedding is closed.
**Remark 5.3**.: We emphasize that Theorem 5.1 is completely new when \(\mathcal{A}_{E}=\mathcal{A}_{F}>0\), as the localization results of [10] and [10] are insufficient to handle these cases.
### Examples
We present two simple classes of manifolds that satisfy the hypotheses of Theorem 5.1.
**Example 5.4** (Finite Volume Manifolds).: Suppose \((M^{n+1},g)\) has finite volume. We can define \(K=\overline{B}_{2\varepsilon}(x)\setminus B_{\varepsilon}(x)\) for some \(x\in M\) and \(\varepsilon<\operatorname{inj}(M,x)\), and let \(E=B_{\varepsilon}(x)\) and \(F=M\setminus\overline{B}_{2\varepsilon}(x)\). Since \(E\) is bounded, we have \(\mathcal{A}_{E}=0\). Since \(M\) has finite volume, \(\mathcal{A}_{F}=0\).
In fact, the same argument applies to any manifold admitting an exhaustion by compact sets with smooth boundaries whose areas tend to zero6, as in [10].
Footnote 6: In fact, the proof in this case follows immediately from the relative isoperimetric inequality in a compact subset with smooth boundary and Theorem 1.1.
**Example 5.5** (Balanced Asymptotically Cylindrical Manifolds).: Let \((M^{n+1},g)\) be _asymptotically cylindrical_. Namely, there is a compact subset \(K\subset M\), a smooth closed (not necessarily connected) Riemannian manifold \((N^{n},h)\), and a diffeomorphism \(\phi:\mathbb{R}_{+}\times N\to M\setminus K\) so that
\[\limsup_{r\to\infty}\|(dt^{2}\oplus h)-\phi^{*}g\|_{C^{\infty}((r,\infty)\times N )}=0.\]
These manifolds generalize the manifolds with cylindrical ends considered by [10], although they need not be rigid on any finite scale.
If there is a decomposition of \(N=R\sqcup S\) so that each of \(R\) and \(S\) is the disjoint union of components of \(N\) and \(\mathcal{H}^{n}(R)=\mathcal{H}^{n}(S)=\frac{1}{2}\mathcal{H}^{n}(N)\), then \((M,g)\) is balanced and Theorem 5.1 applies.
**Example 5.6** (Unbalanced).: The balanced condition in Theorem 5.1 is necessary, as demonstrated by the following example. Let \((\mathbb{R}\times S^{n},g)\) be a warped product metric given by
\[g=dt^{2}\oplus f(t)g_{S^{n}},\]
where \(g_{S^{n}}\) is the round metric on \(S^{n}\) and \(f:\mathbb{R}\to(1/2,2)\) is a strictly decreasing smooth function. Then \((M,g)\) has a strictly mean convex foliation and bounded geometry, so
\((M,g)\) does not admit any finite area embedded minimal hypersurfaces. Importantly, since \(\lim_{t\to-\infty}f(t)>\lim_{t\to\infty}f(t)\), \((M,g)\) is not balanced.
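To see why the slices obstruct minimal hypersurfaces, here is a short computation (a sketch; sign conventions may differ by a factor). The slice \(\Sigma_{t}=\{t\}\times S^{n}\) has induced metric \(f(t)g_{S^{n}}\) and, with respect to the unit normal \(\partial_{t}\), second fundamental form \(\frac{1}{2}f^{\prime}(t)g_{S^{n}}\), so its mean curvature is
\[H(\Sigma_{t})=\big(f(t)g_{S^{n}}\big)^{ij}\cdot\tfrac{1}{2}f^{\prime}(t)\,(g_{S^{n}})_{ij}=\frac{n\,f^{\prime}(t)}{2f(t)}.\]
Since \(f^{\prime}<0\) everywhere, \(H(\Sigma_{t})\) never vanishes, so the slices \(\{\Sigma_{t}\}_{t\in\mathbb{R}}\) form the strictly mean convex foliation invoked above.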
### Proof of Theorem 5.1
We define two geometric invariants.
**Definition 5.7**.: The _area_ of the homology class \(\alpha\in H_{n}(M,\mathbb{Z}/2\mathbb{Z})\) is
\[\operatorname{area}(\alpha):=\inf_{T\in\alpha}\mathbb{M}(T).\]
**Definition 5.8**.: Let
\[\mathcal{A}_{E}^{r}:=\inf\{\mathbb{M}(T)\mid T\in\alpha_{E}\text{ and } \operatorname{spt}(T)\subset E\setminus B_{r}(x_{0})\},\]
\[\mathcal{A}_{F}^{r}:=\inf\{\mathbb{M}(T)\mid T\in\alpha_{F}\text{ and } \operatorname{spt}(T)\subset F\setminus B_{r}(x_{0})\}.\]
Let \(\mathcal{S}_{r}\) denote the set of \(\mathbf{F}\)-continuous maps \(\Phi:[0,1]\to\mathcal{C}(M)\) so that7
Footnote 7: In the case that (without loss of generality) \(E\) is bounded, we instead require that \(\partial\Phi(0)\subset E\) and satisfies \(\mathcal{H}^{n}(\partial\Phi(0))\leq 1/r\).
* \(\partial\Phi(t)\in\alpha\) for all \(t\in[0,1]\),
* \(\partial\Phi(0)\subset E\setminus B_{r}(x_{0})\) and satisfies \(\mathcal{H}^{n}(\partial\Phi(0))\leq\mathcal{A}_{E}^{r}+\frac{1}{r}\),
* \(\partial\Phi(1)\subset F\setminus B_{r}(x_{0})\) and satisfies \(\mathcal{H}^{n}(\partial\Phi(1))\leq\mathcal{A}_{F}^{r}+\frac{1}{r}\).
The _width_ of the homology class \(\alpha\in H_{n}(M,\mathbb{Z}/2\mathbb{Z})\) is
\[\operatorname{width}(\alpha):=\limsup_{r\to\infty}\sup_{\Phi\in\mathcal{S}_{r }}\mathbf{L}(\Phi,M).\]
Note that we always have \(\operatorname{area}(\alpha)\leq\operatorname{width}(\alpha)\).
**Lemma 5.9**.: _If \(\operatorname{area}(\alpha)<\min\{\mathcal{A}_{E},\mathcal{A}_{F}\}\), then there is a smooth complete (not necessarily compact) embedded minimal hypersurface \(\Sigma\subset M\) satisfying_
* \(\mathcal{H}^{n}(\Sigma)\leq\operatorname{area}(\alpha)\)_,_
* \(\operatorname{index}(\Sigma)=0\)_._
Proof.: Let \(\{\Omega_{i}\}_{i\in\mathbb{N}}\) be an exhaustion of \(M\) by precompact connected nested open sets with smooth boundary. We suppose without loss of generality that \(K\subset\Omega_{i}\) for all \(i\).
We define
\[\operatorname{area}_{i}(\alpha):=\inf_{T\in\alpha|\operatorname{spt}(T)\subset \overline{\Omega}_{i}}\mathbb{M}(T).\]
Since the sets \(\Omega_{i}\) are nested, we have \(\operatorname{area}_{i}(\alpha)\geq\operatorname{area}_{j}(\alpha)\) if \(i<j\). Since \(\Omega_{i}\) is an exhaustion of \(M\) and each \(T\in\alpha\) has compact support, we have
\[\lim_{i\to\infty}\operatorname{area}_{i}(\alpha)=\operatorname{area}(\alpha).\]
By the standard compactness theory for currents (see [20, Theorem 6.8.2]), there is a current \(T_{i}^{*}\in\alpha\) satisfying \(\operatorname{spt}(T_{i}^{*})\subset\overline{\Omega}_{i}\) and \(\mathbb{M}(T_{i}^{*})=\operatorname{area}_{i}(\alpha)\). Since \(T_{i}^{*}\) is area minimizing in \(\Omega_{i}\), \(T_{i}^{*}\llcorner\Omega_{i}\) is the current of a smooth stable embedded minimal hypersurface with integer multiplicity (see [20, Theorem 7.5.8]).
Since \(\min\{\mathcal{A}_{E},\mathcal{A}_{F}\}>\operatorname{area}(\alpha)\), there is a compact set \(C\subset M\) so that \(\operatorname{spt}(T_{i}^{*})\cap C\neq\varnothing\) for all \(i\) sufficiently large.
By the curvature estimates for stable minimal immersions [16, Corollary 1], there is a subsequence of \(\{T_{i}^{*}\}_{i\in\mathbb{N}}\) that converges locally smoothly to a nontrivial smooth complete (not necessarily closed) stable embedded minimal hypersurface \(\Sigma\subset M\) satisfying \(\mathcal{H}^{n}(\Sigma)\leq\operatorname{area}(\alpha)\).
**Lemma 5.10**.: _If \(\operatorname{width}(\alpha)>\max\{\mathcal{A}_{E},\mathcal{A}_{F}\}\), then there is a smooth complete (not necessarily compact) embedded minimal hypersurface \(\Sigma\subset M\) satisfying_
* \(\mathcal{H}^{n}(\Sigma)<\infty\)_,_
* \(\operatorname{index}(\Sigma)\leq 1\)_._
Proof.: By assumption, there is an \(\mathbf{F}\)-continuous map \(\Phi:[0,1]\to\mathcal{C}(M)\) satisfying
\[\operatorname{width}(\alpha)\geq\mathbf{L}(\Phi,M)>\max\{\mathcal{H}^{n}( \partial\Phi(0)),\mathcal{H}^{n}(\partial\Phi(1))\}.\]
Hence, Theorem 4.1 applies, and the conclusion follows.
**Lemma 5.11**.: _If \(\operatorname{area}(\alpha)=\operatorname{width}(\alpha)\), then for every \(x\in M\) there is a smooth complete (not necessarily compact) embedded minimal hypersurface \(\Sigma_{x}\subset M\) satisfying_
* \(\mathcal{H}^{n}(\Sigma_{x})\leq\operatorname{area}(\alpha)\)_,_
* \(\operatorname{index}(\Sigma_{x})=0\)_,_
* \(x\in\Sigma_{x}\)_._
_In particular, there are infinitely many such minimal embeddings._
Proof.: Fix \(x\in M\). Fix \(D\subset M\) a compact set with smooth boundary whose interior contains \(x\). By the relative isoperimetric inequality, there is a constant \(c>0\) so that for any \(\Omega\in\mathcal{C}(M)\) satisfying \(\mathcal{H}^{n+1}(\Omega\cap D)=\frac{1}{2}\mathcal{H}^{n+1}(D)\), we have \(\mathcal{H}^{n}(\partial\Omega\cap D)\geq c\).
Let \(r_{j}\to\infty\), and let \(\Phi^{j}\in\mathcal{S}_{r_{j}}\). Let \(\{\Phi^{j}_{i}\}_{i\in\mathbb{N}}\in\Pi(\Phi^{j},M)\) so that
\[\limsup_{i\to\infty}\sup_{x\in X}\mathcal{H}^{n}(\partial\Phi^{j}_{i}(x))= \mathbf{L}(\Phi^{j},M),\]
which exists by taking a diagonal sequence. Since \(D\subset B_{r_{j}}(x_{0})\) for all \(j\) (without loss of generality), there is some \(t^{j}_{i}\in[0,1]\) so that \(\mathcal{H}^{n+1}(\Phi^{j}_{i}(t^{j}_{i})\cap D)=\frac{1}{2}\mathcal{H}^{n+1}(D)\). Moreover, by the assumption that \(\operatorname{width}(\alpha)=\operatorname{area}(\alpha)\), there is a diagonal sequence so that \(\Sigma_{j}:=\partial\Phi^{j}_{i_{j}}(t^{j}_{i_{j}})\) is a minimizing sequence for area in \(\alpha\) and satisfies \(\mathcal{H}^{n}(\Sigma_{j}\cap D)\geq c\) for all \(j\). Hence, there is a smooth complete (not necessarily compact) embedded stable minimal hypersurface \(\Sigma\subset M\) satisfying \(\mathcal{H}^{n}(\Sigma)\leq\operatorname{area}(\alpha)\) and \(\Sigma\cap D\neq\varnothing\).
By taking the set \(D\) smaller, the compactness theory for stable minimal hypersurfaces with uniform area bounds (see [13, Corollary 1]) produces the desired minimal hypersurface.
Since \(\mathcal{A}_{E}=\mathcal{A}_{F}\) for balanced manifolds, either Lemma 5.11 or one of Lemma 5.9 or 5.10 applies, so Theorem 5.1 follows.
|
2304.05900 | Managing Portfolio for Maximizing Alpha and Minimizing Beta | Portfolio management is an essential component of investment strategy that
aims to maximize returns while minimizing risk. This paper explores several
portfolio management strategies, including asset allocation, diversification,
active management, and risk management, and their importance in optimizing
portfolio performance. These strategies are examined individually and in
combination to demonstrate how they can help investors maximize alpha and
minimize beta. Asset allocation is the process of dividing a portfolio among
different asset classes to achieve the desired level of risk and return.
Diversification involves spreading investments across different securities and
sectors to minimize the impact of individual security or sector-specific risks.
Active management involves security selection and risk management techniques to
generate excess returns while minimizing losses. Risk management strategies,
such as stop-loss orders and options strategies, aim to minimize losses in
adverse market conditions. The importance of combining these strategies for
optimizing portfolio performance is emphasized in this paper. The proper
implementation of these strategies can help investors achieve their investment
goals over the long-term, while minimizing exposure to risks. A call to action
for investors to utilize portfolio management strategies to maximize alpha and
minimize beta is also provided. | Soumyadip Sarkar | 2023-04-01T06:54:13Z | http://arxiv.org/abs/2304.05900v1 | # Managing Portfolio for Maximizing Alpha and Minimizing Beta
###### Abstract
Portfolio management is an essential component of investment strategy that aims to maximize returns while minimizing risk. This paper explores several portfolio management strategies, including asset allocation, diversification, active management, and risk management, and their importance in optimizing portfolio performance. These strategies are examined individually and in combination to demonstrate how they can help investors maximize alpha and minimize beta. Asset allocation is the process of dividing a portfolio among different asset classes to achieve the desired level of risk and return. Diversification involves spreading investments across different securities and sectors to minimize the impact of individual security or sector-specific risks. Active management involves security selection and risk management techniques to generate excess returns while minimizing losses. Risk management strategies, such as stop-loss orders and options strategies, aim to minimize losses in adverse market conditions. The importance of combining these strategies for optimizing portfolio performance is emphasized in this paper. The proper implementation of these strategies can help investors achieve their investment goals over the long-term, while minimizing exposure to risks. A call to action for investors to utilize portfolio management strategies to maximize alpha and minimize beta is also provided.
Portfolio Management, Asset Allocation, Diversification, Active Management, Risk Management
## I Introduction
Portfolio management is the process of selecting and managing a portfolio of investments to achieve specific investment objectives. It involves constructing a portfolio of assets that align with an investor's goals, risk tolerance, and investment horizon. Portfolio management includes asset allocation, diversification, active management, and risk management techniques to maximize portfolio returns while minimizing risk. The primary goal of portfolio management is to generate optimal returns while reducing the impact of market fluctuations and minimizing the risk of loss.
Maximizing alpha and minimizing beta are important concepts in portfolio management because they provide investors with a measure of the performance of their portfolio compared to a benchmark, typically an index of the broader market. Alpha represents the excess returns generated by a portfolio relative to the benchmark, while beta represents the degree to which the portfolio moves in line with the market.
Maximizing alpha is important because it reflects the portfolio's ability to generate excess returns that are not explained by the overall performance of the market. This is achieved by selecting securities or using investment strategies that outperform the benchmark. By generating excess returns, investors can increase their portfolio's overall returns and potentially outperform the market over time.
Minimizing beta is important because it reflects the portfolio's ability to reduce the impact of market fluctuations and reduce overall risk. By minimizing beta, investors can reduce the impact of market downturns and potentially experience less volatility in their portfolio's returns.
In combination, maximizing alpha and minimizing beta can help investors achieve their investment objectives, whether that be long-term capital appreciation or generating income. By achieving excess returns while minimizing risk, investors can increase their chances of meeting their goals while reducing the impact of market fluctuations. Overall, maximizing alpha and minimizing beta are important concepts in portfolio management that can help investors achieve optimal returns while managing risk.
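As a concrete illustration, alpha and beta can be estimated from historical return series via a least squares fit of the CAPM relation; the following Python sketch uses made-up monthly returns and is one common formulation, not the only one:

```python
import numpy as np

def alpha_beta(portfolio_returns, benchmark_returns, risk_free=0.0):
    """Estimate CAPM alpha and beta by least squares on excess returns:
    beta = Cov(rp, rb) / Var(rb), alpha = mean(rp) - beta * mean(rb)."""
    rp = np.asarray(portfolio_returns) - risk_free
    rb = np.asarray(benchmark_returns) - risk_free
    beta = np.cov(rp, rb, ddof=1)[0, 1] / np.var(rb, ddof=1)
    alpha = rp.mean() - beta * rb.mean()
    return alpha, beta

rp = [0.02, -0.01, 0.03, 0.015, -0.005]   # portfolio monthly returns
rb = [0.015, -0.02, 0.025, 0.01, 0.0]     # benchmark monthly returns
print(alpha_beta(rp, rb, risk_free=0.001))
```

A positive alpha indicates excess return beyond what the portfolio's beta exposure to the benchmark explains, while a beta below 1 indicates less sensitivity to market moves than the benchmark.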
The paper examines several strategies for portfolio management that can be used to optimize alpha and minimize beta. These strategies include:
#### I-1 Asset allocation
Asset allocation is the process of dividing a portfolio among different asset classes such as stocks, bonds, and cash. By diversifying across different asset classes, investors can reduce their overall risk while potentially increasing returns.
#### I-2 Diversification
Diversification involves investing in a variety of assets within each asset class to reduce the impact of individual security or sector-specific risks. Diversification helps to minimize the overall risk of the portfolio.
#### I-3 Active management
Active management involves selecting individual securities or using investment strategies to try to generate excess returns compared to a benchmark. Active management strategies can include fundamental analysis, technical analysis, and quantitative analysis.
#### I-4 Risk management
Risk management involves using strategies to minimize the risk of loss in the portfolio. This may include techniques such as stop-loss orders or options strategies.
The paper examines each of these strategies in-depth, highlighting their importance and contribution towards achieving the objective of maximizing alpha and minimizing beta. Additionally, the paper explores how these strategies can be combined to construct portfolios that generate excess returns while reducing market risk. The study provides valuable insights to investors and portfolio managers seeking
to enhance portfolio performance and achieve their investment objectives.
## II Asset Allocation
Asset allocation is the process of dividing an investment portfolio among different asset classes such as stocks, bonds, and cash. The purpose of asset allocation is to spread an investor's money across different asset classes that have different levels of risk and return, in order to achieve a desired level of diversification and balance risk and reward.
The goal of asset allocation is to optimize the risk and return profile of the portfolio, based on the investor's goals, risk tolerance, and investment horizon. By allocating assets across different asset classes, investors can potentially earn higher returns while reducing overall risk.
The allocation of assets is typically based on the investor's goals, investment horizon, and risk tolerance. Investors with a longer investment horizon and higher risk tolerance may allocate more of their portfolio to stocks, which historically have higher returns but also higher volatility. Investors with a shorter investment horizon or lower risk tolerance may allocate more of their portfolio to fixed-income securities, which offer lower returns but also lower volatility.
Asset allocation requires ongoing monitoring and adjustment, as market conditions and the investor's circumstances change. Investors should periodically review their portfolio and make adjustments to their asset allocation as needed to ensure that it remains aligned with their goals and risk tolerance.
### _Strategic Asset Allocation_
Strategic asset allocation is a long-term investment strategy that involves allocating a portfolio across different asset classes based on the investor's goals, risk tolerance, and investment horizon. This strategy is based on the belief that the performance of different asset classes varies over time and that by diversifying across asset classes, investors can achieve optimal returns while reducing overall risk.
Strategic asset allocation involves setting target allocations for different asset classes and periodically rebalancing the portfolio to maintain those targets. The target allocation is typically based on the investor's goals and risk tolerance, as well as historical performance data for different asset classes.
For example, a strategic asset allocation plan might target a 60% allocation to stocks, 30% allocation to bonds, and 10% allocation to cash. If the stock market performs well and the value of the stock holdings in the portfolio increases, the portfolio may become overweighted in stocks, and the investor may need to rebalance the portfolio by selling some of the stocks and buying more bonds or cash to bring the portfolio back to its target allocation.
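A minimal sketch of this rebalancing arithmetic (illustrative numbers):

```python
def rebalance_trades(values, targets):
    """Dollar trades (positive = buy, negative = sell) that restore
    target weights given current per-asset-class values."""
    total = sum(values.values())
    return {k: targets[k] * total - values[k] for k in values}

# A 60/30/10 plan after a stock rally left the portfolio overweighted
values = {"stocks": 70_000, "bonds": 25_000, "cash": 5_000}
targets = {"stocks": 0.60, "bonds": 0.30, "cash": 0.10}
print(rebalance_trades(values, targets))
# sell ~10,000 of stocks; buy ~5,000 each of bonds and cash
```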
Strategic asset allocation is a passive investment strategy that does not involve active management of individual securities. Instead, it focuses on diversification across different asset classes and maintaining a long-term investment horizon. This strategy may not provide the highest returns in the short-term but may provide a more stable and predictable return over the long-term.
Overall, strategic asset allocation is an important strategy in portfolio management, as it can help investors achieve their long-term investment objectives by balancing risk and return across different asset classes.
### _Tactical Asset Allocation_
Tactical asset allocation is an active investment strategy that involves adjusting a portfolio's allocation to different asset classes based on short-term market conditions and economic forecasts. This strategy seeks to take advantage of opportunities to generate excess returns by temporarily shifting investments to asset classes that are expected to outperform or reducing exposure to asset classes that are expected to underperform.
Unlike strategic asset allocation, which is a long-term investment strategy, tactical asset allocation is focused on short-term adjustments to the portfolio's allocation. This strategy requires active management and monitoring of the portfolio and market conditions.
For example, if an investment manager believes that the stock market is overvalued and likely to decline, they may reduce the portfolio's allocation to stocks and increase its allocation to bonds or cash. Conversely, if the manager believes that the stock market is undervalued and likely to rise, they may increase the portfolio's allocation to stocks.
Tactical asset allocation is based on the belief that market conditions and economic forecasts can provide valuable information for generating excess returns. However, this strategy requires a high level of skill and expertise, as it is difficult to predict short-term market movements accurately. Additionally, tactical asset allocation may lead to higher trading costs and tax implications.
Overall, tactical asset allocation can be an effective strategy for generating excess returns in the short-term, but it should be used in combination with a long-term strategic asset allocation plan to balance risk and return over the long-term.
Asset allocation is a critical component of portfolio management because it can help investors optimize portfolio returns while reducing overall risk. By diversifying a portfolio across different asset classes, investors can potentially achieve a higher return while reducing the risk of losing money.
#### II-B1 Optimizing portfolio returns
Different asset classes have varying levels of risk and potential returns. By allocating assets across different asset classes, investors can potentially earn higher returns while diversifying their portfolio's risk. For example, stocks have historically provided higher returns than bonds, but also come with higher risk. By allocating a portion of a portfolio to stocks and a portion to bonds, an investor can potentially achieve higher returns than they would by investing solely in one asset class.
#### II-B2 Reducing overall risk
Diversification through asset allocation can also help reduce the overall risk of a portfolio. When one asset class underperforms, the losses can
potentially be offset by gains in other asset classes. This can help mitigate the impact of market volatility and reduce the risk of a significant loss.
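A minimal numerical sketch ties these two points together: the blended portfolio's expected return is the weighted average of its parts, while its volatility is below the weighted average whenever the assets are imperfectly correlated. All return, volatility, and correlation figures below are illustrative assumptions, not historical estimates.

```python
# Minimal sketch: expected return and volatility of a 60/40 stock/bond blend.
import math

w_s, w_b = 0.6, 0.4
r_s, r_b = 0.08, 0.03        # assumed expected annual returns
s_s, s_b = 0.18, 0.06        # assumed annual volatilities
rho = 0.2                    # assumed stock/bond correlation

blend_return = w_s * r_s + w_b * r_b
blend_vol = math.sqrt((w_s * s_s) ** 2 + (w_b * s_b) ** 2
                      + 2 * w_s * w_b * s_s * s_b * rho)
print(f"expected return: {blend_return:.1%}, volatility: {blend_vol:.1%}")
# ~6.0% return with ~11.5% volatility, versus the 13.2% weighted-average
# volatility -- the gap is the diversification benefit described above.
```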
Asset allocation can also help investors align their portfolio with their goals and risk tolerance. Investors with a longer investment horizon and higher risk tolerance may allocate more of their portfolio to stocks, which historically have higher returns but also higher volatility. Investors with a shorter investment horizon or lower risk tolerance may allocate more of their portfolio to fixed-income securities, which offer lower returns but also lower volatility.
Overall, asset allocation is a critical strategy in portfolio management as it can help investors optimize returns while reducing overall risk. By diversifying a portfolio across different asset classes, investors can potentially achieve their investment objectives while mitigating the impact of market volatility.
## III Diversification
Diversification is an investment strategy that involves spreading an investor's portfolio across different asset classes, sectors, industries, and geographic regions to reduce overall risk. The objective of diversification is to minimize the impact of any individual investment's performance on the portfolio's overall return.
The basic idea behind diversification is that different types of investments perform differently under different market conditions. By diversifying across different types of investments, an investor can potentially offset losses in one area with gains in another. For example, if a portfolio is heavily invested in one industry, such as technology, and that industry experiences a downturn, the portfolio's overall performance will be negatively impacted. However, if the portfolio is diversified across different sectors and industries, the losses from the technology sector may be offset by gains in other areas.
Diversification can be achieved through asset allocation, as discussed earlier, or through individual security selection. For example, an investor may choose to invest in stocks across different industries, or in bonds with different maturities or credit ratings.
Diversification is not a guarantee against losses, but it can help reduce the risk of large losses in a portfolio. It is important to note that over-diversification can also have negative consequences, such as lower potential returns or higher transaction costs. Finding the right balance between diversification and concentration is key.
### _Asset Class Diversification_
Asset class diversification is an investment strategy that involves diversifying a portfolio across different types of asset classes. Asset classes are broad categories of investments that have similar characteristics and behavior. The most common asset classes are stocks, bonds, and cash equivalents.
Asset class diversification seeks to reduce the risk of loss by spreading investments across different asset classes. The basic idea behind this strategy is that different asset classes tend to perform differently under different market conditions. For example, when stock prices are falling, bond prices may rise. By holding a mix of stocks, bonds, and cash equivalents, an investor can potentially reduce the overall risk of their portfolio.
There are several different ways to achieve asset class diversification. One approach is to use a strategic asset allocation model that establishes target percentages for each asset class based on an investor's goals and risk tolerance. For example, an investor with a longer investment horizon and higher risk tolerance may allocate more of their portfolio to stocks, while an investor with a shorter investment horizon or lower risk tolerance may allocate more of their portfolio to fixed-income securities.
Another approach to asset class diversification is to use mutual funds or exchange-traded funds (ETFs) that invest in a diversified mix of asset classes. For example, a target-date fund may hold a mix of stocks, bonds, and cash equivalents that is appropriate for an investor with a specific retirement date in mind.
Asset class diversification can help reduce the risk of loss in a portfolio, but it is important to note that it is not a guarantee against losses. Market conditions can change quickly, and no investment strategy is foolproof. Nonetheless, asset class diversification is a sound investment strategy that can help investors achieve their goals while managing risk.
### _Sector Diversification_
Sector diversification is an investment strategy that involves diversifying a portfolio across different sectors or industries. Sectors are groups of companies that operate in similar industries, such as technology, healthcare, energy, or consumer goods.
Sector diversification seeks to reduce risk by spreading investments across different sectors. The basic idea behind this strategy is that different sectors tend to perform differently under different market conditions. For example, the technology sector may perform well during a period of economic growth, while the healthcare sector may perform well during a recession. By holding a mix of sectors, an investor can potentially reduce the impact of market volatility on their portfolio.
There are several different ways to achieve sector diversification. One approach is to use a tactical asset allocation model that adjusts the percentage of the portfolio allocated to each sector based on the current market conditions. For example, if the technology sector is performing well, an investor may increase their allocation to that sector, while decreasing their allocation to a sector that is underperforming.
Another approach to sector diversification is to use mutual funds or exchange-traded funds (ETFs) that invest in a diversified mix of sectors. For example, a sector-specific ETF may invest in a mix of technology, healthcare, and consumer goods companies.
Sector diversification can help reduce the risk of loss in a portfolio, but it is important to note that it is not a guarantee against losses. Market conditions can change quickly, and
no investment strategy is foolproof. Nonetheless, sector diversification is a sound investment strategy that can help investors achieve their goals while managing risk.
Diversification is an important investment strategy because it can help minimize the impact of individual security or sector-specific risks on a portfolio. Individual security risks refer to risks that are specific to a particular company, such as the risk of a company going bankrupt or facing legal challenges. Sector-specific risks refer to risks that are specific to a particular industry or sector, such as the risk of a recession affecting the healthcare sector.
By diversifying across different securities or sectors, investors can reduce their exposure to these risks. For example, if an investor holds a portfolio of only tech stocks, they are highly exposed to the risks specific to the technology sector. However, if they diversify their portfolio to include stocks from other sectors such as healthcare, energy, and consumer goods, they can reduce the impact of sector-specific risks.
Diversification can also help minimize the impact of individual security risks. If an investor holds a portfolio of only one or two stocks, they are highly exposed to the risks specific to those companies. However, if they diversify their portfolio to include a mix of stocks from different companies, they can reduce the impact of any one company's individual risks.
While diversification cannot eliminate all investment risks, it can help reduce the overall risk of a portfolio. By spreading investments across different securities, sectors, and asset classes, investors can potentially achieve higher returns and reduce their exposure to market volatility. Diversification is a key component of any well-designed investment portfolio and is essential for managing risk and achieving long-term financial goals.
## IV Active Management
Active management is an investment strategy in which a portfolio manager or team of managers seeks to outperform the overall market by actively selecting individual securities or adjusting the portfolio's asset allocation. In contrast to passive management, which seeks to replicate the performance of a market index, active management involves ongoing analysis and decision-making to try to beat the market.
Active managers use a variety of strategies to achieve their goal, such as fundamental analysis, technical analysis, quantitative analysis, and market timing. Fundamental analysis involves analyzing a company's financial statements and other data to identify undervalued or overvalued stocks. Technical analysis involves studying market trends and patterns to identify trading opportunities. Quantitative analysis involves using mathematical models to evaluate securities based on factors such as earnings growth, cash flow, and dividend yields. Market timing involves making investment decisions based on an analysis of current market conditions and economic trends.
Active management can be more expensive than passive management, as it requires more time and resources to conduct ongoing research and analysis. However, active managers believe that their expertise and insights can lead to higher returns than those achieved by passive investing.
While active management can potentially generate higher returns than passive management, it is also associated with higher risks. Active managers may make incorrect investment decisions or fail to anticipate changes in market conditions, resulting in lower returns or losses. Additionally, active management can be influenced by factors such as manager bias, market timing errors, and high fees.
### _Quantitative and Qualitative Analysis_
Quantitative and qualitative analysis are two methods of evaluating securities or investment opportunities.
Quantitative analysis involves using numerical and statistical data to evaluate the performance and characteristics of securities. This type of analysis is often used by investors who rely on mathematical models and algorithms to make investment decisions. Quantitative analysis can include metrics such as earnings growth, price-to-earnings ratios, dividend yields, and other financial data.
Qualitative analysis, on the other hand, involves evaluating the subjective, non-numerical aspects of an investment opportunity. This can include evaluating a company's management team, corporate culture, competitive advantages, and other intangible factors that can impact the success of an investment. Qualitative analysis often involves conducting interviews with company executives, industry experts, and other stakeholders to gather insights and perspectives.
Both quantitative and qualitative analysis have strengths and weaknesses. Quantitative analysis can provide precise, objective data that can be used to compare and rank different investment opportunities. However, it may not capture all of the factors that can impact a company's performance, such as changes in market conditions or industry trends. Qualitative analysis can provide a deeper understanding of a company's operations, culture, and competitive position. However, it may be subject to bias or personal opinions, and it can be difficult to quantify the impact of qualitative factors on investment performance.
To effectively evaluate investment opportunities, many investors use a combination of quantitative and qualitative analysis. By combining these two methods, investors can gain a more comprehensive understanding of the risks and opportunities associated with a particular investment.
### _Fundamental Analysis_
Fundamental analysis is a method of evaluating securities that involves analyzing a company's financial and economic fundamentals to determine its intrinsic value. This type of analysis seeks to identify the underlying factors that drive a company's performance, such as its revenue growth, profitability, cash flow, and competitive position within its industry.
The goal of fundamental analysis is to estimate the fair value of a security based on its underlying economic and financial factors. To conduct fundamental analysis, investors typically examine a company's financial statements, such as
its balance sheet, income statement, and cash flow statement. They also consider other factors such as management quality, brand strength, and industry trends.
Some of the key metrics that investors may use in fundamental analysis include earnings per share, price-to-earnings ratio, return on equity, and debt-to-equity ratio. These metrics can help investors determine whether a company is undervalued or overvalued relative to its peers and the overall market.
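As a sketch of how these metrics come out of raw financials, consider the following; all input figures are hypothetical.

```python
# Minimal sketch: deriving the metrics above from hypothetical financials.

def fundamental_metrics(price, shares, net_income, equity, debt):
    eps = net_income / shares                # earnings per share
    return {
        "EPS": eps,
        "P/E": price / eps,                  # price-to-earnings ratio
        "ROE": net_income / equity,          # return on equity
        "D/E": debt / equity,                # debt-to-equity ratio
    }

metrics = fundamental_metrics(price=50.0, shares=10e6,
                              net_income=25e6, equity=125e6, debt=50e6)
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")
# EPS: 2.50, P/E: 20.00, ROE: 0.20, D/E: 0.40
```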
Fundamental analysis is often used by value investors who seek to identify stocks that are trading at a discount to their intrinsic value. Value investors believe that over the long term, the market will eventually recognize the true value of a company, and its stock price will rise accordingly. Fundamental analysis can also be used to identify companies that are poised for long-term growth, based on their financial and competitive strengths.
However, fundamental analysis has some limitations. It may not capture all of the factors that can impact a company's performance, such as changes in market conditions or emerging competitive threats. Additionally, the valuation of a company can be subjective, and different analysts may arrive at different estimates of a company's intrinsic value.
Overall, fundamental analysis can be a useful tool for investors who are looking to identify undervalued or high-growth companies. However, it should be used in conjunction with other types of analysis, such as technical analysis and qualitative analysis, to gain a comprehensive understanding of an investment opportunity.
### _Technical Analysis_
Technical analysis is a method of evaluating securities that involves studying charts and other technical indicators to identify patterns and trends in a security's price and volume data. This type of analysis is based on the premise that historical price and volume data can provide insights into future market movements.
Technical analysts use a variety of tools and techniques to analyze market data, including trend lines, moving averages, momentum indicators, and chart patterns. They look for patterns in the data that suggest changes in market sentiment or supply and demand dynamics, and use these patterns to make investment decisions.
One of the key principles of technical analysis is that market trends tend to persist over time. This means that once a trend is established, it is more likely to continue than to reverse. Technical analysts use trend lines and moving averages to identify trends and to determine whether they are likely to continue or to reverse.
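A minimal sketch of the moving-average idea: when a short-window average sits above a long-window average, the recent trend is up. The prices are synthetic and the window lengths, while common, are arbitrary choices.

```python
# Minimal sketch: labeling a trend with a simple moving-average comparison.

def sma(prices, window):
    """Simple moving average over a sliding window."""
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

prices = [100 + 0.5 * i for i in range(60)]   # synthetic uptrend
fast, slow = sma(prices, 10), sma(prices, 30)

trend = "uptrend" if fast[-1] > slow[-1] else "downtrend"
print(f"fast SMA {fast[-1]:.2f} vs slow SMA {slow[-1]:.2f} -> {trend}")
```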
Another key principle of technical analysis is that market participants tend to behave in predictable ways. Technical analysts use indicators such as volume, open interest, and sentiment data to gauge the behavior of market participants and to identify potential turning points in the market.
Technical analysis can be used to analyze a wide range of securities, including stocks, bonds, commodities, and currencies. It is often used by short-term traders and day traders who seek to profit from short-term fluctuations in the market.
However, technical analysis has some limitations. It may not capture all of the factors that can impact a security's price, such as changes in market conditions or news events. Additionally, technical analysis can be subject to interpretation, and different analysts may arrive at different conclusions based on the same data.
Overall, technical analysis can be a useful tool for investors who are looking to profit from short-term market movements. However, it should be used in conjunction with other types of analysis, such as fundamental analysis and qualitative analysis, to gain a comprehensive understanding of an investment opportunity.
Active management refers to the process of actively managing a portfolio of securities in order to generate excess returns compared to a benchmark index or other passive investment strategy. This approach typically involves a combination of fundamental analysis, technical analysis, and other quantitative and qualitative methods to identify undervalued or high-growth securities.
One of the key benefits of active management is the potential to generate excess returns relative to the market. By actively managing a portfolio, an investor can take advantage of market inefficiencies and mispricings to generate returns that exceed those of a passive investment strategy.
Additionally, active management can help to minimize risk by allowing an investor to diversify across multiple asset classes and sectors. By carefully selecting securities based on their individual risk profiles and their correlation with other holdings in the portfolio, an active manager can construct a portfolio that is less volatile and more resilient to market downturns.
Another benefit of active management is the ability to adapt to changing market conditions. Unlike a passive investment strategy, which simply tracks a benchmark index, an active manager can adjust the portfolio holdings in response to changes in the market environment. This flexibility can help to mitigate downside risk and capture upside potential as market conditions evolve.
However, active management also has some drawbacks. It typically involves higher fees than passive investment strategies, which can erode returns over time. Additionally, active managers may underperform the market in certain market conditions or during periods of market volatility.
Overall, the importance of active management in generating excess returns while minimizing risk depends on the specific investment goals and risk tolerance of the investor. Active management can be a useful tool for investors who are willing to accept higher fees in exchange for the potential to outperform the market and to minimize risk through diversification and flexibility. However, it may not be appropriate for all investors, and it should be carefully considered in the context of an overall investment strategy.
## V Risk Management
Risk management is the process of identifying, assessing, and controlling risks that may negatively impact an investment portfolio or business operation. The goal of risk management is to minimize the likelihood and impact of potential risks while maximizing the opportunity for returns.
Risk management involves several steps. The first step is to identify potential risks that may affect the portfolio or business. This may include risks related to market conditions, economic factors, regulatory changes, geopolitical events, and other factors that may impact the performance of the investments or business operations.
Once potential risks have been identified, the next step is to assess the likelihood and impact of each risk. This involves analyzing the probability of the risk occurring and the potential magnitude of its impact on the portfolio or business.
Once the risks have been identified and assessed, risk management strategies can be implemented to control and mitigate the risks. This may involve diversifying the portfolio across different asset classes and sectors, hedging against specific risks through the use of derivatives, and implementing risk limits or stop-loss orders to limit potential losses.
Effective risk management also requires ongoing monitoring and evaluation of the portfolio or business operations to ensure that risks are being effectively managed and controlled. This may involve regular reviews of the portfolio holdings, monitoring of market conditions and economic indicators, and ongoing assessment of regulatory and geopolitical risks.
### _Stop-loss Orders_
Stop-loss orders are a type of risk management tool that investors can use to limit potential losses on an investment. A stop-loss order is an order placed with a broker or trading platform that automatically executes a trade to sell a security if its price falls below a certain level, known as the stop-loss price.
For example, if an investor purchases a stock at $50 per share and places a stop-loss order at $45 per share, the order will automatically execute if the stock price falls to $45 or below. This can help to limit potential losses by ensuring that the investor exits the position before the price falls further.
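The example above can be expressed as a small simulation; the price path is invented for illustration and, as noted below, a fast decline can fill slightly under the stop level.

```python
# Minimal sketch: a stop-loss exit on the $50 buy / $45 stop example above.

def apply_stop_loss(prices, stop_price):
    """Return the exit price: the first price at or below the stop,
    otherwise the last observed price (position still open)."""
    for price in prices:
        if price <= stop_price:
            return price      # the order triggers once the stop is touched
    return prices[-1]

path = [50.0, 48.5, 47.2, 44.8, 41.0]   # hypothetical declining market
exit_price = apply_stop_loss(path, stop_price=45.0)
print(f"sold at ${exit_price}, return {exit_price / 50.0 - 1:.1%}")
# sold at $44.8, return -10.4% -- slightly below the stop in a fast decline
```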
Stop-loss orders can be a useful tool for managing risk, particularly in volatile or uncertain market conditions. They can help to prevent emotional or impulsive decision-making by automatically executing a trade when a predetermined threshold is reached, rather than relying on the investor to make a decision in the moment.
However, it is important to note that stop-loss orders are not foolproof and can sometimes result in unexpected losses. For example, in a rapidly declining market, the stop-loss order may execute at a lower price than the investor intended, resulting in a larger loss than anticipated. Additionally, stop-loss orders can be triggered by short-term market fluctuations, which may not reflect the long-term value of the investment.
Overall, stop-loss orders can be a useful risk management tool when used appropriately in conjunction with other risk management strategies. Investors should carefully consider their investment goals and risk tolerance when deciding whether to use stop-loss orders, and should also regularly monitor and adjust their stop-loss orders as market conditions evolve.
### _Options Strategies_
Options strategies are investment strategies that involve the use of options contracts to achieve specific investment objectives. Options are financial derivatives that give the holder the right, but not the obligation, to buy or sell an underlying asset at a specific price and time.
There are several common options strategies that investors can use to manage risk and enhance returns. These include:
#### V-B1 Covered call strategy
This involves holding a long position in an asset and selling call options on that asset. If the price of the asset rises, the holder of the call option can exercise the option and buy the asset at the strike price, which limits the potential gains of the holder of the asset. This strategy can be used to generate additional income from a portfolio.
#### V-B2 Protective put strategy
This involves purchasing a put option on an asset to protect against potential losses in the event that the price of the asset declines. This strategy can help investors limit their downside risk while maintaining their upside potential.
#### V-B3 Long call strategy
This involves purchasing a call option on an asset, giving the holder the right to buy the asset at the strike price before the expiration date. This strategy can be used to profit from a price increase in the underlying asset.
#### V-B4 Long put strategy
This involves purchasing a put option on an asset, giving the holder the right to sell the asset at the strike price before the expiration date. This strategy can be used to profit from a price decrease in the underlying asset.
#### V-B5 Bull call spread strategy
This involves buying a call option at a lower strike price and selling a call option at a higher strike price. This strategy can be used to profit from a moderate rise in the price of the underlying asset while limiting both the cost of the position and the potential losses.
#### V-B6 Butterfly spread strategy
This involves buying one option at a lower strike price and one at a higher strike price, while selling two options at a strike price between them. This strategy can be used to profit from low volatility, when the price of the underlying asset stays near the middle strike price, while limiting potential losses.
#### V-B8 Iron condor strategy
This involves selling an out-of-the-money call spread and an out-of-the-money put spread on the same underlying asset. This strategy can be used to profit from a range-bound market, while limiting potential losses.
#### V-B9 Straddle strategy
This involves buying both a call option and a put option on an asset with the same strike price and expiration date. This strategy can be used when an investor
expects a significant price move in either direction, as it allows them to profit from a price increase or decrease.
#### V-B10 Strangle strategy
This involves buying both a call option and a put option on an asset with different strike prices, but with the same expiration date. This strategy can be used when an investor expects a significant price move in either direction, but is unsure which way the price will move.
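To make the payoff profiles of these strategies concrete, the following sketch evaluates two of them at expiration. Premiums are ignored for simplicity, and all strikes and prices are hypothetical.

```python
# Minimal sketch: expiration payoffs for a covered call and a protective put,
# ignoring the option premiums paid or received.

def call_payoff(spot, strike):
    return max(spot - strike, 0.0)

def put_payoff(spot, strike):
    return max(strike - spot, 0.0)

def covered_call(spot, cost_basis, strike):
    # long stock plus a call sold at `strike`: gains are capped above it
    return (spot - cost_basis) - call_payoff(spot, strike)

def protective_put(spot, cost_basis, strike):
    # long stock plus a put bought at `strike`: losses are floored below it
    return (spot - cost_basis) + put_payoff(spot, strike)

for spot in (80.0, 100.0, 120.0):
    print(f"spot {spot:>5}: covered call {covered_call(spot, 100, 110):+.0f}, "
          f"protective put {protective_put(spot, 100, 90):+.0f}")
# spot 80: covered call -20, protective put -10
# spot 120: covered call +10 (capped), protective put +20
```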
Overall, options strategies can be a useful tool for managing risk and enhancing returns, but they can also be complex and involve significant risk. Investors should carefully consider their investment goals, risk tolerance, and knowledge of options before using options strategies in their portfolio.
The importance of risk management in minimizing losses cannot be overstated. Risk management is the process of identifying, assessing, and controlling risks that can affect an investment portfolio. By implementing effective risk management strategies, investors can reduce the likelihood and impact of potential losses.
There are many types of risks that can affect an investment portfolio, including market risk, credit risk, liquidity risk, and operational risk. Market risk is the risk of losses due to fluctuations in the overall market, while credit risk is the risk of losses due to defaults or credit downgrades of individual securities. Liquidity risk is the risk of losses due to an inability to sell a security quickly and at a fair price, while operational risk is the risk of losses due to internal or external operational failures.
To minimize losses due to these and other risks, investors can use a variety of risk management strategies. Stop-loss orders, for example, can help limit potential losses by automatically selling a security when its price falls below a certain level. Options strategies, such as protective puts, can also be used to limit losses while maintaining upside potential. Diversification can also help to reduce the impact of individual security or sector-specific risks on a portfolio.
By taking a proactive approach to risk management, investors can minimize the impact of potential losses on their portfolios, which can help to protect their long-term investment goals. It's important to remember, however, that risk cannot be eliminated entirely and that all investments carry some level of risk. Therefore, it's essential to carefully assess and manage risk to ensure that it aligns with an investor's individual investment objectives and risk tolerance.
## VI Conclusion
In this paper, we examined several strategies for portfolio management with the goal of maximizing alpha and minimizing beta. These strategies included:
#### VI-1 Asset Allocation
The process of dividing an investment portfolio among different asset classes, such as stocks, bonds, and cash, to achieve a specific investment objective.
#### VI-2 Strategic Asset Allocation
A long-term investment strategy that involves establishing a target mix of assets and periodically rebalancing the portfolio to maintain that mix.
#### VI-3 Tactical Asset Allocation
A short-term investment strategy that involves adjusting the portfolio's asset allocation to take advantage of market opportunities or to manage risk.
#### VI-4 Diversification
The process of spreading investments across different securities or asset classes to reduce the impact of individual security or sector-specific risks.
#### VI-5 Active Management
The process of actively managing a portfolio to generate excess returns while minimizing risk.
#### VI-6 Fundamental Analysis
The process of evaluating securities based on their financial and economic characteristics, such as revenue, earnings, and growth potential.
#### VI-7 Technical Analysis
The process of evaluating securities based on their price and trading patterns to identify trends and potential price movements.
#### VI-8 Risk Management
The process of identifying, assessing, and controlling risks that can affect an investment portfolio.
These strategies can be used in combination or individually to optimize portfolio returns while minimizing risk.
Combining strategies for portfolio management is essential for optimizing portfolio performance. Each strategy has its own strengths and weaknesses, and by combining them, investors can benefit from the strengths of each while minimizing their weaknesses.
For example, asset allocation provides a framework for diversifying a portfolio among different asset classes, but it does not address the timing of trades or the selection of individual securities. Tactical asset allocation, on the other hand, can help to take advantage of short-term market opportunities, but it does not provide a long-term investment plan. By combining these strategies, investors can achieve a balanced approach to portfolio management that considers both short-term and long-term investment objectives.
Similarly, active management can help to generate excess returns while minimizing risk, but it requires in-depth analysis and research of individual securities. Fundamental analysis and technical analysis can provide valuable insights into the potential performance of individual securities, but they may not account for market trends or other external factors that can impact the performance of a portfolio. By combining active management with fundamental and technical analysis, investors can benefit from a more comprehensive approach to security selection and portfolio management.
Finally, risk management is essential for protecting the long-term value of a portfolio. Stop-loss orders and options strategies can help to limit losses due to market fluctuations, but they may not account for broader market trends or other macroeconomic factors that can impact a portfolio. By combining risk management strategies with asset allocation and active management, investors can benefit from a more holistic approach to portfolio management that considers both individual securities and broader market trends.
In conclusion, combining strategies for portfolio management is essential for optimizing portfolio performance. By taking a balanced approach that considers both short-term and
long-term investment objectives, investors can achieve their investment goals while minimizing risk.
As an investor, the ultimate goal is to maximize returns while minimizing risk. The strategies discussed in this paper, including asset allocation, diversification, active management, and risk management, can help you achieve this goal.
By utilizing these strategies, you can build a well-diversified portfolio that takes advantage of market opportunities while limiting exposure to individual security and sector-specific risks. You can also benefit from a more comprehensive approach to security selection and risk management, which can help you generate excess returns while minimizing losses.
Therefore, it is essential to take the time to understand these portfolio management strategies and implement them into your investment approach. Consulting with a financial advisor or portfolio manager can also be beneficial in designing and implementing a portfolio management strategy that is tailored to your individual investment goals and risk tolerance.
Investing can be a challenging and complex process, but by utilizing these portfolio management strategies, you can make informed decisions and achieve your investment goals over the long-term. So, don't hesitate to take action and start utilizing these strategies today to maximize alpha and minimize beta in your investment portfolio.
|
2301.11173 | Double Deep Reinforcement Learning Techniques for Low Dimensional
Sensing Mapless Navigation of Terrestrial Mobile Robots | In this work, we present two Deep Reinforcement Learning (Deep-RL) approaches
to address the problem of mapless navigation for a terrestrial mobile robot.
Our methodology focuses on comparing a Deep-RL technique based on the Deep
Q-Network (DQN) algorithm with a second one based on the Double Deep Q-Network
(DDQN) algorithm. We use 24 laser measurement samples and the relative position
and angle of the agent to the target as information for our agents, which
provide the actions as velocities for our robot. By using a low-dimensional
sensing structure of learning, we show that it is possible to train an agent to
perform navigation-related tasks and obstacle avoidance without using complex
sensing information. The proposed methodology was successfully used in three
distinct simulated environments. Overall, it was shown that Double Deep
structures further improve performance on the navigation of mobile robots when
compared to the ones with simple Q structures. | Linda Dotto de Moraes, Victor Augusto Kich, Alisson Henrique Kolling, Jair Augusto Bottega, Raul Steinmetz, Emerson Cassiano da Silva, Ricardo Bedin Grando, Anselmo Rafael Cuckla, Daniel Fernando Tello Gamarra | 2023-01-26T15:23:59Z | http://arxiv.org/abs/2301.11173v1 | # Double Deep Reinforcement Learning Techniques for Low Dimensional Sensing Mapless Navigation of Terrestrial Mobile Robots
###### Abstract
In this work, we present two Deep Reinforcement Learning (Deep-RL) approaches to address the problem of mapless navigation for a terrestrial mobile robot. Our methodology focuses on comparing a Deep-RL technique based on the Deep Q-Network (DQN) algorithm with a second one based on the Double Deep Q-Network (DDQN) algorithm. We use 24 laser measurement samples and the relative position and angle of the agent to the target as information for our agents, which provide the actions as velocities for our robot. By using a low-dimensional sensing structure of learning, we show that it is possible to train an agent to perform navigation-related tasks and obstacle avoidance without using complex sensing information. The proposed methodology was successfully used in three distinct simulated environments. Overall, it was shown that Double Deep structures further improve performance on the navigation of mobile robots when compared to the ones with simple Q structures.
Keywords: Deep Reinforcement Learning, Double Deep Reinforcement Learning, Mobile Robots, Mapless Navigation
## 1 Introduction
The field of robotics has an intrinsic relation with Reinforcement Learning (RL) since many problems in robotics can use this artificial intelligence technique. Many algorithms have already been developed for a range of tasks to teach an agent to learn an optimal set of actions through trial-and-error interactions. The feedback that the agent receives from the environment guides the learning step-by-step toward an optimal policy [1]. These techniques have achieved state-of-the-art performance in some problems in robot learning, given the evolution of deep neural networks (Deep ANN). By representing the agent as a Deep ANN, it was possible to improve the agent's ability to operate in complex environments and across distinct tasks. However, since this improvement inherits ANN constraints, such as gradient-based learning, other problems have emerged, namely the problem of learning from high-dimensional data. Given this limitation, specific Deep-RL techniques, such as Contrastive Learning [2], have been employed to enable the agent to learn nonetheless. For navigation-related tasks specifically, it has been shown that good performance can be achieved with simple sensing information, as for terrestrial mobile robots [3][4], aerial robots [5], underwater robots [6], and even hybrid robots [7]. This so-called mapless navigation is the base for many problems in mobile robotics, where many Deep-RL algorithms have been employed.
With that in mind, this research proposes to demonstrate and evaluate the effectiveness of two Deep-RL techniques in tasks involving the goal-oriented navigation of a terrestrial mobile robot. The DQN and DDQN algorithms are used and compared. The approaches are based on the simple-sensing idea, with a state of 26 samples, composed of 24 laser sensor readings plus the distance and angle of the mobile robot relative to the target. We also focus on showing how Double Q architectures perform when compared with simple Q architectures. Figure 1 shows our proposed architecture for learning.
Overall, this work has the following contributions: 1) It is shown that Double Q approaches are capable of performing mapless navigation of terrestrial mobile robots; 2) It is shown that a combination of Double Q approaches and low-dimensional sensing deals better with some common problems in Deep-RL, such as gradient-descent convergence and catastrophic forgetting, consistently mitigating them.
This work has seven sections. After a brief introduction, the works of other researchers in the field that served as inspiration for this one are described in Section 2. Section 3 provides a theoretical basis for the algorithms used in the experiments. In Section 4, the tools, software, and environments used in this paper are described. The methodology used to teach the agent how to reach a target is shown in Section 5, along with an explanation of the network structure and reward function. The results achieved in this study are detailed in Section 6. Finally, the last section discusses the main results achieved and applications of Deep-RL.
Figure 1: Turtlebot3 Burger performs among obstacles in one of the scenarios (left) and the model structure with inputs and outputs of our methods, DQN and Double DQN (right).
## 2 Related Works
Deep-RL has been applied previously to mapless navigation tasks for mobile robots [3], [8], [9], [4]. Mnih et al. [10] calculated a value function for future reward using an algorithm called deep Q-network (DQN) for the Atari games [11], [10], [12]. It is important to note that DQN only uses discrete actions when applied to a problem like the control of a robot. In order to extend the DQN to continuous control, Lillicrap et al. [13] proposed the deep deterministic policy gradients (DDPG) algorithm. This algorithm paved the way for the use of Deep-RL in mobile robot navigation.
Dobrevski et al. [14] have adopted an approach for mapless goal-driven navigation optimizing a behavioral policy within the framework of reinforcement learning, specifically the Advantage Actor-Critic (A2C) method. The results are compared with the performance of the standard Turtlebot3 navigation package and it was found that the approach reached a more robust performance.
Tai et al. [3] developed a mapless motion planner for a mobile robot that used the range-finding sensor information and the target position as inputs, and produced continuous steering commands for the robot as output. Initially, they employed discrete steering commands in [15]. It was demonstrated that, with the asynchronous Deep-RL method, it is possible to train an agent to reach a predetermined target using a mapless motion planner.
As in the work of Tai et al. [3] and the other related studies, this paper focuses on developing a mapless motion planner based on low-dimensional range readings. Our work differs by using a deterministic approach based on the DDQN to solve navigation-related problems for terrestrial mobile robots, with a dynamic target placed in the environments, and without asynchronous training. Overall, we show that these tasks for terrestrial mobile robots can be accomplished using low-dimensional sensing data and simple Deep-RL approaches, such as the DDQN. In this way, we show that common problems in Deep-RL, such as gradient-descent convergence and catastrophic forgetting, can be consistently mitigated.
## 3 Theoretical Background
### Deep Q-Network - DQN
At the forefront of the recent Deep-RL breakthroughs, we have the Deep Q-Network (DQN) [10] method, developed by Mnih et al. It applies the concepts of reinforcement learning, such as the _Bellman equation_, given by:
\[Q^{*}(s,a)=E_{s^{\prime}\sim\varepsilon}\left[r+\gamma\max_{a^{\prime}}Q^{*}(s^{\prime},a^{\prime})\mid s,a\right] \tag{1}\]
This equation returns the optimal value given a state-action pair. Employing a neural network with weights \(\theta\) as a function approximator to estimate the action-value function in the _Bellman equation_, we have \(Q(s,a,\theta)\approx Q^{*}(s,a)\), guaranteeing the convergence to the optimal value. This neural network can be trained by minimizing a loss \(\mathcal{L}(\theta)\) given by the following equation:
\[\mathcal{L}(\theta)=E\left[(y-Q(s,a,\theta))^{2}\right]. \tag{2}\]
With \(y=E_{s^{\prime}\sim\varepsilon}[r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime};\theta_{t})\mid s,a]\) being the target function, employing the network weights from episode \(t\) to create a temporal difference between the value found and the expected value.
The inputs to the neural network are state-action pairs sampled randomly from an experience replay buffer, which stores each transition experienced by the agent. The method is _model-free_, and the random sampling reduces the problems of correlated data and non-stationary distributions. The action chosen by the agent follows an \(\epsilon\)-greedy policy, choosing \(a=\underset{a}{argmax}\,Q(s,a;\theta)\) with probability \(1-\epsilon\), or a random action with probability \(\epsilon\).
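The following PyTorch sketch summarizes the update described by Equations (1) and (2) and the \(\epsilon\)-greedy action selection. Tensor shapes, network details, and the replay-buffer interface are assumptions of this sketch, not the exact setup of this work.

```python
# Minimal PyTorch sketch of the DQN loss and epsilon-greedy policy.
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    s, a, r, s_next, done = batch   # tensors sampled from the replay buffer
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # y = r + gamma * max_a' Q(s', a'; theta_t), zeroed at terminal states
        y = r + gamma * target_net(s_next).max(dim=1).values * (1 - done)
    return F.mse_loss(q_sa, y)

def select_action(q_net, state, epsilon, n_actions):
    # random action with probability epsilon, greedy action otherwise
    if torch.rand(1).item() < epsilon:
        return torch.randint(n_actions, (1,)).item()
    with torch.no_grad():
        return q_net(state.unsqueeze(0)).argmax(dim=1).item()
```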
### Double Deep-Q Network - DDQN
In regular Q-Learning and the Deep Q-Network, the max operator employs the same values for selecting and evaluating an action. This increases the likelihood of choosing overestimated values, leading to overly optimistic value estimates. As a potential solution for the overestimation issue, Hasselt [16] developed a new algorithm called Double Q-learning, which uses a double-estimator approach to estimate the value of the next state. Using the Double Q-learning strategy associated with DQN, Van Hasselt et al. [12] introduced a new approach called Double Deep-Q Network (DDQN). In DDQN, the target function is presented as
\[y_{t}^{DoubleDQN}=r_{t+1}+\gamma Q(s_{t+1},\underset{a}{argmax}\,Q(s_{t+1},a;\theta_{t});\theta_{t}^{-}), \tag{3}\]
where \(\theta_{t}^{-}\) are the weights of the target network used for the evaluation of the current greedy policy.
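In code, the only change with respect to the DQN target is which network picks the action, as in the following sketch (same shape conventions as above):

```python
# Minimal PyTorch sketch of the Double DQN target of Eq. (3): the online
# network selects the next action, the target network evaluates it.
import torch

@torch.no_grad()
def ddqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    a_star = q_net(s_next).argmax(dim=1, keepdim=True)        # argmax_a Q(s',a;theta_t)
    q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # Q(s',a*;theta_t^-)
    return r + gamma * q_eval * (1 - done)
```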
## 4 Experimental Setup
In this section we describe the robots, tools, and environments used in the project.
### PyTorch
Python was the programming language used in the development of the algorithms, and it has applications in different areas, such as image processing [17] and robotics [18]. One of Python's libraries is PyTorch, an open-source machine learning library applied here to the creation of deep neural networks [19]. It provides two high-level features: tensor computation with acceleration via a graphics processing unit, and a tape-based automatic differentiation system.
### ROS, Gazebo and Turtlebot
The robot operating system (ROS) is an open-source robotics middleware that acts as an intermediary, managing the connections between software and hardware in robots. ROS [20] is not an operating system but provides some standard operating system services, such as message passing between processes, low-level device control, package management, and hardware abstraction. A graph architecture is used to represent the processes run by ROS, where the processes are called nodes and the messages exchanged between two or more communicating processes are the topics. For message exchange between nodes and topics, the concept of publishing and subscribing is used.
As a support tool for simulated experiments, the open-source 3D simulator Gazebo [21] is an excellent facilitator for developments using ROS. The simulation of different environments is one of the biggest advantages of its integration with ROS, because Gazebo models real-world rules and concepts, facilitating realistic simulation.
The mobile robot used is the TurtleBot 3, from the Robotis company. Because it is a robot with low hardware costs and open-source software, it facilitates the implementation of projects. The version used in the experiments was the TurtleBot3 Burger, with Raspberry Pi 3 development hardware, which further benefits from the vast community that uses it.
### Simulation Environments
Three simulated environments were constructed in Gazebo, all limited by walls. The first one is empty and without obstacles besides the walls, and it can be observed in Figure 2(a). The image also shows the position of the four fixed goals used in the test phase. Aiming to add challenge to the next scenario, in the second environment, four cylinders were placed as fixed obstacles, as shown in Figure 2(b). In the third environment, the obstacles are white blocks placed in the middle of the scenario. And as can be seen in Figure 2(c), in contrast to the first and second environments, in the third environment, the robot starts the simulation positioned near the corner. The agent receives a negative reward each time it collides with a wall or an obstacle, and the episode ends. If the robot reaches the target, the episode also ends.
Figure 2: Simulated scenarios created in the Gazebo simulator.
In the training phase, the target is placed in a new random position on the scenario map each time a new episode begins. By doing this, the scenario becomes more dynamic, and the robot is prevented from memorizing a navigation strategy. During the simulated testing phase, the target was placed in four fixed positions to ensure that the comparison between the results of the two algorithms in the same environment could be more accurate.
## 5 Methodology
The goal of this work is to train, test, and compare the performance of two algorithms, DQN and Double DQN, applied to a mobile robot. The algorithms aim to enable the robot to avoid walls and obstacles, planning its movements to navigate without map knowledge of the environment. The robot is set with a fixed linear velocity and varies between five angular velocities, which are the five actions that the robot can take according to the network output.
### Network Structure
After the system's states and actions were established, a Q-network was created to compose the DQN and Double DQN architectures. The Q-network has 26 inputs, including the laser range sensor measurements, previous angular and linear velocities, and the position and orientation of the target. The output of the network is a discrete value between 0 and 4 and is equivalent to the angular velocity, as shown in Table 1. The network structure of the DQN and DDQN is shown in Fig. 3.
Figure 3: Q-Network architecture used in both DQN and DDQN.
The Q-network takes as input the current state of the mobile robot, followed by three fully-connected neural network layers with 256 nodes each. The network output is transformed into the angular velocity command sent to the motors of the mobile robot. The linear velocity is fixed at 0.15 m/s, and there is no backward motion.
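A minimal PyTorch sketch of this network follows; the ReLU activations are an assumption of the sketch, as the text does not specify them.

```python
# Minimal sketch of the Q-network: 26 state inputs, three fully-connected
# layers of 256 units, and 5 outputs (one Q-value per angular velocity).
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, state_dim=26, hidden=256, n_actions=5):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),  # one Q-value per discrete action
        )

    def forward(self, state):
        return self.layers(state)

# The argmax over the 5 outputs is mapped to an angular velocity command
# from Table 1, e.g. [-1.5, -0.75, 0.0, 0.75, 1.5] rad/s.
```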
### Reward Function
Once the simulated environments are defined, it is possible to simulate the robot in the navigation task. First, the Deep-RL network needs to have the reward and penalty mechanism specified. The rewards and penalties given to the agent are simply the values of a function that models how we want the agent to behave. It can be based on empirical knowledge and refined during the process of solving the problem.
Regarding the reward system, only three rewards were given: one for completing the task correctly, a second in case of failure, and a last one in case of idling. The agent receives a reward of 200 when the goal is reached within a margin of 0.25 meters. In the case of a collision against an obstacle or reaching the scenario limits, a negative reward of \(-20\) is given. A collision is registered when the distance sensor readings fall below a distance of 0.12 meters. Meanwhile, if the agent stays away from the target and does not collide within 500 steps, it is considered idle. In this case, the episode ends and a reward of 0 is given. This simplified rewarding system also helps focus on the Deep-RL approaches, their similarities, and differences, instead of the scenario.
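The scheme can be summarized in a few lines; the thresholds and reward values follow the text, while the field names and the zero per-step reward are assumptions of this sketch.

```python
# Minimal sketch of the reward mechanism described above.

def compute_reward(distance_to_goal, min_laser_reading, step):
    """Return (reward, episode_done) for the current transition."""
    if distance_to_goal < 0.25:       # goal reached within 0.25 m
        return 200.0, True
    if min_laser_reading < 0.12:      # collision with a wall or obstacle
        return -20.0, True
    if step >= 500:                   # idle: episode ends without success
        return 0.0, True
    return 0.0, False                 # interim reward (not specified in text)
```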
## 6 Results
This section highlights the results yielded in this work. For each scenario and model, an extensive set of statistics was collected. The evaluation was done in a simulated environment, and the test tasks were performed for 100 trials with each pre-trained model. Considering there are 4 fixed goals, 25 trials were performed for each fixed goal, and the total of successful trials was recorded, together with the average navigation time and its standard deviation. The learning through time can be seen in Figure 4, while the behavior of the agents during the evaluation can be seen in Figure 5. Also, Table 2 shows the overall results gathered.

\begin{table}
\begin{tabular}{r r}
\hline \hline
Output & Angular velocity \\
\hline
0 & \(-1.5\) rad/s \\
1 & \(-0.75\) rad/s \\
2 & \(0\) rad/s \\
3 & \(0.75\) rad/s \\
4 & \(1.5\) rad/s \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Output actions and equivalent angular velocity
Overall, one of the main contributions of this work can be observed in Figure 4. We can see how the learning converged stably throughout the three scenarios that were used to evaluate our agents. Compared with the results of many related works, our agents manage to achieve the best rewards possible in a fast manner and, most importantly, remain stable and consistent after the learning is achieved. In such high-complexity scenarios, mainly the third one, it would be normal for the agent to exhibit some kind of forgetting or even fail to learn at all. However, it can be observed that the agent managed to maintain a stable reward throughout the evolution.
These contributions can be validated by the statistics of Table 2, where it is possible to see that both agents managed to achieve a good performance. The overall performance, with almost 100% precision for all algorithms and scenarios, was reached by the Double Q algorithm, which is very promising for continuous motion in terrestrial mobile robots. Even though these algorithms are not as sophisticated as the contrastive learning algorithms or the other more recent Deep-RL algorithms for continuous actions, we can see that the simplicity of the methodology presented is capable of reaching a good performance.

Figure 4: Moving average of the reward over 3000, 5000, and 5000 training episodes for the respective scenarios.
Figure 5: The behavior of each approach was tested in each simulated scenario over 100 navigation trials.
## 7 Conclusions
In this paper, we presented two simple Deep-RL approaches that rely on low-dimensional data to achieve good performance on motion tasks for terrestrial mobile robots. It was shown that Double Q algorithms are capable of providing good performance even when compared with sophisticated Deep-RL algorithms, such as actor-critic or contrastive architectures. This validation was achieved by comparing the testing results in the simulated environments.
Another interesting conclusion was the stable pattern of the learning process shown across different scenarios and over time, with few signs of typical Deep-RL problems such as poor gradient convergence or catastrophic forgetting. More studies are underway to confirm this pattern with other types of mobile robots and Deep-RL techniques.
|
2305.03080 | Large-Scale Ejecta of Z CMa -- Proper Motion Study and New Features
Discovered | Z Canis Majoris is a fascinating early-type binary with a Herbig Be primary
and a FU Orionis-type secondary. Both of the stars exhibit sub-arcsecond
jet-like ejecta. In addition, the primary is associated with the extended jet
as well as with the large-scale outflow. In this study, we investigate further
the nature of the large-scale outflow, which has not been studied since its
discovery almost three and a half decades ago. We present proper motion
measurements of individual features of the large-scale outflow and determine
their kinematical ages. Furthermore, with our newly acquired deep images, we
have discovered additional faint arc-shaped features that can be associated
with the central binary. | Tiina Liimets, Michaela Kraus, Lydia Cidale, Sergey Karpov, Anthony Marston | 2023-05-04T18:00:02Z | http://arxiv.org/abs/2305.03080v1 | # Large-Scale Ejecta of Z CMa--Proper Motion Study and New Features Discovered
###### Abstract
Z Canis Majoris is a fascinating early-type binary with a Herbig Be primary and a FU Orionis-type secondary. Both of the stars exhibit sub-arcsecond jet-like ejecta. In addition, the primary is associated with the extended jet as well as with the large-scale outflow. In this study, we investigate further the nature of the large-scale outflow, which has not been studied since its discovery almost three and a half decades ago. We present proper motion measurements of individual features of the large-scale outflow and determine their kinematical ages. Furthermore, with our newly acquired deep images, we have discovered additional faint arc-shaped features that can be associated with the central binary.
circumstellar matter: jets and outflows; stars: individual Z CMa; stars: emission-line; Be
## 1 Introduction
Z Canis Majoris (Z CMa) has intrigued astronomers for decades. It is an active early-type emission line binary consisting of a Herbig Be primary and a FU Orionis-type (FU Ori) secondary separated by \(0^{\prime\prime}.1\) (e.g., Bonnefoy et al. [1]). Current high-spatial resolution observations show that the primary is located in the northwest (NW) direction from the secondary (e.g., Bonnefoy et al. [1]; Dong et al. [2]). The light curve of the system is rich in long-term variability, months to years, as well as in day-by-day variability with a non-periodic nature and varying amplitudes (Sicilia-Aguilar et al. [3] and references therein).
The system is surrounded by multiple circumstellar and large-scale outflows. The brightest feature around Z CMa is a reflection nebula extending up to about \(35^{\prime\prime}\) toward the NW from the central binary. It was discovered on photographic plates from 1953 by Herbig [4]. On these plates, the reflection nebula has a bar-shaped morphology (see Figure 10 in Herbig [4]). However, on the newer, higher resolution and more sensitive CCD images, the morphology more closely resembles a comma (Figure 1 left and Figure 2). We refer to this feature as the comma nebula.
Figure 1: (**Left**): Schematic view of the various extended nebular features surrounding Z CMa. The stars in the field of view are drawn as black filled circles; large-scale outflow features are marked with grayish areas or circles with black contours. The dashed line is drawn at the \(60^{\circ}\) position angle. Individual numbers refer to radial velocities (Poetzel et al. [5]). Field of view is \(9^{\prime}.5\times 5^{\prime}.6\). The base of the figure is adapted from Poetzel et al. [5]. Reproduced with permission © ESO. (**Right**): Schematic view of the sub-arcsecond features around Z CMa presented in true proportions. The lengths of the micro-jets are taken from Whelan et al. [6] and those for the streamer are taken from Dong et al. [2]. The dashed line represents the position angle of the large-scale outflow. Field of view is \(1^{\prime\prime}.6\times 2^{\prime\prime}.7\). A white square representing the same size is drawn on the left panel at the position of the central binary inside the comma nebula. On both panels, north is up and east is to the left.
Figure 2: (**Left**): [S ii] image of Z CMa acquired with GMOS attached to Gemini-South. The white square-shaped feature, indicated with a white arrow, is an artifact from vignetting of the guiding probe. (**Right**): Insets of the resolved features in the [S ii] image. The FOVs of the smallest insets are \(20\arcsec\times 15\arcsec\) each. On all images, north is up, east is to the left, and the intensity is in log scale to improve the contrast. See text for more details.
On the sub-arcsecond scale, jets emerge from both components--micro-jet A from the primary and micro-jet B from the secondary, at position angles (PA, measured from north to east) of 245\({}^{\circ}\) and 235\({}^{\circ}\), respectively (Whelan et al. [6]; Figure 1 right). Both jets are slightly wiggly and show associated knots (Whelan et al. [6]; Antoniucci et al. [7]). The micro-jet A extends out to about 30\({}^{\prime\prime}\) and is referred to as the (extended) jet A (Figure 1 left) in the literature. This jet was discovered by Poetzel et al. [5], and it also has a wiggly nature (Whelan et al. [6]).
Millan-Gabet and Monnier [8] discovered another jet-like small-scale feature at PA 215\({}^{\circ}\). This feature is designated in the literature as a streamer, and its length is about 2\({}^{\prime\prime}\) (Figure 1 right). However, it does not emanate from either of the binary components. In fact, it appears to start 0\({}^{\prime\prime}\).7 toward the south (S) from the central binary (Dong et al. [2]). The same authors find a point source a further \(\sim\)2\({}^{\prime\prime}\) away from the streamer at the same PA and therefore confirm that the streamer was most likely created in a rare flyby event. Furthermore, these authors point out that the flyby event also explains the anomalous double-jet activity in this system, which, considering the masses of the binary components, could happen with a probability of less than 1%.
Looking at greater distances, further out from the extended jet A at the same PA, a large-scale outflow was discovered by Poetzel et al. [5] from narrow-band H\(\alpha\) and [S ii] images acquired at the end of the 1980s. While the micro and extended jets are primarily detected as one-sided objects in the southwest (SW) direction, the large-scale outflow has emission features also toward the northeast (NE) (Figure 1 left). The large-scale outflow consists of blobby and elongated features. The kinematics of the features point to a bipolar nature.
The NE features are all red-shifted, while the SW ones appear blue-shifted. Eight features are identified by Poetzel et al. [5] in the SW side extending up to 4\({}^{\prime}\).7, and seven features are identified in the NE reaching up to 6\({}^{\prime}\) from the central object. This is the largest known outflow\({}^{1}\) for this type of star, extending across 3.5 pc when considering the distance of 1125 pc (Dong et al. [2]). At the time of discovery, the average PA of the outflow features was 60\({}^{\circ}\) (equivalent to 240\({}^{\circ}\)). This PA aligns with that of the extended jet A, which is associated with the primary, and it is therefore widely accepted that the large-scale outflow is a result of the ejections from the Herbig Be component.
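The quoted physical extent follows from a simple angular-to-linear conversion; a minimal Python sketch using the extent and distance quoted above:

```python
# Angular-to-physical conversion: 1'' at a distance of D parsecs subtends
# D astronomical units, and 1 pc = 206265 au.
D_PC = 1125.0                        # adopted distance to Z CMa (pc)
extent_arcsec = 10.7 * 60.0          # ~4'.7 (SW) + ~6' (NE), in arcsec
print(f"{extent_arcsec * D_PC / 206265.0:.1f} pc")   # ~3.5 pc
```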
The large-scale outflow is what we concentrate on in this paper. In particular, we will measure proper motions of the individual features, and in combination with their respective radial velocities, we aim to reveal the true 3D nature of this huge nebulosity. For that, an accurate distance estimation is essential. Several distance estimates for Z CMa exist in the literature. They are all based on the fact that Z CMa is a member of the OB association CMa OB1. Published values are 1150 \(\pm\) 50 pc by Claria [12], 990 \(\pm\) 50 pc by Whelan et al. [6], and more recently 1125 \(\pm\) 30 pc by Dong et al. [2]. Throughout this paper, we use the latest estimate, 1125 pc, because it was calculated using the largest number of members of the association (50) and is therefore the most accurate one. We note here that the estimated Gaia Data Release 3 (Gaia DR3) (Gaia Collaboration et al. [13]; Gaia Collaboration et al. [14]) distance of Z CMa is not reliable due to the very large value of RUWE\({}^{2}\), as described in Dong et al. [2].
## 2 Observations and Data Reduction
Our first imaging data were obtained with the 60-inch telescope at Mt. Palomar on 2002 February 28. A single 20-minute exposure in a narrow-band H\(\alpha\) filter (\(\lambda\) = 6564.8 Å, \(\Delta\lambda\) = 20 Å) was secured with a seeing of 1\({}^{\prime\prime}\).5. The field of view (FOV) was 12\({}^{\prime}\).5, and a chosen binning of 2 \(\times\) 2 provided a pixel scale of 0\({}^{\prime\prime}\).756 pix\({}^{-1}\). This image was reduced using the standard routines in IRAF\({}^{3}\) (Tody [15,16]).
The second set of images of Z CMa was acquired on 2019 September 27 with the 8.1 m telescope. We used the Gemini Multi-Object Spectrographs (GMOS, Hook et al. [17]) mounted at Gemini-South as part of the observing proposal AR-2019B-020. The images were collected in
the narrow-band H\(\alpha\) G0336 (\(\lambda=6567.0\) Å, \(\Delta\lambda=70\) Å) and [S ii] C0335 (\(\lambda=6717.2\) Å, \(\Delta\lambda=43\) Å) filters with total exposure times of 145 and 435 seconds, respectively. The observations in both filters consisted of several shorter exposures that were dithered to eliminate the gaps between the detectors and to minimize contamination (saturation effects) due to the bright central star. A binning of \(2\times 2\) was used, yielding a pixel scale of 0\({}^{\prime\prime}\).16 pix\({}^{-1}\). The FOV of the final reduced images is \(6^{\prime}\times 5^{\prime}.5\). The observations were carried out with a seeing between 1\({}^{\prime\prime}\).3 and 1\({}^{\prime\prime}\).4. Data reduction was performed using the Gemini software DRAGONS (Labrie et al. [18]). Details of the observations are given in Table 1.
We also acquired a set of stacked images from the Pan-STARRS images archive (Waters et al. [19]) that are results of co-adding multiple exposures made between 2010 and 2015 during the 3\(\pi\) survey (Chambers et al. [20]). We downloaded stacked images covering the region around \(Z\) CMa in \(g\), \(r\), \(i\), and \(z\) filters, re-scaled them to the common photometric zero point, and created mosaics in each individual filter with the original spatial resolution of the Pan-STARRS stacked images of 0\({}^{\prime\prime}\).25 pix\({}^{-1}\). We then created a composite RGB image from \(z\), \(i\), and \(g\) mosaics with logarithmic intensity scaling applied. We excluded the \(r\) filter from the composite image as it shows the largest number of stacking artifacts in the background, and it is mostly unusable for studying the morphology of the nebular features. The FOV of the final image was \(9^{\prime}\times 9^{\prime}\).
### Pre-Analysis Processing of the Narrow-Band Images
To accurately analyze possible morphological and/or kinematical changes between our two epochs, our narrow-band H\(\alpha\) images first had to be matched pixel by pixel. For this, we used 32 stars in the FOV whose proper motions were smaller than or equal to \(\pm\)5 mas yr\({}^{-1}\) and whose RUWE \(<\) 1.4; all values were taken from Gaia DR3. The 2019 frame was matched against the 2002 frame, because the latter had a larger pixel scale. The matching was performed in IRAF using the tasks _geomap_ and _geotran_. The errors of the matching were \(\sigma_{\rm RA}=0^{\prime\prime}.18\) and \(\sigma_{\rm DEC}=0^{\prime\prime}.23\). With this procedure, both frames were given the same pixel scale, 0\({}^{\prime\prime}\).756 pix\({}^{-1}\). The last step in matching the coordinates was to compensate for the possible proper motion of the central star. In our case, this effect is insignificant, considering the small proper motion of Z CMa (see Section 3.4) and that our two datasets are separated by 17.58 years. At this stage of the image processing, the frames were ready to be compared by blinking to find any obvious movement of the outflow features in the plane of the sky, or to measure directly the coordinates of individual features to calculate proper motions.
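As an illustration of the reference-star selection described above, the following sketch applies the same cuts; the catalog arrays are hypothetical placeholders standing in for the Gaia DR3 field stars, not our actual input list:

```python
import numpy as np

# Keep only slow-moving, astrometrically well-behaved stars: proper motion
# within +/-5 mas/yr in each coordinate and RUWE < 1.4.
pmra  = np.array([ 1.2, -7.8,  3.4, -0.5])   # mas/yr (hypothetical)
pmdec = np.array([-2.1,  0.3, -4.9,  6.2])   # mas/yr (hypothetical)
ruwe  = np.array([ 1.0,  1.1,  2.3,  0.9])
keep = (np.abs(pmra) <= 5.0) & (np.abs(pmdec) <= 5.0) & (ruwe < 1.4)
print(f"{keep.sum()} of {keep.size} stars kept as references")
```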
For the features for which the blinking of the frames did not reveal any visual expansion and/or which, due to their elongated shape, are not suitable for direct coordinate measuring, further processing was needed in order to use the magnification method (see Section 3.2). These steps included seeing and flux matching. The first was not needed because the seeing of the original frames was already similar, and after pixel by pixel matching, it became equal. Flux matching was completed using the analyzed feature (in our case feature \(D\); see Section 3.2) by summing up all the flux in a rectangle-shaped area equivalent to the size and shape of the feature and then arithmetically matching it with the same area flux on the second epoch image.
\begin{table}
\begin{tabular}{c c c c} \hline
**Date** & **Telescope** & **Filter** & **Total Exp.** \\
**YYYY-MM-DD** & & \(\lambda/\Delta\lambda\) (Å) & **Time (s)** \\ \hline
2002-02-28 & 60-inch Mt. Palomar & H\(\alpha\) 6564.8/20 & 1200 \\
2019-09-27 & Gemini-South & H\(\alpha\) 6567.0/70 & 145 \\
2019-09-27 & Gemini-South & [S ii] 6717.2/43 & 435 \\ \hline \end{tabular}
\end{table}
Table 1: Log of the observations. The first column lists the start date of the observing night. Column 2 lists the telescope used. Column 3 contains the central wavelength (\(\lambda\)) and the width (\(\Delta\lambda\)) of the filter. The last column is the total exposure time in seconds.
Beforehand, the sky was removed. We estimate that the flux matching is accurate down to a few percent.
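The flux-matching step can be summarized by the following sketch; the frames and the box coordinates are placeholders rather than our actual data:

```python
import numpy as np

# Scale the (sky-subtracted) 2019 frame so that the summed flux inside a
# rectangle enclosing the analyzed feature matches the 2002 frame.
rng = np.random.default_rng(0)
frame_2002 = rng.random((200, 200))              # placeholder images
frame_2019 = 0.8 * rng.random((200, 200))
y0, y1, x0, x1 = 80, 120, 60, 140                # box around the feature (pix)
scale = frame_2002[y0:y1, x0:x1].sum() / frame_2019[y0:y1, x0:x1].sum()
frame_2019_matched = scale * frame_2019
print(f"flux scale factor: {scale:.3f}")
```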
## 3 Results
In Figure 3, we present our 2002 H\(\alpha\) image, which covers the whole large-scale outflow of Z CMa, extending \(10^{\prime}.7\) from NE to SW. Our GMOS images from 2019 have a smaller FOV, as demonstrated with the black rectangle in Figure 3. The GMOS image taken in the lines of [S ii] \(\lambda\lambda\)6716, 6731 is considerably deeper and presents the individual features with a better S/N (Figure 2). For a meaningful analysis (see Sections 3.1 and 3.2), it is important to use data in the same filter/spectral lines, especially when the aim is to find any morphological and/or kinematical changes between two epochs. The reason is that the excitation of emission lines from diverse elements can occur under different physical conditions, so that the lines do not necessarily trace the same gaseous regions. Therefore, we restrict our proper motion analysis to the H\(\alpha\) frames, because we do not have a [S ii] frame from 2002. In addition, our two H\(\alpha\) frames have a similar S/N, hence presenting similar detectability of the features, further making them a suitable match for the analysis. However, we note here that all the features that are resolvable in the GMOS H\(\alpha\) frame have the same morphology and position in the GMOS [S ii] frame.
We refer to the individual features as they have been named by previous authors. The features of the large-scale outflow were named by Poetzel et al. [5] using capital letters from \(A\) to \(O\) (Figure 1). In addition, the designation of \(f1\) and \(f2\) was given to refer to the filaments in
Figure 3: (**Left**): H\(\alpha\) image of Z CMa taken in 2002. GMOS FOV is shown for comparison. (**Right**): Insets of the features resolvable in the H\(\alpha\) image and which are outside GMOS FOV. On all images, north is up, east is to the left, and the intensity is in log scale to improve the contrast. See text for more details.
the SW side nearby the blobby features \(F\), \(G\), and \(H\). On our figures, the labels of the features are always directly above the feature itself, apart from the label \(f1\), which is to the left of the feature (Figure 3). We note here that not all the features presented by Poetzel et al. [5] are detectable and/or resolvable in our 2002 H\(\alpha\) frame due to its slightly lower S/N. From our GMOS [S ii] frame, we could identify the features \(A\), \(B\), \(C\), \(D\), \(E\), \(M\), \(N\), \(K\), and \(L\). Features \(O\), \(G\), \(H\), and \(f1\) are outside the GMOS FOV. We could not identify features \(I\) and \(J\), situated between the central star and feature \(K\), on any of our images, which is probably due to the slightly lower S/N of our images compared to the images from Poetzel et al. [5].
The morphology of the large-scale outflow has not changed during the past 30 years when comparing our 2019 image with the 2002 one and the one from 1988-1989 from Poetzel et al. [5] (compare their Figure 1 with our Figures 2 and 3). The large-scale outflow has a bipolar nature, and it consists of individual features (features \(A\) to \(O\)) with varying shapes--blobby, elongated, filamentary, arced. The PA of the outflow is \(\sim\)60\({}^{\circ}\) (or \(\sim\)240\({}^{\circ}\)), as measured from our images. The approximate value is due to the slightly different PAs of individual features. Nevertheless, this shows that the PA of the outflow has not changed during the past 30 years either (\(\sim\)60\({}^{\circ}\) was measured also by Poetzel et al. [5]).
In our [S ii] frame (Figure 2), we refer to a few other features related to the ejections from Z CMa: in particular, the extended jet A (see also Figure 3 in Whelan et al. [6]), the PA of the micro-jet B (see also Figure 1 in Antoniucci et al. [7]), and the PA of the streamer (see also Figure 2 in Canovas et al. [21]). Figure 2 also shows the previously known bright comma nebula, which is almost perpendicular to the large-scale outflow. Furthermore, our image reveals another, fainter and previously undetected extended arc-shaped feature in the NW direction, which will be discussed further in Sections 3.4 and 4.
As a first step in finding any expansion in the plane of the sky, we have used the simple blinking of the two H\(\alpha\) matched frames. It reveals that feature \(C\) has a visible expansion, while the rest of the features appear to stand still. Overall, a reliable analysis is only possible for the brightest features, which are those labeled with \(C\) and \(D\). Therefore, we focus in the following on these two and determine their proper motions before we take a closer look at the faint arc structures.
### Proper Motion Calculations of Feature \(C\)
Feature \(C\) is one of the brightest among the large-scale outflow features. It has a roundish shape and is clearly detectable in both of our H\(\alpha\) frames taken in 2002 and 2019. The exact temporal separation of the two images is 17.58 years. The shape of feature \(C\) does not change during that period. As mentioned above, feature \(C\) presents the fastest motion away from the central star compared to the other features. In addition, feature \(C\) also has the largest radial velocity compared to the other features, as measured by Poetzel et al. [5]. Therefore, considering the inclination angle out of the plane of the sky (see below), it is not surprising that this feature shows a clear expansion in the plane of the sky while others do not. The movement of feature \(C\) in the plane of the sky is in accordance with the general direction of the features in the SW direction, confirming that it must have been ejected from Z CMa.
Due to the roundish shape of feature \(C\), it was possible to measure directly its central coordinates on both of our images. The total movement in the plane of the sky during the 17.58 years considered is 1\({}^{\prime\prime}\).4, yielding a proper motion of 0\({}^{\prime\prime}\).08 yr\({}^{-1}\) and a tangential velocity of \(\sim\)420 km s\({}^{-1}\). Considering the radial velocity of \(-\)390 km s\({}^{-1}\) of feature \(C\) (Poetzel et al. [5]), its expansion velocity is about 580 km s\({}^{-1}\) and the inclination out of the plane of the sky is 43\({}^{\circ}\), using the ordinary cosine relation between the velocity vectors (\(i=\arccos(v_{sky}/v_{exp})\)). The found inclination angle agrees with the estimates made for the micro-jet B, which was proposed to have an inclination angle between 28\({}^{\circ}\) and 64\({}^{\circ}\) (Antoniucci et al. [7]) according to the tangential and radial velocity estimates by Whelan et al. [6].
The distance of feature \(C\) from the central star is 68\({}^{\prime\prime}\) and 69\({}^{\prime\prime}\) at the observations taken in 2002 and 2019, respectively. The position angle of feature \(C\) has not changed between our 2002 and 2019 images, and it is 246\({}^{\circ}\). Precise measurements with errors for all the calculated values are given in Table 2.
Using the above calculated proper motion, the distance from the central star, and assuming constant expansion velocity since the ejection, it is possible to calculate the age of feature \(C\) at our first epoch, 2002. It is on the order of 850 years, which is in accordance with the estimates in Poetzel et al. [5].
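The numbers for feature \(C\) can be reproduced with a few lines of Python, using the conversion \(v_{sky}=4.74\,\mu\,D\) introduced as Equation (2) in Section 3.2; all input values are those quoted above and in Table 2:

```python
import numpy as np

D_PC = 1125.0                    # adopted distance to Z CMa (pc)
dt = 17.58                       # yr between the 2002 and 2019 epochs
mu = 1.4 / dt                    # proper motion (''/yr) from the 1''.4 shift
v_sky = 4.74 * mu * D_PC         # tangential velocity (km/s)
v_rad = -390.0                   # radial velocity (km/s), Poetzel et al. [5]
v_exp = np.hypot(v_sky, v_rad)   # expansion velocity (km/s)
incl = np.degrees(np.arccos(v_sky / v_exp))  # inclination out of sky plane
age = 67.8 / mu                  # kinematical age (yr), constant speed assumed
print(f"mu={mu:.2f}''/yr  v_sky={v_sky:.0f}  v_exp={v_exp:.0f} km/s  "
      f"i={incl:.0f} deg  age={age:.0f} yr")
```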
### Proper Motion Calculations of Feature \(D\)
The second feature for which we were able to calculate the expansion in the plane of the sky is the arc-shaped feature \(D\). Due to its elongated shape, directly measuring its coordinates was not an appropriate method. For that reason, the magnification method (see, e.g., Santander-Garcia et al. [22]; Liimets et al. [23]) was used, which is suitable for finding the proper motion of extended structures without a clear central point and/or when the total movement in the plane of the sky is as small as a tenth of a pixel. Both criteria apply to feature \(D\). The magnification method is based on finding the magnification factor \(M\) that minimizes the residuals when the magnified first-epoch image is subtracted from the second-epoch image. The method provides the proper motion, tangential velocity, and age. In order to use the magnification method, the frames being analyzed have to have their coordinates, seeing, and flux matched. This was completed using the procedures described in Section 2.1. Further details about the magnification method and the derivation of the formulas used in the following can be found in Section 3.3 of the PhD thesis by Liimets [24]. The best magnification factor for feature \(D\) was determined to be \(M=1.003\pm 0.001\). However, we note here that this result should be used with caution. We can say with confidence that \(M\) is not larger than 1.003. Consequently, all the following numerical values should be taken as upper limits. The proper motion can be calculated, in convenient units, in the following way
\[\mu[^{\prime\prime}yr^{-1}]=\frac{(M-1)\cdot d[^{\prime\prime}]}{\Delta t[yr]}, \tag{1}\]
where \(d\) is the distance of feature \(D\) from the central star on the first epoch, in our case the year 2002, and \(\Delta t\) is the time interval between the two epochs. In the case of the elongated feature \(D\), the distance from the central star is somewhat challenging to estimate. However, we are
\begin{table}
\begin{tabular}{c c c} \hline \hline
**Description** & **Feature \(C\)** & **Feature \(D\)** \\ \hline Total movement in the plane of the sky (\({}^{\prime\prime}\)) & \(1.4\pm 0.3\) & — \\ Proper motion \(\mu\) (\({}^{\prime\prime}\) yr\({}^{-1}\)) & \(0.08\pm 0.02\) & \(0.013\pm 0.004\) \\ Radial velocity \({}^{a}\)\(v_{rad}\) (km s\({}^{-1}\)) & \(-390\pm 24\) & \(-110\pm 24\) \\ Tangential velocity \(v_{sky}\) (km s\({}^{-1}\)) & \(423\pm 88\) & \(69\pm 23\) \\ Expansion velocity \(v_{exp}\) (km s\({}^{-1}\)) & \(576\pm 67\) & \(130\pm 24\) \\ Inclination out of the plane of the sky \(i\) (\({}^{\circ}\)) & \(43\pm 15\) & \(58\pm 14\) \\ Distance from the central star in 2002 \(d_{2002}\) (\({}^{\prime\prime}\)) & \(67.8\pm 0.3\) & \(75.3\pm 0.8\) \\ Distance from the central star in 2019 \(d_{2019}\) (\({}^{\prime\prime}\)) & \(69.2\pm 0.3\) & \(75.5\,^{b}\pm 0.8\) \\ PA at 2002 (\({}^{\circ}\)) & \(246.5\pm 0.2\) & \(222.9\pm 0.6\) \\ PA at 2019 (\({}^{\circ}\)) & \(246.6\pm 0.2\) & \(222.9\pm 0.6\) \\ Age at 2002 (years) & \(854\pm 177\) & \(5859\pm 1953\) \\ Magnification factor \(M\) & — & 1.003 \(\pm\) 0.001 \\ \hline \hline \end{tabular} \({}^{a}\) From Poetzel et al. [5]. \({}^{b}\) Calculated from \(d_{2002}\) and our derived proper motion.
\end{table}
Table 2: Results of the calculations from direct measuring of feature \(C\) and from using the magnification method for feature \(D\). A distance of 1125 pc toward \(Z\) CMa was adopted. See text for more details.
confident that when considering a somewhat larger error of 1 pixel, it is accurate enough to serve the purpose of the simple calculations presented in this paper. Hence, the distance of feature \(D\) from the central star at our 2002 epoch is \(75^{\prime\prime}\pm 1^{\prime\prime}\), and considering Equation (1), the proper motion becomes \(\mu=0^{\prime\prime}.013\) yr\({}^{-1}\). As for feature \(C\), the precise values with their errors for all the calculations for feature \(D\) are in Table 2.
Using the proper motion and the distance to feature \(D\), it is possible to calculate the tangential velocity in the following convenient units
\[v_{sky}\;[km\;s^{-1}]=4.74\cdot\mu\;[^{\prime\prime}\;yr^{-1}]\cdot D\;[pc]. \tag{2}\]
We consider the distance to Z CMa to be 1125 pc and therefore \(v_{sky}=69\) km s\({}^{-1}\). The radial velocity of feature \(D\) has been measured to be \(-110\) km s\({}^{-1}\) (Poetzel et al. [5]), which results in an expansion velocity of 130 km s\({}^{-1}\). The inclination angle is therefore slightly larger than for feature \(C\); its value of 58\({}^{\circ}\) again matches the estimates in Antoniucci et al. [7].
The magnification factor can also be used to calculate the age of the feature at the first epoch,
\[T\;[yr]=\frac{\Delta t\;[yr]}{(M-1)}, \tag{3}\]
which for feature \(D\) gives a value of about 6000 yrs.
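The same quantities follow directly from Equations (1)-(3); a short sketch with the values quoted above:

```python
D_PC = 1125.0
M = 1.003                # best magnification factor (see caution above)
dt = 17.58               # yr between epochs
d_2002 = 75.3            # distance from the central star in 2002 ('')
mu = (M - 1.0) * d_2002 / dt     # Equation (1): proper motion (''/yr)
v_sky = 4.74 * mu * D_PC         # Equation (2): tangential velocity (km/s)
age = dt / (M - 1.0)             # Equation (3): kinematical age (yr)
print(f"mu={mu:.3f}''/yr  v_sky={v_sky:.0f} km/s  age={age:.0f} yr")
```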
The PA of feature \(D\) is constant between our two observing epochs and has a value of 223\({}^{\circ}\).
### Proper Motion of Other Features
We tried to measure the expansion in the plane of the sky for two other features, \(E\) and \(K\), which are considerably fainter than features \(C\) and \(D\) but still resolvable compared to the rather marginal detections of features \(L\), \(M\), and \(N\). Features \(E\) and \(K\) have an irregular shape, and therefore, the magnification method was used. We find no measurable movement in the plane of the sky for either feature. For feature \(E\), this is somewhat expected due to its RV being 0 km s\({}^{-1}\) (Poetzel et al. [5]), while the RV of feature \(K\) is quoted as +55 km s\({}^{-1}\) by the same authors. Considering the pixel scale of 0\({}^{\prime\prime}\).756 of our matched images and the fact that the magnification method is able to measure an expansion of about one tenth of a pixel, the smallest tangential velocity that we should be able to detect is \(\sim\)20 km s\({}^{-1}\). This, in turn, would mean an inclination out of the plane of the sky of about 70\({}^{\circ}\), which, within our error estimate of \(\pm\)10\({}^{\circ}\), agrees with the estimates by Antoniucci et al. [7].
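The detection limit and the resulting inclination constraint can be verified as follows; the sketch reproduces the quoted \(\sim\)20 km s\({}^{-1}\) and \(\sim\)70\({}^{\circ}\) to within rounding:

```python
import numpy as np

D_PC, dt, pix = 1125.0, 17.58, 0.756     # pc, yr, ''/pixel
v_min = 4.74 * (0.1 * pix / dt) * D_PC   # ~0.1 pixel over the baseline
i_min = np.degrees(np.arccos(v_min / np.hypot(v_min, 55.0)))  # feature K
print(f"v_min ~ {v_min:.0f} km/s  ->  i >~ {i_min:.0f} deg")
```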
### Faint Extended Arc
From our deep GMOS [S ii] image, we have discovered that the bright comma nebula has a fainter continuation. We designate this feature the faint extended arc (see Figure 2). Despite the lower S/N of our GMOS H\(\alpha\) image, the faint arc is also detectable on that frame, but we refrain from showing it because it does not provide new information. The faint arc is oriented toward the NW, perpendicular to the main large-scale outflow. The arc is more pronounced on the Pan-STARRS RGB image (Figure 4), on which additional related features become visible. We detect a repeating pattern of filaments inside the main arc, which seem to mimic "feathers" (see white arrows in Figure 4). Interestingly, while most of the feathers do not have a direct connection to the star, the feather closest to the central binary, designated with number 1, seems to have connecting filaments. These filaments start from the bright comma nebula as a small arc resembling a "fishtail" (marked with a solid red arrow) and then continue toward feather 1 with less homogeneous emission. From the image, it is visible that the diffuse
emission in the direction of feather 2 continues beyond the FOV of our Pan-STARRS image. However, the faint extended arc, which ends with feather 4, extends up to 3\({}^{\prime}\) toward the west.
## 4 Discussion
Looking at Figure 4, it is tempting to assume that the newly discovered faint extended features, as well as the bright comma-shaped feature, were created by matter expelled from the central star during a single or multiple independent mass ejection event(s) and now appear displaced from the central position of the object due to the movement of Z CMa through the interstellar medium. However, the proper motion of the star, \(\mu_{RA}=-4.4\) mas yr\({}^{-1}\) and \(\mu_{DEC}=2.0\) mas yr\({}^{-1}\) (Hipparcos-2 catalog, van Leeuwen (2015)), does not support this scenario: Z CMa is moving toward the faint extended arc and not away from it. Our choice of using the proper motion values from the Hipparcos-2 catalog is due to the mentioned large RUWE value of Z CMa in Gaia DR3 (see Section 1), referring to a problematic astrometric solution. The latter is most probably a result of the binary nature of the object and its generally rather bright photometric values. However, we note that the Gaia DR3 proper motion values agree relatively well with the Hipparcos-2 catalog.
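A quick check of the direction of the stellar motion, expressed as a position angle measured from north through east (a sketch using the Hipparcos-2 values quoted above):

```python
import numpy as np

mu_ra, mu_dec = -4.4, 2.0    # mas/yr
pa = np.degrees(np.arctan2(mu_ra, mu_dec)) % 360.0
print(f"PA of stellar motion ~ {pa:.0f} deg")   # ~294 deg, i.e., toward the NW
```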
Figure 4: Pan-STARRS RGB image presenting the faint extended features around Z CMa. Red channel corresponds to \(z\) filter, green channel corresponds to \(i\), and blue channel corresponds to \(g\) filter. The intensity is in log scale to improve the contrast. North is up and east is to the left. FOV 9\({}^{\prime}\times 9^{\prime}\). See text for more details.
Another possibility, due to the roundish shape, is that the faint extended arc with the feathers is related to the orbital motion of the binary. Considering the recent high-contrast imaging polarimetry observations (Canovas et al. [21]), the orbital movement of the FU Ori companion around the primary is 0\({}^{\circ}\).7 yr\({}^{-1}\) when considering a circular orbit. This implies an orbital period of about 500 years. Currently, the FU Ori companion is located in the southeast (SE) direction from the primary (e.g., Figure 2 in Bonnefoy et al. [1]). Therefore, considering the hypothesis of a single ejection and the position of the faint extended arc and its feathers, they could have been ejected half a period ago. However, taking into account the distance of these features from the central binary, it would imply a peculiarly large tangential velocity of \(\sim\)3700 km s\({}^{-1}\), which has not been measured in any other features related to the small- or large-scale outflows of Z CMa. The obvious repeating pattern of the feathers could also suggest consecutive mass ejections occurring on every orbit at a specific location. We can then assume that the closest feather could have been ejected half a period ago and the next ones could have been ejected 1.5, 2.5, and 3.5 periods ago, respectively. The equivalent tangential velocities would then be about 1800, 900, 600, and 530 km s\({}^{-1}\), considering distance estimates for each feather from the central binary of 85, 130, 148, and 180\({}^{\prime\prime}\), respectively. While radial velocities of \(-\)600 km s\({}^{-1}\) have been measured to be associated with the micro-jets (Poetzel et al. [5]; Whelan et al. [6]), all these estimated values exceed our measured tangential velocities of the large-scale outflow features (see Table 2). If we consider our largest tangential velocity of 420 km s\({}^{-1}\) and the largest extent of the faint arc structures (3\({}^{\prime}\)), the material would have had to be ejected \(\sim\)2300 years ago. This timescale is comparable with the ages we have found for the large-scale outflow features \(C\) and \(D\), 850 and 5900 years, respectively. However, if these newly discovered features are related to mass ejections occurring at a particular location during the orbit of the FU Ori-type companion around the primary, it is more likely that the orbit is not circular but elliptical. The latter would include periastron passages, which can enhance the mass loss and possibly initiate outflows, as has been seen in other binary systems (e.g., in the case of the symbiotic binary R Aquarii (Liimets et al. [26]) and as proposed for the formation of the circumbinary molecular ring in the B[e] supergiant system GG Car (Kraus et al. [27])). At the same time, while feathers 2 and 3 do not seem to be physically connected with the central binary, the faint extended arc and its extension, feather 4, are clearly a continuation from the star and its bright comma nebula, implying a constant flow of matter. At this point, we also mention that, when inspecting Figure 4 more thoroughly, it is possible that there is a dark cloud blocking the connections of the feathers with the central binary or with the faint arc, as there are no stars detected to the west of the binary below the faint arc. However, according to the study based on the 2 Micron All Sky Survey by Dobashi [28], there are no dark clouds in the FOV of our Figure 4.
We can further calculate the possible tangential velocities related to the newly discovered features when considering their maximum extent of 3\({}^{\prime}\) and the ages found for features \(C\) and \(D\). The age of 850 years would result in a velocity of about 1100 km s\({}^{-1}\), while the 5900 years (feature \(D\)) would result in a velocity of about 160 km s\({}^{-1}\). The latter is more in line with the tangential velocities measured in this work and with the radial velocities measured by Poetzel et al. [5]. It is possible that the elongated feature \(D\), which indeed has a larger deviation from the average PA of the large-scale outflow compared to the other features, potentially pointing to a different origin, is related to feather 1. However, the seemingly similarly shaped feature marked with the dashed cyan arrow in Figure 4 is not feature \(D\). Feature \(D\) is 15\({}^{\prime\prime}\) further away from the central star and has a slightly smaller PA. A red dashed arrow indicates its position in Figure 4; it shows that this feature is situated on the edge of feather 1, but there is no brightness enhancement at that position other than the border of the feathery feature.
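The velocity estimates in the two preceding paragraphs can be reproduced to within a few percent with the conversion \(v_{sky}=4.74\,\mu\,D\); the 175\({}^{\prime\prime}\) adopted for the single-ejection case is our assumption for the distance of the features, chosen for illustration:

```python
D_PC, P = 1125.0, 500.0   # distance (pc), assumed circular-orbit period (yr)

# (a) single ejection half a period ago
print(f"{4.74 * (175.0 / (0.5 * P)) * D_PC:.0f} km/s")        # ~3700

# (b) consecutive ejections: feathers at 0.5, 1.5, 2.5, 3.5 periods of age
for d, n in zip((85.0, 130.0, 148.0, 180.0), (0.5, 1.5, 2.5, 3.5)):
    print(f"feather at {d:.0f}'': {4.74 * d / (n * P) * D_PC:.0f} km/s")

# (c) maximum ~3' (180'') extent with the ages of features C and D
for age in (850.0, 5900.0):
    print(f"age {age:.0f} yr: {4.74 * 180.0 / age * D_PC:.0f} km/s")
```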
On the other hand, it cannot be ruled out that the faint features are accidentally aligned with Z CMa and that they are actually part of the huge nebula Sh 2-296, on whose edge Z CMa is
located (see Figure 6 in Fernandes et al. (2020)). However, due to the obvious positional proximity to Z CMa, we are inclined to favor the idea that the discovered features are connected to Z CMa rather than being aligned by chance.
The newly detected faint features, as well as the different ages we measured for different features (850 versus 5900 years for features \(C\) and \(D\), respectively), are in accordance with the nature of the Z CMa binary, which has had several eruptions in the past. The knotty nature of the large-scale outflow as well as of the (micro) jets (e.g., Whelan et al. [6]; Antoniucci et al. [7]) additionally points to several discrete mass ejections. In addition, the RVs of the individual features vary considerably, with values reaching up to \(\sim\)\(\pm\)400 km s\({}^{-1}\) (Poetzel et al. [5]), further supporting the scenario that the central object has experienced several past mass ejections with different initial velocities.
Another possible explanation could be that the measured tangential velocities of features \(C\) and \(D\) have not been constant since their ejection from the central binary, which, in turn, would affect the calculated ages. However, observationally, we cannot assess this at present. We have not found suitable observational data prior to 2002 to check the potential velocity ranges before our first dataset. In addition, considering the measured velocities, we will have to wait for at least another \(\sim\)20 years to obtain a new set of observations, which could potentially show a change in the tangential velocities. However, independently of the two scenarios, we wish to emphasize that our measured ages are more precise than previous estimates (Poetzel et al. [5]) because we have accurately measured the tangential velocities, while formerly those were approximated from the radial velocities and the possible inclination angle of the large-scale outflow.
As measured from our 2002 and 2019 images, the PA of feature \(D\) is 223\({}^{\circ}\) and does not change during our observing period of 17.58 years. To expand the epoch of analysis, we estimate from the schematic Figure 1 in Poetzel et al. [5] (they do not publish any numerical values of the PAs nor the distances from the central star) that the PA of feature \(D\) at their observing time, between 1989 and 1990, was \(\sim\)225\({}^{\circ}\). Therefore, we conclude that the PA of feature \(D\) has not changed during the past 30 years. Following this result, the claim made by Whelan et al. [6], based on their observations, that feature \(D\) is related to the micro-jet B (emanating from the FU Ori-type companion with a PA of \(\sim\)235\({}^{\circ}\), indicated with the solid blue line in Figure 2) is not supported by our precise PA measurements.
We investigate further whether feature \(D\) could be related to the additional sub-arcsecond component emerging from Z CMa, the jet-like structure identified as a streamer (see Figure 3 in Millan-Gabet and Monnier [8], Figure 2 in Canovas et al. [21], Figure 1 in Liu et al. (2020), and Figure 1 in Dong et al. [2]). Unfortunately, these authors do not provide any PA measurements for the straight part of the streamer (the outer edge of this feature is slightly curved toward the west), but from their figures, we can estimate it to be approximately 215\({}^{\circ}\). We also note that the streamer does not start from the central binary but about 0\({}^{\prime\prime}\).7 straight toward the south from the binary (see Figure 2 in Canovas et al. [21]). We indicate the PA of this feature with the green dotted line in our Figure 2. Even though the angle of the streamer is more similar to the PA of feature \(D\) than the angle of the micro-jet B is, it is still clearly evident that their PAs do not align. Dong et al. [2] explain the streamer as a result of the flyby event because they discovered a faint component whose PA matches that of the streamer. The fact that feature \(D\) is not aligned with the streamer provides additional support for the flyby event because it shows that the streamer is not related to any small- or large-scale ejecta.
## 5 Conclusions
We have presented the first proper motion study of features \(C\) and \(D\) within the large-scale outflow of Z CMa. The two very different proper motion values obtained for these two features confirm the previous suggestion that the large-scale outflow is a result of several active
ejection phases with varying initial velocities in the life of Z CMa. Our precise position angle measurements of the same features reveal that they are not aligned with the streamer, providing further support for the occurrence of the flyby event in this complex system.
We have discovered new features most probably related to Z CMa--a faint extended arc with several features mimicking feathers. It is very likely that these features are connected to the central binary and are the result of previous mass ejection(s) possibly related to the orbital motion of the binary system.
**Author Contributions:** Conceptualization and Project administration, T.L., M.K., L.C.; Resources and Data curation: T.L., L.C., S.K. and A.M.; Formal analysis, Investigation, Software, Methodology and Validation, T.L.; Supervision, M.K.; Visualization, T.L., S.K.; Writing--original draft preparation, T.L., S.K.; Writing--review and editing, T.L., M.K., S.K., L.C., A.M.; Funding acquisition, M.K., L.C., S.K. All authors have read and agreed to the published version of the manuscript.
**Funding:** This research was funded by the Czech Science Foundation (GA CR, grant number 20-00150S), by CONICET (PIP 1337) and the Universidad Nacional de La Plata (Programa de Incentivos 11/G160). The Astronomical Institute of the Czech Academy of Sciences is supported by the project RVO:67985815. This project has also received funding from the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sklodowska-Curie Grant Agreement No. 823734. S.K. acknowledges support from the European Structural and Investment Fund and the Czech Ministry of Education, Youth and Sports (Project CoGraDS-CZ.02.1.01/0.0/0.0/15_003/0000437).
**Data Availability Statement:** Publicly available GMOS-S and Pan-STARRS datasets analyzed in this study can be retrieved from dedicated archives [https://archive.gemini.edu/searchform](https://archive.gemini.edu/searchform) (accessed on 27 February 2023), and [https://pslimages.stsci.edu/cgi-bin/pslcutouts](https://pslimages.stsci.edu/cgi-bin/pslcutouts) (accessed on 27 February 2023), respectively. The Mt. Palomar image is available from the corresponding author on reasonable request.
**Acknowledgments:** This research made use of the NASA Astrophysics Data System (ADS) and of the SIMBAD database, which is operated at CDS, Strasbourg, France. This publication is based on observations obtained at the international Gemini Observatory, which is a program of NSF's NOIRLab (processed using DRAGONS (Data Reduction for Astronomy from Gemini Observatory North and South)), which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigacion y Desarrollo (Chile), Ministerio de Ciencia, Tecnologia e Innovacion (Argentina), Ministerio da Ciencia, Tecnologia, Inovacoes e Comunicacoes (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea) under program ID GS-2019B-Q-210. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. This work has made use of data from the European Space Agency (ESA) mission _Gaia_ ([https://www.cosmos.esa.int/gaia](https://www.cosmos.esa.int/gaia)), processed by the _Gaia_ Data Processing and Analysis Consortium (DPAC, [https://www.cosmos.esa.int/web/gaia/dpac/consortium](https://www.cosmos.esa.int/web/gaia/dpac/consortium)) (accessed on 27 February 2023). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the _Gaia_ Multilateral Agreement.
**Conflicts of Interest:** The authors declare no conflict of interest.
**Abbreviations**
The following abbreviations are used in this manuscript:
\begin{tabular}{l l} Z CMa & Z Canis Majoris \\ FU Ori & FU Orionis \\ PA & position angle \\ FOV & field of view \\ RUWE & renormalized unit weight error \\ S & south \\ NW & northwest \\ NE & northeast \\ SW & southwest \\ SE & southeast \\ \end{tabular}
Notes
* Although evolved massive stars can have nebulae exceeding this size by far, such as the huge bipolar nebula of the B[e] supergiant MWC 314 extending across 13 pc (Marston and McCollum (2009); Liimets et al. (2010)) as well as the 10 pc size elaborate filamentary structures around the Luminous Blue Variable P Cygni (see Boumis et al. (2011) and references therein).
* Renormalised Unit Weight Error (RUWE). RUWE is expected to be around 1.0 for sources where the single-star model provides a good fit to the astrometric observations. A value significantly greater than 1.0 (e.g., \(>\)1.4) could indicate that the source is non-single or otherwise problematic for the astrometric solution.
* IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation.
|
2310.07050 | On the well-posedness of the Cahn-Hilliard-Biot model and its
applications to tumor growth | We study the Cahn-Hilliard-Biot model with respect to its mathematical
well-posedness. The system models flow through deformable porous media in which
the solid material has two phases with distinct material properties. The two
phases of the porous material evolve according to a generalized Ginzburg-Landau
energy functional, with additional influence from both viscoelastic and fluid
effects. The flow-deformation coupling in the system is governed by Biot's
theory. This results in a three-way coupled system that can be viewed as an
extension of the Cahn-Larche equations by adding a fluid flowing through the
medium. We distinguish the cases between a spatially dependent and a
state-dependent Biot-Willis function. In the latter case, we consider a
regularized system. In both cases, we use a Galerkin approximation to
discretize the system and derive suitable energy estimates. Moreover, we apply
compactness methods to pass to the limit in the discretized system. In the case
of Vegard's law and homogeneous elasticity, we show that the weak solution
depends continuously on the data and is unique. Lastly, we present some
numerical simulations to highlight the features of the system as a tumor growth
model. | Marvin Fritz | 2023-10-10T22:24:06Z | http://arxiv.org/abs/2310.07050v2 | # On the well-posedness of the Cahn-Hilliard-Biot model and its applications to tumor growth
###### Abstract.
We study the Cahn-Hilliard-Biot model regarding its mathematical well-posedness. The system models flow through deformable porous media in which the solid material has two phases with distinct material properties. The two phases of the porous material evolve according to a generalized Ginzburg-Landau energy functional, with additional influence from both elastic and fluid effects. The flow-deformation coupling in the system is governed by Biot's theory. This results in a three-way coupled system that can be viewed as an extension of the Cahn-Larche equations by adding a fluid flowing through the medium. We use a Galerkin approximation to discretize the system and derive suitable energy estimates. Moreover, we apply compactness methods to pass to the limit in the discretized system. In the case of Vegard's law and homogeneous elasticity, we prove that the weak solution depends continuously on the data and that it is unique. Thus, the system is well-posed. Lastly, we present some numerical simulations to highlight the features of the system as a tumor growth model.
Key words and phrases:Cahn-Hilliard-Biot system; Well-posedness; Existence of weak solutions; Tumor growth model; Nonlinear PDEs 2020 Mathematics Subject Classification: Primary: 35A01, 35A02, 35D30, 35Q92, 65M60 +
Footnote †: Corresponding author: Marvin Fritz.
## 1. Introduction
In this paper, we undertake a comprehensive well-posedness analysis of the Cahn-Hilliard-Biot model that is capable of capturing complex scenarios involving fluid flow through deformable porous media at the Darcy scale. This model has been derived recently in [31] and it exhibits the capacity to adapt to changing material characteristics, encompassing variations in stiffness, permeability, compressibility, and poroelastic coupling strength. These variations arise from Cahn-Hilliard-type phase changes occurring within the solid matrix, which is a phenomenon with diverse applications.
One prominent application domain for this model is the study of solid tumor evolution. It has been argued that stress effects resulting from tumor growth exert a profound influence on the evolution of tumors themselves [24, 25]. Additionally, stress has the potential to both promote and inhibit tumor growth [4, 32, 22]. Moreover, most solid malignant tumors exhibit elevated interstitial fluid pressure and alterations in the elastic properties of their surrounding matrix [26]. Conceptually, the porous medium can be viewed as a representation of the coexistence of malignant and healthy cells enveloped by an extracellular matrix, with the fluid mimicking the interstitial fluid within this intricate biological environment. One may add several
additional phenomena to the model such as chemotaxis [13], haptotaxis [10] and angiogenesis [12]. We refer to [11] for an overview of tumor modeling using phase-field equations.
The proposed mathematical framework represents an extension that seamlessly combines elements from the Cahn-Hilliard model and the quasi-static linear Biot equations. In [31], it was shown that the mathematical model possesses a generalized gradient flow structure, affirming the thermodynamic consistency of the model. The Cahn-Hilliard component governs solid phase changes within the system through a smooth phase-field variable, while the Biot equations oversee fluid flow and elasticity. The Cahn-Hilliard equation [27] employs interfacial free energy to model phase separation. Coupling the Cahn-Hilliard model with elasticity is often referred to as the Cahn-Larche model [17], and it has found applications in diverse areas, including li-ion batteries [2] and tumor evolution [9, 15]. Other models have considered a viscoelastic coupling, see [19, 21]. In the Cahn-Hilliard-Biot model, fluid dynamics is incorporated into the system, assuming Biot-type coupling between flow and elasticity [6].
We prove the existence of weak solutions to the Cahn-Hilliard-Biot model based on a spatial discretization and suitable energy estimates. From these, we conclude the existence of weakly/weakly-\(\star\) convergent subsequences, whose limits will turn out to constitute a weak solution to the original problem. In this step, we apply the Aubin-Lions compactness lemma to achieve strong convergences so that we can pass to the limit in the nonlinear terms of the model. Moreover, we prove under several more restrictive assumptions that the weak solution is well-posed, i.e., it is unique and depends continuously on the data.
The structure of the article is as follows: The Cahn-Hilliard-Biot model is shortly presented in Section 2. After introducing conservation laws for each of the three coupled processes (phase-field evolution, elasticity, fluid flow), we propose free energies and constitutive relations to conclude the Cahn-Hilliard-Biot system. In Section 3, we introduce the relevant function spaces and inequalities that appear in the well-posedness analysis of the model in Section 4. Here, we state the definition and the existence of a weak solution. This is the main result of this work, and we prove such a theorem in several steps. Moreover, we show the uniqueness and the continuous dependence of the weak solution under several assumptions. Lastly, we show some numerical simulations in Section 5 and highlight the various effects of the elasticity and flow equations on the tumor's growth and shape.
## 2 Modeling
In this section, we shortly present the mathematical formulation of the Cahn-Hilliard-Biot model, which describes a saturated porous medium containing one fluid phase and two solid phases with distinct material properties. The solid phases are represented using a diffuse interface approach of Cahn-Hilliard type, where surface tension, solid material deformation, and pore pressure act as driving forces. We follow the derivation and explanations in [31].
Let \(\Omega\subset\mathbb{R}^{d}\), \(d\in\{1,2,3\}\), be a bounded domain and \(T>0\) a fixed final time. Further, \(\varphi:\Omega\times[0,T]\to[0,1]\) describes the phase-field representing the two phases, \(u\) the infinitesimal displacement with \(|\nabla u|\ll 1\), \(\varepsilon(u)=\frac{1}{2}(\nabla u+\nabla u^{\top})\) the strain measure of \(u\), \(p\) the pore pressure, and \(q\) the fluid flux. The phase-field equation accounts for phase-change conservation through a phase-field flux \(J\) and reactions \(S_{\varphi}\):
\[\partial_{t}\varphi+\text{div}J=S_{\varphi}. \tag{1}\]
Mechanical equilibrium is attained on a much faster time scale than diffusion takes place. Therefore, we will assume a quasi-static equilibrium for \(u\), i.e.,
\[-\mathrm{div}\sigma=0, \tag{2}\]
where \(\sigma\) is the stress tensor. One may consider a non-zero right-hand side as originally derived in [31]. However, following the usual assumptions of the Cahn-Larche equations in [15, 17], we consider no external body forces. Additionally, we consider a volume balance law for the fluid:
\[\partial_{t}\theta+\mathrm{div}q=S_{\theta}, \tag{3}\]
where \(\theta\) represents the volumetric fluid content, influenced by fluid flux \(q\) and a source term \(S_{\theta}\).
The model's closure relies on its free energy, encompassing three components: the regularized surface energy, elastic energy, and fluid energy:
\[\mathcal{E}(\varphi,u,\theta)=\mathcal{E}_{\varphi}(\varphi)+\mathcal{E}_{u} (\varphi,u)+\mathcal{E}_{\theta}(\varphi,u,\theta). \tag{4}\]
The regularized surface energy is expressed as:
\[\mathcal{E}_{\varphi}(\varphi):=\int_{\Omega}\Psi(\varphi)+\frac{\gamma}{2}| \nabla\varphi|^{2}\,\mathrm{d}x, \tag{5}\]
where \(\Psi(\varphi)\) is a double-well potential that penalizes deviations from pure phases, and the second term accounts for interfacial energy. Here, \(\gamma\) represents the interfacial tension. The elastic energy, following the Cahn-Larche equations [15], takes the form:
\[\mathcal{E}_{u}(\varphi,u)=\int_{\Omega}W(\varphi,\varepsilon(u))\,\mathrm{d }x=\int_{\Omega}\frac{1}{2}\big{(}\varepsilon(u)-\mathcal{T}(\varphi)\big{)} \!:\mathbb{C}(\varphi)\big{(}\varepsilon(u)-\mathcal{T}(\varphi)\big{)}\, \mathrm{d}x, \tag{6}\]
where \(\mathbb{C}(\varphi)\) is the elasticity tensor and \(\mathcal{T}(\varphi)\) is the eigenstrain (or: stress free strain) at \(\varphi\), accounting for solid phase-specific properties. Lastly, the fluid energy extends the classical fluid energy and is expressed as:
\[\mathcal{E}_{\theta}(\varphi,u,\theta)=\int_{\Omega}\frac{M(\varphi)}{2} \left(\theta-\alpha(\varphi)\mathrm{div}u\right)^{2}\,\mathrm{d}x, \tag{7}\]
where both the compressibility parameter \(M(\varphi)\) and the Biot-Willis coupling coefficient \(\alpha(\varphi)\) are functions of the phase-field.
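To make the structure of the total free energy concrete, a small Python sketch evaluating the three densities pointwise, with a scalar \(e\) standing in for the strain; the parameter functions are illustrative choices only and are not taken from the model assumptions:

```python
gamma = 1.0
Psi   = lambda phi: 0.25 * phi**2 * (1.0 - phi)**2   # double-well potential
C     = lambda phi: 1.0 + phi                        # stiffness
T     = lambda phi: 0.1 * phi                        # eigenstrain
M     = lambda phi: 1.0 + 0.5 * phi                  # compressibility
alpha = lambda phi: 0.5 + 0.25 * phi                 # Biot-Willis coefficient

def energy_density(phi, grad_phi, e, theta):
    e_surface = Psi(phi) + 0.5 * gamma * grad_phi**2          # cf. (5)
    e_elastic = 0.5 * C(phi) * (e - T(phi))**2                # cf. (6)
    e_fluid   = 0.5 * M(phi) * (theta - alpha(phi) * e)**2    # cf. (7)
    return e_surface + e_elastic + e_fluid

print(energy_density(0.3, 0.1, 0.05, 0.2))
```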
The source functions \(S_{\varphi}\) and \(S_{\theta}\) appearing in (1) and (3) are short forms for \(S_{\varphi}=S_{\varphi}(\varphi)\) and \(S_{\theta}=S_{\theta}(\varphi,\theta)\). Typically, we assume bounded source functions. When excluding effects from the Biot model, it was proposed in [15] that \(S_{\varphi}=-\lambda_{a}k(\varphi)+\lambda_{p}f(\varphi)\sigma/(1+D_{\varepsilon(u)}W)\) with \(\lambda_{a},\lambda_{p}\) being the apoptosis and proliferation factors, respectively, \(\sigma\) the nutrient concentration, and \(f,k\) bounded functions. Typically, one chooses \(f(\varphi)=\varphi(1-\varphi)\), but one replaces \(\varphi\) by a cut-off operator in analytical proofs to ensure boundedness. Biologically, it should hold that \(\varphi\in[0,1]\) anyway.
The model relies on various constitutive relations. For instance, Fick's law for non-ideal mixtures connects the flux \(J\) to the negative gradient of the chemical potential i.e. \(J=-m(\varphi)\nabla\mu\) with \(m(\varphi)\) representing the chemical mobility. The chemical potential \(\mu\) is obtained as the variational derivative of the free energy with
respect to \(\varphi\) i.e.
\[\mu=\delta_{\varphi}\mathcal{E} =\Psi^{\prime}(\varphi)-\gamma\Delta\varphi+\frac{1}{2}\left( \varepsilon(u)-\mathcal{T}(\varphi)\right)\colon\mathbb{C}^{\prime}(\varphi) \left(\varepsilon(u)-\mathcal{T}(\varphi)\right)\] \[\quad-\mathcal{T}^{\prime}(\varphi)\colon\mathbb{C}(\varphi) \left(\varepsilon(u)-\mathcal{T}(\varphi)\right)+\frac{M^{\prime}(\varphi)}{2} (\theta-\alpha(\varphi)\mathrm{div}u)^{2}\] \[\quad-M(\varphi)(\theta-\alpha(\varphi)\mathrm{div}u)\alpha^{ \prime}(\varphi)\mathrm{div}u.\]
The stress tensor \(\sigma\) and pore pressure \(p\) are defined based on the rate of change of energy with respect to strain and volumetric fluid content, respectively, i.e.
\[\sigma =\delta_{\varepsilon(u)}\mathcal{E}=\mathbb{C}(\varphi)\left( \varepsilon(u)-\mathcal{T}(\varphi)\right)-\alpha(\varphi)pI,\] \[p =\delta_{\theta}\mathcal{E}=M(\varphi)(\theta-\alpha(\varphi) \mathrm{div}u).\]
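In a scalar surrogate setting (with \(e\) standing in for the strain), these relations can be checked symbolically; the following sympy sketch verifies that the derivatives of the energy density reproduce \(p\) and \(\sigma\), including the minus sign in front of \(\alpha(\varphi)pI\):

```python
import sympy as sp

phi, theta, e = sp.symbols('phi theta e')
C, T, M, alpha = (sp.Function(n)(phi) for n in ('C', 'T', 'M', 'alpha'))
W = sp.Rational(1, 2) * C * (e - T)**2 \
    + sp.Rational(1, 2) * M * (theta - alpha * e)**2
p = sp.diff(W, theta)          # pore pressure: M*(theta - alpha*e)
sigma = sp.diff(W, e)          # stress: C*(e - T) - alpha*p
print(sp.simplify(p - M * (theta - alpha * e)))         # 0
print(sp.simplify(sigma - (C * (e - T) - alpha * p)))   # 0
```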
These quantities are described by constitutive relations, ultimately coupled to the balance equations and free energy. Finally, Darcy's law governs fluid flow through the porous medium, relating the fluid flux \(q\) to the gradient of pore pressure \(p\) i.e. \(q=-\kappa(\varphi)\nabla p\) where \(\kappa(\varphi)\) represents the permeability, accounting for the phase-field's influence on fluid flow. Darcy flow has been considered in tumor models in, e.g., [29], whereas Darcy-Brinkman flow was studied in [7].
Combining the balance equations with the constitutive relations and making appropriate identifications, the resulting Cahn-Hilliard-Biot model comprises a system of partial differential equations, providing insights into complex phenomena within porous media:
\[\partial_{t}\varphi-\mathrm{div}(m(\varphi)\nabla\mu) =S_{\varphi}(\varphi)\] \[\mu+\gamma\Delta\varphi-\Psi^{\prime}(\varphi)-\delta_{\varphi} \mathcal{E}_{u}(\varphi,u)-\delta_{\varphi}\mathcal{E}_{\theta}(\varphi,u,\theta) =0\] \[\mathrm{div}\big{(}\mathbb{C}(\varphi)\left(\varepsilon(u)- \mathcal{T}(\varphi)\right)\big{)}-\nabla(\alpha(\varphi)p) =0 \tag{8}\] \[\partial_{t}\theta-\mathrm{div}(\kappa(\varphi)\nabla p) =S_{\theta}(\varphi,\theta)\] \[p-M(\varphi)(\theta-\alpha(\varphi)\mathrm{div}u) =0.\]
We equip the system with the initial conditions \(\varphi(0)=\varphi_{0}\) and \(\theta(0)=\theta_{0}\), and the boundary conditions:
\[\nabla\varphi\cdot n=m(\varphi)\nabla\mu\cdot n=\kappa(\varphi )\nabla p\cdot n =0 \text{on }\Gamma:=\partial\Omega,\] \[u =0 \text{on }\Gamma_{D}\subset\Gamma,\] \[\nabla u\cdot n =0 \text{on }\Gamma\backslash\Gamma_{D}.\]
As usual, we consider no-flux boundary conditions for \(\varphi\) and \(\mu\) in the Cahn-Hilliard equation. Similarly, we assume a no-flux boundary condition for the pressure \(p\). For the deformation \(u\), we postulate a zero Dirichlet condition on \(\Gamma_{D}\) to consider the possible presence of a rigid part of the body such as a bone which prevents variations of the displacement, and a no-flux condition on the rest of the boundary. Alternatively, one may consider a non-homogeneous Neumann condition on the rest of the boundary as done in [15] so that the normal component of the stress is equal to some load given by a fixed source.
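To illustrate the phase-separation mechanism in isolation, here is a minimal 1D periodic spectral solver for the decoupled Cahn-Hilliard subsystem of (8) (constant mobility \(m=1\), no elastic or fluid coupling, \(S_{\varphi}=0\), and \(\Psi(\varphi)=\tfrac{1}{4}\varphi^{2}(1-\varphi)^{2}\)); this is a toy sketch only, not an implementation of the coupled model:

```python
import numpy as np

N, L = 256, 1.0
gamma, dt, steps = 1e-4, 1e-5, 20_000
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
k2, k4 = k**2, k**4
rng = np.random.default_rng(1)
phi = 0.5 + 0.05 * rng.standard_normal(N)       # perturbed mixed state

def dPsi(p):                                    # Psi'(phi)
    return 0.5 * p * (1.0 - p) * (1.0 - 2.0 * p)

for _ in range(steps):
    # semi-implicit step: explicit Psi', implicit fourth-order term
    rhs = np.fft.fft(phi) - dt * k2 * np.fft.fft(dPsi(phi))
    phi = np.fft.ifft(rhs / (1.0 + dt * gamma * k4)).real

print(phi.min(), phi.max())   # the phases separate toward 0 and 1
```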
## 3. Preliminaries
In the following, we assume that \(\Omega\subset\mathbb{R}^{d}\), \(d\in\{1,2,3\}\), is a bounded domain with a sufficiently smooth boundary \(\partial\Omega\) and \(T>0\) is a given fixed time horizon. The space-time cylinder is denoted by \(\Omega_{t}=[0,t]\times\Omega\) for \(t\in(0,T]\). Notationally, we write \((f,g)\mapsto(f,g)_{\Omega}:=\int_{\Omega}f(x)g(x)\,\mathrm{d}x\) for the integration of two functions \(f\in L^{p}(\Omega)\), \(g\in L^{p^{\prime}}(\Omega)\) with \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\) for \(p,p^{\prime}\in[1,\infty]\). Moreover, we shall
use standard notation for Lebesgue, Sobolev and Bochner spaces. When denoting norms, we shall omit the spatial domain when no confusion is possible.
We denote a generic constant simply by \(C>0\) (which may change from line to line) and for brevity we may write \(x\lesssim y\) instead of \(x\leq Cy\). We recall the Holder, Young convolution, Poincare-Wirtinger, Korn and Sobolev inequalities:
\[\begin{split}\|uv\|_{L^{\bar{r}}}&\leq\|u\|_{L^{\bar{p}}}\|v\|_{L^{\bar{q}}}\qquad\quad\forall u\in L^{\bar{p}}(\Omega),\ v\in L^{\bar{q}}(\Omega),\\ \|u\ast v\|_{L^{\hat{r}}}&\leq\|u\|_{L^{\hat{p}}}\|v\|_{L^{\hat{q}}}\qquad\quad\forall u\in L^{\hat{p}}(\Omega),\ v\in L^{\hat{q}}(\Omega),\\ \|u-\langle u\rangle\|_{L^{p}}&\lesssim\|\nabla u\|_{L^{p}}\qquad\quad\forall u\in W^{1,p}(\Omega),\\ \|\nabla u\|_{L^{p}}^{p}&\lesssim\|u\|_{L^{p}}^{p}+\|\varepsilon(u)\|_{L^{p}}^{p}\quad\forall u\in W^{1,p}(\Omega),\\ \|u\|_{W^{m,q}}&\lesssim\|u\|_{W^{k,p}}\qquad\qquad\forall u\in W^{k,p}(\Omega),\end{split} \tag{9}\]
where the exponents satisfy \(\frac{1}{\bar{p}}+\frac{1}{\bar{q}}=\frac{1}{\bar{r}}\), \(\frac{1}{\hat{p}}+\frac{1}{\hat{q}}=1+\frac{1}{\hat{r}}\) and \(k-\frac{d}{p}\geq m-\frac{d}{q}\) for \(k\geq m\), respectively. Here, \(\langle u\rangle=\frac{1}{|\Omega|}(u,1)_{L^{2}(\Omega)}\) denotes the mean of \(u\) with respect to \(\Omega\). We denote by \(W^{1,p}_{D}(\Omega)\) the space of functions in \(W^{1,p}(\Omega)\) that have zero trace on \(\Gamma_{D}\subset\partial\Omega\). Then, see [5, Theorem 6.15], the Poincaré and Korn inequalities simplify to
\[\begin{split}\|u\|_{L^{p}}&\lesssim\|\nabla u\|_{L^{p}}\quad\quad\forall u\in W^{1,p}_{D}(\Omega),\\ \|\nabla u\|_{L^{p}}^{p}&\lesssim\|\varepsilon(u)\|_{L^{p}}^{p}\quad\forall u\in W^{1,p}_{D}(\Omega).\end{split} \tag{10}\]
To achieve strong convergence and pass the limit in the nonlinear parts of evolutionary PDEs, we require compact embeddings of Bochner spaces. We consider the Banach spaces \(X\), \(Y\), \(Z\) such that \(X\) is compactly embedded in \(Y\) and \(Y\) is continuously embedded in \(Z\), i.e., the triple \(X\hookrightarrow Y\hookrightarrow Z\) is considered. The Aubin-Lions compactness lemma [30, Corollary 4] states
\[\begin{split} L^{p}(0,T;X)\cap W^{1,1}(0,T;Z)&\hookrightarrow\hookrightarrow L^{p}(0,T;Y),\quad\ \ 1\leq p<\infty,\\ L^{\infty}(0,T;X)\cap W^{1,r}(0,T;Z)&\hookrightarrow\hookrightarrow C^{0}([0,T];Y),\quad\ \ \ \ \ r>1.\end{split} \tag{11}\]
## 4. Well-posedness of the Cahn-Hilliard-Biot model
In this section, we state and prove the main theorem of this work. We will show that the Cahn-Hilliard-Biot system (8) admits a weak solution in the sense of the following definition.
**Definition 4.1**.: We call the quintuple \((\varphi,\mu,u,\theta,p)\) a weak solution to the Cahn-Hilliard-Biot system (8) if it satisfies the variational equations
\[\begin{split}(\partial_{t}\varphi,\zeta_{\varphi})_{\Omega}+(m( \varphi)\nabla\mu,\nabla\zeta_{\varphi})_{\Omega}&=(S_{\varphi} (\varphi),\zeta_{\varphi})_{\Omega}\\ -(\mu,\zeta_{\mu})_{\Omega}+\gamma(\nabla\varphi,\nabla\zeta_{ \mu})_{\Omega}+(\Psi^{\prime}(\varphi),\zeta_{\mu})_{\Omega}&=-( \delta_{\varphi}\mathcal{E}_{u}+\delta_{\varphi}\mathcal{E}_{\theta},\zeta_{ \mu})_{\Omega}\\ \left(\mathbb{C}(\varphi)\left(\varepsilon(u)-\mathcal{T}( \varphi)\right),\nabla\zeta_{u}\right)_{\Omega}&=(\alpha(\varphi )p,\mathrm{div}\zeta_{u})_{\Omega}\\ (\partial_{t}\theta,\zeta_{\theta})_{\Omega}+(\kappa(\varphi) \nabla p,\nabla\zeta_{\theta})_{\Omega}&=(S_{\theta}(\varphi, \theta),\zeta_{\theta})_{\Omega}\\ (p,\zeta_{p})_{\Omega}&=(\theta-\alpha(\varphi) \mathrm{div}u,M(\varphi)\zeta_{p})_{\Omega}\end{split} \tag{12}\]
for any test functions \(\zeta_{\varphi},\zeta_{\mu},\zeta_{\theta},\zeta_{p}\in H^{1}(\Omega)\), \(\zeta_{u}\in H^{1}_{D}(\Omega)^{d}\), and the initial conditions \(\varphi(0)=\varphi_{0}\), \(\theta(0)=\theta_{0}\).
Before we prove that a weak solution exists, we consider some assumptions on the functions that appear in the model.
**Assumption 4.2**.: Let the following assumptions hold:
1. \(\theta_{0}\in L^{2}(\Omega)\), \(\varphi_{0}\in H^{1}(\Omega)\).
2. \(\gamma>0\) is fixed.
3. \(S_{\varphi}\), \(S_{\theta}\) are continuous and bounded.
4. \(\Psi\in C^{1}(\mathbb{R};\mathbb{R}_{\geq 0})\) satisfies the growth condition \(|\Psi^{\prime}(x)|\leq C_{\Psi}(1+|x|)\) for any \(x\in\mathbb{R}\) where \(C_{\Psi}>0\).
5. \(\mathbb{C}(\varphi)\) is bounded, Lipschitz continuous, and fulfills \(\varepsilon(u):\mathbb{C}(\varphi)\varepsilon(u)\geq C_{\mathbb{C}}| \varepsilon(u)|^{2}\) for a constant \(C_{\mathbb{C}}>0\).
6. \(\mathcal{T}(\varphi)\) is Lipschitz continuous and differentiable.
7. \(\kappa,m\in C^{0}(\mathbb{R})\), \(\alpha,M\in W^{1,\infty}(\mathbb{R})\) s.t. \(M(x)\geq M_{0}>0\), \(\kappa(x)\geq\kappa_{0}>0\), \(m(x)\geq m_{0}>0\) for any \(x\in\mathbb{R}\).
Alternatively to (A4), one can impose growth conditions on \(\Psi\) and its derivatives of other orders, see [15]. We recall that \(W(x,M)=\frac{1}{2}(M-\mathcal{T}(x)):\mathbb{C}(x)(M-\mathcal{T}(x))\), so that \(\mathcal{E}_{u}(\varphi,u)=\int_{\Omega}W(\varphi,\varepsilon(u))\,\mathrm{d}x\). Then (A5) implies that it holds
\[(D_{M}W(x,M_{1})-D_{M}W(x,M_{2})):(M_{1}-M_{2}) \geq C|M_{1}-M_{2}|^{2}, \tag{13}\] \[|W(x,M)|+|D_{\varphi}W(x,M)| \leq C(1+|x|^{2}+|M|^{2}),\] \[|D_{M}W(x,M)| \leq C(1+|x|+|M|).\]
for any \(x\in\mathbb{R}\), \(M_{1},M_{2}\in\mathbb{R}^{d\times d}\), \(M\in\mathbb{R}^{d\times d}_{\mathrm{sym}}\).
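As a quick sanity check of the first bound in (13), the following toy computation (our own sketch for the scalar case \(d=1\), with a constant elasticity \(c>0\) and an affine eigenstrain; all symbol names are made up) verifies the monotonicity symbolically:

```python
# Toy symbolic check of the monotonicity bound (13) in the scalar case d = 1,
# with constant elasticity c > 0 and affine eigenstrain T(x) = t1 + t2*x.
import sympy as sp

x, M, M1, M2, c, t1, t2 = sp.symbols("x M M1 M2 c t1 t2", real=True)

T = t1 + t2 * x                            # affine eigenstrain T(x)
W = sp.Rational(1, 2) * c * (M - T) ** 2   # scalar version of W(x, M)
DW = sp.diff(W, M)                         # D_M W(x, M) = c * (M - T(x))

# (D_M W(x, M1) - D_M W(x, M2)) * (M1 - M2) equals c * (M1 - M2)**2 here
lhs = (DW.subs(M, M1) - DW.subs(M, M2)) * (M1 - M2)
assert sp.simplify(lhs - c * (M1 - M2) ** 2) == 0
print("strict monotonicity (13) holds with constant c in this toy case")
```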
Then, the existence theorem reads as follows.
**Theorem 4.3** (Existence of a weak solution).: _Let Assumption 4.2 hold. Then there exists a global-in-time solution to the Cahn-Hilliard-Biot system in the sense of Definition 4.1. Moreover, the solution admits the regularity_
\[\varphi \in C_{w}([0,T];H^{1}(\Omega))\cap C([0,T];L^{r}(\Omega))\cap H^{ 1}(0,T;(H^{1}(\Omega))^{\prime}),\] \[\mu \in L^{2}(0,T;H^{1}(\Omega)),\] \[u \in L^{2}(0,T;H^{1}_{D}(\Omega)^{d}),\] \[p \in L^{\infty}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{1}(\Omega)),\] \[\theta \in C_{w}([0,T];L^{2}(\Omega))\cap C([0,T];(H^{1}(\Omega))^{ \prime}),\]
_with \(r<6\) for \(d=3\) or \(r<\infty\) for \(d=2\), and fulfills the energy inequality_
\[\|\varphi(t)\|_{H^{1}}^{2}+\|\nabla\mu\|_{L^{2}_{t}(L^{2})}^{2}+ \|\nabla p\|_{L^{2}_{t}(L^{2})}^{2}+\|\Psi(\varphi(t))\|_{L^{1}}+\|u(t)\|_{H^ {1}}^{2}+\|p(t)\|_{L^{2}}^{2} \tag{14}\] \[\lesssim 1+\|\varphi_{0}\|_{H^{1}}^{2}+\|\theta_{0}\|_{L^{2}}^{2}.\]
Under additional assumptions, we are able to prove the uniqueness of the weak solution. Assuming a constant elasticity tensor and an affine linear eigenstrain is quite standard in the literature; this case is referred to as Vegard's law, see [15]. The appropriate testing and cancellation requires the mobility \(m\) and the permeability \(\kappa\) to be constant. Moreover, we assume that \(M\) and \(\alpha\) are constant, which simplifies the model. Lastly, we assume that the potential function is semiconvex. If the constant \(C_{*}\) in (A4\({}^{*}\)) were zero, this would simply mean that \(\Psi\) is convex, whereas \(C_{*}<0\) would imply strong convexity of \(\Psi\). However, \(C_{*}>0\) is a condition weaker than convexity, which is still enough for proving the uniqueness of a solution.
**Theorem 4.4** (Continuous dependence and uniqueness).: _Let (A1)-(A3) of Assumption 4.2 hold. Additionally, we assume:_
* (A4\({}^{*}\)) \(\Psi\in C^{1}(\mathbb{R};\mathbb{R}_{\geq 0})\) _with_ \(\Psi(0)=\Psi^{\prime}(0)=0\) _satisfies the growth conditions of (A4) and it is semiconvex, i.e.,_ \((\Psi^{\prime}(x)-\Psi^{\prime}(y))(x-y)\geq-C_{*}|x-y|^{2}\) _for any_ \(x,y\in\mathbb{R}\) _and some_ \(C_{*}\geq 0\)_._
* (A5\({}^{*}\)) \(\mathbb{C}(\varphi)=\mathbb{C}>0\) _is constant._
* (A6\({}^{*}\)) \(\mathcal{T}(\varphi)\) _is affine linear, i.e.,_ \(\mathcal{T}(\varphi)=\mathcal{T}_{1}+\mathcal{T}_{2}\varphi\) _for_ \(\mathcal{T}_{1},\mathcal{T}_{2}\in\mathbb{R}^{d\times d}\)_._
* (A7\({}^{*}\)) \(\kappa,m,\alpha,M>0\) _are constant._
_Then the weak solution \((\varphi,\mu,u,p,\theta)\) to the Cahn-Hilliard-Biot system is unique. Moreover, any two weak solutions \((\varphi_{i},\mu_{i},u_{i},p_{i},\theta_{i})\), \(i\in\{1,2\}\), depend continuously on the data \((\varphi_{0,i},\theta_{0,i})\), \(i\in\{1,2\}\), in the sense that it holds_
\[\begin{split}&\|\varphi_{1}-\varphi_{2}\|_{L^{\infty}(H^{1})^{ \prime}\cap L^{2}(L^{2})}^{2}+\|\mu_{1}-\mu_{2}\|_{L^{2}(H^{1})^{\prime}}^{2}+ \|u_{1}-u_{2}\|_{L^{2}(H^{1}_{D})}^{2}\\ &\quad+\|\theta_{1}-\theta_{2}\|_{L^{\infty}(H^{1})^{\prime}\cap L ^{2}(L^{2})}^{2}+\|p_{1}-p_{2}\|_{L^{2}(L^{2})}^{2}\\ &\lesssim\|\varphi_{0,1}-\varphi_{0,2}\|_{(H^{1})^{\prime}}^{2}+ \|\theta_{0,1}-\theta_{0,2}\|_{(H^{1})^{\prime}}^{2}.\end{split} \tag{15}\]
To prove the existence theorem, we follow the Galerkin procedure by deriving energy estimates on a discrete level before returning to the continuous problem. In fact, the steps of the procedure read as follows:
1. **Approximate problem.** We select the eigenfunctions of the Laplace operator, whose span is dense in \(H^{1}(\Omega)\). We approximate the system by a problem in a finite-dimensional space. This reduces the problem to a system of ordinary differential equations, and we can apply standard theory to ensure the existence of a solution of this finite-dimensional problem. As a result, we obtain a sequence of solutions \((\varphi_{k},\mu_{k},u_{k},\theta_{k},p_{k})\) of the respective finite-dimensional problem.
2. **Energy estimates.** In this step, one shows that the sequence of solutions \((\varphi_{k},\mu_{k},u_{k},\theta_{k},p_{k})\) is uniformly bounded in the norm of reflexive/separable Banach spaces. According to the theorem of Banach-Alaoglu, there is a subsequence (denoted by the same index) \((\varphi_{k},\mu_{k},u_{k},\theta_{k},p_{k})\) that converges weakly-\(*\) to some element \((\varphi,\mu,u,\theta,p)\).
3. **Compactness.** We prove that the derivative of \(\varphi_{k}\) is bounded in another Bochner space and thus, we can apply the Aubin-Lions lemma, see (11), to conclude that \((\varphi_{k})_{k}\) converges strongly in some space. This strong convergence is essential for the limit process later on. Otherwise, we would not be able to conclude the convergence of the nonlinear functions appearing in the system.
4. **Initial conditions.** We show that the limit functions \(\varphi\) and \(\theta\) also fulfill the imposed initial conditions \(\varphi(0)=\varphi_{0}\) and \(\theta(0)=\theta_{0}\) in some sense. This is performed using the strong convergence at \(t=0\) and the uniqueness of limits.
5. **Limit process.** We are at the point where we have already proved the existence of functions \((\varphi_{k},\mu_{k},u_{k},\theta_{k},p_{k})\) fulfilling the \(k\)-th Galerkin equations, respectively. In this step, we take the limit \(k\to\infty\) of the \(k\)-th Galerkin equations to recover the variational formulation (12). Thus, the weak-\(*\) limit of a subsequence of \((\varphi_{k},\mu_{k},u_{k},\theta_{k},p_{k})\) turns out to be a solution of the variational Cahn-Hilliard-Biot system. This finishes the proof of Theorem 4.3.
6. **Continuous dependence and uniqueness.** In Theorem 4.4, we consider several simplifications, e.g., by assuming a constant mobility. We prove the solution's continuous dependence on the data. From here, we can directly obtain the uniqueness of the solution.
Proof of Theorem 4.3.: As described, we follow the Galerkin procedure to prove the existence of a weak solution.
**Step 1 (Approximate problem).** We choose \(\{z_{i}\}_{i\in\mathbb{N}}\) as the set of eigenfunctions of the Neumann-Laplacian operator that is orthonormal in \(L^{2}(\Omega)\) and orthogonal in \(H^{1}(\Omega)\) with \(z_{1}\) being the constant function \(|\Omega|^{-1/2}\) and \((z_{i},1)_{\Omega}=0\) for \(i\geq 2\). Further, we choose \(\{y_{i}\}_{i\in\mathbb{N}}\) as the eigenfunctions of a boundary value problem for an elasticity system which is orthogonal in \(L^{2}(\Omega)^{d}\), see [23, Thm. 3.12.1]. Then, we define the finite-dimensional spaces \(Z_{k}\) and \(Y_{k}\) as the linear span of the first \(k\) eigenfunctions \(\{z_{i}\}_{i\in\mathbb{N}}\) and \(\{y_{i}\}_{i\in\mathbb{N}}\), respectively. Moreover, we denote by \(\Pi_{k}\) the \(L^{2}\)-projection onto \(Z_{k}\). Then, the Galerkin approximation of (12) reads as: for any \(k\in\mathbb{N}\) find \((\varphi_{k},\mu_{k},u_{k},\theta_{k},p_{k})\) of the form
\[\varphi_{k}(t,x) =\sum_{i=1}^{k}a_{i}^{k}(t)z_{i}(x),\quad\mu_{k}(t,x)=\sum_{i=1}^ {k}b_{i}^{k}(t)z_{i}(x),\quad u_{k}(t,x)=\sum_{i=1}^{k}c_{i}^{k}(t)y_{i}(x),\] \[\theta_{k}(t,x) =\sum_{i=1}^{k}d_{i}^{k}(t)z_{i}(x),\quad p_{k}(t,x)=\sum_{i=1}^ {k}e_{i}^{k}(t)z_{i}(x),\]
satisfying for a.e. \(t\in(0,T)\) the system
\[(\partial_{t}\varphi_{k},z)_{\Omega}+\left(m(\varphi_{k})\nabla \mu_{k},\nabla z\right)_{\Omega} =(S_{\varphi}(\varphi_{k}),z)_{\Omega}\] \[-(\mu_{k},z)_{\Omega}+\gamma(\nabla\varphi_{k},\nabla z)_{\Omega }+(\Psi^{\prime}(\varphi_{k}),z)_{\Omega} =-(\delta_{\varphi_{k}}\mathcal{E}_{\theta}^{k}+\delta_{\varphi_ {k}}\mathcal{E}_{u}^{k},z)_{\Omega}\] \[\left(\mathbb{C}(\varphi_{k})\left(\varepsilon(u_{k})-\mathcal{ T}(\varphi_{k})\right),\nabla y\right)_{\Omega} =(\alpha(\varphi_{k})p_{k},\mathrm{div}y)_{\Omega} \tag{16}\] \[(\partial_{t}\theta_{k},z)_{\Omega}+(\kappa(\varphi_{k})\nabla p _{k},\nabla z)_{\Omega} =(S_{\theta}(\varphi_{k},\theta_{k}),z)_{\Omega}\] \[(p_{k},z)_{\Omega} =(M(\varphi_{k})(\theta_{k}-\alpha(\varphi_{k})\mathrm{div}u_{k} ),z)_{\Omega}\]
for all \(z\in Z_{k}\), \(y\in Y_{k}\). Here, we have defined \(\mathcal{E}_{u}^{k}=\mathcal{E}_{u}(\varphi_{k},u_{k})\) and in the same way for \(\mathcal{E}_{\theta}^{k}\). We equip the system with the initial conditions \(\varphi_{k}(0)=\varphi_{k,0}:=\Pi_{k}\varphi_{0}\) and \(\theta_{k}(0)=\theta_{k,0}:=\Pi_{k}\theta_{0}\). The orthogonality of \(\{z_{i}\}_{i\in\mathbb{N}}\) regarding the \(L^{2}\)-inner product allows us to express the Galerkin approximation as a system of ordinary differential equations in the coefficient vectors \(\boldsymbol{a}:=(a_{1}^{k},\ldots,a_{k}^{k})\), \(\boldsymbol{b}:=(b_{1}^{k},\ldots,b_{k}^{k})\), \(\boldsymbol{c}:=(c_{1}^{k},\ldots,c_{k}^{k})\), \(\boldsymbol{d}:=(d_{1}^{k},\ldots,d_{k}^{k})\) and \(\boldsymbol{e}:=(e_{1}^{k},\ldots,e_{k}^{k})\). Since it holds \(\varphi_{k}(0,x)=\Pi_{k}\varphi_{0}(x)=\sum_{i=1}^{k}(\varphi_{0},z_{i})_{ \Omega}z_{i}(x)\), we set the initial condition as \(a_{i}^{k}(0)=(\varphi_{0},z_{i})_{\Omega}\) for any \(i\in\{1,\ldots,k\}\) and in the same way for the other variables. Since all the involved functions \(m\), \(\alpha\), \(M\), \(\kappa\), \(\Psi^{\prime}\), \(S_{\varphi}\), \(S_{\theta}\), \(\mathbb{C}\), \(\mathcal{T}\) are continuous regarding their arguments, the differential-algebraic system contains only contributions that are continuous in \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d},\boldsymbol{e}\). We invoke the Cauchy-Peano theorem to obtain the existence of \(T_{k}\in(0,T]\) and local solutions \(\boldsymbol{a},\boldsymbol{b},\boldsymbol{c},\boldsymbol{d},\boldsymbol{e}\in C ^{1}([0,T_{k}];\mathbb{R}^{k})\) solving the Galerkin system.
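To illustrate Step 1 concretely, the following toy sketch (our own example, not part of the proof: a one-dimensional Cahn-Hilliard equation with constant mobility, no source and no elastic or flow coupling, discretized with the Neumann eigenfunctions \(z_{j}=\cos(jx)\) on \((0,\pi)\)) shows how the Galerkin ansatz reduces the PDE to an ODE system for the coefficient vector, which a standard ODE solver then integrates:

```python
# Toy illustration of Step 1: Galerkin reduction of a 1D Cahn-Hilliard
# equation (constant mobility, no coupling, no source) to an ODE system
# for the coefficient vector a(t). All parameters here are made up.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

k, gamma, m = 16, 1e-2, 1.0
xs = np.linspace(0.0, np.pi, 401)                  # quadrature grid on (0, pi)
Z = np.array([np.cos(j * xs) for j in range(k)])   # Neumann eigenfunctions z_j
norms = np.array([np.pi] + [np.pi / 2] * (k - 1))  # ||z_j||_{L^2}^2
j2 = np.arange(k) ** 2                             # eigenvalues of -d^2/dx^2

def dPsi(phi):                                     # Psi'(phi) for Psi = phi^2 (1-phi)^2 / 4
    return phi ** 3 - 1.5 * phi ** 2 + 0.5 * phi

def rhs(t, a):
    phi = Z.T @ a                                  # phi_k(t, x) = sum_j a_j(t) z_j(x)
    proj = trapezoid(dPsi(phi) * Z, xs, axis=1) / norms  # L^2-projection of Psi'(phi_k)
    b = gamma * j2 * a + proj                      # coefficients of mu_k
    return -m * j2 * b                             # a_j'(t) = -m j^2 b_j(t)

a0 = trapezoid((0.5 + 0.4 * np.cos(xs)) * Z, xs, axis=1) / norms
sol = solve_ivp(rhs, (0.0, 1.0), a0, method="BDF")
print("mean of phi is conserved:", sol.y[0, 0], "->", sol.y[0, -1])
```

Since \(z_{0}\) is constant and this toy problem has no source, the zeroth coefficient, i.e., the mean of \(\varphi_{k}\), is conserved, which the final print checks.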
**Step 2 (Energy estimates).** In this section, we derive the important energy estimate that will guarantee the existence of weakly converging subsequences.
**Step 2a.** First, we consider the test functions \(\mu_{k}\) and \(K\varphi_{k}\) in (16)\({}_{1}\), with \(K>0\) to be determined later on, and \(\partial_{t}\varphi_{k}\) in (16)\({}_{2}\) to get
\[(\partial_{t}\varphi_{k},\mu_{k})_{\Omega}+(m(\varphi_{k}),|\nabla\mu_{k}|^{2 })_{\Omega} =(S_{\varphi}(\varphi_{k}),\mu_{k})_{\Omega},\]
\[K(\partial_{t}\varphi_{k},\varphi_{k})_{\Omega}+K(m(\varphi_{k})\nabla\mu_{k}, \nabla\varphi_{k})_{\Omega} =K(S_{\varphi}(\varphi_{k}),\varphi_{k})_{\Omega},\]
\[-(\mu_{k},\partial_{t}\varphi_{k})_{\Omega}+\gamma(\nabla\varphi_{k}, \partial_{t}\nabla\varphi_{k})_{\Omega}+(\Psi^{\prime}(\varphi_{k}),\partial_ {t}\varphi_{k})_{\Omega} =-(\delta_{\varphi_{k}}\mathcal{E}_{\theta}^{k}+\delta_{\varphi_ {k}}\mathcal{E}_{u}^{k},\partial_{t}\varphi_{k})_{\Omega}.\]
We add the three equations, which cancels the mixed term \((\partial_{t}\varphi_{k},\mu_{k})_{\Omega}\) and we obtain
\[\begin{split}& K(\partial_{t}\varphi_{k},\varphi_{k})_{\Omega}+(m (\varphi_{k}),|\nabla\mu_{k}|^{2})_{\Omega}+\gamma(\nabla\varphi_{k},\partial _{t}\nabla\varphi_{k})_{\Omega}+(\Psi^{\prime}(\varphi_{k}),\partial_{t} \varphi_{k})_{\Omega}\\ &+(\delta_{\varphi_{k}}\mathcal{E}_{u}^{k}+\delta_{\varphi_{k}} \mathcal{E}_{\theta}^{k},\partial_{t}\varphi_{k})_{\Omega}=(S_{\varphi}( \varphi_{k}),\mu_{k}+K\varphi_{k})_{\Omega}-K(m(\varphi_{k})\nabla\mu_{k}, \nabla\varphi_{k})_{\Omega}.\end{split} \tag{17}\]
The left-hand side of (17) can be rewritten using the chain rule, and the term involving the mobility function \(m\) can be estimated from below by the lower bound of \(m\), see (A7). The second term on the right-hand side of (17) can be estimated by the upper bound of the mobility \(m\) (which we denote by \(m_{\infty}\)) and the Young inequality as follows:
\[-K(m(\varphi_{k})\nabla\mu_{k},\nabla\varphi_{k})_{\Omega}\leq\frac{m_{0}}{4 }\|\nabla\mu_{k}\|_{L^{2}}^{2}+\frac{m_{\infty}^{2}K^{2}}{m_{0}}\|\nabla \varphi_{k}\|_{L^{2}}^{2}.\]
We note that we can absorb the term involving \(\nabla\mu_{k}\) by the left-hand side of (17). Further, the source function \(S_{\varphi}\) is bounded due to (A3) and for one of the two terms, we simply have by the Young inequality
\[(S_{\varphi}(\varphi_{k}),K\varphi_{k})_{\Omega}\leq CK+C\|\varphi_{k}\|_{L^{2 }}^{2}.\]
For the other term, we apply the Poincaré and Young inequalities to get
\[\begin{split}(S_{\varphi}(\varphi_{k}),\mu_{k})_{\Omega}& =(S_{\varphi}(\varphi_{k}),\mu_{k}-\langle\mu_{k}\rangle)_{\Omega }+\langle\mu_{k}\rangle(S_{\varphi}(\varphi_{k}),1)_{\Omega}\\ &\leq C\|\mu_{k}-\langle\mu_{k}\rangle\|_{L^{1}}+C|\langle\mu_{k} \rangle|\\ &\leq C+\frac{m_{0}}{4}\|\nabla\mu_{k}\|_{L^{2}}^{2}+C|\langle\mu _{k}\rangle|.\end{split}\]
We observe that we still need to estimate the mean of \(\mu_{k}\) on the right-hand side of the estimate. We test (16)\({}_{2}\) by \(z=1\in Z_{k}\) to obtain
\[\langle\mu_{k}\rangle=\frac{1}{|\Omega|}\int_{\Omega}\Big{[}\Psi^{\prime}(\varphi_{k})+D_{\varphi}W(\varphi_{k},\varepsilon(u_{k}))+\delta_{\varphi_{k}}\mathcal{E}_{\theta}^{k}\Big{]}\,\mathrm{d}x.\]
Regarding the last term, we use the upper bound of \(M^{\prime}\) and \(\alpha^{\prime}\) (which we denote by \(M^{\prime}_{\infty}\) and \(\alpha^{\prime}_{\infty}\), respectively) and the lower bound of \(M\), see (A7), to get
\[\begin{split}\int_{\Omega}\delta_{\varphi_{k}}\mathcal{E}_{ \theta}^{k}\,\mathrm{d}x&=\int_{\Omega}\Big{[}\frac{M^{\prime}( \varphi_{k})}{2(M(\varphi_{k}))^{2}}p_{k}^{2}-p_{k}\alpha^{\prime}(\varphi_{k} )\mathrm{div}u_{k}\Big{]}\,\mathrm{d}x\\ &\leq\frac{M^{\prime}_{\infty}}{2M_{0}^{2}}\|p_{k}\|_{L^{2}}^{2}+ \alpha^{\prime}_{\infty}\|p_{k}\|_{L^{2}}\|\mathrm{div}u_{k}\|_{L^{2}}.\end{split}\]
Lastly, we note that we can estimate \(\Psi^{\prime}(\varphi_{k})\) by \(C_{\Psi}(1+|\varphi_{k}|)\) as assumed in (A4) and use the Sobolev embedding theorem to get the final bound on the mean of \(\mu_{k}\)
\[\langle\mu_{k}\rangle\lesssim 1+\|\varphi_{k}\|_{L^{2}}^{2}+\|u_{k}\|_{H^{1}}^{2}+ \|p_{k}\|_{L^{2}}^{2}. \tag{18}\]
Plugging all the estimates of the right-hand side of (17) back into the inequality (17), we obtain
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\bigg{[}\frac{K}{2}\| \varphi_{k}\|_{L^{2}}^{2}+\frac{\gamma}{2}\|\nabla\varphi_{k}\|_{L^{2}}^{2}+\| \Psi(\varphi_{k})\|_{L^{1}}\bigg{]}+(\delta_{\varphi_{k}}\mathcal{E}_{u}^{k}+ \delta_{\varphi_{k}}\mathcal{E}_{\theta}^{k},\partial_{t}\varphi_{k})_{ \Omega}\\ &+\frac{m_{0}}{2}\|\nabla\mu_{k}\|_{L^{2}}^{2}\lesssim 1+\|\varphi_{k} \|_{H^{1}}^{2}+\|\Psi(\varphi_{k})\|_{L^{1}}+\|u_{k}\|_{H^{1}}^{2}+\|p_{k}\|_{ L^{2}}^{2}.\end{split} \tag{19}\]
We observe that we require bounds on \(u_{k}\) and \(p_{k}\) to complete the energy estimate.
**Step 2b.** Secondly, we consider the test function \(\partial_{t}u_{k}\) in (16)\({}_{3}\), \(p_{k}\) in (16)\({}_{4}\) and \(\partial_{t}\theta_{k}\) in (16)\({}_{5}\) to obtain the following:
\[\begin{split}(\delta_{\varepsilon(u_{k})}(\mathcal{E}_{u}^{k}+ \mathcal{E}_{\theta}^{k}),\partial_{t}\varepsilon(u_{k}))_{\Omega}& =0,\\ (\partial_{t}\theta_{k},p_{k})_{\Omega}+(\kappa(\varphi_{k}),| \nabla p_{k}|^{2})_{\Omega}&=(S_{\theta}(\varphi_{k},\theta_{k}), p_{k})_{\Omega},\\ (\delta_{\theta_{k}}\mathcal{E}_{\theta}^{k},\partial_{t}\theta_ {k})_{\Omega}&=(p_{k},\partial_{t}\theta_{k})_{\Omega}.\end{split}\]
We add these equations to the estimate (19), using the chain rule \(\frac{\mathrm{d}}{\mathrm{d}t}\big{(}\mathcal{E}_{u}^{k}+\mathcal{E}_{\theta}^{k}\big{)}=(\delta_{\varphi_{k}}\mathcal{E}_{u}^{k}+\delta_{\varphi_{k}}\mathcal{E}_{\theta}^{k},\partial_{t}\varphi_{k})_{\Omega}+(\delta_{\varepsilon(u_{k})}(\mathcal{E}_{u}^{k}+\mathcal{E}_{\theta}^{k}),\partial_{t}\varepsilon(u_{k}))_{\Omega}+(\delta_{\theta_{k}}\mathcal{E}_{\theta}^{k},\partial_{t}\theta_{k})_{\Omega}\), to obtain
\[\begin{split}&\frac{\mathrm{d}}{\mathrm{d}t}\bigg{[}\frac{K}{2}\| \varphi_{k}\|_{L^{2}}^{2}+\frac{\gamma}{2}\|\nabla\varphi_{k}\|_{L^{2}}^{2}+\| \Psi(\varphi_{k})\|_{L^{1}}+\mathcal{E}_{u}^{k}+\mathcal{E}_{\theta}^{k} \bigg{]}+\frac{m_{0}}{2}\|\nabla\mu_{k}\|_{L^{2}}^{2}+\kappa_{0}\|\nabla p_{k }\|_{L^{2}}^{2}\\ &\lesssim 1+\|\varphi_{k}\|_{H^{1}}^{2}+\|u_{k}\|_{H^{1}}^{2}+\|p_{k}\|_ {L^{2}}^{2},\end{split}\]
where we already estimated \(\kappa\) by \(\kappa_{0}\) from below according to (A7) and \(S_{\theta}(\varphi_{k},\theta_{k})\) by its upper bound according to (A3). We integrate the estimate over the time interval \((0,t)\) for \(t\leq T_{k}\) to get
\[\begin{split}&\frac{K}{2}\|\varphi_{k}(t)\|_{L^{2}}^{2}+\frac{ \gamma}{2}\|\nabla\varphi_{k}(t)\|_{L^{2}}^{2}+\|\Psi(\varphi_{k}(t))\|_{L^{1} }+\frac{m_{0}}{2}\|\nabla\mu_{k}\|_{L^{2}_{t}(L^{2})}^{2}\\ &\quad+\kappa_{0}\|\nabla p_{k}\|_{L^{2}_{t}(L^{2})}^{2}+ \mathcal{E}_{u}(\varphi_{k}(t),u_{k}(t))+\mathcal{E}_{\theta}(\varphi_{k}(t), u_{k}(t),\theta_{k}(t))\\ &\lesssim 1+\|\varphi_{k,0}\|_{H^{1}}^{2}+\|\Psi(\varphi_{k,0})\|_{L^{1} }+\mathcal{E}_{u}(\varphi_{k,0},u_{k,0})+\mathcal{E}_{\theta}(\varphi_{k,0}, u_{k,0},\theta_{k,0})\\ &\quad+\|\varphi_{k}\|_{L^{2}_{t}(H^{1})}^{2}+\|u_{k}\|_{L^{2}_{t }(H^{1})}^{2}+\|p_{k}\|_{L^{2}_{t}(L^{2})}^{2}.\end{split} \tag{20}\]
It remains to treat the energies \(\mathcal{E}_{u}\) and \(\mathcal{E}_{\theta}\) in the inequality. This is done in the next substep before we return to this energy estimate.
**Step 2c.** First, we study the elastic energy \(\mathcal{E}_{u}\) that appears on both sides in (20). Using (A5) and (13), i.e., the strict monotonicity of \(D_{M}W\) with respect to its second argument, we find
\[\begin{split} W(s,M)&=W(s,0)+\int_{0}^{1}D_{M}W(s,tM):tM\,\frac{1}{t}\,\mathrm{d}t\\ &\geq C|M|^{2}-C(1+|s|^{2}),\end{split}\]
for any \(s\in\mathbb{R}\) and \(M\in\mathbb{R}^{d\times d}\). Thus, it gives
\[\mathcal{E}_{u}(\varphi_{k},u_{k})=\int_{\Omega}W(\varphi_{k},\varepsilon(u_{k }))\,\mathrm{d}x\geq C\|\varepsilon(u_{k})\|_{L^{2}}^{2}-C(1+\|\varphi_{k}\|_{ L^{2}}^{2}). \tag{21}\]
On the other hand, the initial energy \(\mathcal{E}_{u}(\varphi_{k,0},u_{k,0})\) on the right-hand side of (20) can be estimated from above by using the upper bound of \(W\), see (13)\({}_{2}\), as follows
\[\mathcal{E}_{u}(\varphi_{k,0},u_{k,0})\leq C(1+\|\varphi_{k,0}\|_{L^{2}}^{2}+ \|\varepsilon(u_{k,0})\|_{L^{2}}^{2}). \tag{22}\]
Since \(u_{k,0}=u_{k}(0)\) fulfills (16)\({}_{3}\) at \(t=0\), we test it by \(u_{k,0}\) and obtain by symmetry
\[\begin{split}(\varepsilon(u_{k,0}),\mathbb{C}(\varphi_{k,0}) \varepsilon(u_{k,0}))_{\Omega}&=(\alpha(\varphi_{k,0})p_{k,0}, \mathrm{div}u_{k,0})_{\Omega}\\ &\quad+(\mathbb{C}(\varphi_{k,0})\mathcal{T}(\varphi_{k,0}), \varepsilon(u_{k,0}))_{\Omega}.\end{split} \tag{23}\]
To eliminate \(p_{k,0}\) on the right-hand side, we repeat the argument and test (16)\({}_{5}\) at \(t=0\) by \(z=p_{k,0}/M(\varphi_{k,0})\), which gives
\[\|M(\varphi_{k,0})^{-1/2}p_{k,0}\|_{L^{2}}^{2}=(\theta_{k,0}-\alpha(\varphi_{ k,0})\mathrm{div}u_{k,0},p_{k,0})_{\Omega}.\]
Thus, adding it to (23), it yields
\[\begin{split}&(\varepsilon(u_{k,0}),\mathbb{C}(\varphi_{k,0}) \varepsilon(u_{k,0}))_{\Omega}+\|M(\varphi_{k,0})^{-1/2}p_{k,0}\|_{L^{2}}^{2} \\ &=(\mathbb{C}(\varphi_{k,0})\mathcal{T}(\varphi_{k,0}), \varepsilon(u_{k,0}))_{\Omega}+(\theta_{k,0},p_{k,0})_{\Omega}.\end{split}\]
According to (A5), we can estimate the first term on the left-hand side from below by \(C_{\mathbb{C}}\|\varepsilon(u_{k,0})\|_{L^{2}}^{2}\). Moreover, we may estimate the second term on the left-hand side from below by using the upper bound of \(M\), see (A7). Further, using the Lipschitz continuity of \(\mathcal{T}\) and the boundedness of \(\mathbb{C}\), see (A5) and (A6), we can estimate the right-hand side by the Hölder inequality to finally obtain
\[C_{\mathbb{C}}\|\varepsilon(u_{k,0})\|_{L^{2}}^{2}+M_{\infty}^{-1}\|p_{k,0}\|_{L^{2}}^{2}\leq C(1+\|\varphi_{k,0}\|_{L^{2}})\|\varepsilon(u_{k,0})\|_{L^{2}}+\|\theta_{k,0}\|_{L^{2}}\|p_{k,0}\|_{L^{2}}.\]
Thus, we have by the Young inequality
\[C_{\mathbb{C}}\|\varepsilon(u_{k,0})\|_{L^{2}}^{2}+M_{\infty}^{-1}\|p_{k,0}\|_{L^{2}}^{2}\lesssim 1+\|\varphi_{k,0}\|_{L^{2}}^{2}+\|\theta_{k,0}\|_{L^{2}}^{2}, \tag{24}\]
and we can bound the initial deformation on the right-hand side of (22).
Next, we investigate the fluid energy on both sides of the estimate (20). First, we use the definition of \(\mathcal{E}_{\theta}\) and (16)\({}_{5}\) to obtain
\[\begin{split}\mathcal{E}_{\theta}(\varphi_{k},u_{k},\theta_{k})& =\frac{1}{2}(M(\varphi_{k}),(\theta_{k}-\alpha(\varphi_{k})\mathrm{ div}u_{k})^{2})_{\Omega}\\ &=\frac{1}{2}(M^{-1}(\varphi_{k}),p_{k}^{2})_{\Omega}\\ &\geq\frac{M_{\infty}^{-1}}{2}\|p_{k}\|_{L^{2}}^{2}.\end{split} \tag{25}\]
Moreover, we can bound the initial fluid energy as follows:
\[\begin{split}\mathcal{E}_{\theta}^{k}(\varphi_{k,0},u_{k,0}, \theta_{k,0})&=\frac{1}{2}(M(\varphi_{k,0}),(\theta_{k,0}-\alpha( \varphi_{k,0})\mathrm{div}u_{k,0})^{2})_{\Omega}\\ &\leq M_{\infty}\big{(}\|\theta_{k,0}\|_{L^{2}}^{2}+\alpha_{ \infty}^{2}\|\mathrm{div}u_{k,0}\|_{L^{2}}^{2}\big{)},\end{split}\]
and we can estimate the initial deformation as in (24) to obtain
\[\mathcal{E}_{\theta}^{k}(\varphi_{k,0},u_{k,0},\theta_{k,0})\lesssim 1+\|\theta_{k,0}\|_{L^{2}}^{2}+\|\varphi_{k,0}\|_{L^{2}}^{2}. \tag{26}\]
**Step 2d.** Now, we are in the position to return to the integrated estimate (20). We insert the upper and lower bounds of the elastic and fluid energy, see (21)-(26), back into (20) to obtain
\[\begin{split}&(\tfrac{K}{2}-C)\|\varphi_{k}(t)\|_{L^{2}}^{2}+\frac{\gamma}{2}\|\nabla\varphi_{k}(t)\|_{L^{2}}^{2}+\|\Psi(\varphi_{k}(t))\|_{L^{1}}+\frac{m_{0}}{2}\|\nabla\mu_{k}\|_{L^{2}_{t}(L^{2})}^{2}\\ &\quad+\kappa_{0}\|\nabla p_{k}\|_{L^{2}_{t}(L^{2})}^{2}+C\|\varepsilon(u_{k}(t))\|_{L^{2}}^{2}+\frac{M_{\infty}^{-1}}{2}\|p_{k}(t)\|_{L^{2}}^{2}\\ &\lesssim 1\!+\!\|\varphi_{k,0}\|_{H^{1}}^{2}\!+\!\|\theta_{k,0}\|_{L^{2}}^{2}\!+\!\|\Psi(\varphi_{k,0})\|_{L^{1}}\!+\!\|\varphi_{k}\|_{L^{2}_{t}(H^{1})}^{2}\!+\!\|u_{k}\|_{L^{2}_{t}(H^{1})}^{2}\!+\!\|p_{k}\|_{L^{2}_{t}(L^{2})}^{2},\end{split}\]
where we used Korn's inequality on the left-hand side of the inequality to estimate the full \(H^{1}(\Omega)^{d}\)-norm of \(u_{k}\) from above by \(\|\varepsilon(u_{k})\|_{L^{2}}\). At this point, we choose \(K\) sufficiently large to make the prefactor in front of \(\|\varphi_{k}(t)\|_{L^{2}}^{2}\) positive. We can estimate the initial data on the right-hand side, using the properties of the orthogonal projection, by \(\|\varphi_{k,0}\|_{H^{1}}^{2}\leq\|\varphi_{0}\|_{H^{1}}^{2}\) and similarly for \(\|\theta_{k,0}\|_{L^{2}}^{2}\). Moreover, we can integrate the growth condition in (A4) to get
\[\|\Psi(\varphi_{k,0})\|_{L^{1}}\leq C_{\Psi}(1+\|\varphi_{k,0}\|_{L^{2}}^{2}) \leq C_{\Psi}(1+\|\varphi_{0}\|_{L^{2}}^{2}).\]
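For convenience, we recall the integral form of the Grönwall lemma in the version applied next: if a nonnegative \(y\in L^{\infty}(0,T)\) satisfies \(y(t)\leq A+a\int_{0}^{t}y(s)\,\mathrm{d}s\) for a.e. \(t\in(0,T)\) with constants \(A,a\geq 0\), then

\[y(t)\leq A\,e^{at}\quad\text{for a.e. }t\in(0,T).\]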
Then, by an application of the Grönwall inequality, we get
\[\begin{split}&\|\varphi_{k}(t)\|_{H^{1}}^{2}+\|\nabla\mu_{k}\|_{L _{t}^{2}(L^{2})}^{2}+\|\nabla p_{k}\|_{L_{t}^{2}(L^{2})}^{2}+\|\Psi(\varphi_{k }(t))\|_{L^{1}}+\|u_{k}(t)\|_{H^{1}}^{2}\\ &+\|p_{k}(t)\|_{L^{2}}^{2}\lesssim 1+\|\varphi_{0}\|_{H^{1}}^{2}+\| \theta_{0}\|_{L^{2}}^{2}.\end{split} \tag{27}\]
Since the right-hand side is uniform in \(k\), we can argue by a no-blow-up criterion to extend the existence interval by setting \(T_{k}=T\) for any \(k\). Moreover, we already proved that \(\langle\mu_{k}\rangle(t)\) is bounded in \(L^{\infty}(0,T)\), see (18), and thus, we obtain a \(k\)-uniform bound of \(\mu_{k}\) in the \(L^{2}(0,T;H^{1}(\Omega))\)-norm. By the energy estimate (27) and the Banach-Alaoglu theorem, we can already extract weakly(-\(*\)) converging subsequences (that we denote by the same index by a typical abuse of notation). In fact, we get the existence of limit functions \((\varphi,\mu,u,\theta,p)\) such that
\[\begin{split}&\varphi_{k}\to\varphi\quad\ \text{weakly* in }L^{\infty}(0,T;H^{1}(\Omega)),\\ &\mu_{k}\to\mu\quad\ \ \text{weakly in }L^{2}(0,T;H^{1}(\Omega)),\\ & u_{k}\to u\quad\quad\text{weakly* in }L^{\infty}(0,T;H^{1}_{D}(\Omega)^{d}),\\ & p_{k}\to p\quad\quad\text{weakly* in }L^{\infty}(0,T;L^{2}(\Omega))\cap L^{2}(0,T;H^{1}(\Omega)),\\ &\theta_{k}\to\theta\quad\ \ \text{weakly* in }L^{\infty}(0,T;L^{2}(\Omega)).\end{split} \tag{28}\]
We note that the energy inequality (27) holds in the continuous setting, by replacing \(\varphi_{k}\) by \(\varphi\) and so on, after taking the limit inferior as \(k\to\infty\) and using that norms are weakly/weakly-\(*\) lower semicontinuous. Moreover, we apply the Fatou lemma to the non-negative continuous function \(\Psi\) to achieve
\[\int_{\Omega}\Psi(\varphi(x))\,\mathrm{d}x\leq\liminf_{k\to\infty}\int_{ \Omega}\Psi(\varphi_{k}(x))\,\mathrm{d}x.\]
Thus, the quintuple \((\varphi,\mu,u,\theta,p)\) satisfies (14) as stated in Theorem 4.3.
**Step 3 (Compactness).** Since the system is nonlinear, we require strong convergence. To do so, we want to apply the Aubin-Lions compactness lemma, see (11), which still requires a uniform bound on a time derivative. We consider an arbitrary element \(\zeta\in L^{2}(0,T;H^{1}(\Omega))\). Then we test (16)\({}_{1}\) with \(\Pi_{k}\zeta(t)\in Z_{k}\), which gives
\[\begin{split}\langle\partial_{t}\varphi_{k},\zeta\rangle_{L^{2} (H^{1})}&=-(m(\varphi_{k})\nabla\mu_{k},\nabla\zeta)_{\Omega_{T}} +(S_{\varphi}(\varphi_{k}),\zeta)_{\Omega_{T}}\\ &\leq m_{\infty}\|\nabla\mu_{k}\|_{L^{2}(L^{2})}\|\zeta\|_{L^{2}(H ^{1})}+C\|\zeta\|_{L^{1}(L^{1})}\\ &\leq C\|\zeta\|_{L^{2}(H^{1})},\end{split}\]
where we used the uniform bound of \(\nabla\mu_{k}\) as shown in (27). Since \(\zeta\) was chosen arbitrarily, we obtain the uniform bound of \(\partial_{t}\varphi_{k}\) in \(L^{2}(0,T;(H^{1}(\Omega))^{\prime})\). Employing the Aubin-Lions compactness lemma, we infer the compact embedding
\[L^{\infty}(0,T;H^{1})\cap H^{1}(0,T;(H^{1}(\Omega))^{\prime})\hookrightarrow \hookrightarrow C^{0}([0,T];L^{r}(\Omega)),\]
where \(r<6\) for \(d=3\) and \(r<\infty\) for \(d=2\). Thus, we obtain the convergences
\[\begin{split}\varphi_{k}&\to\varphi\qquad\text{strongly in }C^{0}([0,T];L^{r}(\Omega)),\\ \partial_{t}\varphi_{k}&\to\partial_{t}\varphi\quad \text{weakly in }L^{2}(0,T;(H^{1}(\Omega))^{\prime}).\end{split} \tag{29}\]
Repeating the arguments, we find
\[\begin{split}\theta_{k}&\to\theta\qquad\text{ strongly in }C^{0}([0,T];(H^{1}(\Omega))^{\prime}),\\ \partial_{t}\theta_{k}&\to\partial_{t}\theta\quad \text{ weakly in }L^{2}(0,T;(H^{1}(\Omega))^{\prime}).\end{split} \tag{30}\]
since testing (16)\({}_{4}\) by \(\Pi_{k}\zeta(t)\in Z_{k}\) yields in a straightforward manner
\[\begin{split}\langle\partial_{t}\theta_{k},\zeta\rangle_{L^{2}(H ^{1})}&=-(\kappa(\varphi_{k})\nabla p_{k},\nabla\zeta)_{\Omega_{ T}}+(S_{\theta}(\varphi_{k},\theta_{k}),\zeta)_{\Omega_{ T}}\\ &\leq C\|\zeta\|_{L^{2}(H^{1})}.\end{split}\]
**Step 4 (Initial conditions).** It holds \(\varphi_{k}(0)\to\varphi(0)\) in \(L^{r}(\Omega)\) as \(k\to\infty\) according to the strong convergence (29) by setting \(t=0\). However, it holds \(\varphi_{k}(0)=\varphi_{k,0}\to\varphi_{0}\) in \(H^{1}(\Omega)\) by the definition of \(\varphi_{k,0}=\Pi_{k}\varphi_{0}\) and the properties of the orthogonal projection. Thus, we obtain \(\varphi(0)=\varphi_{0}\) in \(L^{r}(\Omega)\) by the uniqueness of limits. Moreover, we make use of the embedding, see [3, Lemma II.5.9]
\[C([0,T];L^{r}(\Omega))\cap L^{\infty}(0,T;H^{1}(\Omega))\hookrightarrow C_{w} ([0,T];H^{1}(\Omega)),\]
to infer that \(\varphi\) is weakly continuous with values in \(H^{1}(\Omega)\) and thus, the initial condition \(\varphi_{0}\) is satisfied in the sense
\[\langle w,\varphi(t)\rangle_{H^{1}}\to\langle w,\varphi_{0}\rangle_{H^{1}}\quad\text{as }t\to 0\qquad\forall w\in(H^{1}(\Omega))^{\prime}.\]
In the same manner, we obtain \(\theta(0)=\theta_{0}\) in \((H^{1}(\Omega))^{\prime}\) and
\[(\theta(t),w)_{\Omega}\to(\theta_{0},w)_{\Omega}\quad\text{as }t\to 0\qquad\forall w\in L^{2}(\Omega).\]
**Step 5 (Limit process).** In this step, we pass to the limit \(k\to\infty\) in the \(k\)-th Galerkin system using the convergences that we have derived in (28)-(30). First, we multiply each of the Galerkin equations (16) by an arbitrary function \(\eta\in C_{0}^{\infty}(0,T)\) and integrate over \((0,T)\), giving
\[\begin{split}\langle\partial_{t}\varphi_{k},\eta z\rangle_{L^{2} (H^{1})}+(m(\varphi_{k})\nabla\mu_{k},\eta\nabla z)_{\Omega_{T}}& =(S_{\varphi}(\varphi_{k}),\eta z)_{\Omega_{T}},\\ -(\mu_{k},\eta z)_{\Omega_{T}}+\gamma(\nabla\varphi_{k},\eta \nabla z)_{\Omega_{T}}+(\Psi^{\prime}(\varphi_{k}),\eta z)_{\Omega_{T}}& =-(\delta_{\varphi_{k}}\mathcal{E}_{\theta}^{k}+\delta_{\varphi_{k}} \mathcal{E}_{u}^{k},\eta z)_{\Omega_{T}},\\ \big{(}\mathbb{C}(\varphi_{k})\left(\varepsilon(u_{k})-\mathcal{ T}(\varphi_{k})\right),\eta\nabla y\big{)}_{\Omega_{T}}& =(\alpha(\varphi_{k})p_{k},\eta\text{div}y)_{\Omega_{T}},\\ \langle\partial_{t}\theta_{k},\eta z\rangle_{L^{2}(H^{1})}+( \kappa(\varphi_{k})\nabla p_{k},\eta\nabla z)_{\Omega_{T}}& =(S_{\theta}(\varphi_{k},\theta_{k}),\eta z)_{\Omega_{T}},\\ (M(\varphi_{k})(\theta_{k}-\alpha(\varphi_{k})\text{div}u_{k}), \eta z)_{\Omega_{T}}&=(p_{k},\eta z)_{\Omega_{T}},\end{split} \tag{31}\]
for any \(z\in Z_{k}\), \(y\in Y_{k}\) and \(\eta\in C_{0}^{\infty}(0,T)\).
Taking the limit \(k\to\infty\) in the linear terms such as \((p_{k},\eta z)_{\Omega_{T}}\) follows directly from the weak convergences (28). Therefore, we only study the nonlinearities. We notice that all the nonlinear coefficient functions are bounded and depend solely on \(\varphi_{k}\); their convergence can be treated using the strong convergence of \(\varphi_{k}\), see (29), and the Lebesgue dominated convergence theorem. Lastly, by the weak-strong convergence lemma, we can take the limit in all the remaining terms such as \((m(\varphi_{k})\nabla\mu_{k},\eta\nabla z)_{\Omega_{T}}\). Then we use that \(\cup_{k}Z_{k}\) is dense in \(H^{1}(\Omega)\) and \(\cup_{k}Y_{k}\) is dense in \(H^{1}_{D}(\Omega)^{d}\). Together with the fundamental lemma of the calculus of variations, it yields that \((\varphi,\mu,u,\theta,p)\) is a weak solution to the Cahn-Hilliard-Biot system in the sense of Definition 4.1.
We have proved the existence of a weak solution. Next, we prove the uniqueness of said solution in the case of stricter assumptions as stated in Theorem 4.4.
Proof of Theorem 4.4.: Now, we are in the setting that \(m(\varphi)=m\), \(\mathbb{C}(\varphi)=\mathbb{C}\), \(\kappa(\varphi)=\kappa\), \(\alpha(\varphi)=\alpha\), \(M(\varphi)=M\) are constant, see (A5\({}^{*}\)) and (A7\({}^{*}\)), and \(\mathcal{T}(\varphi)\) is affine linear, i.e., \(\mathcal{T}(\varphi)=\mathcal{T}_{1}+\mathcal{T}_{2}\varphi\), see (A6\({}^{*}\)). In particular, this implies
\[D_{\varphi}W(\varphi,\varepsilon(u))=-\mathbb{C}(\varepsilon(u)-\mathcal{T}_{ 1}-\mathcal{T}_{2}\varphi):\mathcal{T}_{2}.\]
In this case, we can easily prove that it holds \(\varphi\in L^{2}(0,T;H^{2}(\Omega))\) by solving the equation of the chemical potential \(\mu\) for \(\Delta\varphi\).
We consider two weak solutions \((\varphi_{1},\mu_{1},u_{1},\theta_{1},p_{1})\) and \((\varphi_{2},\mu_{2},u_{2},\theta_{2},p_{2})\), and we denote their difference by \(\varphi=\varphi_{1}-\varphi_{2}\) and in the same way for the other variables. Subtracting their weak forms and noting that the constant part \(\mathcal{T}_{1}\) of the eigenstrain cancels in the differences, we obtain
\[\langle\partial_{t}\varphi,\zeta_{\varphi}\rangle_{H^{1}}+m( \nabla\mu,\nabla\zeta_{\varphi})_{\Omega} =(S_{\varphi}(\varphi_{1})-S_{\varphi}(\varphi_{2}),\zeta_{\varphi })_{\Omega},\] \[-(\mu,\zeta_{\mu})_{\Omega}+\gamma(\nabla\varphi,\nabla\zeta_{ \mu})_{\Omega}+(\Psi^{\prime}(\varphi_{1})\!-\!\Psi^{\prime}(\varphi_{2}), \zeta_{\mu})_{\Omega} =(\mathbb{C}(\varepsilon(u)\!-\!\mathcal{T}_{2}\varphi),\mathcal{T}_{2}\zeta_{\mu})_{\Omega},\] \[\big{(}\mathbb{C}\big{(}\varepsilon(u)-\mathcal{T}_{2}\varphi\big{)},\varepsilon(\zeta_{u}\big{)}\big{)}_{\Omega} =\alpha(p,\operatorname{div}\!\zeta_{u})_{\Omega},\] \[\langle\partial_{t}\theta,\zeta_{\theta}\rangle_{H^{1}}+\kappa( \nabla p,\nabla\zeta_{\theta})_{\Omega} =(S_{\theta}(\varphi_{1},\theta_{1})-S_{\theta}(\varphi_{2}, \theta_{2}),\zeta_{\theta})_{\Omega},\] \[(p,\zeta_{p})_{\Omega} =(\theta-\alpha\mathrm{div}u,M\zeta_{p})_{\Omega},\]
for any \(\zeta_{\varphi},\zeta_{\mu},\zeta_{\theta},\zeta_{p}\in H^{1}(\Omega)\), \(\zeta_{u}\in H^{1}_{D}(\Omega)\).
**First testing.** Taking the test functions \(\zeta_{\varphi}=(-\Delta)^{-1}\varphi\) and \(\zeta_{\mu}=m\varphi\), it yields
\[\langle\partial_{t}\varphi,(-\Delta)^{-1}\varphi\rangle_{H^{1}}+m (\nabla\mu,\nabla(-\Delta)^{-1}\varphi)_{\Omega} =(S_{\varphi}(\varphi_{1})-S_{\varphi}(\varphi_{2}),(-\Delta)^{-1}\varphi)_{\Omega},\] \[-m(\mu,\varphi)_{\Omega}+m\gamma(\nabla\varphi,\nabla\varphi)_{ \Omega} =-m(\Psi^{\prime}(\varphi_{1})-\Psi^{\prime}(\varphi_{2}),\varphi)_{\Omega}\] \[\quad+m(\mathbb{C}(\varepsilon(u)\!-\!\mathcal{T}_{2}\varphi),\mathcal{T}_{2}\varphi)_{\Omega}.\]
Exploiting the property \((\nabla\mu,\nabla(-\Delta)^{-1}\varphi)_{\Omega}=(\mu,\varphi)_{\Omega}\) of the Neumann-Laplace operator, after adding the equations and canceling, it yields
\[\langle\partial_{t}\varphi,(-\Delta)^{-1}\varphi\rangle_{H^{1}}+m \gamma\|\nabla\varphi\|_{L^{2}}^{2}+m(\Psi^{\prime}(\varphi_{1})-\Psi^{\prime }(\varphi_{2}),\varphi)_{\Omega} \tag{32}\] \[=(S_{\varphi}(\varphi_{1})-S_{\varphi}(\varphi_{2}),(-\Delta)^{-1 }\varphi)_{\Omega}+m(\mathbb{C}(\varepsilon(u)\!-\!\mathcal{T}_{2}\varphi), \mathcal{T}_{2}\varphi)_{\Omega}.\]
The graph norm \(\|\nabla(-\Delta)^{-1}\cdot\|_{L^{2}}\) is equivalent to the usual norm of \((H^{1}(\Omega))^{\prime}\). Thus, we set \(\|\cdot\|_{(H^{1})^{\prime}}=\|\nabla(-\Delta)^{-1}\cdot\|_{L^{2}}\). First, we note that we may rewrite the first term on the left-hand side of (32) as
\[\langle\partial_{t}\varphi,(-\Delta)^{-1}\varphi\rangle_{H^{1}} =\langle-\Delta(-\Delta)^{-1}\partial_{t}\varphi,(-\Delta)^{-1} \varphi\rangle_{H^{1}} \tag{33}\] \[=\langle\partial_{t}\nabla(-\Delta)^{-1}\varphi,\nabla(-\Delta)^{ -1}\varphi\rangle_{H^{1}}\] \[=\frac{1}{2}\frac{\mathrm{d}}{\mathrm{dt}}\|\varphi\|_{(H^{1})^{ \prime}}^{2}.\]
Using the semiconvexity of \(\Psi\), see (A4\({}^{*}\)), it yields
\[m(\Psi^{\prime}(\varphi_{1})-\Psi^{\prime}(\varphi_{2}),\varphi)_{\Omega}\geq- C_{*}m\|\varphi\|_{L^{2}}^{2},\]
and consequently, we obtain by the Young inequality
\[m(\Psi^{\prime}(\varphi_{2})-\Psi^{\prime}(\varphi_{1}),\varphi)_{\Omega} \leq C_{*}m\|\varphi\|_{L^{2}}^{2} \tag{34}\] \[=C_{*}m(\nabla(-\Delta)^{-1}\varphi,\nabla\varphi)_{\Omega}\] \[\leq\frac{m\gamma}{4}\|\nabla\varphi\|_{L^{2}}^{2}+\frac{C_{*}^{2} m}{\gamma}\|\nabla(-\Delta)^{-1}\varphi\|_{L^{2}}^{2}.\]
Using the boundedness of \(S_{\varphi}\), the right-hand side of (32) can be treated using the usual inequalities as follows:
\[\begin{split}&(S_{\varphi}(\varphi_{1})-S_{\varphi}(\varphi_{2}),( -\Delta)^{-1}\varphi)_{\Omega}+m(\mathbb{C}(\varepsilon(u)-\mathcal{T}_{2} \varphi),\mathcal{T}_{2}\varphi)_{\Omega}\\ &\leq C\|(-\Delta)^{-1}\varphi\|_{L^{2}}+C\|\varepsilon(u)\|_{L ^{2}}\|\varphi\|_{L^{2}}+C\|\varphi\|_{L^{2}}^{2}\\ &\leq\frac{m\gamma}{4}\|\nabla\varphi\|_{L^{2}}^{2}+C\|\varphi\| _{(H^{1})^{\prime}}^{2}+\frac{C_{\mathbb{C}}}{4}\|\varepsilon(u)\|_{L^{2}}^{2 }.\end{split}\]
Therefore, applying this estimate and (33) to (32), it yields
\[\frac{1}{2}\frac{\mathrm{d}}{\mathrm{dt}}\|\varphi\|_{(H^{1})^{\prime}}^{2}+ \frac{m\gamma}{2}\|\nabla\varphi\|_{L^{2}}^{2}\leq C\|\varphi\|_{(H^{1})^{ \prime}}^{2}+\frac{C_{\mathbb{C}}}{4}\|\varepsilon(u)\|_{L^{2}}^{2}. \tag{35}\]
**Second testing.** Next, we consider the test functions \(\zeta_{u}=u\), \(\zeta_{\theta}=(-\Delta)^{-1}\theta/\kappa\), \(\zeta_{p}=p/M\) to obtain
\[\begin{split}\big{(}\mathbb{C}\big{(}\varepsilon(u)-\mathcal{T}_{2}\varphi\big{)},\varepsilon(u)\big{)}_{\Omega}& =\alpha(p,\mathrm{div}u)_{\Omega},\\ \frac{1}{2\kappa}\frac{\mathrm{d}}{\mathrm{dt}}\|\theta\|_{(H^ {1})^{\prime}}^{2}+(\nabla p,\nabla(-\Delta)^{-1}\theta)_{\Omega}& =\frac{1}{\kappa}(S_{\theta}(\varphi_{1},\theta_{1})-S_{\theta}(\varphi_{2},\theta_{2}),(-\Delta)^{-1}\theta)_{\Omega},\\ \frac{1}{M}\|p\|_{L^{2}}^{2}-(\theta,p)_{\Omega}&=-( \alpha\mathrm{div}u,p)_{\Omega}.\end{split}\]
Adding the tested equations and exploiting \((\nabla p,\nabla(-\Delta)^{-1}\theta)_{\Omega}=(p,\theta)_{\Omega}\) gives
\[\begin{split}&(\mathbb{C}\varepsilon(u),\varepsilon(u))_{\Omega} +\frac{1}{2\kappa}\frac{\mathrm{d}}{\mathrm{dt}}\|\theta\|_{(H^{1})^{\prime}} ^{2}+\frac{1}{M}\|p\|_{L^{2}}^{2}\\ &=(\mathbb{C}\mathcal{T}_{2}\varphi,\varepsilon(u))_{\Omega}+\frac{1}{\kappa}(S_{\theta}(\varphi_{1},\theta_{1})-S_{\theta}(\varphi_{2},\theta_{2}),(-\Delta)^{-1}\theta)_{\Omega}.\end{split}\]
We apply the Hölder inequality to the terms on the right-hand side of this equality and, with the boundedness of \(S_{\theta}\), see (A3), we obtain
\[\begin{split}& C_{\mathbb{C}}\|\varepsilon(u)\|_{L^{2}}^{2}+ \frac{1}{2\kappa}\frac{\mathrm{d}}{\mathrm{dt}}\|\theta\|_{(H^{1})^{\prime}}^{ 2}+\frac{1}{M}\|p\|_{L^{2}}^{2}\\ &\leq C\|\varphi\|_{L^{2}}\|\varepsilon(u)\|_{L^{2}}+C\|(- \Delta)^{-1}\theta\|_{L^{2}}.\end{split} \tag{36}\]
We apply the Young inequality on the right-hand side of the inequality. In particular, we have
\[\begin{split}& C\|\varphi\|_{L^{2}}\|\varepsilon(u)\|_{L^{2}} \\ &\leq\frac{C_{\mathbb{C}}}{4}\|\varepsilon(u)\|_{L^{2}}^{2}+\frac{C^{2}}{C_{\mathbb{C}}}\|\varphi\|_{L^{2}} ^{2}\\ &\leq\frac{C_{\mathbb{C}}}{4}\| \varepsilon(u)\|_{L^{2}}^{2}+\frac{m\gamma}{4}\|\nabla\varphi\|_{L^{2}}^{2}+ \frac{C^{4}}{C_{\mathbb{C}}^{2}m\gamma}\|\varphi\|_{(H^{1})^{\prime}}^{2}, \end{split}\]
where we applied (34) in the last step. Inserting this estimate into (36), we obtain
\[\begin{split}& C_{\mathbb{C}}\|\varepsilon(u)\|_{L^{2}}^{2}+\frac{ 1}{2\kappa}\frac{\mathrm{d}}{\mathrm{dt}}\|\theta\|_{(H^{1})^{\prime}}^{2}+ \frac{1}{M}\|p\|_{L^{2}}^{2}\\ &\leq C+\frac{C_{\mathbb{C}}}{4}\|\varepsilon(u)\|_{L^{2}}^{2}+\frac{m \gamma}{4}\|\nabla\varphi\|_{L^{2}}^{2}+C\|\varphi\|_{(H^{1})^{\prime}}^{2}+C \|\theta\|_{(H^{1})^{\prime}}^{2}.\end{split} \tag{37}\]
**Together.** We add (35) to (37), which gives
\[\begin{split}&\frac{1}{2}\frac{\mathrm{d}}{\mathrm{dt}}\|\varphi\|_{(H^ {1})^{\prime}}^{2}+\frac{m\gamma}{2}\|\nabla\varphi\|_{L^{2}}^{2}+\frac{C_{ \mathbb{C}}}{2}\|\varepsilon(u)\|_{L^{2}}^{2}+\frac{1}{2\kappa}\frac{\mathrm{d} }{\mathrm{dt}}\|\theta\|_{(H^{1})^{\prime}}^{2}+\frac{1}{M}\|p\|_{L^{2}}^{2}\\ &\leq C+C\|\varphi\|_{(H^{1})^{\prime}}^{2}+C\|\theta\|_{(H^{1})^{ \prime}}^{2}.\end{split}\]
We integrate and apply the Grönwall inequality, which gives, after taking the essential supremum over \(t\in(0,T)\),
\[\frac{1}{2}\|\varphi\|_{L^{\infty}(H^{1})^{\prime}}^{2}+\frac{m\gamma }{2}\|\nabla\varphi\|_{L^{2}(L^{2})}^{2}+\frac{C_{\mathbb{C}}}{2}\|\varepsilon( u)\|_{L^{2}(L^{2})}^{2}+\frac{1}{2\kappa}\|\theta\|_{L^{\infty}(H^{1})^{\prime}}^{2}+ \frac{1}{M}\|p\|_{L^{2}(L^{2})}^{2}\] \[\lesssim\|\varphi_{0,1}-\varphi_{0,2}\|_{(H^{1})^{\prime}}^{2}+\| \theta_{0,1}-\theta_{0,2}\|_{(H^{1})^{\prime}}^{2}.\]
We complete the continuous dependence by testing \(\zeta_{p}=\theta\) to get
\[M\|\theta\|_{L^{2}}^{2}=(p,\theta)_{\Omega}+\alpha M(\mathrm{div}u,\theta)_{ \Omega},\]
and thus, by the typical inequalities we have
\[\|\theta\|_{L^{2}(L^{2})}^{2}\lesssim\|p\|_{L^{2}(L^{2})}^{2}+\|\varepsilon(u)\|_{L^{2}(L^{2})}^{2}\lesssim\|\varphi_{0,1}-\varphi_{0,2}\|_{(H^{1})^{\prime}}^{2}+\|\theta_{0,1}-\theta_{0,2}\|_{(H^{1})^{\prime}}^{2}.\]
Moreover, considering the test function \(\zeta_{\mu}=(-\Delta)^{-1}\mu\) gives
\[\|\mu\|_{L^{2}(H^{1})^{\prime}}^{2}\lesssim\|\varphi\|_{L^{2}(H^{1})}^{2}+\|\varepsilon(u)\|_{L^{2}(L^{2})}^{2}\lesssim\|\varphi_{0,1}-\varphi_{0,2}\|_{(H^{1})^{\prime}}^{2}+\|\theta_{0,1}-\theta_{0,2}\|_{(H^{1})^{\prime}}^{2}.\]
This completes the continuous dependence on the data as stated in (15). Moreover, in the case of \(\varphi_{0,1}=\varphi_{0,2}\) and \(\theta_{0,1}=\theta_{0,2}\), it yields that the weak solutions coincide.
## 5. Numerical simulations
In this section, we present some numerical simulations to highlight the differences between the Cahn-Hilliard-Biot model and the established Cahn-Hilliard and Cahn-Larché equations. Specifically, we consider an application in tumor growth modeling by interpreting \(\varphi\) as the tumor volume fraction, i.e., \(\varphi(x)=1\) means that all (\(100\%\)) of the cells at \(x\in\Omega\) are cancerous. On the other hand, \(\varphi(x)=0\) corresponds to healthy cells. We assume an endless supply of nutrients by taking a growth function \(S_{\varphi}(\varphi)=\lambda\varphi(1-\varphi)\) with \(\lambda>0\) being the tumor proliferation factor. However, to ensure the boundedness of \(S_{\varphi}\), we replace \(\varphi\) by \(\mathcal{C}(\varphi)\), where \(\mathcal{C}\) is the cutoff operator defined by \(\mathcal{C}(\varphi)=\max\{0,\min\{1,\varphi\}\}\). Moreover, it would be straightforward to model the nutrients by their own reaction-diffusion equation, as done in [10, 13, 18], or by a Keller-Segel model as done in [20]. However, this is not the focus of this work. We are interested in the effects of the Biot model on the tumor's evolution.
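In code, the cutoff source can be realized, e.g., as follows (a sketch; the function name and the use of NumPy clipping are our own choices):

```python
import numpy as np

def S_phi(phi, lam=10.0):
    """Logistic growth source lam * C(phi) * (1 - C(phi)) with cutoff C."""
    c = np.clip(phi, 0.0, 1.0)      # cutoff operator C(phi) = max(0, min(1, phi))
    return lam * c * (1.0 - c)

# The cutoff keeps the source bounded, as required by assumption (A3):
print(S_phi(np.array([-0.3, 0.0, 0.5, 1.0, 1.4])))   # -> [0. 0. 2.5 0. 0.]
```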
### Setup
We consider the unit square domain \(\Omega=[0,1]^{2}\) and the time domain \([0,T]\) with \(T=0.5\), which are discretized with \(\Delta x=2^{-8}\) and \(\Delta t=2^{-7}\). As initial data, we choose \(\varphi_{0}(x)=\exp(1-1/(1-16|x-\frac{1}{2}|^{2}))\) for \(16|x-\frac{1}{2}|^{2}<1\) and \(\varphi_{0}(x)=0\) otherwise, whereas we select zero initial data for the other variables. The variational system (12) is discretized in time by a semi-implicit Euler method, using the classical convex-concave splitting \(\Psi=\Psi_{e}+\Psi_{c}\) of the nonlinear potential into its expansive part \(\Psi_{e}\) and its contractive part \(\Psi_{c}\). In the case of \(\Psi(\varphi)=\frac{1}{4}\varphi^{2}(1-\varphi)^{2}\), we set \(\Psi_{e}^{\prime}(\varphi)=\varphi^{3}-\frac{3}{2}\varphi^{2}-\frac{1}{4}\varphi\) and \(\Psi_{c}^{\prime}(\varphi)=\frac{3}{4}\varphi\). We treat the expansive part explicitly and the contractive part implicitly. In this way, we obtain an unconditionally stable scheme, see [8]. The three-way coupled nonlinear system is then solved by an iterative decoupling scheme, starting with the elasticity equation governing \(u\), then the flow system governing \((\theta,p)\), and finally the Cahn-Hilliard model governing \((\varphi,\mu)\). The nonlinear equations are solved by a Newton method in each decoupling iteration. For each variable, we select the bilinear rectangular finite element space \(Q_{1}\). The system is implemented in the finite element library FEniCS [1].
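To make the time discretization concrete, the following is a minimal sketch of the Cahn-Hilliard substep alone in legacy FEniCS (DOLFIN). This is our own illustrative reduction, not the full three-way solver: the elasticity and Biot substeps of the decoupling loop are omitted, the mobility is lagged explicitly in time, and P1 elements on triangles replace the Q1 rectangles for brevity.

```python
from dolfin import *

# Minimal Cahn-Hilliard substep (no Biot coupling), convex-concave splitting.
mesh = UnitSquareMesh(256, 256)                      # Delta x = 2^-8
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
V = FunctionSpace(mesh, MixedElement([P1, P1]))      # unknowns (phi, mu)

w, w0 = Function(V), Function(V)
phi, mu = split(w)
phi0, _ = split(w0)
q, v = TestFunctions(V)

dt, gamma, lam = 2.0 ** (-7), 1.0e-4, 10.0
ic = Expression("16.0*(pow(x[0]-0.5,2)+pow(x[1]-0.5,2)) < 1.0 ? "
                "exp(1.0 - 1.0/(1.0 - 16.0*(pow(x[0]-0.5,2)+pow(x[1]-0.5,2)))) : 0.0",
                degree=2)
assign(w0.sub(0), interpolate(ic, V.sub(0).collapse()))
w.assign(w0)

def cut(z):                                          # cutoff operator C(z)
    return conditional(lt(z, 0.0), 0.0, conditional(gt(z, 1.0), 1.0, z))

mob = Constant(1.0e-16) + phi0 ** 2 * (1.0 - phi0) ** 2   # m(phi), lagged
S = lam * cut(phi0) * (1.0 - cut(phi0))                   # source, explicit
dPsi_e = phi0 ** 3 - 1.5 * phi0 ** 2 - 0.25 * phi0        # expansive, explicit
dPsi_c = 0.75 * phi                                       # contractive, implicit

F = ((phi - phi0) / dt) * q * dx + mob * dot(grad(mu), grad(q)) * dx \
    - S * q * dx \
    + mu * v * dx - gamma * dot(grad(phi), grad(v)) * dx \
    - (dPsi_e + dPsi_c) * v * dx

t = 0.0
while t < 0.5 - 1e-12:
    solve(F == 0, w)       # Newton solver; the substep is linear in w here
    w0.assign(w)
    t += dt
```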
In the model, we choose the functions \(S_{\varphi}(\varphi)=10\varphi(1-\varphi)\), \(S_{\theta}=0\), \(m(\varphi)=10^{-16}+\varphi^{2}(1-\varphi)^{2}\) and \(\mathcal{T}(\varphi)=\frac{3}{10}\varphi I\). Further, the interfacial parameter \(\gamma\) is selected as \(\gamma=10^{-4}\). Following [31], the permeability \(\kappa(\varphi)\), compressibility \(M(\varphi)\), Biot-Willis coefficient \(\alpha(\varphi)\) and elasticity tensor \(\mathbb{C}(\varphi)\) depend on the phase-field variable \(\varphi\) through the interpolation function \(\pi(\varphi)\) that is defined by
\[\pi(\varphi)=\begin{cases}0,&\varphi<0,\\ -2\varphi^{3}+3\varphi^{2},&\varphi\in[0,1],\\ 1,&\varphi>1,\end{cases}\]
i.e., it fulfills the conditions \(\pi(0)=0\), \(\pi(1)=1\), \(\pi^{\prime}(1)=\pi^{\prime}(0)=0\), as demanded in [16]. Specifically, the interpolated functions are chosen as
\[\kappa(\varphi) =\kappa_{-1}+\pi(\varphi)(\kappa_{1}-\kappa_{-1})=1-\frac{9}{10} \pi(\varphi),\] \[M(\varphi) =M_{-1}+\pi(\varphi)(M_{1}-M_{-1})=1-\frac{9}{10}\pi(\varphi),\] \[\alpha(\varphi) =\alpha_{-1}+\pi(\varphi)(\alpha_{1}-\alpha_{-1})=1-\frac{1}{2} \pi(\varphi),\] \[\mathbb{C}(\varphi) =\mathbb{C}_{-1}+\pi(\varphi)(\mathbb{C}_{1}-\mathbb{C}_{-1})= \begin{pmatrix}4&2\\ 2&4\end{pmatrix}-\pi(\varphi)\begin{pmatrix}3&3/2\\ 3/2&3\end{pmatrix}.\]
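For concreteness, these interpolations transcribe directly into code (a sketch; the names are ours, and the \(2\times 2\) matrices are taken verbatim from above):

```python
import numpy as np

def pi_interp(phi):
    """C^1 interpolation with pi(0) = 0, pi(1) = 1, pi'(0) = pi'(1) = 0."""
    phi = np.clip(phi, 0.0, 1.0)          # clamp outside [0, 1] as in the definition
    return -2.0 * phi ** 3 + 3.0 * phi ** 2

kappa = lambda phi: 1.0 - 0.9 * pi_interp(phi)
M     = lambda phi: 1.0 - 0.9 * pi_interp(phi)
alpha = lambda phi: 1.0 - 0.5 * pi_interp(phi)
C     = lambda phi: np.array([[4.0, 2.0], [2.0, 4.0]]) \
                    - pi_interp(phi) * np.array([[3.0, 1.5], [1.5, 3.0]])

# endpoint checks: healthy tissue (phi = 0) vs. tumor (phi = 1)
assert kappa(0.0) == 1.0 and abs(kappa(1.0) - 0.1) < 1e-14
assert np.allclose(C(1.0), [[1.0, 0.5], [0.5, 1.0]])
```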
### Results
As our first test, we compare the Cahn-Hilliard (CH), Cahn-Larché (CL) and Cahn-Hilliard-Biot (CHB) systems by investigating the evolution of the tumor mass \(t\mapsto\int_{\Omega}\varphi(t,x)\,\mathrm{d}x\). Here, the Cahn-Hilliard equation reads
\[\begin{split}\partial_{t}\varphi-\operatorname{div}(m(\varphi) \nabla\mu)&=S_{\varphi}(\varphi),\\ \mu+\gamma\Delta\varphi-\Psi^{\prime}(\varphi)&=0,\end{split} \tag{38}\]
and the Cahn-Larché equation
\[\begin{split}\partial_{t}\varphi-\operatorname{div}(m(\varphi) \nabla\mu)&=S_{\varphi}(\varphi),\\ \mu+\gamma\Delta\varphi-\Psi^{\prime}(\varphi)& =\frac{1}{2}\left(\varepsilon(u)-\mathcal{T}(\varphi)\right) \colon\mathbb{C}^{\prime}(\varphi)\left(\varepsilon(u)-\mathcal{T}(\varphi) \right)\\ &\quad-\mathcal{T}^{\prime}(\varphi)\colon\mathbb{C}(\varphi) \left(\varepsilon(u)-\mathcal{T}(\varphi)\right),\end{split} \tag{39}\] \[\operatorname{div}\bigl{(}\mathbb{C}(\varphi)\left(\varepsilon(u )-\mathcal{T}(\varphi)\right)\bigr{)} =0.\]
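In any of the three discretized models, the tumor mass compared in Figure 1 is obtained by assembling the spatial integral of the phase-field once per time step; in the notation of the FEniCS sketch from the setup above (our naming), this amounts to:

```python
masses = []                            # tumor mass history for Figure 1
# inside the time loop, after each solve:
masses.append(assemble(phi * dx))      # t -> int_Omega phi(t, x) dx
```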
The simulated results are shown in Figure 1. The first curve, represented in blue, corresponds to the fundamental Cahn-Hilliard equation. This basic model serves as a reference point for understanding tumor growth dynamics. As expected, the tumor mass exhibits a linear increase over time, aligning with conventional growth expectations, see [14, Section 5.3]. The second curve, depicted in orange, introduces elastic effects into the model. This alteration significantly impacts the tumor's growth behavior. Notably, we observe a substantial increase in tumor mass over the simulated time frame, and the growth does not follow a linear trajectory. Instead, it exhibits behavior resembling a quadratic growth pattern. This observation underscores the influence of elastic forces on tumor growth dynamics. The third curve, shown in red, extends the model by incorporating flow effects through the Biot model, in addition to elasticity. This combined approach results in an intermediate growth pattern. While not as pronounced as the Cahn-Larché model, the tumor's growth still deviates from linearity. Notably, the inclusion of flow effects seems to moderate the growth of the tumor, suggesting that flow phenomena have a restraining influence on tumor mass expansion.
Figure 2 provides a visual representation of the dynamic evolution of a tumor over time, as simulated using the three mathematical models: the Cahn-Hilliard (CH), Cahn-Larché (CL) and Cahn-Hilliard-Biot (CHB) equations. Each model offers unique insights into how different factors influence the tumor's shape and symmetry. In the upper row of Figure 2, we observe the tumor's evolution under the Cahn-Hilliard equation. As anticipated, the tumor remains circular, preserving its initial symmetry throughout the simulation. This behavior aligns with expectations, as the standard model does not incorporate additional physical factors that would distort the tumor's shape, see also [13] for the influence of various flow models on the tumor's shape. Moving to the middle row of Figure 2, the simulation results for the Cahn-Larché model are displayed. Here, the tumor exhibits a notable departure from circular symmetry. The diagonal stretching of the tumor is readily apparent and is attributed to the specific choice of the elasticity matrix in this model. Additionally, it is worth noting that this model results in the largest interface width among the three models, indicating a broader region of transition between tumor and non-tumor tissue. The lower row of Figure 2 illustrates the tumor's evolution in the more complex Cahn-Hilliard-Biot model. At \(t=0.3\), the tumor closely resembles the circular symmetry observed in the standard model. However, at \(t=0.5\), an interesting phenomenon becomes evident. There is a noticeable stretch in the off-diagonal direction, deviating from the initial circular symmetry. Similar to the elasticity model, this model also exhibits a wider interface between tumor and non-tumor tissue compared to the standard model. In summary, Figure 2 visually illustrates how different models affect the shape and symmetry of the evolving tumor. The presence of elasticity and other complex factors introduces intriguing variations in tumor morphology.
Figure 3 provides a visual representation of the deformation profiles observed in the Cahn-Larché and Cahn-Hilliard-Biot models. These profiles offer valuable insights into how different factors influence the deformation patterns of the tumor and the surrounding tissue. In the upper row of Figure 3, we examine the deformation profile resulting from the Cahn-Larché model, which incorporates elasticity effects. The deformation pattern in this model is notably simpler and exhibits a concentric profile. This means that the deformation points outward from the center of the domain towards the boundary. Additionally, it is important to highlight that the most significant deformation occurs at the interface of the tumor, emphasizing the impact of elasticity on the tumor's mechanical response. Moving to the lower row of Figure 3, we consider the deformation profile from the Cahn-Hilliard-Biot model, where the deformation equation is additionally coupled to the Biot model. This model, influenced by flow effects in addition to elasticity, generates a more intricate deformation pattern. Unlike the concentric deformation observed in the Cahn-Larché system, the deformation in the Cahn-Hilliard-Biot model is influenced by the flow dynamics, resulting in a more complex and non-uniform deformation pattern. Importantly, the deformation seems to exhibit a connection with the flow dynamics as depicted in Figure 4, with both the Darcy velocity and deformation profiles having similar concentrations along the off-diagonal of the domain. This observation aligns with the tumor's movement towards the off-diagonal, suggesting a relationship between flow effects and the deformation behavior in this model.
Figure 2. Evolution of the tumor volume fraction \(\varphi(t,x)\) over time in the domain \(\Omega\).
**Acknowledgments.** Supported by the state of Upper Austria.
Figure 4. Evolution of the Darcy velocity \(q(t,x)=-\kappa(\varphi)\nabla p\) (which is computed in a post-processing step) over time in the domain \(\Omega\).
Figure 3. Evolution of the deformation \(u(t,x)\) over time in the domain \(\Omega\) (left color bar is for the CL model, right color bar is for the CHB model) |
2307.03821 | Mediation Analysis with Graph Mediator | This study introduces a mediation analysis framework when the mediator is a
graph. A Gaussian covariance graph model is assumed for graph representation.
Causal estimands and assumptions are discussed under this representation. With
a covariance matrix as the mediator, parametric mediation models are imposed
based on matrix decomposition. Assuming Gaussian random errors,
likelihood-based estimators are introduced to simultaneously identify the
decomposition and causal parameters. An efficient computational algorithm is
proposed and asymptotic properties of the estimators are investigated. Via
simulation studies, the performance of the proposed approach is evaluated.
Applying to a resting-state fMRI study, a brain network is identified within
which functional connectivity mediates the sex difference in the performance of
a motor task. | Yixi Xu, Yi Zhao | 2023-07-07T20:25:42Z | http://arxiv.org/abs/2307.03821v1 | # Mediation Analysis with Graph Mediator
###### Abstract
This study introduces a mediation analysis framework when the mediator is a graph. A Gaussian covariance graph model is assumed for graph representation. Causal estimands and assumptions are discussed under this representation. With a covariance matrix as the mediator, parametric mediation models are imposed based on matrix decomposition. Assuming Gaussian random errors, likelihood-based estimators are introduced to simultaneously identify the decomposition and causal parameters. An efficient computational algorithm is proposed and asymptotic properties of the estimators are investigated. Via simulation studies, the performance of the proposed approach is evaluated. Applying to a resting-state fMRI study, a brain network is identified within which functional connectivity mediates the sex difference in the performance of a motor task.
**Keywords:** Common diagonalization; Covariance regression; Decomposition method; Gaussian covariance graph model
Introduction
Mediation analysis has been widely used in clinical and biomedical studies to delineate the intermediate effect of a third variable, called mediator, in the causal pathway between the exposure/treatment and the outcome. It helps dissect the underlying causal mechanism by decomposing the effect of the exposure on the outcome into the part through the mediator and the part not through the mediator. Since the introduction of the classic Baron and Kenny framework (Baron and Kenny, 1986), mediation analysis has been extensively studied over decades, including under the causal inference framework (see review articles by Imai et al., 2010; VanderWeele, 2015, 2016, among others). With the advances of biological technologies, mediation analysis has been extended to study cases of high-dimensional mediator (such as Huang and Pan, 2016; Chen et al., 2017; Derkach et al., 2019; Zhao and Luo, 2022, among others) and complex data output, including functional data (Lindquist, 2012; Zhao et al., 2018; Zeng et al., 2021), time series data (Gu et al., 2014; Zhao and Luo, 2019), network mediator (Zhao et al., 2022), image mediator (Jiang and Colditz, 2023; Chen and Zhou, 2023), and so on. In this study, we introduce a framework considering a graph as the mediator.
Our study is motivated by resting-state functional magnetic resonance imaging (fMRI) experiments. A primary interest in resting-state fMRI is to portray brain coactivation patterns, the so-called brain functional connectivity or connectome, when the participant is at rest without any external stimulus. These coactivation patterns are captured by the covariance matrices (or correlation matrices after standardizing the data) of the fMRI signals extracted from brain voxels or regions of interest (ROI), where each ROI is a group of voxels defined by a chosen brain parcellation atlas (Friston, 2011). Considering a cognitive behavior, it is also of great interest to understand the brain mechanism related to the divergence in the behavior under various exposure/treatment conditions. Formulating it into a mediation analysis framework, brain functional connectivity is considered as the mediator between the exposure and behavior. Straightforward approaches include pairwise mediation analysis, where mediation analysis is repeated for each functional connectivity pair followed by a multiple-testing correction, and high-dimensional mediation analysis, where the mediator is the vectorization of the upper triangular portion of the connectivity matrix. The former ignores the dependence between the correlations and suffers from power deficiency due to multiplicity, and the latter disregards the structural properties and the positive definiteness of a covariance matrix.
In general, resting-state neuronal fluctuations measured by fMRI can be considered as Gaussian processes (Lindquist, 2008). The functional connectivity matrix can thus be modeled via a _Gaussian covariance graph model_. As a subclass of graphical models, the Gaussian covariance graph model defines a correspondence between the graph and the correlation pattern embedded in the covariance matrix (Richardson and Spirtes, 2002; Chaudhuri et al., 2007). A missing connection between two nodes in the graph is in concordance with zero correlation between the two corresponding variables in the covariance matrix, which also coincides with the marginal independence. In this study, we will consider such a connectivity graph as the mediator to articulate the intermediate effect of brain connectome on the pathway between exposure and cognitive behavior. Figure 1 presents a conceptual diagram, where the graph is the mediator (denoted as \(\mathcal{G}\)). Closely related to this concept, Zhao et al. (2022) recently proposed a Bayesian network mediation analysis. In their framework, stochastic block models (SBMs) were employed on individual elements in the adjacency matrix of brain regions and the vectorized model parameters were treated as the latent mediators to integratively impose feature selection and effect estimation. As the network mediator was represented by the adjacency matrix, only symmetry was imposed on the matrix structure and less stringent conditions were required for modeling. In this study, instead of assuming underlying models on the adjacency matrix, it is proposed to directly model the covariance matrix as the mediator. It extends the linear structural equation modeling (LSEM) framework with a network mediator and at the same time preserves the positive definite property of a covariance matrix in parsimonious modeling. Here, we would also like to distinguish the proposed framework from the case of image mediator considered by Jiang and Colditz (2023) and Chen and Zhou (2023), where an image is (multidimensional) scalar output with spatial information.
Regression models for covariance matrix outcomes have long been studied to capture the heterogeneity in the covariance structure across individuals/populations. In order to maintain the positive definiteness in modeling, example parametric approaches include modeling the covariance matrix as a linear combination of symmetric matrices (Anderson, 1973), modeling the logarithmic transformation of matrix elements (Chiu et al., 1996), modeling the covariance matrix via a quadratic link function (Hoff and Niu, 2012; Seiler and Holmes, 2017) or a linear combination of similarity matrices (Zou et al., 2017) of the covariates, and decomposition approaches based on the common diagonalization assumption via either eigendecomposition (Flury, 1984; Boik, 2002; Hoff, 2009; Franks and Hoff, 2019; Zhao et al., 2021) or Cholesky decomposition (Pourahmadi et al., 2007). In this study, we adapt the covariance regression framework introduced by Zhao et al.
(2021) to mediation analysis. In their framework, orthogonal common linear projections were assumed on the covariance matrices and data heteroscedasticity in the projection space was revealed via a log-linear model. The advantages are to preserve the positive definiteness of the covariance matrices, offer high flexibility in modeling, and enable a network-level interpretation, where each projection can be considered a subnetwork after proper thresholding or sparsifying. Integrating this framework with LSEM, common linear projections are assumed on the covariance matrices and the logarithmic transformation of data variance in the projection space is considered as the mediator measurement in the structural equation models. The objective is to simultaneously identify the linear projections and the causal parameters. Contributions of this study include (i) to enrich mediation analysis literature with graph mediator; (ii) to extend covariance regression modeling to SEM framework; (iii) and to offer a better understanding of brain mechanisms when studying the exposure effect on cognitive behaviors.
The rest of the manuscript is organized as follows. Section 2 discusses the causal definitions and assumptions and the proposed parametric mediation model when the mediator is a graph. Likelihood-based estimators are proposed for estimating model parameters, an efficient algorithm is introduced for computation, and a resampling approach is considered for inference. Asymptotic properties of the estimators are investigated under regularity conditions. Via simulation studies in Section 3, the performance of the proposed framework is evaluated and compared with competing approaches. Section 4 applies the proposed mediation analysis to data acquired from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA) to dissect the mediation effect of brain functional connectome on sex difference in the performance of a motor task. Section 5 summarizes the manuscript with a discussion.
Figure 1: A conceptual mediation diagram with graph mediator (\(\mathcal{G}\)). \(X\): exposure; \(Y\): outcome; \(W\): confounding factor.
## 2 Model and Methods
### Causal definitions and assumptions
Let \(\mathcal{G}(x)=(\mathcal{V},\mathcal{E}(x))\) denote the graph object when the exposure is at level \(x\), where \(\mathcal{V}\) is the set of vertices/nodes and \(\mathcal{E}(x)\) is the set of edges under the exposure level. Denote \(\mathscr{G}=\{\mathcal{G}\}\) as the set of such graph objects. A type of graph, _covariance graph_, is considered in this study, which can be represented by a covariance matrix embedded with association structure between vertices (Chaudhuri et al., 2007). Denote \(\mathbf{M}(x)=(M_{1}(x),\ldots,M_{p}(x))^{\top}\in\mathbb{R}^{p}\) as the potential outcome of the \(p\)-dimensional random variables in the vertex set \(\mathcal{V}=\{1,\ldots,p\}\) and \(\mathbf{\Sigma}(x)=(\sigma_{jk}(x))\in\mathbb{R}^{p\times p}\) as the corresponding covariance matrix when the exposure is \(x\). As the study focus is on the covariance matrix, without loss of generality, the variable means are assumed to be zero. Assuming \(\mathbf{M}(x)\) follows a multivariate Gaussian distribution, a _Gaussian covariance graph model_ is considered, where a missing edge between two vertices is equivalent to the marginal independence between the corresponding random variables (Edwards, 2012): for \(\forall\ j,k\in\mathcal{V}\) and \(j\neq k\),
\[(j,k)\notin\mathcal{E}(x)\quad\Leftrightarrow\quad M_{j}(x)\perp\!\!\!\! \perp M_{k}(x)\quad\Leftrightarrow\quad\sigma_{jk}(x)=\sigma_{kj}(x)=0. \tag{1}\]
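For intuition, the equivalence in (1) can be checked mechanically; below is a minimal numpy sketch (not from the paper; the tolerance `tol` is an illustrative choice) that recovers the edge set of a covariance graph from a covariance matrix.

```python
import numpy as np

def covariance_graph_edges(Sigma, tol=1e-8):
    """Edge set E = {(j, k): sigma_jk != 0, j < k} of the Gaussian
    covariance graph encoded by Sigma; a missing edge corresponds to
    zero covariance, i.e. marginal independence under normality."""
    p = Sigma.shape[0]
    return {(j, k) for j in range(p) for k in range(j + 1, p)
            if abs(Sigma[j, k]) > tol}

# Example: nodes 0 and 2 are marginally independent (no edge).
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.0]])
print(covariance_graph_edges(Sigma))  # {(0, 1), (1, 2)}
```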
Let \(Y(x,\mathcal{G}(x))\in\mathbb{R}\) denote the potential outcome of \(Y\) when the exposure is \(x\) and the mediator graph is \(\mathcal{G}(x)\). Let \(\mathscr{X}\) denote the possible exposure values. Assume a binary exposure, \(\mathscr{X}=\{0,1\}\). The following defines the average total effect (ATE) of the exposure on the outcome.
\[\tau_{\text{ATE}}=\mathbb{E}\left\{Y(1,\mathcal{G}(1))-Y(0,\mathcal{G}(0)) \right\}. \tag{2}\]
With a graph mediator, the above total effect can be decomposed into direct and indirect effects. Extending the framework in Imai et al. (2010), the following defines the average indirect effect (AIE) under exposure level \(x\) and the average direct effect (ADE) when the mediator graph is at level \(\mathcal{G}(x)\).
\[\tau_{\text{AIE}}(x) = \mathbb{E}\left\{Y(x,\mathcal{G}(1))-Y(x,\mathcal{G}(0))\right\}, \quad x=0,1; \tag{3}\] \[\tau_{\text{ADE}}(x) = \mathbb{E}\left\{Y(1,\mathcal{G}(x))-Y(0,\mathcal{G}(x))\right\}, \quad x=0,1. \tag{4}\]
The AIE quantifies the difference between the potential outcome corresponding to the graph of \(\mathcal{G}(1)\) and that under \(\mathcal{G}(0)\) fixing the exposure at level \(x\). It is also called the _natural indirect effect_(Pearl, 2001) or the _pure indirect effect_ for \(\tau_{\text{AIE}}(0)\) and _total indirect effect_ for \(\tau_{\text{AIE}}(1)\)(Robins and Greenland, 1992). The ADE quantifies the exposure effect on the outcome not through the
mediator. It is easy to show that the ATE is the sum of AIE and ADE,
\[\tau_{\text{ATE}}=\tau_{\text{AIE}}(x)+\tau_{\text{ADE}}(1-x),\quad x=0,1. \tag{5}\]
The following discusses the identification assumptions of the causal estimands defined above. For mediation analysis, the identification assumptions have been extensively discussed under various settings in the literature (examples include Robins and Greenland, 1992; Pearl, 2001; Imai et al., 2010; VanderWeele, 2015, and so on). In this study, we extend the assumptions in Imai et al. (2010) to the scenario of a graph mediator.
**Assumption A1** (SUTVA): Stable unit treatment value assumption.
**Assumption A2** (Positivity): Let \(X\) denote the actual treatment assignment, then \(\mathbb{P}(X=x)>0\) for \(\forall\ x\in\mathscr{X}\).
**Assumption A3** (Ignorability): Let \(Y(x,\boldsymbol{\mathcal{G}})\) denote the potential outcome of \(Y\) under the exposure level \(x\) and the mediator graph taking the value of \(\boldsymbol{\mathcal{G}}\in\mathscr{G}\) and \(\mathbf{W}\in\mathbb{R}^{q}\) denote a vector of \(q\)-dimensional observed confounding factors.
\[\{Y(1,\boldsymbol{\mathcal{G}}),Y(0,\boldsymbol{\mathcal{G}}),\mathbf{M}(1), \mathbf{M}(0)\}\ \mathop{\perp\!\!\perp}X\ |\ \mathbf{W}, \tag{6}\]
for any \(\boldsymbol{\mathcal{G}}\in\mathscr{G}\).
**Assumption A4** (Sequential ignorability): \[Y(x,\boldsymbol{\mathcal{G}})\ \mathop{\perp\!\!\perp}\mathbf{M}(x)\ |\ X=x, \mathbf{W},\] (7)
for any \(x\in\mathscr{X}\) and \(\boldsymbol{\mathcal{G}}\in\mathscr{G}\).
**Assumption A5** (Parametric LSEM): The following linear structural equation models (LSEMs) with independent errors are assumed.
\[\log(\boldsymbol{\theta}^{\top}\boldsymbol{\Sigma}(x)\boldsymbol{ \theta}) = \alpha_{0}+\alpha x+\mathbf{W}^{\top}\boldsymbol{\phi}_{1}+\eta, \tag{8}\] \[Y(x,\boldsymbol{\mathcal{G}}) = \gamma_{0}+\gamma x+\beta\log(\boldsymbol{\theta}^{\top} \boldsymbol{\Sigma}\boldsymbol{\theta})+\mathbf{W}^{\top}\boldsymbol{\phi}_{2}+\epsilon, \tag{9}\]
where \(\boldsymbol{\theta}\in\mathbb{R}^{p}\) is a projection vector with unit \(L_{2}\)-norm (i.e. \(\|\boldsymbol{\theta}\|_{2}=1\)), \(\{\alpha_{0},\alpha,\boldsymbol{\phi}_{1},\gamma_{0},\gamma,\beta,\boldsymbol{ \phi}_{2}\}\) are model coefficients, and \(\eta\) and \(\epsilon\) are independent model errors with mean zero.
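To make models (8) and (9) concrete, the following sketch generates data satisfying Assumption A5 under a common-eigenvector construction similar to the simulation design of Section 3; all parameter values are hypothetical and the covariates \(\mathbf{W}\) are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, T = 200, 5, 100
alpha0, alpha, gamma0, gamma, beta = 0.0, 1.0, 0.0, 1.0, 1.0  # hypothetical values

Pi = np.linalg.qr(rng.standard_normal((p, p)))[0]  # common eigenvectors
theta = Pi[:, 0]                                   # projection satisfying A5

X = rng.binomial(1, 0.5, n)
eta, eps = rng.normal(0, 0.1, n), rng.normal(0, 0.1, n)

M, Y = [], np.empty(n)
for i in range(n):
    lam = np.ones(p)
    lam[0] = np.exp(alpha0 + alpha * X[i] + eta[i])   # model (8) on log(theta' Sigma theta)
    Sigma_i = Pi @ np.diag(lam) @ Pi.T
    M.append(rng.multivariate_normal(np.zeros(p), Sigma_i, T))  # vertex realizations
    Y[i] = gamma0 + gamma * X[i] + beta * np.log(theta @ Sigma_i @ theta) + eps[i]  # model (9)
```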
The SUTVA assumption (Rubin, 1980) assumes that the treatment assignment regime remains the same across units and the outcome of one unit is not influenced by the treatment assignment of other units. In the application, this no-interference assumption holds as the imaging scan and cognition evaluation were performed for each individual independently. The positivity assumption is standard in the causal inference literature when the possible treatment assignments are finite. Assumption A3 assumes that there is no unmeasured mediator-exposure or outcome-exposure confounding given the observed covariates. In the application study, sex is considered as the preceding exposure, which is assumed to be naturally randomized among the population of the same age. Assumption A4 assumes the conditional independence between the potential outcome of \(Y\) and the potential outcome of \(\mathbf{M}\) under the so-called single-world intervention graphs (Richardson and Robins, 2013), which is a relaxation of the cross-world independence assumption (Robins and Richardson, 2010). Assumption A5 assumes parametric linear structural equation models, where \(\boldsymbol{\mathcal{G}}\) and \(\boldsymbol{\Sigma}\) have a one-to-one correspondence under the Gaussian covariance graph model. Assumptions A3-A5 are sufficient to identify the ADE and AIE (Andrews and Didelez, 2020). Considering a Gaussian covariance graph model, the covariance matrix \((\boldsymbol{\Sigma})\) has a one-to-one correspondence with the graph object (\(\mathcal{G}\)). Under Assumption A5, the potential outcome of \(Y\) under a multiple-worlds model can be expressed as
\[Y(x,\mathcal{G}(x^{\prime})) = \gamma_{0}+\gamma x+\beta(\alpha_{0}+\alpha x^{\prime}+\mathbf{W }^{\top}\boldsymbol{\phi}_{1}+\eta)+\mathbf{W}^{\top}\boldsymbol{\phi}_{2}+\epsilon\] \[= \gamma x+\alpha\beta x^{\prime}+(\gamma_{0}+\alpha_{0}\beta+ \beta\mathbf{W}^{\top}\boldsymbol{\phi}_{1}+\mathbf{W}^{\top}\boldsymbol{\phi }_{2})+\beta\eta+\epsilon.\]
The following theorem gives the parametric representation of the causal estimands.
**Theorem 1**.: _Under Assumptions A1-A5,_
\[\tau_{\mathrm{ATE}}=\mathbb{E}\left\{Y(1,\mathcal{G}(1))-Y(0, \mathcal{G}(0))\right\}=\gamma+\alpha\beta;\] \[\tau_{\mathrm{AIE}}(x)=\mathbb{E}\left\{Y(x,\mathcal{G}(1))-Y(x, \mathcal{G}(0))\right\}=\alpha\beta,\quad\text{for $x=0,1$};\] \[\tau_{\mathrm{ADE}}(x)=\mathbb{E}\left\{Y(1,\mathcal{G}(x))-Y(0, \mathcal{G}(x))\right\}=\gamma,\quad\text{for $x=0,1$}.\]
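In code, Theorem 1 reduces effect estimation to plugging fitted coefficients into simple products and sums (a trivial but clarifying sketch; the argument names are illustrative).

```python
def causal_estimands(alpha_hat, beta_hat, gamma_hat):
    """Plug-in causal effects under Theorem 1."""
    tau_aie = alpha_hat * beta_hat   # average indirect effect, alpha * beta
    tau_ade = gamma_hat              # average direct effect, gamma
    tau_ate = tau_ade + tau_aie      # average total effect, gamma + alpha * beta
    return tau_ate, tau_aie, tau_ade
```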
### Method
This section introduces an approach to estimate model parameters in (8) and (9) and draw inference based on resampling techniques. The parameter set includes not only the model coefficients, but also the projection vector, \(\boldsymbol{\theta}\). In addition, the covariance matrices, \(\boldsymbol{\Sigma}\)'s, are not directly observable. Rather, the observed data is the realization of the vertices. Let
\(\mathbf{M}_{it}\in\mathbb{R}^{p}\) denote the \(t\)th replicate of the \(p\) vertices acquired from unit \(i\), for \(t=1,\ldots,T_{i}\) and \(i=1,\ldots,n\), where \(T_{i}\) is the number of realizations in unit \(i\) and \(n\) is the number of units. Let \(X_{i}\) denote the actual treatment assignment, \(Y_{i}\) denote the observed outcome, and \(\mathbf{W}_{i}\in\mathbb{R}^{q}\) denote the \(q\)-dimensional observed confounding factors of unit \(i\). Assume the model errors, \(\eta_{i}\) and \(\epsilon_{i}\), are normally distributed with mean zero and variances \(\pi^{2}\) and \(\sigma^{2}\), respectively. Models (8) and (9) can be viewed as a multilevel model. Thus, it is proposed to estimate the parameters by maximizing the hierarchical likelihood, which has been shown to be asymptotically equivalent to maximizing the marginal likelihood (Lee and Nelder, 1996). To simplify the notations, let
\[\mathbf{X}_{i}=\begin{pmatrix}X_{i},\\ \mathbf{W}_{i}\end{pmatrix}\in\mathbb{R}^{q+1},\ \boldsymbol{\alpha}= \begin{pmatrix}\alpha\\ \boldsymbol{\phi}_{1}\end{pmatrix}\in\mathbb{R}^{q+1},\ \text{and}\ \boldsymbol{ \gamma}=\begin{pmatrix}\gamma\\ \boldsymbol{\phi}_{2}\end{pmatrix}\in\mathbb{R}^{q+1}.\]
The following gives the negative hierarchical likelihood (up to an additive constant).
\[\ell = \sum_{i=1}^{n}\frac{T_{i}}{2}\left\{(\alpha_{0i}+\mathbf{X}_{i}^ {\top}\boldsymbol{\alpha})+(\boldsymbol{\theta}^{\top}\mathbf{S}_{i} \boldsymbol{\theta})\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top}\boldsymbol{\alpha})\right\}\] \[+\sum_{i=1}^{n}\frac{1}{2}\left\{\log\sigma^{2}+\frac{1}{\sigma^{2 }}(Y_{i}-\gamma_{0}-\mathbf{X}_{i}^{\top}\boldsymbol{\gamma}-\beta\log( \boldsymbol{\theta}^{\top}\boldsymbol{\Sigma}_{i}\boldsymbol{\theta}))^{2} \right\}+\sum_{i=1}^{n}\frac{1}{2}\left\{\log\pi^{2}+\frac{1}{\pi^{2}}(\alpha _{0i}-\alpha_{0})^{2}\right\},\]
where \(\alpha_{0i}=\alpha_{0}+\eta_{i}\) and \(\mathbf{S}_{i}=T_{i}^{-1}\sum_{t=1}^{T_{i}}\mathbf{M}_{it}\mathbf{M}_{it}^{\top}\). The first component in the likelihood corresponds to the conditional likelihood of \(\mathbf{M}_{it}\) given \(\alpha_{0i}\) and \(\mathbf{X}_{i}\) under model (8); the second component corresponds to the conditional likelihood of \(Y_{i}\) given \(\alpha_{0i}\), \(\mathbf{X}_{i}\), and the graph, \(\boldsymbol{\Sigma}_{i}\), under model (9); and the third component corresponds to the likelihood of the random intercept, \(\alpha_{0i}\). For the second component in (10), the covariance matrices, \(\boldsymbol{\Sigma}_{i}\)'s, are not directly observed. A sample counterpart needs to be introduced and utilized instead in practice for estimation. For fixed \(p\), denote \(\hat{\boldsymbol{\Sigma}}_{i}\) as a consistent estimator of \(\boldsymbol{\Sigma}_{i}\). For example, the sample covariance matrix, \(\mathbf{S}_{i}\), is a consistent estimator of \(\boldsymbol{\Sigma}_{i}\). Denote \(\hat{\ell}\) as the sample counterpart of \(\ell\) by replacing \(\boldsymbol{\Sigma}_{i}\) with \(\hat{\boldsymbol{\Sigma}}_{i}\).
\[\hat{\ell} = \sum_{i=1}^{n}\frac{T_{i}}{2}\left\{(\alpha_{0i}+\mathbf{X}_{i}^ {\top}\boldsymbol{\alpha})+(\boldsymbol{\theta}^{\top}\mathbf{S}_{i} \boldsymbol{\theta})\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top}\boldsymbol{\alpha})\right\}\] \[+\sum_{i=1}^{n}\frac{1}{2}\left\{\log\sigma^{2}+\frac{1}{\sigma^{ 2}}(Y_{i}-\gamma_{0}-\mathbf{X}_{i}^{\top}\boldsymbol{\gamma}-\beta\log( \boldsymbol{\theta}^{\top}\hat{\boldsymbol{\Sigma}}_{i}\boldsymbol{\theta}))^{ 2}\right\}+\sum_{i=1}^{n}\frac{1}{2}\left\{\log\pi^{2}+\frac{1}{\pi^{2}}( \alpha_{0i}-\alpha_{0})^{2}\right\}.\]
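For concreteness, \(\hat{\ell}\) can be transcribed directly (a minimal numpy sketch; the sample covariance \(\mathbf{S}_{i}\) is used for \(\hat{\boldsymbol{\Sigma}}_{i}\), and the array layout is an assumption).

```python
import numpy as np

def neg_hier_loglik(theta, a0i, a0, a, g0, g, beta, pi2, s2, S, X, Y, T):
    """Sample negative hierarchical likelihood (up to a constant).
    S: (n, p, p) sample covariances S_i (also used as Sigma_hat_i);
    X: (n, q+1) rows stacking (X_i, W_i); Y: (n,) outcomes; T: (n,) counts."""
    ell = 0.0
    for i in range(len(Y)):
        quad = theta @ S[i] @ theta              # theta' S_i theta
        lin = a0i[i] + X[i] @ a                  # alpha_0i + X_i' alpha
        ell += 0.5 * T[i] * (lin + quad * np.exp(-lin))
        resid = Y[i] - g0 - X[i] @ g - beta * np.log(quad)
        ell += 0.5 * (np.log(s2) + resid ** 2 / s2)
        ell += 0.5 * (np.log(pi2) + (a0i[i] - a0) ** 2 / pi2)
    return ell
```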
**Lemma 1**.: _For fixed \(p\) and \(n\), assume \(\hat{\boldsymbol{\Sigma}}_{i}\) is a consistent estimator of \(\boldsymbol{\Sigma}_{i}\) as \(\min_{i}T_{i}\to\infty\), for \(\forall\ i\in\{1,\ldots,n\}\). Then,_
\[\hat{\ell}\to\ell,\quad\text{as}\ \min_{i}T_{i}\to\infty.\]
With Lemma 1, it is proposed to estimate model parameters by minimizing \(\hat{\ell}\). A coordinate descent algorithm is considered to solve for solution over the parameter set \(\boldsymbol{\Theta}=(\boldsymbol{\theta},\alpha_{0i},\alpha_{0},\boldsymbol{ \alpha},\gamma_{0},\boldsymbol{\gamma},\beta,\pi^{2},\sigma^{2})\).
For \(\mathbf{\theta}\), a constraint is required to identify a unique solution. The following optimization problem is considered.
minimize \[\hat{\ell},\] such that \[\mathbf{\theta}^{\top}\mathbf{H}\mathbf{\theta}=1,\] (12)
where \(\mathbf{H}\in\mathbb{R}^{p\times p}\) is a positive definite matrix. The choice of \(\mathbf{H}\) can be subject dependent. Similar to the choice in PCA or CCA, one can set \(\mathbf{H}\) to the \(p\)-dimensional identity matrix. As the \(\mathbf{M}_{it}\)'s are assumed to be normally distributed, one can set \(\mathbf{H}=\bar{\mathbf{S}}=\sum_{i=1}^{n}\sum_{t=1}^{T_{i}}\mathbf{M}_{it}\mathbf{M}_{it}^{\top}/\sum_{i=1}^{n}T_{i}\) to incorporate distributional information of the data. This type of choice was also considered in the study of common PCA (Krzanowski, 1984) and a covariance regression model by Zhao et al. (2021).
Algorithm 1 summarizes the optimization steps of problem (12). For parameters without an analytic solution, including \(\alpha_{0i}\) and \(\mathbf{\alpha}\), the Newton-Raphson algorithm is employed to find the update. For \(\{\alpha_{0},\gamma_{0},\mathbf{\gamma},\beta,\pi^{2},\sigma^{2}\}\), explicit solutions can be obtained and utilized for update. For \(\mathbf{\theta}\), with the constraint, the following gives the Lagrangian form.
\[\mathcal{L}(\mathbf{\theta},\lambda)=\frac{1}{2}\sum_{i=1}^{n}\left\{\mathbf{\theta}^ {\top}(T_{i}U_{i}\mathbf{S}_{i})\mathbf{\theta}+\frac{1}{\sigma^{2}}(V_{i}-\beta \log(\mathbf{\theta}^{\top}\hat{\mathbf{\Sigma}}_{i}\mathbf{\theta}))^{2}\right\}-\lambda( \mathbf{\theta}^{\top}\mathbf{H}\mathbf{\theta}-1), \tag{13}\]
where \(U_{i}=\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top}\mathbf{\alpha})\), \(V_{i}=Y_{i}-\gamma_{0}-\mathbf{X}_{i}^{\top}\mathbf{\gamma}\), and \(\lambda\) is the Lagrangian parameter. Take the partial derivative over \(\mathbf{\theta}\) and set it to zero, it gives
\[\frac{\partial\mathcal{L}(\mathbf{\theta},\lambda)}{\partial\mathbf{\theta}}=\sum_{i= 1}^{n}\left[(T_{i}U_{i}\mathbf{S}_{i})\mathbf{\theta}-\frac{1}{\sigma^{2}}\left\{ V_{i}-\beta\log(\mathbf{\theta}^{\top}\hat{\mathbf{\Sigma}}_{i}\mathbf{\theta}) \right\}\frac{2\beta\hat{\mathbf{\Sigma}}_{i}\mathbf{\theta}}{\mathbf{\theta}^{\top}\hat{ \mathbf{\Sigma}}_{i}\mathbf{\theta}}\right]-2\lambda\mathbf{H}\mathbf{\theta}=\mathbf{0}. \tag{14}\]
An explicit solution for \(\mathbf{\theta}\) is hard to derive from the above equation. It is observed that the quantity \(\mathbf{\theta}^{\top}\hat{\mathbf{\Sigma}}_{i}\mathbf{\theta}\) appears in both a logarithmic function and the denominator. To simplify the computation, it is proposed to plug the value of \(\mathbf{\theta}\) from the previous step \(s\) into \(\mathbf{\theta}^{\top}\hat{\mathbf{\Sigma}}_{i}\mathbf{\theta}\), denoted as \(\xi_{i}^{(s)}=\mathbf{\theta}^{(s)\top}\hat{\mathbf{\Sigma}}_{i}\mathbf{\theta}^{(s)}\). Let
\[\mathbf{A}^{(s)}=\sum_{i=1}^{n}\left\{T_{i}U_{i}^{(s)}\mathbf{S}_{i}-\frac{2 \beta^{(s)}(V_{i}^{(s)}-\beta^{(s)}\log\xi_{i}^{(s)})}{\sigma^{2(s)}\xi_{i}^{( s)}}\hat{\mathbf{\Sigma}}_{i}\right\},\]
where \(U_{i}^{(s)}\), \(V_{i}^{(s)}\), \(\beta^{(s)}\), and \(\sigma^{2(s)}\) are the solutions from the \(s\)th step. It is proposed to solve the following instead of the equation in (14),
\[\mathbf{A}^{(s)}\mathbf{\theta}-\lambda\mathbf{H}\mathbf{\theta}=\mathbf{0}, \tag{15}\]
where the solution \((\mathbf{\theta},\lambda)\) is the pair of eigenvector and eigenvalue of \(\mathbf{A}^{(s)}\) with respect to \(\mathbf{H}\) that minimizes \(\mathcal{L}(\mathbf{\theta},\lambda)\). More details of the optimization algorithm are presented in Section B of the supplementary materials.
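In practice, (15) is a generalized symmetric eigenproblem that standard routines solve directly; below is a sketch assuming scipy and a user-supplied callback `L_fun` evaluating the Lagrangian objective in (13) at a candidate \(\mathbf{\theta}\).

```python
import numpy as np
from scipy.linalg import eigh

def update_theta(A, H, L_fun):
    """Solve A theta = lambda H theta and return the eigenvector
    minimizing the objective. eigh(A, H) returns eigenvectors v
    normalized so that v' H v = 1, matching the constraint in (12)."""
    _, V = eigh(A, H)  # generalized symmetric eigendecomposition
    k = int(np.argmin([L_fun(V[:, j]) for j in range(V.shape[1])]))
    return V[:, k]
```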
```
1:\(\{X_{i},\{{\bf M}_{i1},\ldots,{\bf M}_{iT_{i}}\},Y_{i},{\bf W}_{i}\ |\ i=1,\ldots,n\}\)
2:initialization: \((\boldsymbol{\theta}^{(0)},\alpha_{0i}^{(0)},\alpha_{0}^{(0)},\boldsymbol{ \alpha}^{(0)},\gamma_{0}^{(0)},\boldsymbol{\gamma}^{(0)},\beta^{(0)},\pi^{2(0)},\sigma^{2(0)})\)
3:repeat for iteration \(s=0,1,2,\ldots\)
4: update \(\boldsymbol{\alpha}\) and \(\{\alpha_{0i}\}\) using the Newton-Raphson algorithm, denoted as \(\boldsymbol{\alpha}^{(s+1)}\) and \(\{\alpha_{0i}^{(s+1)}\}\),
5: update \((\alpha_{0},\gamma_{0},\boldsymbol{\gamma},\beta,\pi^{2},\sigma^{2})\) with \[\alpha_{0}^{(s+1)}=\frac{1}{n}\sum_{i=1}^{n}\alpha_{0i}^{(s+1)},\quad\pi^{2(s+ 1)}=\frac{1}{n}\sum_{i=1}^{n}\left(\alpha_{0i}^{(s+1)}-\alpha_{0}^{(s+1)} \right)^{2},\] \[\boldsymbol{\mu}^{(s+1)}=\left(\sum_{i=1}^{n}{\bf Z}_{i}^{(s)}{\bf Z}_{i}^ {(s)\top}\right)^{-1}\left(\sum_{i=1}^{n}Y_{i}{\bf Z}_{i}^{(s)}\right),\quad \sigma^{2(s+1)}=\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-{\bf Z}_{i}^{(s)\top} \boldsymbol{\mu}^{(s+1)}\right)^{2},\] where \[{\bf Z}_{i}^{(s)}=\begin{pmatrix}1\\ {\bf X}_{i}\\ \log(\boldsymbol{\theta}^{(s)\top}{\bf S}_{i}\boldsymbol{\theta}^{(s)}) \end{pmatrix}\ \text{and}\ \boldsymbol{\mu}^{(s+1)}=\begin{pmatrix}\gamma_{0}^{(s+1)}\\ \boldsymbol{\gamma}^{(s+1)}\\ \beta^{(s+1)}\end{pmatrix},\]
6: update \(\boldsymbol{\theta}\) by solving (15), denoted as \(\boldsymbol{\theta}^{(s+1)}\),
7:until the objective function in (12) converges;
8: consider a random series of initializations, repeat Steps 1-6, and choose the estimates with the minimum objective value.
9:\((\hat{\boldsymbol{\theta}},\hat{\alpha}_{0i},\hat{\alpha}_{0},\hat{ \boldsymbol{\alpha}},\hat{\gamma}_{0},\hat{\boldsymbol{\gamma}},\hat{\beta}, \hat{\pi}^{2},\hat{\sigma}^{2})\)
```
**Algorithm 1** The optimization algorithm for problem (12).
After obtaining estimates of the model parameters, the causal estimands are estimated following Theorem 1.
Algorithm 1 offers an approach to identify one mediation component based on the likelihood criterion. To identify higher-order components, it is proposed to remove the identified components from the data first and then replace the input in Algorithm 1 with the new data to identify the next component. Let \(\hat{\boldsymbol{\Theta}}^{(k)}=(\hat{\boldsymbol{\theta}}_{1},\ldots,\hat{ \boldsymbol{\theta}}_{k})\in\mathbb{R}^{p\times k}\) denote the first \(k\) identified components. For \(i=1,\ldots,n\), set
\[\hat{\mathbf{M}}_{i}^{(k+1)}=\mathbf{M}_{i}-\mathbf{M}_{i}\hat{\boldsymbol{ \Theta}}^{(k)}\hat{\boldsymbol{\Theta}}^{(k)\top}\text{ and }\hat{Y}_{i}^{(k+1)}=Y_{i}-\sum_{j=1}^{k}\hat{\beta}_{j}\log(\hat{ \boldsymbol{\theta}}_{j}^{\top}\hat{\boldsymbol{\Sigma}}_{i}\hat{\boldsymbol{ \theta}}_{j}) \tag{16}\]
as the new data, where \(\mathbf{M}_{i}=(\mathbf{M}_{i1},\ldots,\mathbf{M}_{iT_{i}})^{\top}\in\mathbb{R}^{T_{i}\times p}\) is the mediator outcome of unit \(i\) and \(\hat{\beta}_{j}\) is the estimate of \(\beta\) in the \(j\)th component, for \(j=1,\ldots,k\). \(\hat{\mathbf{M}}_{i}^{(k+1)}\) is created following a similar strategy as in principal component analysis to identify orthogonal components, which is also employed in a recently introduced covariance regression model (Zhao et al., 2021). It is also proposed to remove the identified mediation effect from the outcome in order to identify a new mediation component. In this sense, the mediation mechanisms of the identified components are considered parallel. Fitting the mediation components marginally is then equivalent to fitting them jointly (VanderWeele, 2015). To determine the number of mediation components, a criterion called the average deviation from diagonality introduced in Zhao et al. (2021) is utilized, which is defined as
\[\text{DfD}(\hat{\boldsymbol{\Theta}}^{(k)})=\prod_{i=1}^{n}\left[\frac{\det \left\{\text{diag}(\hat{\boldsymbol{\Theta}}^{(k)\top}\hat{\boldsymbol{\Sigma} }_{i}\hat{\boldsymbol{\Theta}}^{(k)})\right\}}{\det(\hat{\boldsymbol{\Theta}}^ {(k)\top}\hat{\boldsymbol{\Sigma}}_{i}\hat{\boldsymbol{\Theta}}^{(k)})}\right]^ {T_{i}/\sum_{i}T_{i}}, \tag{17}\]
where \(\text{diag}(\mathbf{A})\) is a diagonal matrix taking the diagonal elements in a square matrix \(\mathbf{A}\) and \(\det(\mathbf{A})\) is the determinant of \(\mathbf{A}\). The above DfD metric achieves its minimum value of one when \(\hat{\boldsymbol{\Theta}}^{(k)}\) commonly diagonalizes \(\hat{\boldsymbol{\Sigma}}_{i}\) for all \(i\in\{1,\ldots,n\}\). As \(k\) increases, it gets more difficult to diagonalize all estimated covariance matrices and the value of the DfD metric may increase dramatically. The number of components can be then chosen by setting a threshold. For example, choose \(k\) such that \(\text{DfD}(\hat{\boldsymbol{\Theta}}^{(k)})\leq 2\) as recommended in Zhao et al. (2021).
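The deflation step (16) and the DfD criterion (17) are straightforward to implement; a numpy sketch with illustrative variable names follows.

```python
import numpy as np

def deflate(M_list, Y, Theta, betas, Sig_list):
    """Remove the first k identified components as in (16).
    M_list[i]: T_i x p data; Theta: p x k projections; betas: length k;
    Sig_list[i]: estimated covariance Sigma_hat_i."""
    P = Theta @ Theta.T
    M_new = [Mi - Mi @ P for Mi in M_list]
    adj = np.array([[np.log(Theta[:, j] @ Sig @ Theta[:, j])
                     for j in range(Theta.shape[1])] for Sig in Sig_list])
    return M_new, np.asarray(Y, float) - adj @ np.asarray(betas, float)

def dfd(Theta, Sig_list, T_list):
    """Average deviation from diagonality (17); equals one under exact
    common diagonalization and grows as diagonalization fails."""
    w = np.asarray(T_list, float) / np.sum(T_list)
    logs = []
    for Sig in Sig_list:
        Q = Theta.T @ Sig @ Theta
        logs.append(np.sum(np.log(np.diag(Q))) - np.linalg.slogdet(Q)[1])
    return float(np.exp(np.dot(w, logs)))
```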
### Inference
To draw inference on the causal estimands, a resampling procedure is considered. In particular, inference on the average indirect effect, which is given by the product of \(\alpha\) and \(\beta\) according to Theorem 1, raises concerns. In general, even under the normality assumption, the distribution of the product of two estimates can be far from Gaussian in finite samples. Thus, the following nonparametric bootstrap procedure is introduced.
**Step 0.**: Using all sample units, \(k\) mediation components are identified and denote the estimated linear projections as \(\hat{\mathbf{\theta}}_{j}\), for \(j=1,\ldots,k\).
**Step 1.**: Generate a bootstrap sample of size \(n\) by sampling with replacement, denoted as
\(\{X_{i}^{*},(\mathbf{M}_{i1},\ldots,\mathbf{M}_{iT_{i}})^{*},Y_{i}^{*}, \mathbf{W}_{i}^{*}\ |\ i=1,\ldots,n\}\).
**Step 2.**: For \(j=1,\ldots,k\), using the resampled data, estimate model coefficients and variances following Algorithm 1 with \(\mathbf{\theta}=\hat{\mathbf{\theta}}_{j}\).
**Step 3.**: Repeat Steps 1-2 for \(B\) times.
**Step 4.**: Construct bootstrap confidence intervals for the causal estimands under a prespecified significance level.
As the study focus is on the causal parameters, the resampling is conducted at the unit level. Once a unit is sampled, all mediator observations remain for estimation. The above procedure is implemented fixing the \(k\) identified projections. Thus, inference on \(\mathbf{\theta}\) is unachievable. If one seeks to perform inference on \(\mathbf{\theta}\) via a bootstrap procedure, it requires a matching step on the estimates across bootstrap samples as the number of components and the order of identifying the components may differ. In addition, the inference performance can be sensitive to the quality of the matching step and the metrics used for matching. On the other hand, considering the computation cost, it will be significantly inflated when adding the estimation of \(\mathbf{\theta}\) and matching. Thus, we leave the inference on \(\mathbf{\theta}\) to future research.
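A sketch of the unit-level bootstrap (Steps 1-4) is given below; `fit_fun` is a hypothetical routine that re-estimates the model coefficients with \(\mathbf{\theta}\) fixed and returns \((\hat{\alpha},\hat{\beta},\hat{\gamma})\).

```python
import numpy as np

def bootstrap_cis(data, theta_hats, fit_fun, B=500, level=0.95, seed=0):
    """Percentile bootstrap CIs for (ATE, AIE, ADE), resampling units
    with replacement and keeping all within-unit observations."""
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = np.empty((B, len(theta_hats), 3))
    for b in range(B):
        sample = [data[i] for i in rng.integers(0, n, n)]  # Step 1
        for j, th in enumerate(theta_hats):                # Step 2
            a, bb, g = fit_fun(sample, th)
            draws[b, j] = (g + a * bb, a * bb, g)          # Theorem 1
    q = [(1 - level) / 2, 1 - (1 - level) / 2]
    return np.quantile(draws, q, axis=0)  # 2 x k x 3 array of CI bounds
```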
### Asymptotic properties
In this section, we study the asymptotic properties of the proposed estimator. We first discuss the regularity conditions on the graph mediator. As discussed in Section 2.1, under the Gaussian covariance graph model, the graph and the covariance matrix have one-to-one correspondence. Considering the covariance matrix, \(\mathbf{\Sigma}_{i}\) (for \(i=1,\ldots,n\)), it is assumed that it has the eigendecomposition of \(\mathbf{\Sigma}_{i}=\mathbf{\Pi}_{i}\mathbf{\Lambda}_{i}\mathbf{\Pi}_{i}^{\top}\), where \(\mathbf{\Pi}_{i}=(\mathbf{\pi}_{i1},\ldots,\mathbf{\pi}_{ip})\in\mathbb{R}^{p\times p}\) is an orthonormal matrix and \(\mathbf{\Lambda}_{i}=\text{diag}\{\lambda_{i1},\ldots,\lambda_{ip}\}\in\mathbb{R}^ {p\times p}\) is a diagonal matrix of corresponding eigenvalues. Let
\(\mathbf{\zeta}_{it}=\mathbf{\Pi}_{i}^{\top}\mathbf{M}_{it}\in\mathbb{R}^{p}\), and then \(\text{Cov}(\mathbf{\zeta}_{it})=\mathbf{\Lambda}_{i}\). The elements in \(\mathbf{\zeta}_{it}\) are uncorrelated. Under the normality assumption, they are mutually independent. In addition, the following assumptions are imposed.
**Assumption B1**: Let \(T=\min_{i}T_{i}\). \(p\ll T\) is fixed.
**Assumption B2**: The eigenvectors of \(\mathbf{\Sigma}_{i}\) are identical, that is \(\mathbf{\Pi}_{i}=\mathbf{\Pi}\), for \(i=1,\ldots,n\).
**Assumption B3**: For \(\forall\ i=1,\ldots,n\), there exists (at least) one column in \(\mathbf{\Pi}_{i}\) indexed by \(k_{i}\), such that \(\mathbf{\theta}=\mathbf{\pi}_{ik_{i}}\) and the parametric model assumption (Assumption A5) holds.
Assumption B1 assumes a low-dimensional scenario for the Gaussian covariance graph model. Under this assumption, the sample covariance matrices are well-conditioned and consistent estimators. Replacing \(\hat{\mathbf{\Sigma}}_{i}\) with the sample covariance matrix, Lemma 1 holds. Assumption B2 is a common-diagonalization assumption assuming all the covariance matrices have the same set of eigenvectors. However, the order of the eigenvectors may vary when sorted by the descending order of the corresponding eigenvalues. Assumption B3 assumes the parametric models in Assumption A5 are correctly specified. Under these assumptions, one can choose the eigenvectors of \(\bar{\mathbf{S}}\) as the initial value of \(\mathbf{\theta}\) in Algorithm 1, where \(\bar{\mathbf{S}}=\sum_{i=1}^{n}\sum_{t=1}^{T_{i}}\mathbf{M}_{it}\mathbf{M}_{it}^{\top}/\sum_{i=1}^{n}T_{i}\) is the average sample covariance across all units.
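This initialization amounts to an eigendecomposition of the pooled sample covariance (a sketch assuming zero-mean data stored as a list of \(T_{i}\times p\) arrays).

```python
import numpy as np

def initial_thetas(M_list):
    """Eigenvectors of the pooled sample covariance S_bar, used as
    initial values for theta in Algorithm 1."""
    S_bar = sum(Mi.T @ Mi for Mi in M_list) / sum(len(Mi) for Mi in M_list)
    _, vecs = np.linalg.eigh(S_bar)   # ascending eigenvalues
    return vecs[:, ::-1]              # columns by descending eigenvalue
```

The following proposition demonstrates the consistency of the proposed estimator.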
**Proposition 1**.: _Assume Assumptions B1-B3 hold. As \(n,T\rightarrow\infty\), the proposed estimator of model parameters is asymptotically consistent._
## 3 Simulation Study
This section compares the performance of the proposed mediation framework with a competing approach via simulation studies. As no existing approach can be directly implemented for mediation analysis with a graph mediator, or when the mediator is a covariance matrix, an approach integrating the covariate assisted principal (CAP) regression (Zhao et al., 2021) and regression-based mediation analysis, named as **CAP-Med**, is considered as the competing method. The CAP-Med approach has two steps. (i) Perform CAP analysis under Model (8) to identify the mediator components. (ii) For each identified component in Step (i), perform the regression-based mediation analysis as in Imai et al. (2010). The proposed graph mediation approach is named as **GMed**.
Two simulation studies are conducted: (1) to examine and compare the performance of GMed to CAP-Med; and (2) to examine the robustness of GMed to model misspecification. In both studies, a common eigenstructure is assumed across covariance matrices and the covariance matrices are generated from the eigendecomposition, \(\mathbf{\Sigma}_{i}=\mathbf{\Pi}\mathbf{\Lambda}_{i}\mathbf{\Pi}^{\top}\), where \(\mathbf{\Pi}=(\mathbf{\pi}_{1},\ldots,\mathbf{\pi}_{p})\in\mathbb{R}^{p\times p}\) is an orthonormal matrix and \(\mathbf{\Lambda}_{i}=\text{diag}\{\lambda_{i1},\ldots,\lambda_{ip}\}\in\mathbb{R}^{p\times p}\) is a diagonal matrix of individual eigenvalues, for \(i=1,\ldots,n\). In Simulation (1), two components, the second component (D2) and the fourth component (D4), are chosen to satisfy the model assumption, Assumption A5. In both components, the magnitudes of the model parameters are set to one, so both the direct effect (\(\tau_{\text{ADE}}\)) and the indirect effect (\(\tau_{\text{AIE}}\)) take the value of one. The exposure (\(X\)) is generated from a Bernoulli distribution with probability 0.5 of being one. The random errors, \(\eta\) and \(\epsilon\), are generated from normal distributions with mean zero and standard deviation 0.1. For the remaining dimensions, the eigenvalues are generated from a normal distribution with mean value exponentially decaying from 3 to \(-1\) and standard deviation 0.1. With the generated covariance matrices, \(\mathbf{M}_{it}\) are generated from the multivariate normal distribution with mean zero and \(Y_{i}\) is generated from Model (9). In this simulation study, no covariate (\(W\)) is considered. Two scenarios of data dimension are considered, \(p=10\) and \(p=50\), and the sample sizes are set to \((n,T)=(500,100)\) and \((n,T)=(500,500)\), where the number of observations within each unit is set to be identical (\(T_{i}=T\)). The number of units (\(n\)) is set to 500 to be comparable with the sample size in the application study in Section 4. In Simulation (2), two covariates (\(q=2\)), \(\mathbf{W}=(W_{1},W_{2})^{\top}\), one continuous and one binary, are considered, and the corresponding model coefficients are set with magnitude 0.5. The continuous covariate (\(W_{1}\)) is generated from a normal distribution with mean zero and standard deviation 0.5 and the binary covariate (\(W_{2}\)) is generated from a Bernoulli distribution with probability 0.5 of being one. The remaining model parameters are the same as in Simulation (1) and data are generated following (8) and (9). In this simulation, to evaluate the performance under model misspecification, the proposed GMed approach is applied ignoring the covariates, named **GMed-Mis**. The result under the correctly specified model is denoted as **GMed**. To evaluate the performance of identifying mediation components, the absolute value of the inner product of the estimated projection and the truth, denoted as \(|\langle\hat{\mathbf{\theta}},\mathbf{\pi}_{j}\rangle|\) (\(j=2,4\)), is used as a metric of similarity between two unit-norm vectors. For both CAP and GMed approaches, the number of components is determined using the \(\text{DfD}\leq 2\) criterion. Simulations are repeated 200 times.
Table 1 presents the performance of estimating mediation components and the corresponding average indirect effect. In Simulation (1), considering Model (8) only, the CAP approach
shall achieve the optimal performance in estimating \(\mathbf{\theta}\) and the model coefficients, as the method is likelihood-based and the required assumptions are satisfied. The proposed approach yields comparable performance to the CAP-Med approach. As the number of observations within each unit increases, the performance of GMed in estimating \(\mathbf{\theta}\) improves more significantly compared to the CAP-Med approach. In addition, the GMed approach performs consistently better in estimating \(\mathbf{\theta}\) in the second mediation component (D4). This suggests that incorporating the outcome information in the estimation procedure helps improve performance. To evaluate the finite sample performance of the GMed approach, Figure 2 presents the performance with \(p=10\) and various sample size combinations. From the figure, as both the number of units and the number of observations within each unit increase, the similarity of \(\hat{\mathbf{\theta}}\) to the truth converges to one and the bias and mean squared error (MSE) of the estimated average indirect effect converge to zero.
Simulation (2) aims to evaluate the robustness of GMed to model misspecification. Under the current setting, it also examines the robustness of GMed to the existence of an additive unmeasured mediator-outcome confounding not induced by the exposure. From Table 1, when the models are misspecified, the projections are still correctly identified, though the estimate of the average indirect effect is biased. This demonstrates the robustness of the GMed approach in estimating the relevant mediation components. Utilizing this property, Section C.1 of the supplementary materials suggests an approach for sensitivity analysis.
## 4 Application
We apply the proposed approach to data acquired from the National Consortium on Alcohol and NeuroDevelopment in Adolescence (NCANDA). The consortium aims to identify the effects of alcohol use on the developing adolescent brain. A core battery of measurements, including structural and functional brain scans and cognitive testing, has been developed. In adolescence, the sex difference in cognitive behaviors has been identified in different functional domains, including spatial, verbal, and math abilities, social recognition, and so on (Lauer et al., 2019; Meinhardt-Injac et al., 2020; Esnaola et al., 2020). This difference has been shown to be partially mediated by brain functional connectivity captured by resting-state fMRI (Alarcon et al., 2018). In this study, we aim to identify the brain subnetwork within which functional connectivity mediates the sex difference in cognition. To exclude the impact of alcohol consumption, \(n=621\) subjects (312 males and 309 females) aged between 12 and 22 without excessive drinking are analyzed. Among these subjects,
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Case} & \multirow{2}{*}{\(p\)} & \multirow{2}{*}{\((n,T)\)} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{\(\hat{\mathbf{\theta}}\)} & \multicolumn{2}{c}{\(\hat{\tau}_{\text{AIE}}\)} \\ \cline{5-8} & & & & \(|\langle\hat{\mathbf{\theta}},\mathbf{\pi}_{j}\rangle|\) (SE) & Bias & MSE \\ \hline \multirow{8}{*}{10} & \multirow{8}{*}{\((500,100)\)} & \multirow{2}{*}{GMed} & D2 & 0.879 (0.113) & \(-0.677\) & 0.460 \\ & & & D4 & 0.929 (0.140) & \(-0.653\) & 0.428 \\ \cline{3-8} & & \multirow{2}{*}{CAP-Med} & D2 & 0.886 (0.138) & \(-0.664\) & 0.443 \\ & & & D4 & 0.905 (0.120) & \(-0.635\) & 0.406 \\ \cline{2-8} & & \multirow{2}{*}{GMed} & D2 & 0.962 (0.047) & \(-0.292\) & 0.088 \\ & & & D4 & 0.977 (0.096) & \(-0.276\) & 0.080 \\ \cline{3-8} & & \multirow{2}{*}{CAP-Med} & D2 & 0.911 (0.134) & \(-0.243\) & 0.067 \\ & & & D4 & 0.919 (0.121) & \(-0.129\) & 0.038 \\ \cline{3-8} & & \multirow{2}{*}{GMed} & D2 & 0.841 (0.101) & \(-0.665\) & 0.444 \\ & & & D4 & 0.910 (0.142) & \(-0.659\) & 0.436 \\ \cline{3-8} & & \multirow{2}{*}{CAP-Med} & D2 & 0.841 (0.104) & \(-0.660\) & 0.438 \\ & & & D4 & 0.898 (0.115) & \(-0.632\) & 0.402 \\ \cline{3-8} & & \multirow{2}{*}{GMed} & D2 & 0.915 (0.095) & \(-0.289\) & 0.087 \\ & & & D4 & 0.985 (0.073) & \(-0.277\) & 0.080 \\ \cline{3-8} & & \multirow{2}{*}{CAP-Med} & D2 & 0.905 (0.109) & \(-0.253\) & 0.071 \\ & & & D4 & 0.904 (0.137) & \(-0.129\) & 0.041 \\ \hline \multirow{8}{*}{(2)} & \multirow{8}{*}{\((500,100)\)} & \multirow{2}{*}{GMed} & D2 & 0.993 (0.005) & \(-0.669\) & 0.449 \\ & & & D4 & 0.999 (0.001) & \(-0.674\) & 0.456 \\ \cline{3-8} & & \multirow{2}{*}{GMed-Mis} & D2 & 0.993 (0.005) & \(-1.319\) & 1.863 \\ \cline{3-8} & & & D4 & 0.999 (0.001) & \(-0.114\) & 0.014 \\ \cline{3-8} & & \multirow{2}{*}{GMed} & D2 & 0.999 (0.001) & \(-0.291\) & 0.087 \\ \cline{3-8} & & & D4 & 1.000 (0.000) & \(-0.279\) & 0.081 \\ \cline{3-8} & & \multirow{2}{*}{GMed-Mis} & D2 & 0.999 (0.001) & \(-1.370\) & 2.076 \\ \cline{3-8} & & & D4 & 1.000 (0.000) & \(-0.002\) & 0.001 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Performance of estimating target mediation components (\(\mathbf{\theta}\)) and corresponding average indirect effect (\(\tau_{\text{AIE}}\)) over 200 replicates in the simulation study. SE: standard error; MSE: mean squared error.
Figure 2: Performance of estimating target mediation components (\(\mathbf{\theta}\)) and corresponding average indirect effect (\(\tau_{\text{AIE}}\)) over 200 replicates in the simulation study as the sample size \((n,T)\) varies with \(p=10\). MSE: mean squared error.
a significantly lower median response time in the motor speed test is observed in males compared to females after adjusting for age (ATE = \(-0.324\), \(p\)-value \(<0.001\)), where the motor task measures sensorimotor ability via having the participant use the mouse to click on a shrinking box when it moves to a new position on the screen. We further apply the proposed approach for mediation analysis, where sex is the binary exposure (\(X\), \(X=1\) for male), resting-state fMRI data are the mediator (\(\mathbf{M}\)), the \(z\)-score of median response time for correct responses of the motor speed task is the outcome (\(Y\)), and age is the confounding factor (\(W\)). After preprocessing, fMRI time courses are extracted from \(p=75\) brain regions (\(60\) cortical and \(15\) subcortical regions) spanning the whole brain using the Harvard-Oxford Atlas in FSL (Smith et al., 2004). These regions are grouped into \(10\) functional modules, which will be used for an _ad hoc_ procedure of sparsifying the loading profile using the fused lasso (Tibshirani et al., 2005) to impose local smoothness and constancy within each module. To remove the temporal dependence in the time courses, a subsample is taken with an effective sample size of \(T_{i}=T=125\) and denote the subsampled data as \(\mathbf{M}_{i}\in\mathbb{R}^{T_{i}\times p}\), for \(i=1,\ldots,n\).
Using the DfD \(\leq 2\) criterion in (17), the proposed approach identifies four components. Table 2 presents the estimated AIE, as well as the \(\alpha\) and \(\beta\) coefficients. The confidence intervals are obtained from \(500\) bootstrap samples. Among these identified components, the third component (\(M_{3}\)) shows a significantly positive AIE with both \(\alpha\) and \(\beta\) negative. Figure 3 presents the sparsified loading profile of \(\mathbf{\theta}_{3}\) and the corresponding brain map. Section C.2 of the supplementary materials presents the results from the CAP-Med approach introduced in Section 3. The identified component with a significant mediation effect is consistent with \(M_{3}\) identified by the proposed approach. In \(M_{3}\), the four regions with a non-zero loading are all in the limbic-system network, including the temporal pole (left and right) and the medial orbitofrontal cortex (left and right). Compared to females, (weighted) functional connectivity within this network is lower in males, while this lower functional connectivity increases the reaction time. The temporal pole has been found to be associated with high-level cognitive functions (Herlin et al., 2021). Though no direct relation to reaction time has been established, an indirect influence has been hypothesized through its contribution to processes involving decision-making, response selection, and emotion evaluation (Pessoa, 2010). One of the primary functions of the medial orbitofrontal cortex is to integrate emotional reaction with sensory and/or contextual stimuli, playing a role in reward processing and value-based decision making, which allows individuals to make adaptive responses to stimuli based on emotional significance (Rudebeck and Murray, 2014). Thus, indirectly, activation in the area may prolong the reaction time. Regional
sex differences of the temporal and frontal cortices have been observed in the developing brain using multiple imaging modalities. Sex difference in the brain was also suggested to be relevant to the symptomatic sex difference in psychiatric disorders (Kaczkurkin et al., 2019). Via a mediation analysis, the proposed approach offers a way of articulating the underlying mechanism.
## 5 Discussion
This study introduces a mediation analysis framework when the mediator is a graph. A Gaussian covariance graph model is assumed for graph representation. Causal estimands and assumptions are discussed under this representation. With a covariance matrix as the mediator, parametric mediation models are considered based on matrix decomposition. Assuming Gaussian random errors, likelihood-based estimators are introduced to simultaneously identify the decomposition and causal parameters. An efficient computational algorithm is proposed and the asymptotic properties
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline & \multicolumn{2}{c}{AIE} & \multicolumn{4}{c}{\(\alpha\)} & \multicolumn{2}{c}{\(\beta\)} \\ \cline{2-10} & \multicolumn{1}{c}{Est. (SE)} & \multicolumn{1}{c}{\(p\)-value} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{Est. (SE)} & \multicolumn{1}{c}{\(p\)-value} & \multicolumn{1}{c}{95\% CI} & \multicolumn{1}{c}{Est. (SE)} & \multicolumn{1}{c}{\(p\)-value} & \multicolumn{1}{c}{95\% CI} \\ \hline \(M_{1}\) & \(-0.018\) (\(0.031\)) & \(0.574\) & \((-0.079,0.044)\) & \(-0.325\) (\(0.069\)) & \(<0.001\) & \((-0.460,-0.190)\) & \(0.053\) (\(0.094\)) & \(0.572\) & \((-0.131,0.237)\) \\ \(M_{2}\) & \(-0.014\) (\(0.032\)) & \(0.676\) & \((-0.077,0.050)\) & \(-0.286\) (\(0.065\)) & \(<0.001\) & \((-0.414,-0.157)\) & \(0.051\) (\(0.110\)) & \(0.646\) & \((-0.166,0.267)\) \\ \(M_{3}\) & \(0.066\) (\(0.027\)) & \(0.014\) & \((0.013,0.119)\) & \(-0.211\) (\(0.064\)) & \(<0.001\) & \((-0.336,-0.086)\) & \(-0.318\) (\(0.094\)) & \(0.001\) & \((-0.503,-0.134)\) \\ \(M_{4}\) & \(-0.035\) (\(0.033\)) & \(0.287\) & \((-0.100,0.030)\) & \(0.256\) (\(0.042\)) & \(<0.001\) & \((0.173,0.338)\) & \(-0.138\) (\(0.127\)) & \(0.279\) & \((-0.388,0.112)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Estimated average indirect effect (AIE) and \(\alpha\) and \(\beta\) coefficient of the identified mediator components in the NCANDA dataset. Confidence intervals are constructed from 500 bootstrap samples. Est.: estimate; SE: standard error; CI: confidence interval.
Figure 3: The sparsified loading profile and brain map of the component with a significant AIE (\(M_{3}\)).
of the estimators are investigated. Via simulation studies, the performance of the proposed approach is evaluated. Applying to a resting-state fMRI study, a brain network is identified within which functional connectivity mediates the sex difference in the performance of a motor task.
In causal mediation analysis, an essential while untestable assumption is the assumption of no unmeasured mediator-outcome confounding. A sensitivity analysis is usually conducted to justify the validity of the conclusion to this assumption. For parametric approaches, one type of commonly used approach is to parametrize the confounding effect to evaluate the causal effects under various values, such as the one proposed in Imai et al. (2010). Using simulation studies, it is demonstrated that the proposed approach is robust to the existence of unmeasured mediator-outcome confounding in identifying the mediation component, that is in estimating the projection vector \(\mathbf{\theta}\). With given \(\mathbf{\theta}\), one can employ the approach in Imai et al. (2010) for sensitivity analysis.
The asymptotic consistency of the proposed estimator requires the common diagonalization assumption on the covariance matrices. Via simulation studies, Zhao et al. (2021) pointed out that this assumption can be relaxed to partial common diagonalization. As this study also introduces a likelihood-based procedure, it is expected that the proposed approach is robust to this relaxation to partial common diagonalization.
Considering a graph mediator, under the Gaussian covariance graph model, this study assumes the number of nodes in the graph is fixed and low dimensional. The sample covariance matrices are thus well-conditioned and a likelihood-based approach is introduced to estimate model parameters. In many practical settings, for example, in voxel-level fMRI analysis, data dimension can be even higher than the number of fMRI data points. A well-conditioned estimator of the covariance matrix is required and we leave the introduction of such an estimator and the study of theoretical results as one of future directions. As discussed in Section 2.3, inference on projection vectors is not straightforward and requires rigorous theoretical and numerical investigations, which we leave to future research.
## Appendix
This Appendix collects the technical proof of the theorems in the main text, additional technical details, and additional data analysis results.
## Appendix A Theory and Proof
### Proof of Theorem 1
Proof.: As discussed in Section 2.1, the potential outcome of \(Y\) under a multiple-worlds model is expressed as
\[Y(x,\mathcal{G}(x^{\prime}))=\gamma x+\alpha\beta x^{\prime}+(\gamma_{0}+\alpha_ {0}\beta+\beta\mathbf{W}^{\top}\boldsymbol{\phi}_{1}+\mathbf{W}^{\top} \boldsymbol{\phi}_{2})+\beta\eta+\epsilon.\]
\[\tau_{\text{ATE}} = \mathbb{E}\left\{Y(1,\mathcal{G}(1))-Y(0,\mathcal{G}(0))\right\}\] \[= \left\{\gamma+\alpha\beta+(\gamma_{0}+\alpha_{0}\beta+\beta \mathbf{W}^{\top}\boldsymbol{\phi}_{1}+\mathbf{W}^{\top}\boldsymbol{\phi}_{2} )\right\}-\left\{(\gamma_{0}+\alpha_{0}\beta+\beta\mathbf{W}^{\top} \boldsymbol{\phi}_{1}+\mathbf{W}^{\top}\boldsymbol{\phi}_{2})\right\}\] \[= \gamma+\alpha\beta.\]
\[\tau_{\text{AIE}}(x) = \mathbb{E}\left\{Y(x,\mathcal{G}(1))-Y(x,\mathcal{G}(0))\right\}\] \[= \left\{\gamma x+\alpha\beta+(\gamma_{0}+\alpha_{0}\beta+\beta\mathbf{W}^{\top}\boldsymbol{\phi}_{1}+\mathbf{W}^{\top}\boldsymbol{\phi}_{2})\right\}-\left\{\gamma x+(\gamma_{0}+\alpha_{0}\beta+\beta\mathbf{W}^{\top}\boldsymbol{\phi}_{1}+\mathbf{W}^{\top}\boldsymbol{\phi}_{2})\right\}\] \[= \alpha\beta.\]
\[\tau_{\text{ADE}}(x) = \mathbb{E}\left\{Y(1,\mathcal{G}(x))-Y(0,\mathcal{G}(x))\right\}\] \[= \left\{\gamma+\alpha\beta x+(\gamma_{0}+\alpha_{0}\beta+\beta \mathbf{W}^{\top}\boldsymbol{\phi}_{1}+\mathbf{W}^{\top}\boldsymbol{\phi}_{2} )\right\}-\left\{\alpha\beta x+(\gamma_{0}+\alpha_{0}\beta+\beta\mathbf{W}^{ \top}\boldsymbol{\phi}_{1}+\mathbf{W}^{\top}\boldsymbol{\phi}_{2})\right\}\] \[= \gamma.\]
The above proves Theorem 1.
### Proof of Lemma 1
From the lemma assumption, \(\hat{\boldsymbol{\Sigma}}_{i}\) is a consistent estimator of \(\boldsymbol{\Sigma}_{i}\). With given \(\boldsymbol{\theta}\), the likelihood function is a continuous function of \(\hat{\boldsymbol{\Sigma}}_{i}\) (for \(i=1,\ldots,n\)). Thus, \(\hat{\ell}\) converges to \(\ell\) as \(\min_{i}T_{i}\to\infty\).
### Proof of Proposition 1
Assumption B1 assumes a low-dimensional scenario and the dimension of the mediator is fixed. Lemma 1 demonstrates the consistency of the approximated likelihood function used for optimization. Asymptotically, the proposed estimator is likelihood-based. Thus, under the imposed assumptions, the consistency of the estimator follows.
## Appendix B Details of Algorithm 1
\[\hat{\ell} = \sum_{i=1}^{n}\frac{T_{i}}{2}\left\{(\alpha_{0i}+\mathbf{X}_{i}^{ \top}\boldsymbol{\alpha})+(\boldsymbol{\theta}^{\top}\mathbf{S}_{i}\boldsymbol {\theta})\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top}\boldsymbol{\alpha})\right\}\] \[+\sum_{i=1}^{n}\frac{1}{2}\left\{\log\sigma^{2}+\frac{1}{\sigma^{ 2}}(Y_{i}-\gamma_{0}-\mathbf{X}_{i}^{\top}\boldsymbol{\gamma}-\beta\log( \boldsymbol{\theta}^{\top}\hat{\boldsymbol{\Sigma}}_{i}\boldsymbol{\theta}))^ {2}\right\}+\sum_{i=1}^{n}\frac{1}{2}\left\{\log\pi^{2}+\frac{1}{\pi^{2}}( \alpha_{0i}-\alpha_{0})^{2}\right\}.\]
First considering \(\boldsymbol{\alpha}\) and \(\alpha_{0i}\) (\(i=1,\ldots,n\)), analytical solution is not available, thus the Newton-Raphson algorithm is employed. For \(\boldsymbol{\alpha}\),
\[\frac{\partial\hat{\ell}}{\partial\boldsymbol{\alpha}}=\frac{1}{2}\sum_{i=1}^ {n}T_{i}\left\{1-(\boldsymbol{\theta}^{\top}\mathbf{S}_{i}\boldsymbol{\theta} )\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top}\boldsymbol{\alpha})\right\}\mathbf{ X}_{i},\]
\[\frac{\partial^{2}\hat{\ell}}{\partial\boldsymbol{\alpha}\partial\boldsymbol{ \alpha}^{\top}}=\frac{1}{2}\sum_{i=1}^{n}T_{i}(\boldsymbol{\theta}^{\top} \mathbf{S}_{i}\boldsymbol{\theta})\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top} \boldsymbol{\alpha})\mathbf{X}_{i}\mathbf{X}_{i}^{\top},\]
\[\boldsymbol{\alpha}^{(s+1)}=\boldsymbol{\alpha}^{(s)}-\left(\left.\frac{ \partial^{2}\hat{\ell}}{\partial\boldsymbol{\alpha}\partial\boldsymbol{ \alpha}^{\top}}\right|_{\boldsymbol{\alpha}^{(s)}}\right)^{-1}\left(\left. \frac{\partial\hat{\ell}}{\partial\boldsymbol{\alpha}}\right|_{\boldsymbol{ \alpha}^{(s)}}\right).\]
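In code, one Newton-Raphson step for \(\boldsymbol{\alpha}\) assembles the gradient and Hessian above directly (a sketch; shapes follow the notation of the main text).

```python
import numpy as np

def newton_step_alpha(alpha, a0i, theta, S, X, T):
    """One Newton-Raphson update for alpha from the gradient and
    Hessian of ell-hat given above."""
    d = X.shape[1]
    grad, hess = np.zeros(d), np.zeros((d, d))
    for i in range(X.shape[0]):
        e = (theta @ S[i] @ theta) * np.exp(-a0i[i] - X[i] @ alpha)
        grad += 0.5 * T[i] * (1.0 - e) * X[i]
        hess += 0.5 * T[i] * e * np.outer(X[i], X[i])
    return alpha - np.linalg.solve(hess, grad)
```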
For \(\alpha_{0i}\) (\(i=1,\ldots,n\)),
\[\frac{\partial\hat{\ell}}{\partial\alpha_{0i}}=\frac{1}{2}\left[T_{i}\left\{1-(\boldsymbol{\theta}^{\top}\mathbf{S}_{i}\boldsymbol{\theta})\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top}\boldsymbol{\alpha})\right\}+\frac{2}{\pi^{2}}(\alpha_{0i}-\alpha_{0})\right],\]
\[\frac{\partial^{2}\hat{\ell}}{\partial\alpha_{0i}^{2}}=\frac{1}{2}\left\{T_{i}(\boldsymbol{\theta}^{\top}\mathbf{S}_{i}\boldsymbol{\theta})\exp(-\alpha_{0i}-\mathbf{X}_{i}^{\top}\boldsymbol{\alpha})+\frac{2}{\pi^{2}}\right\},\]
\[\alpha_{0i}^{(s+1)}=\alpha_{0i}^{(s)}-\left(\left.\frac{\partial^{2}\hat{\ell }}{\partial\alpha_{0i}^{2}}\right|_{\alpha_{0i}^{(s)}}\right)^{-1}\left(\left. \frac{\partial\hat{\ell}}{\partial\alpha_{0i}}\right|_{\alpha_{0i}^{(s)}}\right)\]
For \(\alpha_{0}\) and \(\pi\), explicit forms for the update are available,
\[\alpha_{0}^{(s+1)}=\frac{1}{n}\sum_{i=1}^{n}\alpha_{0i}^{(s+1)},\quad\pi^{2(s+1)}=\frac{1}{n}\sum_{i=1}^{n}\left(\alpha_{0i}^{(s+1)}-\alpha_{0}^{(s+1)}\right)^{2}.\]
For \((\gamma_{0},\boldsymbol{\gamma},\beta)\), the analytical solution for update can be found jointly. Let
\[\mathbf{Z}_{i}=\begin{pmatrix}1\\ \mathbf{X}_{i}\\ \log(\boldsymbol{\theta}^{\top}\hat{\boldsymbol{\Sigma}}_{i}\boldsymbol{ \theta})\end{pmatrix},\quad\boldsymbol{\mu}=\begin{pmatrix}\gamma_{0}\\ \boldsymbol{\gamma}\\ \beta\end{pmatrix},\]
\[\frac{\partial\hat{\ell}}{\partial\boldsymbol{\mu}}=\frac{1}{2}\sum_{i=1}^{n }\frac{2}{\sigma^{2}}(Y_{i}-\mathbf{Z}_{i}^{\top}\boldsymbol{\mu})(-\mathbf{ Z}_{i})=\mathbf{0},\]
\[\Rightarrow\quad\boldsymbol{\mu}^{(s+1)}=\left(\sum_{i=1}^{n}\mathbf{Z}_{i}^ {(s)}\mathbf{Z}_{i}^{(s)\top}\right)^{-1}\left(\sum_{i=1}^{n}Y_{i}\mathbf{Z}_{i }^{(s)}\right)=(\mathbf{Z}^{(s)\top}\mathbf{Z}^{(s)})^{-1}\mathbf{Z}^{(s) \top}\mathbf{Y},\]
where \(\mathbf{Z}=\left(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\right)^{\top}\). For \(\sigma\),
\[\sigma^{2(s+1)}=\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\mathbf{Z}_{i}^{(s)\top}\boldsymbol {\mu}^{(s+1)})^{2}.\]
For \(\boldsymbol{\theta}\), to find the solution to (15), it is equivalent to find the eigenvectors and eigenvalues of \(\mathbf{A}\) with respect to \(\mathbf{H}\). We first assume \(\boldsymbol{\theta}_{0}\) is a solution eigenvector with unit norm, \(\|\boldsymbol{\theta}_{0}\|_{2}=1\). Since \(\mathbf{H}\) is positive definite, let \(\boldsymbol{\theta}=\mathbf{H}^{-1/2}\boldsymbol{\theta}_{0}\), then
\[\boldsymbol{\theta}^{\top}\mathbf{H}\boldsymbol{\theta}=\boldsymbol{\theta}_{ 0}^{\top}\mathbf{H}^{-1/2}\mathbf{H}\mathbf{H}^{-1/2}\boldsymbol{\theta}_{0} =\boldsymbol{\theta}_{0}^{\top}\boldsymbol{\theta}_{0}=1,\]
which satisfies the constraint condition. Replace \(\boldsymbol{\theta}\) with \(\boldsymbol{\theta}=\mathbf{H}^{-1/2}\boldsymbol{\theta}_{0}\) in (15),
\[\mathbf{A}\mathbf{H}^{-1/2}\boldsymbol{\theta}_{0}-\lambda\mathbf{H}\mathbf{H} ^{-1/2}\boldsymbol{\theta}_{0}=\mathbf{0}.\]
\[\Rightarrow\quad\mathbf{H}^{-1/2}\mathbf{A}\mathbf{H}^{-1/2}\boldsymbol{ \theta}_{0}=\lambda\boldsymbol{\theta}_{0}.\]
Therefore, \(\boldsymbol{\theta}_{0}\) is the eigenvector of matrix \(\mathbf{H}^{-1/2}\mathbf{A}\mathbf{H}^{-1/2}\) and \(\lambda\) is the corresponding eigenvalue. Then, \(\boldsymbol{\theta}=\mathbf{H}^{-1/2}\boldsymbol{\theta}_{0}\) is the update for \(\boldsymbol{\theta}\).
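Numerically, this update amounts to one symmetric eigendecomposition; a minimal NumPy sketch (the rule for selecting which eigenvector to retain follows the main text and is omitted here):

```python
import numpy as np

def update_theta(A, H):
    """Solve A theta = lambda * H theta under theta^T H theta = 1.

    Returns all eigenpairs; each column of Theta is a candidate update
    theta = H^{-1/2} theta_0, as derived above.
    """
    w, U = np.linalg.eigh(H)                          # H positive definite
    H_inv_sqrt = U @ np.diag(1.0 / np.sqrt(w)) @ U.T  # symmetric H^{-1/2}
    M = H_inv_sqrt @ A @ H_inv_sqrt
    lam, Theta0 = np.linalg.eigh(M)                   # orthonormal theta_0's
    Theta = H_inv_sqrt @ Theta0                       # each satisfies theta^T H theta = 1
    return lam, Theta
```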
## Appendix C Additional Results of the NCANDA Study
### Sensitivity Analysis
As discussed in Section 3, the proposed approach is robust to the existence of additive unmeasured mediator-outcome confounding in identifying the projection vectors. Based on this property, a sensitivity analysis introduced in Imai et al. (2010) is implemented, in which, for additive unmeasured mediator-outcome confounding, the correlation between the model errors is used as the sensitivity parameter. In the NCANDA application study, \(M_{3}\) shows a significant mediation effect. With the estimated \(\boldsymbol{\theta}\) of \(M_{3}\), Figure C.1 presents the sensitivity analysis plot over the range of the sensitivity parameter (\(\rho\)). In the figure, the 95% confidence intervals are constructed from 500 bootstrap samples. The 95% confidence interval covers zero for \(\rho\) values between \(-0.20\) and \(-0.05\). When \(\rho>-0.05\), the average indirect effect is positive and significant; when \(\rho<-0.20\), the average indirect effect is negative and significant.
### Results from the CAP-Med approach
We also apply the CAP-Med approach introduced in Section 3 to the NCANDA data. The CAP step identifies 8 components using the criterion of DfD \(\leq 2\). After running the mediation analysis,
the seventh component (C7) shows a significantly positive average indirect effect with \(\text{AIE}=0.038\) and \(95\%\) CI \((0.004,0.078)\). Figure C.2 presents the sparsified loading profile of this component and the corresponding regions in the brain map. Compared to the profile of \(M_{3}\) identified by the proposed approach, GMed, a high similarity between the two is observed, with \(|\langle\hat{\boldsymbol{\theta}}_{3}^{(\text{GMed})},\hat{\boldsymbol{ \theta}}_{7}^{(\text{CAP-Med})}\rangle|=0.864\). For the remaining components identified by GMed, corresponding components from CAP-Med with high similarity can be found: \(|\langle\hat{\boldsymbol{\theta}}_{1}^{(\text{GMed})},\hat{\boldsymbol{ \theta}}_{8}^{(\text{CAP-Med})}\rangle|=0.805\), \(|\langle\hat{\boldsymbol{\theta}}_{2}^{(\text{GMed})},\hat{\boldsymbol{ \theta}}_{6}^{(\text{CAP-Med})}\rangle|=0.772\), and \(|\langle\hat{\boldsymbol{\theta}}_{4}^{(\text{GMed})},\hat{\boldsymbol{ \theta}}_{2}^{(\text{CAP-Med})}\rangle|=0.862\). Though both approaches identify these consistent components, the proposed approach offers an integrated way of targeting the ones demonstrating a mediation effect, rather than performing a two-step procedure.
|
2303.01519 | Emergence of Spacetime from Fluctuations | We use a result of Hawking and Gilkey to define a Euclidean path integral of
gravity and matter which has the special property of being independent of the
choice of basis in the space of fields. This property allows the path integral
to describe also physical regimes that do not admit position bases. These
physical regimes are pre-geometric in the sense that they do not admit a
mathematical representation of the physical degrees of freedom in terms of
fields that live on a spacetime. In regimes in which a spacetime representation
does emerge, the geometric properties of the emergent spacetime, such as its
dimension and volume, depend on the balance of fermionic pressure and bosonic
and gravitational pull. That balance depends, at any given energy scale, on the
number of bosonic and fermionic species that contribute, which in turn depends
on their masses. This yields an explicit mechanism by which the effective
spacetime dimension can depend on the energy scale. | Marcus Reitz, Barbara Šoda, Achim Kempf | 2023-03-02T19:00:01Z | http://arxiv.org/abs/2303.01519v2 | # Emergence of Spacetime from Fluctuations
###### Abstract
We use a result of Hawking and Gilkey to define a euclidean path integral of gravity and matter fields in the eigenbasis of the wave operators. On one hand, working in the eigenbasis of the wave operators means working exclusively with geometric invariants and this avoids the need to mod out the diffeomorphism group and makes it possible to carry out the path integral explicitly. On the other hand, working in the eigenbasis of the wave operators does not enforce the existence of a coordinate basis. As a consequence, this path integral also describes physical setups that are pre-geometric in the sense that they do not admit a mathematical representation in terms of fields on a spacetime. We focus on the regime in which a representation in terms of a spacetime and matter fields emerges. We find that the geometric properties of the emergent spacetime, such as its volume and number of dimensions, depend on the energy scale considered and on the balance of bosonic and fermionic species.
While there are several promising approaches to quantum gravity, see, e.g., [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15] and its relation to thermodynamics, see, e.g., [16; 17; 18; 19] the question has not been settled how the usual setup that consists of a spacetime and matter fields emerges from a more general unifying theory - a theory that also describes a high energy regime that does not possess a traditional description in terms of geometry and matter.
Here, to investigate this phenomenon of emergence, we will work with the euclidean signature path integral [20; 21] and we consider the case of a natural ultraviolet cutoff, \(\bar{\Lambda}\), on the spectrum of the Laplace operator \(\Delta\), for example, at the Planck scale, [22]. On one hand, such a UV cutoff possesses a direct information-theoretic interpretation as a covariant bandlimitation [23; 24; 25; 26; 27] which allows one to view spacetime as simultaneously discrete and continuous [28] in the sense of Shannon's sampling theorem [29; 30].
On the other hand, and this will be our focus here, a result by Gilkey and Hawking [31; 32] then relates the Einstein-Hilbert action to the dimension, \(N\), of the Hilbert space of scalar fields that live on the spacetime manifold:
\[N=\frac{1}{16\pi^{2}}\int d^{4}x\sqrt{g}\left(\frac{\bar{\Lambda}^{2}}{2}+ \frac{\bar{\Lambda}}{6}R+O(R^{2})\right) \tag{1}\]
This allows us to express the gravitational action, including higher order corrections, as
\[S_{g}=\mu N=\mu\operatorname{Tr}(\mathbb{1}) \tag{2}\]
with \(\mu=\frac{6\pi}{\bar{\Lambda}}\). Notice that \(S_{g}\) is here expressed as a trace, i.e., basis independently. Using the Laplace operator, we can also write the action of say \(N_{b}\) free bosonic field species basis independently:
\[S_{b}=\frac{1}{2}\sum_{i=1}^{N_{b}}\operatorname{Tr}\left((\Delta+m^{2})| \phi\rangle_{i}\langle\phi|_{i}\right) \tag{3}\]
For simplicity, we will here choose all species to possess the same nonzero mass, \(m\). The bra-ket notation is here used to basis-independently denote fields in the Hilbert space of fields that we will sum over in the path integral. While the conventional representation of \(S_{b}\) in coordinates is obtained by performing the trace in a position basis, we now perform the trace in the eigenbasis of \(\Delta\), to obtain:
\[S_{b}=\sum_{i=1}^{N_{b}}\sum_{n=1}^{N}\lambda_{n}(\phi_{n}^{i})^{2}. \tag{4}\]
Here, the \(\{\lambda_{n}\},\ n\in\{1,...,N\}\) are the eigenvalues of the wave operator \(\Delta+m^{2}\). Since we are working with euclidean signature, these eigenvalues are real and positive. The \(\phi_{n}^{i}\) are the coefficients of \(|\phi^{i}\rangle\) in the eigenbasis of \(\Delta\).
The Dirac action of fermionic fields is usually represented in a coordinate basis:
\[S_{f}=\int d^{4}x\sqrt{g}\bar{\Psi}\left(i\Gamma^{\mu}D_{\mu}\right)\Psi \tag{5}\]
Instead, we use a Dirac action expressed in the eigenbasis of the Dirac operator, reading, for \(N_{f}\) fermionic species:
\[S_{f}=\sum_{i=1}^{N_{f}}\sum_{n=1}^{N}\sqrt{\lambda_{n}}\,\theta_{n}^{i}\bar{ \theta}_{n}^{i} \tag{6}\]
Here, for simplicity, the fermionic wave operator is taken to be the positive square root of the bosonic wave operator. We will leave the case of general boson and
fermion masses for a future study. The \(\theta_{n}^{i}\), \(\bar{\theta}_{n}^{i}\) are the Grassmann-valued components of the Dirac field \(\psi^{i}\) of the \(i\)'th fermionic species. In the eigenbasis of the wave operators, our total action, \(S\), of gravity and matter therefore reads
\[S=S_{g}+S_{b}+S_{f} \tag{7}\]
with \(S_{g}\), \(S_{b}\), and \(S_{f}\) given by Eqs. (2), (4), and (6).
**Rationale for working in the eigenbasis of the wave operators.** We choose to work in the eigenbasis of the wave operators because, in this basis, the action is expressed entirely in terms of diffeomorphism invariant quantities. Therefore, we avoid having to path integrate over the metric and then facing the problem of modding out the diffeomorphism group.
Instead, our path integral will only sum and integrate over quantities that are diffeomorphism invariant: the eigenvalues, \(\{\lambda_{i}\}_{i=1}^{N}\), of the wave operators, the dimension, \(N\), of the Hilbert space of fields that is to be path integrated over and the coefficients \(\phi_{i}^{j}\), \(\theta_{i}^{j}\) of those bosonic and fermionic fields.
These quantities are invariant not only under the diffeomorphism group but also under the even larger full unitary group of the Hilbert space (coordinate transformations are unitary but not all unitaries are coordinate transformations).
In contrast, a traditional path integral over the metric while modding out the diffeomorphism group should amount to a path integral over an (elusive) complete set of independent diffeomorphism-invariant quantities [6; 32].
Therefore, our path integral will describe new physics and will not be a mere reformulation of the coordinate representation of the conventional gravitational path integral with free matter fields. In particular, as we will now show, our path integral does not only describe the traditional setup of matter fields living on a spacetime manifold. It also describes more general physical setups that may be called pre-geometric in the sense that they are not mathematically representable as a spacetime and matter fields. We will show that the traditional picture of spacetimes populated by matter fields emerges in a certain regime.
**The path integral.** The path integral, or partition function, \(Z\), with \(\Lambda:=\bar{\Lambda}+m^{2}\) now reads:
\[Z=\sum_{N=1}^{\infty}\int_{m^{2}}^{\Lambda}\mathcal{D}\lambda\int\mathcal{D} \phi\int\mathcal{D}\theta\mathcal{D}\bar{\theta}e^{-\beta S}\frac{\Lambda^{N( \frac{N_{f}}{2}-1)}}{(N-1)!}. \tag{8}\]
Regarding the range of the eigenvalues, recall that the Laplacian on a compact Riemannian manifold which does not possess a boundary must have zero as an eigenvalue. Simply integrating the eigenvalues from zero to the cutoff would make the probability of a zero eigenvalue, and therefore the probability for a spacetime without a boundary, vanish. In order to allow for boundaryless spacetimes, we will cover both cases, where we either do or do not enforce the existence of a zero eigenvalue.
First, we consider the case where zero as an eigenvalue of the Laplacian is enforced, i.e., we set the lowest eigenvalue of the bosonic wave operator to \(\lambda_{1}=m^{2}\). We then integrate over the Laplacian's remaining \((N-1)\) eigenvalues \(\{\lambda_{n}\}\). For simplicity, we perform these integrals without ordering the eigenvalues, which we then remedy with the factor \(((N-1)!)^{-1}\), to prevent the overcounting of spectra. After integrating out the fermion and boson fields as well as the Laplacian's spectrum \(\{\lambda_{n}\}\), and after summing over \(N\), the path integral evaluates to:
\[Z=C\ m^{d-2}\text{exp}\left[2\ C\ \frac{\Lambda^{d/2}-m^{d}}{d}\right]. \tag{9}\]
Here, we defined \(d:=2-N_{b}+N_{f}\) and \(\beta_{max}:=\frac{2N_{f}-N_{b}}{2\mu}\). Also, \(C\) is defined as:
\[C:=(2\pi)^{\frac{N_{b}}{2}}\frac{e^{-\beta\mu}}{\Lambda^{1-\frac{N_{f}}{2}}} \beta^{\mu\beta_{max}} \tag{10}\]
For the special case \(d=0\) we obtain:
\[Z=Cm^{-2}\text{exp}\left[C\log\frac{\Lambda}{m^{2}}\right]. \tag{11}\]
For the alternative case where we do not enforce that zero is an eigenvalue of the Laplacian, we can calculate the partition function similarly. We then obtain for the partition function:
\[Z=\text{exp}\left[2C\frac{\Lambda^{d/2}-m^{d}}{d}\right]. \tag{12}\]
For the special case \(d=0\) we then obtain:
\[Z=\text{exp}\left[C\log\frac{\Lambda}{m^{2}}\right]. \tag{13}\]
We can now use the partition functions to calculate expectation values of observables.
**Effective dimension of spacetime depends on the balance of bosonic and fermionic matter species.** We begin by showing that the effective dimension of the manifold, as a function of the difference between the numbers of fermion and boson species, \(N_{f}\) and \(N_{b}\), is the value \(d\) defined above:
\[d=N_{f}-N_{b}+2. \tag{14}\]
To see this, we calculate the expected eigenvalue density \(p(\lambda)\). The density of eigenvalues can then be related to the effective manifold dimension \(d\) via Weyl's scaling law known from spectral geometry, see, e.g., [33; 34]. Weyl's scaling law states that for large eigenvalues \(\lambda\), the eigenfunctions become less sensitive to curvature and therefore
the density of eigenvalues becomes determined by the dimension, \(n\), of the Riemannian manifold:
\[\lim_{\lambda\rightarrow\infty}\ \ \rho(\lambda)\propto\lambda^{n/2-1}. \tag{15}\]
Using our path integral, we calculate the probability density \(p(\lambda_{i})\) for eigenvalues:
\[p(\lambda_{i})=\frac{1}{Z}\sum_{N}\int\mathcal{D}^{\prime}\lambda\int\mathcal{D }\phi\int\mathcal{D}\theta\int\mathcal{D}\bar{\theta}e^{-\beta S}, \tag{16}\]
Here, the measure \(\mathcal{D}^{\prime}\lambda\) includes all eigenvalues except \(\lambda_{i}\). Integrating out all fields and variables except for \(\lambda_{i}\) we obtain,
\[p(\lambda_{i})\sim\lambda_{i}^{N_{f}/2-N_{b}/2}. \tag{17}\]
For a large UV cutoff \(\Lambda\), we can compare this scaling of the eigenvalues to Weyl's law in Eq. 15 to find that the effective dimension \(d\) is given by the number of fermionic and bosonic fields in the model, as we anticipated in Eq. (14).
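This identification can be illustrated with a quick numerical check, which is not part of the original analysis: sampling eigenvalues from the density of Eq. (17) and fitting the exponent of Eq. (15) recovers \(d=N_{f}-N_{b}+2\). A sketch with illustrative values:

```python
import numpy as np

rng = np.random.default_rng(0)
Nf, Nb = 30, 28                  # illustrative species counts
k = (Nf - Nb) / 2                # exponent of p(lambda) in Eq. (17)
m2, Lam = 1e-4, 1.0              # lower and upper spectral bounds

# Inverse-CDF sampling from p(lambda) ~ lambda^k on [m2, Lam].
u = rng.random(200_000)
lam = ((1 - u) * m2 ** (k + 1) + u * Lam ** (k + 1)) ** (1 / (k + 1))

# Fit the Weyl exponent n/2 - 1 of Eq. (15) from a log-log histogram.
hist, edges = np.histogram(lam, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
slope = np.polyfit(np.log(centers), np.log(hist), 1)[0]
print("fitted dimension:", 2 * (slope + 1), "; expected d =", Nf - Nb + 2)
```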
**A mechanism for the emergence of spacetime and matter from a pre-geometric high energy regime.** We notice that \(d\) can be zero or negative depending on the number of bosonic and fermionic species. While our calculations can be performed for positive or negative \(d\), a representation of our action in terms of matter fields that live on a curved spacetime can only exist, i.e., a spacetime populated by matter can only emerge, if \(N_{f}\) sufficiently exceeds \(N_{b}\). We further notice that if the bosonic and fermionic species possess nontrivially distributed rest masses then the numbers of bosonic and fermionic fields that effectively contribute to \(N_{f}\) and \(N_{b}\) can be energy dependent. This in turn can make the effective dimension \(d\) energy dependent. By this mechanism, a spacetime populated by matter could emerge at low energies from a pre-geometric high energy theory. In this study we will however only consider the case of equal rest masses.
**Expected dimension of the Hilbert space.** We now calculate the expectation value of the Hilbert space dimension, \(\langle N\rangle\):
\[\langle N\rangle=\frac{-Z^{-1}}{\beta}\frac{\partial Z}{\partial\mu}=1+2C \frac{\Lambda^{d/2}-m^{d}}{d}. \tag{18}\]
For the special case \(N_{f}-N_{b}+2=0\) we have:
\[\langle N\rangle=1+C\log\frac{\Lambda}{m^{2}}. \tag{19}\]
If we do not enforce that zero is an eigenvalue of the Laplacian, the results are similar:
\[\langle N\rangle=2C\frac{\Lambda^{d/2}-m^{d}}{d}, \tag{20}\]
and for the special case \(d=0\) we have:
\[\langle N\rangle=C\log\frac{\Lambda}{m^{2}}. \tag{21}\]
For the remainder of the text we will only consider the case with one eigenvalue fixed to zero. If \(2N_{f}-N_{b}>0\), we find for both \(\beta\rightarrow\infty\) and \(\beta\to 0\) that \(\langle N\rangle\to 1\), i.e., for both extremes of the temperature, the effective Hilbert space shrinks to one dimension. In between, \(\langle N\rangle\) has a unique maximum, namely when \(\beta\) takes the value:
\[\beta=\beta_{max}=\frac{2N_{f}-N_{b}}{2\mu}. \tag{22}\]
Figure 1 shows the curve of \(\langle N\rangle\) as a function of the temperature \(T=1/\beta\), where we chose the UV cutoff \(\Lambda\) to be equal to the Planck energy. The curve is presented in Planck units and with \(m\sim 0\). Recall that the numbers of matter degrees of freedom are related to the effective spacetime dimension, \(d\), via \(d=N_{f}-N_{b}+2\). The values \(N_{f}=30\) and \(N_{b}=28\) therefore correspond to a spectral dimension of \(4\).
Note that the maximum for the green curve is to the right of the Planck temperature \(T_{p}=1\), in the chosen units. However, if the number of fermionic degrees of freedom \(N_{f}\) is large enough, as for the blue curve, i.e., if \(N_{f}\gg\frac{2\mu+N_{b}}{2}\), the maximum of the curve will be close to zero and will increase exponentially with \(N_{f}\).
If, on the other hand, the number of boson species is dominating in the sense that \(2N_{f}-N_{b}<0\), then we find that \(\beta\rightarrow\infty\) corresponds to \(\langle N\rangle\to 1\) and \(\beta\to 0\) corresponds to \(\langle N\rangle\rightarrow\infty\), i.e., for positive values of \(\beta\) there is no extremum when the boson species dominate.
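The location of this maximum can be checked directly from Eqs. (10) and (18). The sketch below uses illustrative values (\(\Lambda=1\), near-massless fields, and \(\mu=6\pi/\bar{\Lambda}\approx 6\pi\) as in Eq. (2)); the numerical argmax coincides with \(\beta_{max}\) of Eq. (22) up to the grid resolution:

```python
import numpy as np

Nf, Nb = 30, 28                       # illustrative species counts (d = 4)
Lam, m = 1.0, 1e-6                    # cutoff and mass in Planck-like units
mu = 6 * np.pi                        # mu = 6*pi/Lambda_bar with Lambda_bar ~ 1
d = Nf - Nb + 2
beta_max = (2 * Nf - Nb) / (2 * mu)   # Eq. (22)

beta = np.linspace(1e-3, 5.0, 500_000)
C = (2 * np.pi) ** (Nb / 2) / Lam ** (1 - Nf / 2) \
    * np.exp(-beta * mu) * beta ** (mu * beta_max)      # Eq. (10)
N_mean = 1.0 + 2.0 * C * (Lam ** (d / 2) - m ** d) / d  # Eq. (18)

print("numerical argmax:", beta[np.argmax(N_mean)], "; beta_max:", beta_max)
```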
Figure 1: The figure shows the log plot of the expectation value of the Hilbert space dimension \(\langle N\rangle\) as a function of the temperature \(T\), for different choices of the number of fermionic degrees of freedom \(N_{f}\) and bosonic degrees of freedom \(N_{b}\). Bosons are dominant for the dashed lines and fermions are dominant for the solid lines.

**The spectral gap and the diameter of the spacetime.** We now calculate the expectation value of the spectral gap, which we can then use to infer the diameter of the emerging spacetime, i.e., its largest geodesic distance, \(\ell\).
To this end, we consider the case where the lowest eigenvalue is set to zero. The spectral gap is then calculated by finding the expectation value of the first non-zero eigenvalue, \(\langle\lambda_{2}\rangle\):
\[\langle\lambda_{2}\rangle=\int_{m^{2}}^{\Lambda}d\lambda_{2}\ \lambda_{2}P(\lambda_{2}|N \geq 2). \tag{23}\]
Here, \(P(\lambda_{2}|N\geq 2)\) is the probability of the spectral gap having value \(\lambda_{2}\), given a Hilbert space dimension \(N\geq 2\). We find this probability using Bayes' theorem:
\[P(\lambda_{2}|N\geq 2)=\frac{P(\lambda_{2},N\geq 2)}{P(N\geq 2)}=\frac{P( \lambda_{2},N\geq 2)}{1-P(N=1)}, \tag{24}\]
where \(P(\lambda_{2},N\geq 2)\) is the probability of the spectral gap being \(\lambda_{2}\) and \(N\) being larger or equal to \(2\) and \(P(N=1)\) is the probability of having \(N=1\),
\[P(N=1) = \frac{1}{Z}\int\mathcal{D}\phi\int\mathcal{D}\theta\mathcal{D} \tilde{\theta}e^{-\beta S}\Lambda^{-1} \tag{25}\] \[= \exp\left[-2C\frac{\Lambda^{d/2}-m^{d}}{d}\right].\]
To calculate \(P(\lambda_{2},N\geq 2)\) we use the measure \(\bar{\mathcal{D}}\) where we have \(\lambda_{1}=0\), we hold \(\lambda_{2}\geq 0\) fixed and we integrate all \(\lambda_{i}\) (\(i=3,..N\)) from \(\lambda_{2}\) to \(\Lambda\):
\[P(\lambda_{2},N\geq 2)= \frac{1}{Z}\sum_{N=2}^{\infty}\int_{\lambda_{2}}^{\Lambda}\!\! \mathcal{D}\lambda\!\int\!\mathcal{D}\phi\!\int\!\mathcal{D}\theta\mathcal{D} \bar{\theta}e^{-\beta S}\frac{\Lambda^{-N}}{(N-2)!} \tag{26}\] \[= C\lambda_{2}^{\frac{N_{f}-N_{b}}{2}}\ \exp\left[-2C\frac{ \lambda_{2}^{d/2}-m^{d}}{d}\right]\]
Combining the previous expressions we find:
\[\langle\lambda_{2}\rangle =\frac{A(\frac{d}{2C})^{2/d}\Gamma(1+\frac{2}{d},C\frac{2m^{2}}{ d})-\Gamma(1+\frac{2}{d},2C\frac{\Lambda^{d/2}}{d}))}{C},\] \[A =C\frac{\exp\!\left\{2C\frac{m^{d}}{d}\right\}}{1-\exp\!\left\{- 2C\frac{\Lambda^{d/2}-m^{d}}{d}\right\}}. \tag{27}\]
Here, \(\Gamma(a,b)=\int_{b}^{\infty}dy\ y^{a-1}e^{-y}\) is the incomplete gamma function.
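As a consistency check (not part of the original text), the closed form of Eq. (27) can be compared against direct quadrature of the density implied by Eqs. (24)-(26), using \(\Gamma(a,b)=\Gamma(a)\,\mathrm{gammaincc}(a,b)\) for \(a>0\). The constant \(C\) in fact depends on \(\beta\) through Eq. (10); a fixed illustrative value is used here:

```python
import numpy as np
from scipy.special import gamma, gammaincc
from scipy.integrate import quad

Nf, Nb = 30, 28
d = Nf - Nb + 2                        # effective dimension (= 4 here)
Lam, m, C = 1.0, 1e-2, 0.7             # illustrative cutoff, mass, and C

def Gamma_upper(a, x):                 # unnormalized upper incomplete gamma
    return gamma(a) * gammaincc(a, x)

A = C * np.exp(2*C*m**d/d) / (1 - np.exp(-2*C*(Lam**(d/2) - m**d)/d))

closed = (A / C) * (d / (2*C)) ** (2/d) * (
    Gamma_upper(1 + 2/d, 2*C*m**d/d) - Gamma_upper(1 + 2/d, 2*C*Lam**(d/2)/d))

# Direct quadrature with the normalized density implied by Eqs. (24)-(26).
p = lambda l: A * l ** ((Nf - Nb)/2) * np.exp(-2*C*l**(d/2)/d)
numeric, _ = quad(lambda l: l * p(l), m**2, Lam)

print(closed, numeric)                 # the two values agree
```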
When defined with respect to a Riemannian manifold \(\mathcal{M}\), the spectral gap is bounded by \(\ell^{-2}\), where \(\ell\) is the diameter of \(\mathcal{M}\) (see for example [35]). We will therefore interpret the expectation value \(\langle\lambda_{2}\rangle^{-d/2}\) as the effective volume \(V=\ell^{d}\) of a manifold.
**In the geometric regime, the effective density of degrees of freedom is constant up to corrections due to curvature.** First, we notice that the expected dimension of the Hilbert space is closely related to the effective spacetime volume: as Fig. 2 shows, the curves of \(\langle\lambda_{2}\rangle^{-d/2}\) and \(\langle N\rangle\) are closely matching. The position \(\beta_{max}\) of the unique maximum \(\langle\lambda_{2}\rangle_{max}\) of \(\langle\lambda_{2}\rangle^{-d/2}\) agrees exactly with the position of the unique maximum of \(\langle N\rangle\) given in Eq. (22). The value of the maximum of \(\langle\lambda_{2}\rangle^{-d/2}\) is of the same order of magnitude as the maximum of \(\langle N\rangle\), but does not agree exactly. We can interpret these findings in terms of the effective density of degrees of freedom in the spacetime.
To this end, let us first explain what we here mean by the notion of number of degrees of freedom. What we are here referring to is the fact that functions, \(f\), in an \(N\)-dimensional function space \(\mathcal{H}\) possess \(N\) degrees of freedom in the following sense: to determine the function \(f\) it suffices to know the function's amplitudes \(a_{n}=f(x_{n})\) at \(N\) generic points \(x_{n}\). To see this, let us write \(f\) as a linear combination \(f(x)=\sum_{i=1}^{N}c_{i}b_{i}(x)\) of \(N\) basis functions \(b_{i}\in\mathcal{H}\). The \(N\) coefficients \(c_{i}\), and therefore \(f\) itself, can be obtained by solving the system of \(N\) linear equations \(a_{n}=\sum_{i=1}^{N}c_{i}b_{i}(x_{n})\), \(n=1,...,N\), for the \(c_{i}\). This is possible by matrix inversion because the determinant of the matrix \(\left(b_{i}(x_{n})\right)_{i,n=1}^{N}\) is generically nonzero.
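This counting argument is easy to verify numerically; in the sketch below, the sine basis and sample points are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                                  # Hilbert-space dimension
basis = [lambda x, k=k: np.sin((k + 1) * x) for k in range(N)]

c_true = rng.normal(size=N)                            # hidden coefficients
f = lambda x: sum(c * b(x) for c, b in zip(c_true, basis))

x_n = rng.uniform(0, np.pi, N)                         # N generic sample points
B = np.array([[b(x) for b in basis] for x in x_n])     # matrix b_i(x_n)
c_rec = np.linalg.solve(B, f(x_n))                     # invert a_n = sum_i c_i b_i(x_n)

print(np.max(np.abs(c_rec - c_true)))                  # ~ machine precision
```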
In this sense, in our path integral the Hilbert space dimension \(N\) is the total number of degrees of freedom of a field within the spacetime volume. Our finding that the expected volume of an emergent spacetime and its expected number of degrees of freedom are almost proportional therefore means that the density of degrees of freedom of fields on the emergent euclidean spacetime is approximately constant with temperature.
The close agreement between the effective volume \(\langle\lambda_{2}\rangle^{-1}\) and the effective Hilbert space dimension \(\langle N\rangle\) can also be interpreted as a quantum version of the Hawking-Gilkey relation of Eq. (1): The leading order is given by the volume of the spacetime, followed by curvature-induced corrections to the density of degrees of freedom.

Figure 2: The figure shows the log plot of a comparison between the expectation value of the Hilbert space dimension \(\langle N\rangle\) (solid lines) and the expression \(\langle\lambda_{2}\rangle^{-d/2}\) (dashed lines), which we interpret as the effective volume, for the same set of choices of \(N_{f}\) and \(N_{b}\). We see that the effective volume and the effective Hilbert space dimension are of the same order of magnitude and have a closely matching temperature dependence.
It is remarkable that the sector of the theory in which the fermions are dominant, i.e., where the predicted effective spacetime dimension is positive, shows such consistent geometrical properties. This is because, by working in the eigenbasis of the Laplacian, we had purposely not assumed that a coordinate basis, i.e., a representation of the fields and the Laplacian on a spacetime, exists or emerges.
In the non-geometric regime where \(N_{b}>2+N_{f}\), i.e., where there is no positive effective spacetime dimension, there is indeed no geometric relationship between \(\langle\lambda_{2}\rangle^{-1}\) and \(\langle N\rangle\). For example, the large temperature limit of \(\langle\lambda_{2}\rangle^{-1}\) for \(N_{b}>2+N_{f}\) is finite, while the large temperature limit for \(\langle N\rangle\) for \(N_{b}>2+N_{f}\) is infinite.
**Outlook.** We demonstrated a new mechanism by which spacetime and matter could emerge in the 'low energy' regime of a non- or pre-geometric high energy theory. This can be viewed as an explicit example of the approach of [36], where it was proposed that the physics of the emergence of a matter-populated spacetime is mathematically the emergence of the representability of the correlators of a pre-geometric theory as quantum field theoretic correlators on a curved spacetime.
It will be very interesting to explore extending our model, for example, to specifically include unequal masses, to investigate path integrating over the masses and over \(N_{f}\) and \(N_{b}\), to include gauge symmetries and to include interactions. As was shown in [36; 37], it would be particularly valuable to include interactions, even if only perturbatively. This is because then, whenever an effective spacetime emerges, not only rough data about the spacetime, such as its dimension or volume become calculable, but also the metric itself becomes calculable. The underlying reason is that interaction terms are local. This allows one to determine coordinate bases in the Hilbert space of fields, namely as those bases in which the interaction terms are diagonal. This then allows one to transform propagators into a coordinate basis. Since a propagator is a known function of the geodesic distance for small distances, it becomes possible to explicitly calculate the emerging metric from a propagator in a coordinate basis, see [36; 37].
Further, it will be very interesting to explore the case of the Minkowski signature. Much of spectral geometry is then not applicable, for example, Weyl's asymptotic formula. Nevertheless, if we can include interactions, then by the above method, metrics of effectively emerging spacetimes should be calculable explicitly even for the Minkowski signature.
**Acknowledgements:** AK acknowledges support through a Discovery Grant by the National Science and Engineering Research Council of Canada (NSERC) and a Discovery Project Grant from the Australian Research Council (ARC). MR was supported in part by the Excellence Initiative - Research University Program at the Jagiellonian University in Krakow and the National Science Centre, Poland, under grant no. 2019/33/B/ST2/00589. BS is supported in part by the Perimeter Institute, which is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade.
|
2302.01630 | Frequency Quality in Low-Inertia Power Systems | This paper analyses the issue of frequency quality in low-inertia power
systems. The analysis is based on a real-world large-scale low-inertia power
system namely, the All-Island transmission system (AITS) of Ireland and
Northern Ireland currently accommodating up to 75% of non-synchronous
generation. The paper is motivated by a recent trend of some frequency quality
parameters such as the standard frequency deviation and the slow frequency
restoration. The paper first discusses the frequency control services currently
in place to ensure frequency quality in the AITS. An analysis of the frequency
quality parameters of the AITS is then presented based on actual data. The
paper also discusses, through an illustrative example, the effectiveness of
automatic generation control as a potential approach to keep frequency within
the operational range. | Taulant Kerci, Manuel Hurtado, Mariglen Gjergji, Simon Tweed, Eoin Kennedy, Federico Milano | 2023-02-03T09:59:25Z | http://arxiv.org/abs/2302.01630v1 | # Frequency Quality in Low-Inertia Power Systems
###### Abstract
This paper analyses the issue of frequency quality in low-inertia power systems. The analysis is based on a real-world large-scale low-inertia power system namely, the All-Island transmission system (AITS) of Ireland and Northern Ireland currently accommodating up to 75% of non-synchronous generation. The paper is motivated by a recent trend of some frequency quality parameters such as the standard frequency deviation and the slow frequency restoration. The paper first discusses the frequency control services currently in place to ensure frequency quality in the AITS. An analysis of the frequency quality parameters of the AITS is then presented based on actual data. The paper also discusses, through an illustrative example, the effectiveness of automatic generation control as a potential approach to keep frequency within the operational range.
Frequency quality, low-inertia power systems, automatic generation control (AGC).
## I Introduction
### _Motivation_
The displacement of conventional synchronous generators by converter-interfaced generation such as solar and wind energy leads to reduced levels of system inertia. Large-scale low-inertia power grids face many challenges, including but not limited to frequency stability, voltage stability, and converter-driven stability [1]. While the power system community including both academia and industry is working towards addressing the above challenges, an emerging critical issue that, so far, has received little to no attention is frequency quality. The objectives of this paper are to fill this gap and to raise awareness in the community on the topic.
### _Literature Review_
Transmission system operators (TSOs) in Europe (including EirGrid and SONI) define frequency quality in terms of different target/defining parameters. Table I shows the main parameters for the Continental European (CE) and Ireland/Northern Ireland (IE/NI) TSOs [2, 3]. Keeping these parameters within limits helps to: (i) better control the operation of the power system and prevent damage to plant and equipment; (ii) keep the electric time on clocks that rely on counting the zero crossings; (iii) maintain the relevance of power system analysis that is generally performed at the nominal frequency; (iv) prevent motors from stalling; and (v) increase the trust of TSO customers and market participants in supply reliability and quality. Note the wider range of most of the parameters of IE/NI compared to the parameters defined for the CE control area. For example, the standard frequency ranges for the CE and IE/NI control areas are \(\pm\) 50 mHz and \(\pm\) 200 mHz, respectively. Such parameters make sense considering that the CE synchronous area accounts for around 435 GW of peak demand (the largest synchronous electrical grid in the world) compared to 6.9 GW for the All-Island transmission system (AITS), i.e., it is harder to control frequency in a small power system.
Recent research has demonstrated that there is an almost linear relationship between renewables penetration and frequency variations [4, 5, 6, 7]. This suggests that there is a need to deploy more and faster reserve resources to deal with the ever-increasing penetration of stochastic and intermittent renewable sources. For example, due to increased intra-interval fluctuations and limited ramping from generators, the frequency deviations in a provincial power system in China increased from 0.019 to 0.032 Hz between 2014 and 2020 (i.e., a 68.4% increase) [8]. In the same vein, reference [9] focuses on the issue of frequency quality for Southwest China considering the operation data from asynchronous operation tests and automatic generation control (AGC). It is also worth mentioning that controlling the frequency in power systems with high shares of photo-voltaic (PV) generation is a challenging task due to PV power dropping much faster than, for example, wind power (e.g., 60% of the installed power capacity per minute when a cloud passes) [10].
A way to meet frequency quality standards in low-inertia systems is that renewable energies and emerging technologies such as battery energy storage systems (BESS) provide frequency support. In this context, references [11, 12] study the participation of a 30 and 10 MW wind farms in AGC and show their potential by testing the behavior against field measurements and experimental results, respectively. On the other hand, the potential of solar PV providing AGC services under different conditions (solar resource intensity) is shown in [13] through a successful 300 MW power plant test. BESS is shown to respond well to AGC commands/set-points and provide secondary frequency regulation in [14].
### _Contributions_
The specific contributions of this paper are the following:
* An analysis of frequency quality based on a real-world low-inertia system namely the AITS.
* Show, through this analysis, that while some frequency quality parameters have improved over recent years, others, such as the standard deviation of the frequency, are increasing linearly.
* Propose different solutions to address the recent deterioration in frequency quality in the AITS.
* Study the effectiveness of AGC to smooth frequency variations due to wind and load variations and noise.
### _Paper Organization_
The remainder of the paper is organized as follows. Section II briefly describes the frequency control employed in the AITS. Section III provides the results of the frequency quality analysis of the AITS. Section IV discusses the effectiveness of AGC in reducing the standard deviation of the frequency through an illustrative example. Finally, conclusions and future work directions are given in Section V.
## II Frequency Control in the All-Island Transmission System
Figure 2 shows the current frequency control services employed in the AITS [15]. EirGrid and SONI have various frequency services to ensure frequency quality parameters remain within predefined limits. Such services include synchronous inertial response (SIR), fast frequency response (FFR) and primary, secondary and tertiary operating reserves (POR, SOR and TOR), as well as replacement reserves (RR) and ramping products. It should be noted that automatic secondary frequency control (or AGC) does not exist in the AITS; instead, manual activations are currently the approach used to regulate system frequency proactively. The vast majority of the contracted volumes of system services procured to date comes from conventional sources [16]. However, as the AITS moves towards a reduced number of conventional units online by 2030, other technologies are expected to provide the majority of the services, including BESS (approximately 650 MW installed and mostly used for system services rather than providing energy to the grid), demand response, wind and solar power.
Active power control (APC) is another crucial frequency control service impacting frequency quality and currently in place in the AITS. APC is a droop-based frequency control service and is mandatory for all dispatchable wind farms in Ireland [17]. It involves a selectable deadband setting of \(\pm\) 200 mHz or \(\pm\) 15 mHz. The default value of the deadband is \(\pm\) 200 mHz but, when frequency control is challenging, EirGrid remotely changes this deadband to \(\pm\) 15 mHz. In this mode, wind farms adjust their output much more dynamically and contribute to the control of system frequency under normal, pre-contingency conditions. In the near future, it is expected that other technologies such as solar power will have APC functionality enabled in order to help maintain the frequency quality parameters.
## III Frequency Quality in the All-Island Transmission System
The AITS is a synchronous island currently accommodating up to 75% of non-synchronous generation at any point in time and is relaxing a number of operational constraints such as the minimum number of conventional generating units (from 8 to 7) and inertia (from 23 GWs to 20 GWs) to further increase the penetration of renewables [18]. This transition involves dealing with different technical challenges such as ensuring power system stability and security. However, an emerging challenge is that of maintaining frequency quality parameters within acceptable limits. The aim of this section is to analyse, through actual data, the frequency quality in the AITS.
Fig. 2: Frequency control overview in the AITS [15].
### _Frequency Deviations (Nadir/Zenith)_
This section focuses on the evolution of the maximum and minimum frequency deviations in the AITS over recent years. Note that this limit is 1000 mHz according to Table I. Figure 3 shows the trend of these frequency parameters (i.e., frequency nadir and zenith). It is interesting to note that both parameters are within limits and their performance is improving. The performance of these parameters is, in particular, strongly related to the reduced number of large generator trippings in the system and the response from wind, high voltage direct current (HVDC) interconnectors, demand response, and BESS.
### _Frequency Standard Deviation_
Figure 4 shows the evolution of three main parameters namely minutes above and below the standard frequency range (\(\pm\) 200 mHz) and the standard deviation of the frequency. There are a number of factors that have led to a dramatic improvement in the quality of the parameters during 2009-2018, among others: (i) the change from verbal dispatch to electronic logged dispatch of generation units, leading to improved unit operator response; (ii) newer generating units in the latter years having the latest electronic governor controls; (iii) the retrofit of the electro-mechanical control systems on older generating units with modern electronic controls; (iv) the increase in system inertia as a result of more generating units to meet increases in system demand; and (v) the response of HVDC interconnectors and BESS (but also a reduction in the number of large generator trippings). However, there has been an increase in the standard deviation of the frequency during 2019-2021. This is mainly due to: (i) a reduction in regulating resources; (ii) an increasing proportion of the reserves from inverter-based resources that are not configured to regulate frequency; and (iii) aging of conventional generating portfolio. This trend could continue as more wind and solar power are integrated into the system and the operational policy evolves, i.e., reducing the number of conventional units online and, consequently, also reducing the inertia.
In this paper, we explore the effectiveness of one of the options to improve frequency performance, that is, implementing an AGC (see Section IV below).
### _Frequency Recovery_
This section illustrates the issue of the slow frequency recovery in the AITS. With this aim, Fig. 6 displays the frequency trace for a 2022 event (9th of August) where the largest single infeed (LSI) tripped from 530 MW import. As can be seen, it took the frequency almost 15 minutes to recover to 50 Hz (i.e., time to restore frequency in Table I). In particular, it is worth noticing that frequency recovers to around 49.87 Hz in almost 3 minutes but then stays there for a long time. The fast recovery in the 3 minutes is mainly because of FFR from BESS (the majority of BESS installed in AITS have a trigger point of 49.8 Hz).
## IV Illustrative example
EU Network Codes and the national energy regulators require EirGrid and SONI to justify, every few years, whether or not an AGC should be installed [2]. This section aims to illustrate the AGC performance in terms of long-term frequency quality enhancement using the IEEE 39-bus system.
### _Stochastic Long-Term Power System Model_
Frequency quality is impacted by several dynamical processes, ranging from fast ones, such as the inertial response of synchronous machines and stochastic wind speed variations, to slower ones, such as the primary and secondary frequency controllers of conventional generators. To capture and model all of these dynamics, we consider a combined short- and long-term dynamic power system model represented by a set of hybrid non-linear stochastic differential-algebraic equations [20], as follows:
\[\frac{d}{dt}\mathbf{x} =\mathbf{f}(\mathbf{x},\mathbf{y},\mathbf{u},\mathbf{z},\frac{d}{dt}\mathbf{\eta})\,, \tag{1}\] \[\mathbf{0} =\mathbf{g}(\mathbf{x},\mathbf{y},\mathbf{u},\mathbf{z},\mathbf{\eta})\,,\] \[\frac{d}{dt}\mathbf{\eta} =\mathbf{a}(\mathbf{x},\mathbf{y},\mathbf{\eta})+\mathbf{b}(\mathbf{x},\mathbf{y},\mathbf{\eta}) \,\mathbf{\zeta}\,,\]
where \(\mathbf{f}\) and \(\mathbf{g}\) represent the differential and algebraic equations, respectively; \(\mathbf{x}\) and \(\mathbf{y}\) represent the state and algebraic variables, such as generator rotor speeds and bus voltage angles, respectively; \(\mathbf{u}\) represents the inputs, such as the schedules of synchronous generators; \(\mathbf{z}\) represents discrete variables; \(\mathbf{\eta}\) represents the stochastic characterization of wind speed as well as the volatility of load power consumption; \(\mathbf{a}\) and \(\mathbf{b}\) are the _drift_ and _diffusion_ of the stochastic differential equations, respectively; and \(\mathbf{\zeta}\) is the white noise. To represent inertial and primary control dynamics, we consider conventional models of synchronous machines (4th-order models) and of their primary controllers, as well as dynamic models of wind power plants (5th-order doubly-fed induction generator) with inclusion of maximum power point tracker, voltage, pitch-angle, and frequency controls [21].
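The paper does not fix the form of the drift \(\mathbf{a}\) and diffusion \(\mathbf{b}\); a common choice for such noise sources is a mean-reverting (Ornstein-Uhlenbeck) process, which can be simulated with an Euler-Maruyama scheme as in the following sketch (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
a, eta_bar, b = 0.5, 0.0, 0.02   # mean-reversion rate, mean, volatility
dt, T_end = 0.01, 60.0           # time step and horizon in seconds
n = int(T_end / dt)

eta = np.zeros(n)
for k in range(n - 1):
    # Euler-Maruyama step: d(eta) = a*(eta_bar - eta)*dt + b*dW
    eta[k + 1] = eta[k] + a * (eta_bar - eta[k]) * dt \
                 + b * np.sqrt(dt) * rng.standard_normal()
# eta would then perturb the wind/load injections entering g(x, y, u, z, eta)
```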
With regard to the long-term dynamics, the AGC is implemented as a centralized discrete controller in the control centers of TSOs and updates the power order set-points of dispatchable generators at certain time intervals, for example, every 4 seconds [22]. In this paper, we use the standard AGC scheme shown in Fig. 7. The AGC consists of an integrator with gain \(K_{o}\) that aims to nullify the steady-state frequency error, in this case, the difference between the reference frequency \(\omega^{\mathrm{ref}}\) and the measured frequency \(\omega_{\mathrm{CoI}}\) (i.e., the center of inertia (CoI)), as follows:

\[\frac{d}{dt}\Delta p=K_{o}(\omega^{\mathrm{ref}}-\omega_{\mathrm{CoI}})\,, \tag{2}\]
where \(\Delta p\) is the output of the integrator. To simulate the discrete nature of the AGC, \(\Delta p\) is first discretized at given fixed-time intervals and then sent to each turbine governor (TG). These signals (\(\Delta p_{i}\)) are proportional to the capacity of the machines and the TG droops (\(R_{i}\)) and normalized with respect to the total droop of the system:
\[R_{\mathrm{tot}}=\sum_{i=1}^{n_{g}}R_{i}\,. \tag{3}\]
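A minimal discrete-time sketch of this scheme is given below; the gain, droops, and sampling period are illustrative, and the CoI frequency measurement is assumed available at each AGC cycle:

```python
import numpy as np

class AGC:
    """Discrete AGC: integrate the CoI frequency error, Eq. (2), and
    split the correction among governors in proportion to droops, Eq. (3)."""
    def __init__(self, Ko, R, dt, w_ref=1.0):
        self.Ko, self.R = Ko, np.asarray(R)
        self.dt, self.w_ref = dt, w_ref
        self.dp = 0.0                            # integrator state

    def step(self, w_coi):
        self.dp += self.Ko * (self.w_ref - w_coi) * self.dt
        return self.dp * self.R / self.R.sum()   # per-governor orders

agc = AGC(Ko=50.0, R=[0.04, 0.05, 0.05], dt=4.0)  # illustrative values
print(agc.step(0.999))                            # orders after one 4 s cycle
```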
### _Simulation Results_
The purpose of this section is to simulate the effectiveness of AGC in reducing frequency fluctuations. The example is based on the IEEE 39-bus system and assumes a 25% wind power penetration (i.e., three conventional generators are replaced with wind power plants). Two scenarios are considered: (i) the impact of stochastic noise (given by both load and wind); and (ii) scenario 1 plus the introduction of wind/load step and ramp variations [6]. All the simulations in this section are performed using the software tool Dome developed by the last author [23].
Figure 8 shows the results of the first scenario with and without the AGC. In this scenario, the inclusion of the AGC does not appear to have any visible impact on frequency fluctuations. This is due to the fact that the AGC controller is slow
Fig. 6: LSI trip on August 2022.
Fig. 7: Standard AGC.
compared to the dynamics of the noise (stochastic process). On the other hand, Fig. 9 compares the effect of AGC under both noise and wind/load step and ramp power variations. Since wind/load ramp time scales are closer to that of the AGC, in this case, the inclusion of the AGC allows reducing frequency deviations. Specifically, the standard deviations of the frequency with and without AGC are 0.05395 Hz and 0.0765 Hz, respectively. These results indicate that an AGC implementation may be an option to improve frequency quality in the AITS in the future.
## V Conclusions
This paper discusses the issue of frequency quality in a real-world low-inertia power system, namely, the AITS. The paper shows that while some frequency quality parameters have dramatically improved (e.g., minutes below and above \(\pm\) 200 mHz) over the last decade, others have deteriorated. In particular, the standard deviation of the frequency has increased linearly for the last three years. The paper proposes different solutions to keep frequency within operational limits. The potential effectiveness of one of the proposals, that is, installing AGC, is demonstrated through an example. It is shown that AGC is an option to regulate frequency around the target value.
Future work will focus on testing the effectiveness of different AGC approaches on a model of the AITS. This work will then feed into the assessment of the range of solutions for managing frequency quality on the AITS.
|
2308.14500 | LAC: Latent Action Composition for Skeleton-based Action Segmentation | Skeleton-based action segmentation requires recognizing composable actions in
untrimmed videos. Current approaches decouple this problem by first extracting
local visual features from skeleton sequences and then processing them by a
temporal model to classify frame-wise actions. However, their performances
remain limited as the visual features cannot sufficiently express composable
actions. In this context, we propose Latent Action Composition (LAC), a novel
self-supervised framework aiming at learning from synthesized composable
motions for skeleton-based action segmentation. LAC is composed of a novel
generation module towards synthesizing new sequences. Specifically, we design a
linear latent space in the generator to represent primitive motion. New
composed motions can be synthesized by simply performing arithmetic operations
on latent representations of multiple input skeleton sequences. LAC leverages
such synthesized sequences, which have large diversity and complexity, for
learning visual representations of skeletons in both sequence and frame spaces
via contrastive learning. The resulting visual encoder has a high expressive
power and can be effectively transferred onto action segmentation tasks by
end-to-end fine-tuning without the need for additional temporal models. We
conduct a study focusing on transfer-learning and we show that representations
learned from pre-trained LAC outperform the state-of-the-art by a large margin
on TSU, Charades, PKU-MMD datasets. | Di Yang, Yaohui Wang, Antitza Dantcheva, Quan Kong, Lorenzo Garattoni, Gianpiero Francesca, Francois Bremond | 2023-08-28T11:20:48Z | http://arxiv.org/abs/2308.14500v4 | # LAC - Latent Action Composition for Skeleton-based Action Segmentation
###### Abstract
Skeleton-based action segmentation requires recognizing composable actions in untrimmed videos. Current approaches decouple this problem by first extracting local visual features from skeleton sequences and then processing them by a temporal model to classify frame-wise actions. However, their performances remain limited as the visual features cannot sufficiently express composable actions. In this context, we propose Latent Action Composition (LAC)1, a novel self-supervised framework aiming at learning from synthesized composable motions for skeleton-based action segmentation. LAC is composed of a novel generation module towards synthesizing new sequences. Specifically, we design a linear latent space in the generator to represent primitive motion. New composed motions can be synthesized by simply performing arithmetic operations on latent representations of multiple input skeleton sequences. LAC leverages such synthesized sequences, which have large diversity and complexity, for learning visual representations of skeletons in both sequence and frame spaces via contrastive learning. The resulting visual encoder has a high expressive power and can be effectively transferred onto action segmentation tasks by end-to-end fine-tuning without the need for additional temporal models. We conduct a study focusing on transfer-learning and we show that representations learned from pre-trained LAC outperform the state-of-the-art by a large margin on TSU, Charades, PKU-MMD datasets.
Footnote 1: Project website: [https://walker1126.github.io/LAC/](https://walker1126.github.io/LAC/)
## 1 Introduction
Human-centric activity recognition is a crucial task in real-world video understanding. In this context, _skeleton data_, which can be represented by 2D or 3D human keypoints, plays an important role, as it is complementary to other modalities such as RGB [31, 7, 27, 23, 22, 48, 36, 63, 4] and optical flow [32, 25]. As the human skeleton modality has witnessed a tremendous boost in robustness _w.r.t._ content changes related to camera viewpoints and subject appearances, the study of recognizing activities directly from 2D/3D skeletons has gained increasing attention [20, 19, 5, 70, 50, 11, 55, 73, 9, 38, 21, 74]. While the aforementioned approaches have achieved remarkable success, they often focus on _trimmed videos_ containing _single actions_, which constitutes a highly simplified scenario. Deviating from this, in this work, we tackle the challenging setting of _action segmentation in untrimmed videos based on skeleton sequences_.
In untrimmed videos, activities are composable, _i.e._, the motion performed by a person generally comprises multiple actions (co-occurrence), each lasting a few seconds. Towards modeling _long-term dependencies_ among different actions, expressive skeleton features are required. Current approaches [33, 46, 45, 16] obtain such features through visual encoders such as AGCN [50] pre-trained on trimmed datasets. However, due to the limited motion information in the trimmed samples, the performance of such features in classifying complex actions is far from satisfactory. Towards addressing this issue, we propose to construct _synthesized composable skeleton data_ for training a more effective visual encoder, endowed with strong representability of subtle action details for action segmentation.
In this paper, we propose Latent Action Composition (LAC), a novel framework aiming at leveraging synthesized composable motion data for self-supervised action representation learning. As illustrated in Fig. 1 (left), as opposed to current self-supervised approaches [33, 46, 45, 16], LAC learns action representations in two steps: an _action composition_ step followed by a _contrastive learning_ step.
_Action composition_ is a novel initialization step to train a generative module that can generate new skeleton sequences by combining multiple videos. As high-level motions are
difficult to combine directly in the joint coordinate space (_e.g._, 'drink' and 'sit down'), LAC incorporates a novel Linear Action Decomposition (LAD) mechanism within an autoencoder. LAD seeks to learn an action dictionary to express the subtle motion distribution in a discrete manner. Such an action dictionary incorporates an orthogonal basis in the latent encoding space, containing two sets of directions. The first set, named 'Static', includes directions representing static information of the skeleton sequence, _e.g._, viewpoints and body size. The other set, named 'Motion', includes directions representing temporal information of the skeleton sequence, _e.g._, the primitive dynamics of the action performed by the subject. The new skeleton sequence is generated via a linear combination of the learned 'Static' and 'Motion' directions. We adopt motion retargeting to train the autoencoder and the dictionary using skeleton sequences with 'Static' and 'Motion' information built from 3D synthetic data [30]. Once the action dictionary is constructed, in the following _contrastive learning_ step, 'Static'/'Motion' information and action labels are not required, and composable motions can be generated from any set of input skeleton sequences by combining their latent 'Motion' components.
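A schematic of the latent composition, assuming the dictionary rows are orthonormal, could look as follows (a sketch with illustrative shapes and names, not the exact implementation):

```python
import numpy as np

def compose_motions(r1, r2, D_motion, D_static, w=0.5):
    """Compose two latent codes over an orthonormal action dictionary.

    r1, r2:   latent codes of two input sequences, shape (d,)
    D_motion: (k_m, d) orthonormal 'Motion' directions
    D_static: (k_s, d) orthonormal 'Static' directions
    Keeps the 'Static' magnitudes of r1 and mixes the 'Motion' ones.
    """
    A1_m, A2_m = D_motion @ r1, D_motion @ r2     # motion magnitudes
    A1_s = D_static @ r1                          # static magnitudes of r1
    A_mix = (1 - w) * A1_m + w * A2_m             # linear motion combination
    return A_mix @ D_motion + A1_s @ D_static     # recomposed latent code
```

The composed code would then be passed to the decoder to synthesize the new skeleton sequence.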
The _contrastive learning_ step aims at training a skeleton visual encoder such as UNIK [73] in a self-supervised manner, without the need for action labels (see Fig. 1 (middle)). It trains the resulting visual encoder to maximize the similarity of skeleton sequences that are obtained via data augmentation from the same original sequence, across large-scale datasets. Unlike current methods [18, 29, 47, 57, 37, 43, 74] that perform contrastive learning on video-level representations, we additionally perform contrastive learning in the frame space to finely maximize the per-frame similarities between positive samples. Subsequently, the so-trained frame-level skeleton visual encoder is transferred and retrained on action segmentation datasets [16, 53].
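A frame-level contrastive objective of this kind can be sketched as a per-frame InfoNCE between temporally aligned frames of two augmented views (a simplified sketch, not the exact loss used in the paper):

```python
import torch
import torch.nn.functional as F

def frame_infonce(z1, z2, tau=0.1):
    """Frame-level InfoNCE between two augmented views of one sequence.

    z1, z2: (T, d) per-frame embeddings; frame t of view 1 is positive
    with frame t of view 2, and all other frames act as negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                 # (T, T) cosine similarities
    targets = torch.arange(z1.size(0))         # aligned frames are positives
    return F.cross_entropy(logits, targets)

loss = frame_infonce(torch.randn(64, 256), torch.randn(64, 256))
```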
To assess the performance of LAC, we train the skeleton visual encoder on the large-scale dataset Posetics [73] and we evaluate the quality of the learned skeleton representations (see Fig. 1 (right)) by fine-tuning onto unseen action segmentation datasets (_e.g._, TSU [16], Charades [53], PKU-MMD [12]). Experimental analyses confirm that action composition and contrastive learning can significantly increase the expressive power of the visual encoder. The fine-tuning results outperform state-of-the-art accuracy.
In summary, the contributions of this paper include the following. (i) We introduce LAC, a novel generative and contrastive framework, streamlined to synthesize complex motions and improve the skeleton action representation capability. (ii) In the generative step, we introduce a novel Linear Action Decomposition (LAD) mechanism to represent high-level motion features thanks to an orthogonal basis. The motions of multiple skeleton sequences can thus be linearly combined by latent space manipulation. (iii) In the contrastive learning step, we propose to learn skeleton representations in both video and frame spaces to improve generalization onto frame-wise action segmentation tasks. (iv) We conduct experimental analysis and show that pre-training LAC on Posetics and transferring it onto an unseen target untrimmed video dataset represents a generic and effective methodology for action segmentation.
## 2 Related Work
Figure 1: **General pipeline of LAC.** Firstly, in the representation learning stage (left), we propose (i) a novel action generation module to combine skeletons of multiple videos (_e.g._, ‘Walking’ and ‘Drinking’ shown in the top and bottom respectively). We then adopt a (ii) contrastive module to pre-train a visual encoder by learning data augmentation invariant representations of the generated skeletons in both video space and frame space. Secondly (right), the pre-trained visual encoder is evaluated by transferring to action segmentation tasks.

**Temporal Action Segmentation** focuses on per-frame activity classification in untrimmed videos. The main challenge has to do with how to model long-term relationships among various activities at different time steps. Current methods mostly focus on directly using untrimmed RGB videos. Since untrimmed videos usually contain thousands of frames, training a single deep neural network directly on such videos is quite expensive. Hence, to solve this problem efficiently, previous works proposed to use a two-step method. In the first step, a pre-trained feature extractor (_e.g._, I3D [7]) is applied on short sequences to extract corresponding visual features. In the second step, action segmentation is modeled as a sequence-to-sequence (seq2seq) task to translate extracted visual features into per-frame action labels. Temporal Convolution Networks (TCNs) [33, 15, 77] and Transformers [14] are generally applied in the second step due to their ability to capture long-term dependencies.
Recently, a few methods [13, 16] started to explore using skeletons in this task, in order to benefit from multi-modality information. In such methods, a pre-trained Graph Convolutional Network (GCN) such as AGCN [50] is used as a visual encoder to obtain skeleton features in the first step. However, unlike pre-trained I3D, which has strong generalizability across domains, pre-trained AGCN is not able to provide high-quality features due to its laboratory-based pre-training dataset NTU-RGB+D [49]. We found that the performance significantly decreases when the pre-trained model is applied to more challenging real-world untrimmed skeleton video datasets such as TSU [16] and Charades [53]. The main issue is that the pre-trained visual encoder does not have sufficient expressive power to extract complex action features, especially for the composable actions that often occur in real-world videos.
LAC differs from previous two-step methods. We propose a motion generative module to synthesize complex composable actions and leverage such synthetic data to train a more general skeleton visual encoder [73] that is sensitive to composable actions. Unlike previous two-step methods [13, 16] using pre-trained AGCN, the pre-trained visual encoder in LAC has a stronger representation capability for skeleton sequences. With this strategy, the model can be refined end-to-end on the action segmentation tasks without the need for a second stage.
Motion Retargeting aims to transfer the motion of a driving subject's sequence onto a source subject, where the main challenge lies in developing effective mechanisms to disentangle motion and appearance. As one of the most important applications of video generation [60, 65, 66, 76, 54, 67], previous image-based motion retargeting approaches leverage structure representations such as 2D human keypoints [62, 3, 8, 74] and 3D human meshes [41, 64] as motion guidance. Recently, self-supervised methods [51, 52, 68] showed remarkable results on human bodies and faces by relying only on data, without extra information.
Skeleton-based methods [1, 61, 3, 2] focus on transferring motion across skeletons of different shapes. A previous method [3] showed that transferring motion across characters enforces the disentanglement of static and dynamic information in a skeleton sequence. While such methods achieve good performance, they are unable to compose different actions to create novel ones. Our method is different: we seek to learn an orthogonal basis in the feature space that represents the action distribution in a linear and discrete manner. With this novel strategy, both static and dynamic features can be learned from a single encoder, and skeleton sequences with complex motions can be synthesized by simply modifying the magnitudes along the basis.
Self-supervised Skeleton Action Representation learning involves extracting spatio-temporal features from large amounts of unlabeled data. Current methods [72, 37, 59, 43, 74] adopt contrastive learning [58, 69, 28] as the pretext task to learn skeleton representations invariant to data augmentation. However, recent techniques [56, 75, 72, 37, 59, 43, 74] merge the temporal features by average pooling and conduct contrastive learning on top of the global temporal features of the skeleton sequences. They may thus lose important information about complex actions, particularly in the case of
Figure 2: **Overview of the Composable Action Generation model in LAC.** The model consists of a visual encoder \(\mathrm{E_{LAC}}\) and a decoder \(\mathrm{D_{LAC}}\). In the latent space, we apply Linear Action Decomposition (LAD) by learning a visual action dictionary \(\mathbf{D}_{v}\), an orthogonal basis where each vector represents a basic 'Motion'/'Static' transformation. Given a pair of skeleton sequences \(\mathbf{p}_{m,c}\) and \(\mathbf{p}_{m^{\prime},c^{\prime}}\), (i) their latent codes \(\mathbf{r}_{m,c}\) and \(\mathbf{r}_{m^{\prime},c^{\prime}}\) are embedded by \(\mathrm{E_{LAC}}\). (ii) Their projections \(A_{m}\), \(A_{c}\) and \(A_{m^{\prime}}\), \(A_{c^{\prime}}\) along \(\mathbf{D}_{v}\) can be computed. The linear combination of \(A_{m}/A_{m^{\prime}}\) with the corresponding directions in \(\mathbf{D}_{v}\) constitutes the 'Motion' features; the 'Static' features are obtained similarly. (iii) In the **training** stage, we leverage motion retargeting to learn the whole framework, swapping the 'Motion' features of the pair and generating transferred motions. (iv) In the **inference** stage, we adopt a linear combination of \(\mathbf{r}_{m}\) and \(\mathbf{r}_{m^{\prime}}\) to obtain the composable motion features \(\mathbf{r}_{mm^{\prime}}\), from which composable skeleton sequences can be generated.
co-occurring actions [16, 53]. In our work, we extend the visual encoder and the contrastive module to finely extract per-frame features. We use a contrastive loss at both sequence and frame level, to make sure that the skeleton sequences are discriminative in both spaces. The skeleton visual encoder thus has a strong representation ability for the whole sequence as well as for each frame, and generalizes better to frame-wise action segmentation tasks.
## 3 Proposed Approach
LAC is composed of two modules (see Fig. 1), a skeleton sequence generation module to synthesize the co-occurring actions and a self-supervised contrastive module to learn skeleton visual representations using the synthetic data. Subsequently, the skeleton visual encoder trained by the contrastive module can be transferred to downstream fine-grained action segmentation tasks. In this section, we introduce the full architecture and training strategy of LAC.
### Composable Action Generation
In this work, we denote the static information of a skeleton sequence (_i.e._, 'viewpoint', 'subject body size', etc.) as 'Static', and the temporal information (_i.e._, the dynamics of the 'action' performed by the subject) as 'Motion'. As shown in Fig. 2, the generative module is an autoencoder, consisting of an encoder and a decoder for skeleton sequences. To disentangle 'Motion' features from 'Static' in a linear latent space, we introduce a Linear Action Decomposition mechanism that learns an action dictionary where each direction represents a basic high-level action for the skeleton encoding. We apply motion retargeting for training the autoencoder (_i.e._, transferring the motion of a driving skeleton sequence to the source skeleton sequence while keeping the source skeletons invariant in viewpoint and body size). In the inference stage, the 'Motion' features extracted from multiple skeleton sequences can be combined linearly, and composable skeletons can be generated by the decoder. The input skeletons can be in 2D or 3D.
Skeleton Sequence Autoencoder: The input skeleton sequence with 'Static' \(c\) and 'Motion' \(m\) is modeled by a spatio-temporal matrix, noted as \(\mathbf{p}_{m,c}\in\mathbb{R}^{T\times V\times C_{in}}\). \(T\), \(V\), and \(C_{in}\) respectively represent the length of the video, the number of body joints in each frame, and the number of input channels (\(C_{in}=2\) for 2D data, or \(C_{in}=3\) if we use 3D skeletons). As shown in Fig. 2 (i), LAC adopts an encoder \(\mathrm{E}_{\mathrm{LAC}}\) to embed a pair of input skeleton sequences \(\mathbf{p}_{m,c}\)/\(\mathbf{p}_{m^{\prime},c^{\prime}}\) into \(\mathbf{r}_{m,c}\)/\(\mathbf{r}_{m^{\prime},c^{\prime}}\in\mathbb{R}^{T^{\prime}\times C_{out}}\), where \(T^{\prime}\) is the size of the temporal dimension after convolutions and \(C_{out}\) is the output channel size. To generate skeleton sequences, a skeleton sequence decoder \(\mathrm{D}_{\mathrm{LAC}}\) (see Fig. 2 (iii)) is used to generate new skeleton sequences from the representation space. The autoencoder is built from multiple 1D temporal convolutions and upsampling layers to respectively encode and decode the skeleton sequence. We provide the architectural details of \(\mathrm{E}_{\mathrm{LAC}}\) and \(\mathrm{D}_{\mathrm{LAC}}\) in Tab. 1 and the Supplementary Material (Appendix).
Linear Action Decomposition: The goal of Linear Action Decomposition (LAD) is to obtain the 'Motion' features on top of the encoded latent code of a skeleton sequence (see Fig. 2 (ii)). Our insight is that the high-level action of a skeleton sequence can be considered as a combination of multiple basic and independent 'Motion' and 'Static' transformations (_e.g._, raising a hand, bending over), each with its own amplitude, applied to a fixed reference pose (_i.e._, standing in the front view, see Fig. 4). Hence, we explicitly model the basic 'Static' and 'Motion' transformations using a unified action dictionary for the encoded latent skeleton features. Specifically, we first pre-define a learnable orthogonal basis, noted as \(\mathbf{D}_{v}=\{\mathbf{d}_{\mathbf{m}1},\mathbf{d}_{\mathbf{m}2},...,\mathbf{ d}_{\mathbf{m}J},\mathbf{d}_{\mathbf{c}1},\mathbf{d}_{\mathbf{c}2},...,\mathbf{d}_{ \mathbf{c}K}\}\) with \(J\in[1,C_{out})\) and \(K=C_{out}-J\), where each vector indicates a basic 'Motion'/'Static' transformation from the reference pose. Since \(\mathbf{D}_{v}\) forms an orthonormal basis, any two directions \(\mathbf{d}_{\mathbf{i}},\mathbf{d}_{\mathbf{j}}\) satisfy the constraint:
\[<\mathbf{d}_{\mathbf{i}},\mathbf{d}_{\mathbf{j}}>=\begin{cases}0&i\neq j\\ 1&i=j.\end{cases} \tag{1}\]
We implement \(\mathbf{D}_{v}\in\mathbb{R}^{C_{out}\times C_{out}}\) as a learnable matrix and apply the Gram-Schmidt algorithm during each forward pass to enforce orthogonality. Then, we express the 'Motion' features of \(\mathbf{p}_{m,c}\), denoted \(\mathbf{r}_{m}\), as a linear combination of the 'Motion' orthogonal directions in \(\mathbf{D}_{v}\) with associated magnitudes (amplitudes) \(A_{m}=\{a_{m1},a_{m2},...,a_{mJ}\}\). Similarly, the 'Static' features \(\mathbf{r}_{c}\) are a linear combination of the 'Static' orthogonal directions in \(\mathbf{D}_{v}\) with associated magnitudes \(A_{c}=\{a_{c1},a_{c2},...,a_{cK}\}\). For \(\mathbf{p}_{m^{\prime},c^{\prime}}\), we obtain the decomposed components \(\mathbf{r}_{m^{\prime}}\), \(\mathbf{r}_{c^{\prime}}\) in the same way:
\[\begin{split}\mathbf{r}_{m}=\sum_{i=1}^{J}a_{mi}\mathbf{d}_{ \mathbf{m}i},&\mathbf{r}_{c}=\sum_{i=1}^{K}a_{ci}\mathbf{d}_{ \mathbf{c}i},\\ \mathbf{r}_{m^{\prime}}=\sum_{i=1}^{J}a_{mi}^{\prime}\mathbf{d}_{ \mathbf{m}i},&\mathbf{r}_{c^{\prime}}=\sum_{i=1}^{K}a_{ci}^{\prime} \mathbf{d}_{\mathbf{c}i}.\end{split} \tag{2}\]
For the skeleton encoding \(\mathbf{r}_{m,c}\)/\(\mathbf{r}_{m^{\prime},c^{\prime}}\), the set of magnitudes \(A_{m}\)/\(A_{m}^{\prime}\) and \(A_{c}\)/\(A_{c}^{\prime}\) can be computed as the projections of \(\mathbf{r}_{m,c}\)/\(\mathbf{r}_{m^{\prime},c^{\prime}}\) onto \(\mathbf{D}_{v}\), as Eq. 3:
\[\begin{split} a_{mi}=\frac{\langle\mathbf{r}_{m,c},\mathbf{d}_{\mathbf{m}i}\rangle}{\left\|\mathbf{d}_{\mathbf{m}i}\right\|^{2}},& a_{ci}=\frac{\langle\mathbf{r}_{m,c},\mathbf{d}_{\mathbf{c}i}\rangle}{\left\|\mathbf{d}_{\mathbf{c}i}\right\|^{2}},\\ a_{mi}^{\prime}=\frac{\langle\mathbf{r}_{m^{\prime},c^{\prime}},\mathbf{d}_{\mathbf{m}i}\rangle}{\left\|\mathbf{d}_{\mathbf{m}i}\right\|^{2}},& a_{ci}^{\prime}=\frac{\langle\mathbf{r}_{m^{\prime},c^{\prime}},\mathbf{d}_{\mathbf{c}i}\rangle}{\left\|\mathbf{d}_{\mathbf{c}i}\right\|^{2}}.\end{split} \tag{3}\]
As \(\mathbf{r}_{m,c}\) has a temporal dimension of size \(T^{\prime}\), we obtain \(T^{\prime}\) sets of motion magnitudes \(A_{m}\), one per time step, to represent the temporal dynamics of \(\mathbf{r}_{m}\). For \(\mathbf{r}_{c}\), as static information, we first merge the temporal dimension of \(\mathbf{r}_{m,c}\) by average pooling and then conduct the projection process to obtain a unified \(A_{c}\). With the trained LAD, the decoder \(\mathrm{D_{LAC}}\) can generate different skeleton sequences by taking as input an arbitrary combination of magnitudes \(A_{m}\) and \(A_{c}\) along their corresponding directions. The high-level action can thus be controlled by manipulations in the latent space.
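To make the mechanics of LAD concrete, the following is a minimal sketch (our own, assuming PyTorch; the class name `LAD`, the argument names, and the tensor shapes are illustrative, not the authors' code). It orthonormalizes a learnable dictionary on each forward pass, here via QR decomposition as a stand-in for Gram-Schmidt, and computes the projections of Eq. 3:

```python
import torch
import torch.nn as nn

class LAD(nn.Module):
    def __init__(self, c_out: int = 160, j_motion: int = 128):
        super().__init__()
        self.j = j_motion                    # number of 'Motion' directions J
        # Learnable matrix; orthonormalized at every forward pass.
        self.dict_raw = nn.Parameter(torch.randn(c_out, c_out))

    def forward(self, r: torch.Tensor):
        # r: latent code [B, T', C_out] from the encoder E_LAC.
        # QR yields an orthogonal matrix (rows orthonormal), playing the
        # role of the Gram-Schmidt step described in the text.
        d_v, _ = torch.linalg.qr(self.dict_raw)          # [C_out, C_out]
        d_m, d_c = d_v[: self.j], d_v[self.j:]           # motion / static bases
        # Per-frame 'Motion' magnitudes A_m (Eq. 3); ||d|| = 1 by construction.
        a_m = torch.einsum('btc,jc->btj', r, d_m)        # [B, T', J]
        # 'Static' magnitudes A_c from the time-averaged latent code.
        a_c = torch.einsum('bc,kc->bk', r.mean(dim=1), d_c)   # [B, K]
        # Reconstruct 'Motion' and 'Static' features (Eq. 2).
        r_m = torch.einsum('btj,jc->btc', a_m, d_m)      # [B, T', C_out]
        r_c = torch.einsum('bk,kc->bc', a_c, d_c)        # [B, C_out]
        return r_m, r_c, a_m, a_c
```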
Training (Motion Retargeting): We apply general motion retargeting [3] to train the generative autoencoder and to ensure that the 'Motion' directions in the LAD orthogonal basis \(\mathbf{D}_{v}\) are 'Static'-disentangled (see Fig. 2 (iii)). The main training loss function is the _reconstruction loss_: \(\mathcal{L}_{gen}=\mathcal{L}_{rec}\). The reconstruction loss guides the network towards high generation quality. The new retargeted (motion-swapped) skeleton sequence with 'Motion' \(m\) and 'Static' \(c^{\prime}\), noted as \(\mathbf{p}_{m,c^{\prime}}\), is generated from the recombined features \(\mathbf{r}_{m}+\mathbf{r}_{c^{\prime}}\). Similarly, \(\mathbf{p}_{m^{\prime},c}\) can be generated by swapping the pair of sequences. The skeleton sequence generation can be formulated as \(\mathbf{p}_{m,c^{\prime}}=\mathrm{D_{LAC}}(\mathbf{r}_{m}+\mathbf{r}_{c^{\prime}})\) and \(\mathbf{p}_{m^{\prime},c}=\mathrm{D_{LAC}}(\mathbf{r}_{m^{\prime}}+\mathbf{r}_{c})\). The reconstruction loss consists of two components: \(\mathcal{L}_{rec}=\mathcal{L}_{self}+\mathcal{L}_{target}\). Specifically, at every training iteration, the decoder network \(\mathrm{D_{LAC}}\) is first used to reconstruct each of the original input samples \(\mathbf{p}_{m,c}\) from its representation \(\mathbf{r}_{m}+\mathbf{r}_{c}\). This component of the loss is denoted \(\mathcal{L}_{self}\) and formulated as a standard autoencoder reconstruction loss (see Eq. 4).
\[\begin{split}\mathcal{L}_{self}=\mathbb{E}[\|\mathrm{D_{LAC}}( \mathbf{r}_{m}+\mathbf{r}_{c})-\mathbf{p}_{m,c}\|^{2}],\\ \mathcal{L}_{target}=\mathbb{E}[\|\mathrm{D_{LAC}}(\mathbf{r}_{m} +\mathbf{r}_{c^{\prime}})-\mathbf{p}_{m,c^{\prime}}\|^{2}].\end{split} \tag{4}\]
Moreover, at each iteration, the decoder is also encouraged to re-compose new combinations. As the generative module is trained on a synthetic dataset [30] that includes cross-character motion retargeting ground-truth skeleton sequences, we can explicitly apply the cross-reconstruction loss \(\mathcal{L}_{target}\) (see Eq. 4) during generation. The same reconstruction losses are also computed for \(\mathbf{p}_{m^{\prime},c^{\prime}}\).
Inference (Motion Composition): As the trained LAD represents high-level motions in a linear space through the action dictionary, at the inference stage (see Fig. 2 (iv)) we can generate composable motions by linearly adding the 'Motion' features encoded from multiple skeleton sequences. We feed the averaged latent 'Motion' features to the decoder to generate composable motions. We note that even if, in some cases, the combined motions may not be realistic, they can still help to increase the expressive power of the representation, which is important for expressing subtle details. Taking the motion combination of the two sequences \(\mathbf{p}_{m,c}\) and \(\mathbf{p}_{m^{\prime},c^{\prime}}\) as an example, the skeleton sequences \(\mathbf{p}_{mm^{\prime},c}\) and \(\mathbf{p}_{mm^{\prime},c^{\prime}}\) with the combined motions \(m\) and \(m^{\prime}\) are generated as follows:
\[\begin{split}\mathbf{p}_{mm^{\prime},c}=\mathrm{D_{LAC}}\left( \frac{1}{2}(\mathbf{r}_{m}+\mathbf{r}_{m^{\prime}})+\mathbf{r}_{c}\right),\\ \mathbf{p}_{mm^{\prime},c^{\prime}}=\mathrm{D_{LAC}}\left(\frac{1 }{2}(\mathbf{r}_{m}+\mathbf{r}_{m^{\prime}})+\mathbf{r}_{c^{\prime}}\right). \end{split} \tag{5}\]
As the skeleton sequences \(\mathbf{p}_{mm^{\prime},c}\) and \(\mathbf{p}_{mm^{\prime},c^{\prime}}\) have the same composed motion but different 'Static' information (_e.g._, viewpoints), they form a positive pair for self-supervised contrastive learning, used to train a transferable skeleton visual encoder for the fine-grained action segmentation tasks in Sec. 3.2.
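A sketch of this composition step (our own illustration, reusing the hypothetical `LAD` module above; `enc` and `dec` stand in for the encoder \(\mathrm{E_{LAC}}\) and decoder \(\mathrm{D_{LAC}}\), whose internals are not shown):

```python
def compose_motions(enc, lad, dec, p1, p2):
    # p1, p2: two skeleton sequences; enc maps them to latent codes.
    r1, r2 = enc(p1), enc(p2)             # [B, T', C_out]
    r_m1, r_c1, _, _ = lad(r1)            # 'Motion' [B, T', C], 'Static' [B, C]
    r_m2, r_c2, _, _ = lad(r2)
    r_mm = 0.5 * (r_m1 + r_m2)            # averaged composable 'Motion' (Eq. 5)
    # Same composed motion, two different 'Static' contents: a positive pair.
    p_mm_c1 = dec(r_mm + r_c1.unsqueeze(1))
    p_mm_c2 = dec(r_mm + r_c2.unsqueeze(1))
    return p_mm_c1, p_mm_c2
```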
### Self-supervised Skeleton Contrastive Learning
In this section, we provide details of the self-supervised contrastive module of LAC. We re-denote the generated composable skeleton sequence \(\mathbf{p}_{mm^{\prime},c}\) (from Sec. 3.1) as a query clip \(q\); multiple positive keys (_e.g._, the sequence \(\mathbf{p}_{mm^{\prime},c^{\prime}}\)), denoted \(k_{1}^{+},...,k_{P}^{+}\), can be generated by modifying only its 'Static' magnitudes \(A_{c}\) in the latent space. We follow the general contrastive learning method [28] based on the momentum encoder, to maximize the mutual information of positive pairs (_i.e._, generated composable skeleton sequences with the same 'Motion' but different 'Static' information), while pushing negative pairs (_i.e._, skeleton sequences with different 'Motion') apart. Deviating from [28], the queue (memory) stores the features of each frame of the skeleton sequences, and we propose to additionally enhance the per-frame representation similarity of positive pairs. The visual encoder can then extract skeleton features that are invariant to data augmentation both globally and at the frame level, and generalizes better to frame-wise action segmentation tasks.
Skeleton Visual Encoder: To obtain a strong capability for extracting skeleton spatio-temporal features, we adopt the recent topology-free skeleton backbone network UNIK [73] as the skeleton visual encoder \(\mathrm{E_{V}}\) (see Tab. 1 and the Appendix for details). To obtain the global sequence representation, we adopt a temporal average pooling layer to merge the
\begin{table}
\begin{tabular}{c|c|c|c} \hline Stages & \(\mathrm{E_{LAC}}\) & \(\mathrm{D_{LAC}}\) & \(\mathrm{E_{V}}\) \\ \hline \hline Input & \begin{tabular}{c} 2D sequence \\ \([T,\,2V]\) \end{tabular} & \begin{tabular}{c} Rep. \\ \([T^{\prime},\,160]\) \end{tabular} & \begin{tabular}{c} 2D sequence \\ \([T\times V,\,2]\) \end{tabular} \\ \hline 1 & Conv\((8,\,64)\) & \begin{tabular}{c} Upsample(2) \\ Conv\((7,\,128)\) \end{tabular} & Conv\(\begin{pmatrix}1\times 1,\,64\\ 9\times 1,\,64\end{pmatrix}\times 4\) \\ \hline 2 & Conv\((8,\,96)\) & \begin{tabular}{c} Upsample(2) \\ Conv\((7,\,64)\) \end{tabular} & Conv\(\begin{pmatrix}1\times 1,\,128\\ 9\times 1,\,128\end{pmatrix}\times 3\) \\ \hline 3 & Conv\((8,\,160)\) & \begin{tabular}{c} Upsample(2) \\ Conv\((2\times 1,\,256)\) \\ Conv\((7,\,2V)\) \end{tabular} & Conv\(\begin{pmatrix}1\times 1,\,256\\ 9\times 1,\,256\end{pmatrix}\times 3\) \\ \hline 4 & - & - & S-GAP \((2\times V,\,256)\) \\ \hline \end{tabular}
\end{table}
Table 1: Architecture details of \(\mathrm{E_{LAC}}\), \(\mathrm{D_{LAC}}\) and \(\mathrm{E_{V}}\).
temporal dimension of the visual representations, denoted as \(\mathrm{E_{Vs}}(q),\mathrm{E_{Vs}}(k_{1}^{+}),...,\mathrm{E_{Vs}}(k_{P}^{+})\in \mathbb{R}^{C_{out}\times 1}\) (see Tab. 1). Per-frame features can be obtained by \(\mathrm{E_{V}}\) before the temporal average pooling layer (see Tab. 1) and denoted as \(\mathrm{E_{Vf}}(q,\tau),\mathrm{E_{Vf}}(k_{1}^{+},\tau),...,\mathrm{E_{Vf}}(k_{ P}^{+},\tau)\in\mathbb{R}^{C_{out}\times T}\).
Contrastive Loss: We apply the general InfoNCE contrastive loss [44] to train our visual encoder \(\mathrm{E_{V}}\), encouraging similarity between both sequence-level and frame-level representations of positive pairs, and discouraging similarity with negative representations, denoted \(\mathrm{E_{Vs}}(k_{1}^{-}),...,\mathrm{E_{Vs}}(k_{N}^{-})\) in sequence space and \(\mathrm{E_{Vf}}(k_{1}^{-},\tau),...,\mathrm{E_{Vf}}(k_{N}^{-},\tau)\) in frame space. The InfoNCE [44] objective is defined as \(\mathcal{L}_{q}=\mathcal{L}_{q-s}+\mathcal{L}_{q-f}\), where
\[\mathcal{L}_{q-s}=-\mathbb{E}\Big{(}\log\frac{\sum_{p=1}^{P}e^{\mathrm{Sim}\left(\mathrm{E_{Vs}}(q),\mathrm{E_{Vs}}(k_{p}^{+})\right)}}{\sum_{n=1}^{N}e^{\mathrm{Sim}\left(\mathrm{E_{Vs}}(q),\mathrm{E_{Vs}}(k_{n}^{-})\right)}}\Big{)}, \tag{6}\]
\[\mathcal{L}_{q-f}=-\mathbb{E}\Big{(}\log\frac{\sum_{p=1}^{P}e^{\sum_{\tau=1}^{T}\mathrm{Sim}\left(\mathrm{E_{Vf}}(q,\tau),\mathrm{E_{Vf}}(k_{p}^{+},\tau)\right)}}{\sum_{n=1}^{N}e^{\sum_{\tau=1}^{T}\mathrm{Sim}\left(\mathrm{E_{Vf}}(q,\tau),\mathrm{E_{Vf}}(k_{n}^{-},\tau)\right)}}\Big{)}, \tag{7}\]
where \(\tau\) represents the frame index in the temporal dimension of frame-level representations, \(P\) represents the number of positive keys, \(N\) denotes the number of negative keys (we use \(P=4\) and \(N=65,536\) for experiments), and the similarity is computed as:
\[\mathrm{Sim}(x,y)=\frac{\phi(x)\cdot\phi(y)}{\|\phi(x)\|\cdot\|\phi(y)\|}\cdot \frac{1}{Temp}, \tag{8}\]
where \(Temp\) refers to the temperature hyper-parameter [69], and \(\phi\) is a learnable mapping function (_e.g._, an MLP projection head [24]) that can substantially improve the learned representations.
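A minimal sketch of the combined objective \(\mathcal{L}_{q}=\mathcal{L}_{q-s}+\mathcal{L}_{q-f}\) (our own, assuming PyTorch; the projection head \(\phi\) is folded into the inputs and the temperature is a plain argument):

```python
import torch
import torch.nn.functional as F

def nce(sim_pos, sim_neg):
    # -log( sum_p e^{s_p} / sum_n e^{s_n} ), as in Eqs. 6-7.
    return torch.logsumexp(sim_neg, 0) - torch.logsumexp(sim_pos, 0)

def lac_contrastive_loss(q_s, pos_s, neg_s, q_f, pos_f, neg_f, temp=0.07):
    # Sequence space (Eq. 6): q_s [C], pos_s [P, C], neg_s [N, C].
    q_s, pos_s, neg_s = (F.normalize(x, dim=-1) for x in (q_s, pos_s, neg_s))
    l_s = nce(pos_s @ q_s / temp, neg_s @ q_s / temp)
    # Frame space (Eq. 7): q_f [T, C], pos_f [P, T, C], neg_f [N, T, C];
    # cosine similarities are summed over the frames tau before exponentiating.
    q_f, pos_f, neg_f = (F.normalize(x, dim=-1) for x in (q_f, pos_f, neg_f))
    s_pos = (pos_f * q_f).sum(dim=(-1, -2)) / temp     # [P]
    s_neg = (neg_f * q_f).sum(dim=(-1, -2)) / temp     # [N]
    return l_s + nce(s_pos, s_neg)
```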
Transfer-Learning for Action Segmentation: To transfer the visual encoder to downstream tasks, we attach \(\mathrm{E_{Vf}}\) to a fully-connected layer followed by a Softmax layer to predict per-frame actions. The output size of each fully-connected layer depends on the number of action classes (see Tab. 1). Then, we re-train the visual encoder \(\mathrm{E_{V}}\) with action labels. To process long sequences, we adopt a sliding window to extract features for each temporal segment and use a Binary Cross-Entropy loss to optimize the visual encoder step by step. In this way, \(\mathrm{E_{V}}\) can be re-trained end-to-end instead of pre-extracting features for all frames. At inference, we combine the predictions of all temporal sliding windows in an online manner [39].
**Charades** [53] is a real-world dataset containing fine-grained activities similar to TSU. It provides only raw video clips, without skeleton data. In this work, we use the 2D skeleton data (2D coordinates) estimated by the toolbox of [71]. We report _per-frame_ mAP on the localization setting of the dataset. For the sake of reproducibility, we will release the estimated skeleton data for Charades.
**PKU-MMD** [12] is a basic untrimmed video dataset recorded in a laboratory setting. We use only the official 3D skeleton data. As this dataset is not densely labeled, we report the _event-based_ mAP for fair comparison, applying a post-processing step [42] to the frame-level predictions to obtain the action boundaries.
**Mixamo** [30] is a 3D animation collection containing elementary actions and various dancing moves. We use this synthetic dataset for training and evaluating the generation module of LAC prior to contrastive learning on Posetics.
### Evaluation on Temporal Action Segmentation
In this section, we evaluate the transfer ability of LAC by both _linear evaluation_ (_i.e._, training only the fully-connected layer while keeping the backbone frozen) and _fine-tuning evaluation_ (_i.e._, refining the whole network) on the three action segmentation datasets TSU, PKU-MMD and Charades, with self-supervised pre-training on Posetics. We also report results with supervised pre-training for reference (_i.e._, we use the generated composable skeletons and the combined action labels for pre-training).
**Linear Evaluation:** Tab. 5 (top) shows the linear evaluation results on the three datasets. This protocol evaluates the effectiveness of transfer-learning with few trainable parameters (only the classifier is trained) compared to training directly on the target datasets from scratch (random initialization). The results suggest that the weights of the model can be well pre-trained without action labels, providing strong transfer ability (_e.g._, +10.4% on TSU CS and +6.6% on Charades), and that the pre-trained visual encoder is generic enough to extract meaningful action features from skeleton sequences.
**Fine-tuning:** Tab. 5 (bottom) shows the fine-tuning results, where the whole network is re-trained. The self-supervised pre-trained model performs competitively compared to supervised pre-trained models. From these results we conclude that collecting a large-scale trimmed skeleton dataset, without the need for action annotations, can be beneficial to downstream fine-grained tasks on untrimmed videos (_e.g._, +5.9% on TSU CS and +11.8% on CV).
**Training with fewer labels:** In many real-world applications, labeled data may be scarce, which makes it challenging to train models with good performance. To evaluate LAC in such cases, we transfer the visual encoder pre-trained on Posetics onto all the tested datasets by fine-tuning with only 5% and 10% of the labeled data. As shown in Tab. 4, without pre-training, the accuracy of the visual encoder [73] decreases significantly. In contrast, LAC, with its prior action representation learning, achieves good performance on all three datasets in this setting.
**Comparison with SoTA:** We compare our fine-tuning results to other SoTA skeleton-based approaches [26, 46, 16, 39, 34, 12, 10, 35] on the real-world datasets TSU and Charades (see Tab. 2) and on the laboratory dataset PKU-MMD (see Tab. 3). As previous approaches are based on supervised pre-training on large-scale datasets [40, 7], we also report our supervised results. The results in Tab. 2 show that LAC, even with self-supervised pre-training, outperforms all previous skeleton-based approaches [26, 46, 16] with supervised pre-training on our main target real-world datasets by a
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Pre-training**} & \multirow{2}{*}{**Training data**} & \multicolumn{2}{c}{**Toyota Smarthome Untrimmed**} & \multicolumn{2}{c}{**PKU-MMD** (IoU=0.1)} & \multicolumn{2}{c}{**Charades**} \\ & & CS(\%) & CV(\%) & CS(\%) & CV(\%) & mAP(\%) \\ \hline \hline Random init. [73] & Scratch & 5\% & 8.5 & 6.8 & 57.4 & 59.5 & 8.8 \\
**Self-supervised** & Posetics w/o labels & 5\% & **25.2** & **15.6** & **73.9** & **75.4** & **12.6** \\ \hline Random init. [73] & Scratch & 10\% & 12.9 & 9.5 & 66.4 & 68.1 & 9.3 \\
**Self-supervised** & Posetics w/o labels & 10\% & **29.0** & **17.9** & **79.8** & **81.1** & **17.4** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Transfer learning results by **fine-tuning** on all benchmarks of Toyota Smarthome Untrimmed, PKU-MMD and Charades with randomly selected **5% (top)** and **10% (bottom)** of labeled training data.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{**Pre-training**} & \multicolumn{2}{c}{**Toyota Smarthome Untrimmed**} & \multicolumn{2}{c}{**PKU-MMD** (IoU=0.1)} & \multicolumn{2}{c}{**Charades**} \\ & & \#Params & CS(\%) & CV(\%) & \#Params & CS(\%) & CV(\%) & \#Params & mAP(\%) \\ \hline \hline Random init. & Scratch & 13.1K & 8.1 & 6.9 & 13.3K & 11.8 & 12.4 & 40.2K & 6.1 \\ Supervised & Posetics w/ labels & 13.1K & 20.8 & 18.3 & 13.3K & 61.8 & 62.4 & 40.2K & 14.3 \\
**Self-supervised** & Posetics w/o labels & 13.1K & **18.5** & **16.6** & 13.3K & **55.2** & **58.8** & 40.2K & **12.7** \\ \hline Random init. & Scratch & 3.45M & 28.2 & 11.0 & 3.45M & 86.5 & 92.9 & 3.45M & 18.6 \\ Supervised & Posetics w/ labels & 3.45M & 36.8 & 23.1 & 3.45M & 92.6 & 94.6 & 3.45M & 25.6 \\
**Self-supervised** & Posetics w/o labels & 3.45M & **34.1** & **22.8** & 3.45M & **91.8** & **93.9** & 3.45M & **22.3** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Transfer-learning results by **linear evaluation (top)** and **fine-tuning (bottom)** on Toyota Smarthome Untrimmed, PKU-MMD and Charades with self-supervised pre-training on Posetics. Results with supervised pre-training are also reported for reference.
large margin (_e.g._, +7.4% on TSU CS and +12.5% on Charades). This suggests that composable motions are important for increasing the expressive power of the visual representation and that end-to-end fine-tuning benefits downstream tasks. Even though PKU-MMD does not contain composable actions, the performance is still slightly improved by learning a fine-grained skeleton representation. The results using RGB data are also reported for reference. The TSU and Charades datasets contain many object-oriented actions that are difficult to identify using skeleton data only. However, even in the absence of object information, LAC surprisingly achieves better accuracy than all SoTA RGB-based methods [15, 16, 14, 46, 13]. We deduce that training the visual encoder end-to-end is more effective than two-step processing. Moreover, skeletons can always be combined with RGB data through multi-modal fusion networks [13, 17] to further improve performance.
### Evaluation on Action Generation
As the generative model with LAD represents our main novelty for addressing the action segmentation challenges, we evaluate here the generation quality of LAC.
Quantitative Comparison: The generation model of LAC is trained on the Mixamo dataset to acquire an action composition ability before the contrastive learning. We compare motion retargeting accuracy on this dataset. Specifically, we randomly split training and test sets on this dataset and follow the same setting and protocol described in [3, 74]. We first explore how many directions (_i.e._, the values of \(J\) and \(K\)) are required in the proposed action dictionary \(\mathbf{D}_{v}\). We empirically test four different values of \(J\) from 16 to 144. From the results reported in Tab. 6, we observe that when using 128 directions (out of all \(dim\)=160 directions) for 'Motion', the model achieves the best reconstruction accuracy and outperforms SoTA methods [62, 3, 74]. Hence, we set \(J\)=128 and \(K\)=32 for all other experiments.
main target fine-grained dataset TSU, with self-supervised pre-training and fine-tuning protocol.
Impact of Action Composition: We start from a baseline model [73] pre-trained on the trimmed dataset (_i.e._, Posetics) with a general contrastive learning strategy [28], without composable motions or frame-level contrast, for action segmentation. The results in Tab. 7 (see L0) suggest that, without learning a composable action representation, the visual encoder has a weak capability to extract features from an untrimmed skeleton sequence. We then perform the self-supervised training on Posetics (in the video space only) with composable motions built from different numbers of source sequences. As daily living videos contain on average two co-occurring actions [16], combining motions from two skeleton sequences in the pre-training stage can significantly improve the representation ability of the visual encoder and generalize better to real-world untrimmed action segmentation tasks (see Tab. 7 L1). This number can simply be changed to adapt to different target datasets.
Impact of Frame-wise Contrast: To validate that frame-wise contrastive learning can further improve fine-grained action segmentation, we additionally maximize the per-frame similarity between positive samples. We also test different uniform temporal sampling rates to reduce redundant computational cost instead of using all frames. The results in Tab. 7 L2 suggest that frame-wise contrast with uniform sampling of every 4 frames is the most effective for improving action segmentation accuracy.
### Further Discussion
Transfer Learning vs. Self Pre-training: Our target is to train a generic skeleton encoder that can fit different downstream tasks. Hence, similar to current RGB-based methods using large-scale datasets such as Kinetics [7, 6] for pre-training, our model is pre-trained on the large-scale Posetics dataset to learn a generic skeleton representation. Such a representation can be transferred onto different downstream tasks without the need for individual pre-training, which is a very effective practice for action segmentation models. To demonstrate the advantage of transfer-learning and to further compare LAC with SoTA methods [46, 16], Tab. 8 reports results with self pre-training, _i.e._, _solely self-supervised pre-training of the encoder on the tested dataset_ (on TSU, PKU-MMD mAP@0.1 and Charades) using the proposed contrastive module, without additional data and without action labels. The results show that, even without extra training data, LAC still outperforms previous models [46, 16], as in the second stage LAC adopts end-to-end fine-tuning to refine the visual encoder, which is more effective than temporal modeling on pre-extracted features [46, 16]. Moreover, current untrimmed datasets are not large enough and the generated actions have less diversity, so the representation ability of the skeleton encoder is less impressive than with pre-training on Posetics.
## 5 Conclusion
In this work, we present LAC, a novel self-supervised action representation learning framework for skeleton-based action segmentation. We show that high-level motions of skeleton sequences can be learned and linearly combined using an orthogonal basis in the latent space. Moreover, we augment a contrastive learning module, applied to the generated composable skeleton sequences, to better extract frame-level features. Our experimental analysis confirms that a skeleton visual encoder extracting such representations can boost downstream action segmentation tasks. Future work will extend our generative approach to RGB videos, in order to better capture object information, which can be crucial and complementary to the skeleton-based model.
Acknowledgements: This work was supported by Toyota Motor Europe (TME) and the French government, through the 3IA Côte d'Azur Investments in the Future project managed by the National Research Agency (ANR) with the reference number ANR-19-P3IA-0002.
|
2310.11632 | Intertwined fractional quantum anomalous Hall states and charge density
waves | Motivated by the recent experimental breakthrough on the observation of the
fractional quantum anomalous Hall (FQAH) effects in semiconductor and graphene
moir\'{e} materials, we explore the rich physics associated with the
coexistence of FQAH effect and the charge density wave (CDW) order that
spontaneously breaks the translation symmetry. We refer to a state with both
properties as "FQAH-crystal". We show that the interplay between FQAH effect
and CDW can lead to a rich phase diagram including multiple topological phases
and topological quantum phase transitions at the same moir\'e filling. In
particular, we demonstrate the possibility of direct quantum phase transitions
from a FQAH-crystal with Hall conductivity $\sigma_H = - 2/3$ to a trivial CDW
insulator with $\sigma_H = 0$, and more interestingly, to a QAH-crystal with
$\sigma_H= -1$. | Xue-Yang Song, Chao-Ming Jian, Liang Fu, Cenke Xu | 2023-10-17T23:51:02Z | http://arxiv.org/abs/2310.11632v2 | # Intertwined fractional quantum anomalous Hall states and charge density waves
###### Abstract
Motivated by the recent experimental breakthrough on the observation of the fractional quantum anomalous Hall (FQAH) effects in semiconductor and graphene moire materials, we explore the rich physics associated with the coexistence of FQAH effect and the charge density wave (CDW) order that spontaneously breaks the translation symmetry. We refer to a state with both properties as "FQAH-crystal". We show that the interplay between FQAH effect and CDW can lead to a rich phase diagram including multiple topological phases and topological quantum phase transitions at the same moire filling. In particular, we demonstrate the possibility of direct quantum phase transitions from a FQAH-crystal with Hall conductivity \(\sigma_{H}=-2/3\) to a trivial CDW insulator with \(\sigma_{H}=0\), and more interestingly, to a QAH-crystal with \(\sigma_{H}=-1\).
## I Introduction
The recent advance in the fabrication and control of two-dimensional (2D) van der Waals heterostructures has enabled the development of moire superlattices that feature tunable mini-bands. Topologically non-trivial mini-bands in moire materials provide an ideal avenue to search for topological states of matter. As an example, evidence of the fractional quantum Hall effect in a fractionally filled moire band was observed in twisted bilayer graphene under magnetic fields of \(\sim 5\) T or higher [1; 2]. More recently, thermodynamic and transport measurements revealed the fractional quantum anomalous Hall (FQAH) effect at zero magnetic field in twisted TMD homobilayers [3; 4; 5; 6] and in the rhombohedral penta-layer graphene/hBN superlattice [7].
The discovery of FQAH effect points towards a fertile ground for studying strong interaction effect in topological moire bands of 2D materials. Twisted TMD homobilayers feature spin-valley-locked moire bands with opposite Chern numbers in the two valleys [8; 9; 10]. At finite carrier density, Coulomb interaction drives spontaneous valley polarization, and FQAH effect is anticipated at fractional fillings of the valley-polarized Chern band [11; 10; 12]. Interaction induced FQAH states in Chern bands are also known as (zero-field) fractional Chern insulators in the literature [13; 14; 15; 16; 17]. The highly tunable nature of moire systems, with abundant tuning knobs such as twist angle, displacement field, electrostatic doping and gate screening, offers a large parameter space to explore FQAH states and proximate phases [18; 19; 20].
In this work, we explore the rich physics of a state with coexisting FQAH effect and CDW order (a state that we refer to as a FQAH-crystal), and its proximate phases that occur at the same moire band filling under zero magnetic field. Our study is motivated by the observed phase transition at hole filling \(\nu=-2/3\) in twisted bilayer MoTe\({}_{2}\) under a displacement field, from an FQAH state with quantized Hall conductance \(\sigma_{H}=-2/3\) (in units of \(e^{2}/h\)) to an insulating state [5; 6]. We consider a scenario in which the FQAH state spontaneously breaks the translational symmetry of the moire lattice. We call this state a FQAH-crystal, analogous to the notion of "Hall crystal" introduced in Ref. [31]. Our consideration of the FQAH-crystal is partly motivated by the recent numerical finding of a softened magneto-roton gap in \(\sigma_{H}=-2/3\) FQAH states [20], which suggests incipient CDW order with a tripled \(\sqrt{3}\times\sqrt{3}\) moire unit cell (as illustrated in Fig. 1) at experimentally relevant twist angles.
We show by field theory analysis that a variety of strongly correlated phases can be found in the vicinity of FQAH-crystal. These include a trivial CDW insulator with \(\sigma_{H}=0\) and a \(\sigma_{H}=-1\) QAH state with CDW order, which we call QAH-crystal. While the QAH-crystal phase has been proposed in moire systems under the name of topological charge density wave [26; 32; 33], its connection to FQAH physics [34; 35; 36] and phase transitions have received little attention before. All the phases considered in this work possess the same type of CDW order, but are distinguished by their different topological properties.
We further show that direct and (potentially) continuous phase transitions between these topologically distinct phases are theoretically allowed. Interestingly, these phase transitions can be described by (2+1)D quantum electrodynamics (QED) with a Chern-Simons (CS) term for the U(1) gauge field coupled to either fermionic or bosonic charges. In our theory, the transition from the FQAH-crystal with \(\sigma_{H}=-2/3\) to the trivial CDW insulator is described by a fermionic QED with two flavors of Dirac fermions at low energy coupled to a U(1) gauge field with a CS term at level 1/2. On the other hand, its transition to a QAH-crystal with \(\sigma_{H}=-1\) is described by either bosonic or fermionic QEDs. These two descriptions are _dual_ to each other based on the boson-fermion duality web that was actively discussed in recent years [37; 38; 39; 40].
We note that direct transitions between a standard FQAH state (without any spontaneous symmetry breaking) and exotic CDW states, including ones with topological order, were studied in Ref. [28], whereas our work starts from a FQAH-crystal (with \(\sigma_{H}=-2/3\) and CDW order), and obtains different proximate phases. In particular, we highlight the possibility of a QAH-crystal (with \(\sigma_{H}=-1\) and CDW order) as a proximate phase, and a direct phase transition between the FQAH-crystal and the QAH-crystal at \(\nu=-2/3\).
## II Phases at \(\nu=-2/3\)
The most prominent state observed experimentally in the homobilayer TMD moire system is the \(\sigma_{H}=\pm 2/3\) FQAH state at hole filling \(n_{h}=2/3\), i.e., at a hole density of \(2/3\) per moire unit cell. Since this state is shown by magnetic circular dichroism measurements to be fully spin/valley polarized, in the following we consider a spinless electron system at charge density \(\nu=-2/3\).
Throughout the discussion below we postulate the presence of a charge density wave (CDW) that triples the unit cell (examples are shown in Fig. 1), such that the holes are at integer filling with respect to the enlarged moire unit cell. We will show that under tripling of the unit cell, it is natural to construct phases with Hall conductivity \(\sigma_{H}=-2/3\), \(-1\), and \(0\) in a unified formalism. Furthermore, there can be direct and (potentially) continuous quantum phase transitions between any two of the states mentioned above, though a direct transition between the \(\sigma_{H}=-2/3\) state and the trivial insulator with \(\sigma_{H}=0\) requires certain discrete space-time symmetries.
For the purpose of constructing these phases and describing their properties, it is convenient to use the standard parton construction. One can formally write the hole operator as \(c=\Phi f\), where the bosonic parton \(\Phi\) carries the physical electric charge, and the charge-neutral parton \(f\) is a fermion. The electric charge can actually be assigned arbitrarily between \(\Phi\) and \(f\), which should not change the final physics. The parton construction formally enlarges the Hilbert space of the holes, which can be remedied by coupling \(\Phi\) and \(f\) both to an internal dynamical U(1) gauge field \(a\), with charge \(\pm 1\) respectively. The dynamical U(1) gauge field enforces a local constraint which equates the local density of \(f\) to that of \(\Phi\). The physical state of holes is obtained by enforcing the relation of hole density to that of the partons, i.e., \(\nu_{h}=\nu_{\Phi}=\nu_{f}=2/3\) with respect to the original moire unit cell. Importantly, in the presence of a CDW order that triples the unit cell, both the holes and partons are at integer fillings with respect to the enlarged unit cell.
### Phases tuned by fermionic parton \(f\)
In the following we will construct a series of states by making \(\Phi\) a bosonic fractional quantum Hall state with Hall conductivity \(-1/2\). Each state can also be equivalently constructed employing the composite fermion picture through vortex attachment. As is well known in the context of Landau level systems, composite fermions experience a modified residual magnetic field, and prominent fractional quantum Hall states are formed at integer filling of composite fermion Landau levels [41]. As we show later, in (moire) lattice systems, the mean-field state of composite fermions allows much richer possibilities, leading to a series of new states [42].
In this construction, the fermionic parton \(f\) fills two low energy bands in the folded BZ with Chern numbers \(+1,+1\). As we mentioned previously, the \(\sigma_{H}=-2/3\) state constructed here has the FQAH effect as well as spontaneous translation symmetry breaking, which we refer to as a FQAH-crystal. Later, we will demonstrate with a composite fermion construction that the existence of the \(\sigma_{H}=-2/3\) state itself does not require broken translation symmetry. However, all of the nearby states within our formalism must break the translation symmetry. This observation motivates us to focus on the scenario where the translation symmetry is already broken in the \(\sigma_{H}=-2/3\) state.
Here we would like to demonstrate that the parton construction given above is natural for holes at filling \(2/3\) of the moire unit cell. We note that the flux \(\phi_{\Phi}\) per moire unit cell felt by the parton \(\Phi\) is not necessarily equal to the physical flux \(\phi_{h}\) seen by the holes, due to the internal gauge field \(a\) coupled to both \(\Phi\) and \(f\). In the continuum, the total fluxes seen by the hole and the partons should in general obey the relation \(\phi_{h}=\phi_{\Phi}+\phi_{f}\). In twisted semiconductor bilayers [4; 9; 8] and other continuum systems [44] where holes fill a valley polarized Chern band, despite being at zero magnetic field, the holes experience an effective flux \(\phi_{h}=-1\) per moire unit cell produced by the periodic skyrmion spin (or layer pseudospin) texture in real space [45], and we can further set \(\phi_{\Phi}=-4/3\) to allow \(\Phi\) to form a Laughlin \(\nu=-1/2\) state. This leaves \(\phi_{f}=1/3\) and \(\nu_{f}=2/3\), so the fermionic partons naturally fill two Landau levels, equivalent to filling two bands with Chern number \(+1\). In fact, with the postulated CDW order that triples the unit cell, \(f\) feels \(\tilde{\phi}_{f}=1\) at a filling \(\tilde{\nu}_{f}=2\) per enlarged unit cell, and hence \(f\) can naturally form an insulator with total Chern number \(C=2\).
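The flux counting in this paragraph can be summarized with exact rational arithmetic (a quick self-check, our own sketch):

```python
from fractions import Fraction

nu      = Fraction(2, 3)    # filling shared by the holes and both partons
phi_h   = Fraction(-1)      # effective flux per moire unit cell seen by holes
phi_Phi = Fraction(-4, 3)   # chosen flux seen by the boson Phi
phi_f   = phi_h - phi_Phi   # flux left for the fermion f

print(nu / phi_Phi)  # -1/2 : Phi sits at Laughlin filling nu = -1/2
print(phi_f)         #  1/3
print(nu / phi_f)    #  2   : f fills two Landau levels (total Chern number 2)
```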
In terms of the Chern-Simons theory, this state corresponds to the following Lagrangian
\[{\cal L} = -\frac{2}{4\pi}b\wedge db+\frac{1}{2\pi}b\wedge da-\frac{1}{2\pi}a\wedge d(a_{1}+a_{2}) \tag{1}\] \[+ \sum_{i=1,2}\frac{1}{4\pi}a_{i}\wedge da_{i}.\]
The gauge field \(b\) is the "dual" of the current of the bosonic parton \(\Phi\); \(a_{1}\) and \(a_{2}\) are the duals of the currents of the fermionic parton \(f\), which fills the two Chern bands with Chern numbers \((+1,+1)\); \(a\) is the gauge field that couples to both \(\Phi\) and \(f\). The CS Lagrangian Eq. 1 can also be written in a more compact form using the \(K-\)matrix [46]:
\[{\cal L}=\frac{1}{4\pi}K_{2/3,IJ}a^{I}\wedge da^{J}, \tag{2}\]
where
\[K_{2/3}=\begin{pmatrix}-2&0&0&1\\ 0&1&0&-1\\ 0&0&1&-1\\ 1&-1&-1&0\end{pmatrix}, \tag{3}\]
and \(a^{I}=(b,a_{1},a_{2},a)\). We will hereafter abbreviate \(a\wedge db\) as \(adb\) without loss of clarity. The topological ground state degeneracy is given by the determinant of \(K_{2/3}\), which in this case is 3. To derive the Hall conductivity of this state, we need to couple \(a^{I}\) to the external electromagnetic field \(A\) through a "charge vector" [46]. In the current construction, the charge vector is \(v=(1,0,0,0)\), meaning that only the bosonic parton \(\Phi\) carries electric charge \(+1\). By integrating out all the dynamical gauge fields \(a^{I}\), one can show that the total Hall conductivity of the state is \(\sigma_{H}=vK^{-1}v^{T}=-2/3\).
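The quoted topological data can be checked mechanically; below is a minimal numerical sketch (our own, assuming numpy) evaluating the ground state degeneracy \(|\det K_{2/3}|\) and \(\sigma_{H}=vK^{-1}v^{T}\):

```python
import numpy as np

K = np.array([[-2,  0,  0,  1],
              [ 0,  1,  0, -1],
              [ 0,  0,  1, -1],
              [ 1, -1, -1,  0]])
v = np.array([1, 0, 0, 0])   # charge vector: only the boson Phi is charged

print(round(abs(np.linalg.det(K))))   # 3 -> threefold topological degeneracy
print(v @ np.linalg.inv(K) @ v)       # -0.666... -> sigma_H = -2/3
```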
An alternative picture is that the \(2/3\) state can be viewed as holes at filling 1, forming a \(\nu=-1\) integer quantum Hall state, together with electrons at filling 1/3, forming a \(\nu=1/3\) Laughlin state. The \(K-\)matrix for this construction is
\[K_{2/3}^{\prime}=\begin{pmatrix}-1&0\\ 0&3\end{pmatrix} \tag{4}\]
with the charge vector \(v=(1,-1)\). The first diagonal element of \(K_{2/3}^{\prime}\) describes the \(\nu=-1\) quantum hall state and the second diagonal element describes \(\nu=1/3\) Laughlin state. The \(K-\)matrix in Eq. 3 is related to the \(K^{\prime}-\)matrix in Eq. 4 (up to 2 extra fields that describe a trivial, neutral sector) by a similarity transformation in \(SL(4,Z)\):
\[W^{T}K_{2/3}W=\begin{pmatrix}-1&0&0&0\\ 0&3&0&0\\ 0&0&0&-1\\ 0&0&-1&0\end{pmatrix},\] \[W=\begin{pmatrix}-1&1&1&0\\ -1&2&0&0\\ 0&-1&0&0\\ -2&2&1&-1\end{pmatrix}. \tag{5}\]
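This similarity transformation is easy to verify numerically (our own sketch, continuing from the arrays above):

```python
import numpy as np

K = np.array([[-2, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, -1], [1, -1, -1, 0]])
W = np.array([[-1, 1, 1, 0], [-1, 2, 0, 0], [0, -1, 0, 0], [-2, 2, 1, -1]])

print(W.T @ K @ W)                   # block-diagonal: nu=-1 IQH, nu=1/3 Laughlin,
                                     # plus a trivial neutral (0,-1; -1,0) pair
print(round(abs(np.linalg.det(W))))  # 1 -> W is an integer unimodular matrix
```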
The third way of constructing the \(2/3\) state is through composite fermions (CF) and flux attachment. As we mentioned before, when the holes fill a valley polarized Chern band, the physics is topologically equivalent to an integer quantum Hall state where a hole sees \(\phi_{h}=-1\) magnetic flux quantum through each moire unit cell. A composite fermion is then constructed by binding the hole with 2 flux quanta of a gauge field \(a\), i.e. the composite fermions see the total gauge flux \(\phi_{cf}=\phi_{h}+2\rho_{cf}\). When the hole density is \(2/3\) per moire unit cell, the density of \(\phi_{cf}\) is \(1/3\) flux quantum per moire unit cell. Hence the composite fermions naturally fill two Landau levels of \(\phi_{cf}\) and form an integer quantum Hall state with composite fermion Hall conductivity \(\sigma_{cf}=2\). When an extra flux density \(\delta\phi_{h}\) is inserted into the system, the composite fermions see \(\delta\phi_{cf}=\delta\phi_{h}+2\delta\rho_{cf}\) and accumulate a density \(\delta\rho_{cf}=\sigma_{cf}\delta\phi_{cf}\); the total Hall conductivity is the ratio between the extra composite fermion density and the extra magnetic flux density: \(\sigma_{H}=\delta\rho_{cf}/\delta\phi_{h}=\sigma_{cf}/(1-2\sigma_{cf})=-2/3\). This composite fermion construction for the \(2/3\) state does not break translation symmetry. The composite fermion picture for understanding the observed FQAH states in twisted
bilayer MoTe\({}_{2}\) is strongly supported by recent numerical studies [23; 24; 27].
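The same counting, \(\sigma_{H}=C_{cf}/(1-2C_{cf})\), reproduces all three phases of interest at once (a one-line check, our own sketch):

```python
from fractions import Fraction

for C_cf in (2, 1, 0):
    print(C_cf, Fraction(C_cf, 1 - 2 * C_cf))   # 2 -> -2/3, 1 -> -1, 0 -> 0
```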
To formally implement this flux attachment [47], we introduce a noncompact gauge field \(b\) whose charge is the flux of \(a\). We will demonstrate that a U(1)\({}_{-2}\) CS term for \(b\) attaches 2 units of flux of \(a\) to the composite fermion, which also carries charge under the gauge field combination \(a+A\). The Lagrangian of all the fields mentioned above reads:
\[{\cal L}=-\frac{2}{4\pi}bdb+\frac{1}{2\pi}bda+{\cal L}_{CF}[\psi,a+A], \tag{6}\]
where the mutual CS term between \(b,a\) implies that the flux of \(a\) is charged under \(b\), and the last term of Eq. 6 is the CF Lagrangian capturing the physics that the CF (\(\psi\)) is coupled to \(a+A\).
The equations of motion with respect to \(b\) and \(a\) lead to the following relations:
\[\frac{\delta{\cal L}}{\delta b_{0}}=0 \rightarrow \frac{da}{2\pi}=\frac{2db}{2\pi},\] \[\frac{\delta{\cal L}}{\delta a_{0}}=0 \rightarrow \rho_{cf}=\frac{db}{2\pi}. \tag{7}\]
Combining the equations we obtain the relation \(2\rho_{cf}=\frac{da}{2\pi}\), which corresponds to the picture of flux attachment: each CF is bound with two flux quanta of gauge field \(a\).
When the CF fermion \(\psi\) fills Chern bands with total Chern number \(C_{cf}\) (an integer), we need to introduce \(|C_{cf}|\) copies of gauge fields \(a_{i}\), which are dual to the current of the CFs:
\[{\cal L}_{CF}[\psi,a+A]={\rm sgn}(C_{cf})\sum_{i=1}^{|C_{cf}|}\left(\frac{1}{4 \pi}a_{i}da_{i}+\frac{1}{2\pi}a_{i}d(a+A)\right), \tag{8}\]
where each self-CS term of \(a_{i}\) describes the CF filling a complete Landau level (or equivalently a Chern band with Chern number 1). The flux current of \(a_{i}\), which is the dual of the CF current, couples to \(a+A\).
For \(C_{cf}=2\), after combining Eq. 6 and Eq. 8 we eventually arrive at _exactly the same_ \(K-\)matrix as in Eq. 3, albeit with the charge vector \((0,1,1,0)\), which corresponds to shifting the electric charge from the bosonic parton \(\Phi\) to the fermionic parton \(f\); it still leads to \(\sigma_{H}=-2/3\). The charge vector can be transformed to \((1,0,0,0)\) by relabeling \(a\to a-A\). The two formalisms based on partons and CFs thus yield exactly identical \(K-\)matrices and Hall conductivities.
**\(\sigma_{H}=0\) and \(\sigma_{H}=-1\) states:**
To construct a trivial insulator phase with \(\sigma_{H}=0\), we can still fix the bosonic parton at a \(\nu=-1/2\) Laughlin state, and let the fermionic parton \(f\) fill two bands with Chern numbers \(+1,-1\) respectively. The \(K\) matrix of this state is similar to Eq. 3, with the diagonal component \(K_{33}\) changed to \(-1\). This change will lead to the Hall conductivity \(\sigma_{H}=0\), without any topological degeneracy.
An integer QAH state with \(\sigma_{H}=-1\) and coexisting CDW order (referred to as the QAH-crystal state) can be constructed by removing the row and column of the \(K\) matrix that involve the second band of the fermionic parton, meaning that the fermionic parton \(f\) now fills bands with total Chern number \(+1\).
In the composite fermion picture, when the CF forms a \(\nu=+1\) quantum Hall state, inserting one flux quantum of \(\phi_{h}\) is accompanied by \(-2\) units of flux of \(a\), so the total extra flux \(\phi_{cf}\) is \(-1\) unit and \(-1\) charge of CF is accumulated, i.e. the CFs form a \(\nu=-1\) state with respect to \(\phi_{h}\). This state eventually corresponds to the \(\sigma_{H}=-1\) state. The \(K-\)matrix can be deduced similarly and is equivalent to that obtained from the parton construction.
The trivial insulator with \(\sigma_{H}=0\) corresponds to the CF forming a trivial insulator, which leads to a trivial electromagnetic response. The \(K-\)matrices of the three states constructed in this subsection are summarized in Table 1.
**Translation breaking enforced by filling:** Importantly, when the translation symmetry of the moire superlattice is preserved, it is not possible (at least within CF mean-field theory) to have a direct transition from a state that fills CF bands with Chern number \(C_{cf}=2\) and correspondingly Hall conductivity \(\sigma_{H}=-2/3\), to states with \(C_{cf}=1,0\) and \(\sigma_{H}=-1,0\). Since the effective field seen by the CFs is \(\phi_{cf}=1/3\) flux quanta through each moire plaquette, the CFs obey the magnetic translation algebra \(T_{1}T_{2}T_{1}^{-1}T_{2}^{-1}=e^{\mathrm{i}2\pi/3}\) for the two elementary translations \(T_{1,2}\) that enclose a moire unit cell. The mean-field spectrum of the CFs will be 3-fold degenerate, as the magnetic translation algebra admits representations of minimal dimension 3. [48]
Therefore the translation symmetry guarantees that the change of the Chern number \(\Delta C_{cf}\) across a transition must be a multiple of 3, as it is given by the number of gapless Dirac cones in the spectrum. Hence, if the transitions \(\sigma_{H}=-2/3\rightarrow-1\) and \(\sigma_{H}=-2/3\to 0\) are described by changing the band Chern number of the composite fermions, they can only occur when the translation symmetry of the moire lattice is broken, i.e. when there is a background charge density wave order. In particular, the simplest CDW scenario is to triple the unit cell, rendering the magnetic translation trivial and permitting direct transitions with \(\Delta C_{cf}=-1,-2\), etc. In the next section, we explicitly demonstrate direct transitions from the \(\sigma_{H}=-2/3\) FQAH-crystal to either the \(\sigma_{H}=0\) CDW insulator or the \(\sigma_{H}=-1\) QAH-crystal.
### Phases tuned by bosonic parton \(\Phi\)
In all three states constructed in the last subsection, the bosonic parton \(\Phi\) forms a \(\nu=-1/2\) bosonic Laughlin state. Starting from the FQAH-crystal state with \(\sigma_{H}=-2/3\), two more states can be constructed by changing the physics of the bosonic parton \(\Phi\). These states/phases are summarized in the global phase diagram, Fig. 3.
One such state is an insulator without any Hall conductivity; however, it has a neutral topological order and neutral chiral edge states, leading to a quantized thermal Hall effect. This quantum thermal Hall insulator can be obtained from the FQAH-crystal by driving the bosonic parton \(\Phi\) into a trivial insulator. When \(\Phi\) is a trivial insulator, there is no nontrivial response to the external electromagnetic field, as \(\Phi\) is the parton that carries the electric charge. However, this state must still have a nontrivial topological order, as its \(K\) matrix corresponds to Eq. 3 after removing the components that involve the gauge field \(b\). The determinant of the remaining \(3\times 3\) \(K\) matrix is 2, and it is equivalent to a simple semion topological order. The semion topological order can also be revealed by integrating out \(a_{1}\) and \(a_{2}\), which yields a level-2 CS term for the gauge field \(a\). This semion topological order with zero Hall conductivity is one of the states discussed in Ref. [28].
The other state is a QAH-crystal state with Hall conductivity \(\sigma_{H}=+2\). This state can be constructed from the FQAH-crystal by driving the bosonic parton \(\Phi\) into a "superfluid" state. In the condensate of \(\Phi\), the hole operator \(c\) is identified with \(f\), and since \(f\) fills two bands with Chern number \(+1\), this leads to a QAH-crystal state with Hall conductivity \(\sigma_{H}=2\). A QAH state without topological order is possible because we assumed a background CDW that triples the unit cell.
## III Quantum phase transitions
So far, we have constructed five different states centered around the "2/3" state, in a phase diagram tuned by the mean-field physics of the bosonic parton \(\Phi\) and the fermionic parton \(f\). These states can also be equally well constructed through other formalisms, including composite fermions and flux attachment. In this section, we discuss the quantum phase transitions between these states. Here we stress that the FQAH state we start from is in fact a FQAH-crystal, while the starting point of Ref. [28] was a FQAH state without spontaneous translation symmetry breaking; hence different proximate phases and phase transitions are obtained in the two papers. For example, in our case the most natural \(\sigma_{H}=0\) insulator next to the \(\sigma_{H}=-2/3\) state is a trivial insulator with CDW order, while in Ref. [28]
Figure 3: A schematic global phase diagram in terms of the parton construction, tuned by both physics of bosonic parton \(\Phi\) (vertical direction) and fermionic parton \(f\) (horizontal directions) starting from the FQAH-crystal. We discuss interesting critical theories among the phases shown in Sec. III.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Phase & CF & Parton \(f\) (\(\Phi\) in Laughlin \(-\frac{1}{2}\) state) & Electrons + holes (\(\nu=-1\) IQH) & K matrix \\ \hline \hline \(\sigma_{H}=-\frac{2}{3}\) & \(C_{cf}=2\) & \(C_{f}=2\) & Electrons in \(\nu=\frac{1}{3}\) Laughlin & \(\left(\begin{array}{cccc}-2&0&0&1\\ 0&1&0&-1\\ 0&0&1&-1\\ 1&-1&-1&0\end{array}\right)\simeq\left(\begin{array}{cccc}-1&0&0&0\\ 0&3&0&0\\ 0&0&0&-1\\ 0&0&-1&0\end{array}\right)\) \\ \hline \(\sigma_{H}=0\) & \(C_{cf}=0\) & \(C_{f}=0\) & Electrons in \(\nu=1\) IQH & \(\left(\begin{array}{cccc}-2&0&0&1\\ 0&1&0&-1\\ 0&0&-1&-1\\ 1&-1&-1&0\end{array}\right)\simeq\left(\begin{array}{cccc}-1&0&0&0\\ 0&1&0&0\\ 0&0&0&-1\\ 0&0&-1&0\end{array}\right)\) \\ \hline \(\sigma_{H}=-1\) & \(C_{cf}=1\) & \(C_{f}=1\) & Electrons in trivial insulator & \(\left(\begin{array}{ccc}-2&0&1\\ 0&1&-1\\ 1&-1&0\end{array}\right)\simeq\left(\begin{array}{ccc}-1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right)\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the 3 formalisms that describe the three states with \(\sigma_{H}=-2/3,-1,0\) respectively. When the composite fermions (CF) fill Chern bands with total Chern number \(C_{cf}\), the physical Hall conductivity is \(\sigma_{H}=\frac{C_{cf}}{1-2C_{cf}}\).
the \(\sigma_{H}=0\) state has a topological order that would lead to nontrivial thermal Hall signal.
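As a quick numerical check on Table 1, the Hall response and the torus ground-state degeneracy can be read off directly from a \(K\) matrix via \(\sigma_{H}=t^{T}K^{-1}t\) and \(|\det K|\), with \(t\) the vector of electromagnetic charges. The sketch below does this in Python; the basis ordering \((b,a_{1},a_{2},a)\) and the charge vector \(t=(1,0,\ldots,0)\) (only \(b\) couples to \(A\), via the \(\frac{1}{2\pi}Adb\) couplings used throughout) are our reading of the construction, not data stated in the table.

```python
import numpy as np

# K matrices copied from the left-hand forms in Table 1; assumed basis
# ordering (b, a1, a2, a) with charge vector t = (1, 0, ..., 0).
cases = {
    "-2/3": [[-2, 0, 0, 1], [0, 1, 0, -1], [0, 0, 1, -1], [1, -1, -1, 0]],
    "0":    [[-2, 0, 0, 1], [0, 1, 0, -1], [0, 0, -1, -1], [1, -1, -1, 0]],
    "-1":   [[-2, 0, 1], [0, 1, -1], [1, -1, 0]],
}
for label, K in cases.items():
    K = np.array(K, dtype=float)
    t = np.zeros(len(K)); t[0] = 1.0
    sigma_H = t @ np.linalg.solve(K, t)        # sigma_H = t^T K^{-1} t
    degeneracy = abs(round(np.linalg.det(K)))  # torus ground-state degeneracy
    print(label, round(sigma_H, 4), degeneracy)
# prints: -2/3 -0.6667 3,  then  0 0.0 1,  then  -1 -1.0 1
```

The outputs reproduce the three Hall conductivities in the table, and the \(|\det K|\) values \((3,1,1)\) agree with the equivalent forms listed there, e.g. the three-fold topological degeneracy of the \(\sigma_{H}=-2/3\) state.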
### \(\sigma_{H}=-2/3\to 0\) transition
To drive a transition between the FQAH-crystal state with \(\sigma_{H}=-2/3\) and a trivial CDW insulator with \(\sigma_{H}=0\), we can keep the bosonic parton \(\Phi\) in the \(\nu=-1/2\) state unchanged, and only change the total Chern number of the fermionic parton bands from \(C=2\) to \(C=0\), which can be realized by changing one of the occupied bands from \(C=+1\) to \(C=-1\). If there is a direct transition between these two states, it must involve two Dirac fermions at low energy. The complete critical theory reads:
\[{\cal L}_{1-2} = \sum_{i=1,2}\bar{\psi}_{i}\gamma\cdot({\rm i}\partial-a)\psi_{i}+m\bar{\psi}\psi+\frac{1}{2\pi}ad(\alpha-b) \tag{9}\] \[+\frac{1}{4\pi}\alpha d\alpha-\frac{2}{4\pi}bdb+\frac{1}{2\pi}Adb,\]
where \(\alpha,b\) are the dual fields associated with the filled \(C=1\) band of \(f\) (hence the \(U(1)_{1}\) for \(\alpha\)) and the boson \(\Phi\) currents, respectively. Two Dirac fermions must exist at low energy at the transition for one \(f\)-band to change from \(C=1\) (\(m>0\)) to \(C=-1\) (\(m<0\)).
The theory Eq. 9 can be simplified at the cost of losing the proper quantization of the CS terms. Integrating out \(\alpha\) generates \(-1/(4\pi)ada\), and integrating out \(b\) generates \(1/(8\pi)(A-a)d(A-a)\). The simplified theory then reads:
\[{\cal L}_{2;-\frac{1}{2}} = \sum_{i=1,2}\bar{\psi}_{i}\gamma\cdot({\rm i}\partial-a)\psi_{i} +m\bar{\psi}\psi-\frac{1}{8\pi}ada \tag{10}\] \[- \frac{1}{4\pi}adA+\frac{1}{8\pi}AdA.\]
We use the notation \({\cal L}_{N_{f};k}\) to label the QED Lagrangian with \(N_{f}\) flavors of Dirac fermions and a Chern-Simons term at level \(k\).
The degeneracy of the two Dirac fermions can be guaranteed by extra discrete space-time symmetries. In the absence of a displacement field, the entire homobilayer twisted TMD moire system has a \(C_{2y}\) symmetry, a two-fold rotation along the vertical axis in Fig. 1, as well as a time-reversal symmetry \({\cal T}\). Both symmetries exchange the two valleys. Hence, each valley of the system holds a composite symmetry of \(C_{2y}{\cal T}\). This composite symmetry sends \((k_{x},k_{y})\rightarrow(k_{x},-k_{y})\). The degeneracy of the two Dirac fermions that is needed for a direct "\(\sigma_{H}=-2/3\rightarrow\sigma_{H}=0\)" transition in our set-up depends on the type of CDW order. For example, if the CDW is a stripe order along the \(y\) direction with modulation along the \(x\) direction as shown in Fig. 2, the two Dirac points could still be located at \((k_{x},k_{y})\) and \((k_{x},-k_{y})\) in the BZ, and their degeneracy is still protected by the \(C_{2y}{\cal T}\) symmetry. In contrast, if there is a \(\sqrt{3}\times\sqrt{3}\) CDW order with \(C_{3}\) symmetry as shown in Fig. 1, there is no scenario where \(C_{3}\) and \(C_{2y}{\cal T}\) together protect two and only two degenerate Dirac cones. Here we note that an out-of-plane displacement field in principle breaks the \(C_{2y}\) symmetry as it exchanges the two layers; hence, under a displacement field the transition may split into two.
Although the microscopic symmetry \(C_{2y}\) of the system may be broken by a displacement field, extra effective symmetries may still exist in the physics of the moire minibands. For example, there is an extra discrete symmetry of the Hamiltonian that describes one valley of the system [8], which is a composite of \(R_{x}:y\rightarrow-y\) and a "time-reversal" that acts on this one-valley Hamiltonian. This symmetry still exists in the presence of the displacement field. If this symmetry is a good approximate symmetry of the moire miniband, it can also protect the degeneracy of two Dirac points and a direct transition of changing Chern number by 2, as was observed in model studies in Refs. [8] and [20].
An alternative description for the same transition from the \(\sigma_{H}=-2/3\) state to the \(\sigma_{H}=0\) state can be constructed using the "electron-hole picture": The \(\sigma_{H}=-2/3\) state can be viewed as a composition of holes at the \(\nu=-1\) IQH state, and electrons in the Laughlin \(\nu=1/3\) state. To drive a transition to the \(\sigma_{H}=0\) state we need a transition of the electrons from the \(\nu=1/3\) state to the \(\nu=1\) IQH state. The critical theory reads:
\[{\cal L}_{1-2;eh} = \sum_{i=1,2}\bar{\psi}_{i}\gamma\cdot({\rm i}\partial-a)\psi_{i}+m\bar{\psi}\psi-\frac{1}{2\pi}ad\beta \tag{11}\] \[+ \frac{2}{4\pi}\beta d\beta+\frac{1}{2\pi}Ad\beta+\frac{1}{4\pi}AdA,\]
The last term \(+\frac{1}{4\pi}AdA\) accounts for the hole \(\nu=-1\) state that stays unchanged throughout the transition. The rest of the Lagrangian describes the transition of the electrons from the \(\nu=1/3\) state to a \(\nu=1\) IQH state. When the 2 Dirac cones are gapped out by a mass, integrating out the \(\psi\)'s gives
\[{\cal L}_{m} = \frac{-{\rm sgn}(m)}{4\pi}ada-\frac{1}{2\pi}ad\beta+\frac{2}{4 \pi}\beta d\beta \tag{12}\] \[+ \frac{1}{2\pi}Ad\beta+\frac{1}{4\pi}AdA.\]
It is straightforward to verify that for \(m>0\) (\(m<0\)), the theory describes states with Hall responses \(\sigma_{H}=-2/3\) (\(\sigma_{H}=0\)) respectively. Starting with Eq. 11, the theory simplifies again to \({\cal L}_{2;-\frac{1}{2}}\) after integrating out \(\beta\).
### \(\sigma_{H}=-2/3\rightarrow-1\) transition
From the parton picture, this transition involves changing the fermion \(f\)'s state from an integer quantum Hall state with \(\nu=+1\) to \(\nu=0\). This transition can be described by QED with one Dirac fermion in the infrared; the critical theory reads:
\[{\cal L}_{1-3} = \bar{\psi}\gamma\cdot({\rm i}\partial-a)\psi+m\bar{\psi}\psi- \frac{1}{8\pi}ada+\frac{1}{2\pi}ad(\alpha-b)\]
\[+ \frac{1}{4\pi}\alpha d\alpha-\frac{2}{4\pi}bdb+\frac{1}{2\pi}Adb,\]
We have added \(-\frac{1}{8\pi}ada\) to properly regularize a single Dirac cone; it arises from another massive Dirac fermion that must exist in the same band as \(\psi\). Here, the single Dirac fermion is written in the convention that, by changing the sign of \(m\), \(\psi\) would generate a level \(\pm 1/2\) CS term for \(a\). Integrating out \(b\) and \(\alpha\), we obtain a simplified theory (again at the cost of not properly quantizing the CS term)
\[{\cal L}_{1;-1} = \bar{\psi}\gamma\cdot({\rm i}\partial-a)\psi+m\bar{\psi}\psi- \frac{1}{4\pi}ada\] \[+ \frac{1}{8\pi}AdA-\frac{1}{4\pi}adA.\]
Similarly, when the Dirac cone is gapped out by a _positive_ mass term \(m\bar{\psi}\psi\), integrating out the \(\psi\)'s gives
\[{\cal L}_{m>0}=-\frac{3}{8\pi}ada+\frac{1}{8\pi}AdA-\frac{1}{4\pi}adA, \tag{13}\]
which generates Hall conductivity \(\sigma_{H}=-2/3\), while for \(m<0\) one has
\[{\cal L}_{m<0}=-\frac{1}{8\pi}ada+\frac{1}{8\pi}AdA-\frac{1}{4\pi}adA, \tag{14}\]
and integrating out \(a\) leaves \(\frac{1}{4\pi}AdA\), describing a state with \(\sigma_{H}=-1\).
From standard boson-fermion duality [37; 38], the critical theory \({\cal L}_{1;-1}\) is dual to
\[{\cal L}_{1;-1} \leftrightarrow |(\partial-{\rm i}\beta)\phi|^{2}+\frac{1}{4\pi}\beta d\beta+ \frac{1}{2\pi}ad\beta-\frac{1}{8\pi}ada\] \[+ \frac{1}{8\pi}AdA-\frac{1}{4\pi}adA.\]
Here \(\beta\) is another gauge field that couples to the dual bosonic field \(\phi\). One can verify that the massive and condensed phases of \(\phi\) correspond to \({\cal L}_{m>(<)0}\) respectively, yielding \(\sigma_{H}=-2/3,-1\). One can also directly perform the duality transformation from Eq. 12.
Integrating out \(a\) in the dual bosonic theory leaves
\[{\cal L}_{1;-1}\leftrightarrow|(\partial-{\rm i}\beta)\phi|^{2}+\frac{3}{4\pi}\beta d\beta-\frac{1}{2\pi}\beta dA+\frac{1}{4\pi}AdA. \tag{15}\]
This Chern-Simons-matter theory with \(\phi\) coupled to a U(1) gauge field with a level-3 CS term is the standard theory that describes a transition between a trivial insulator and a fractional quantum Hall state with three-fold topological degeneracy [49]. Combined with the last term \(\frac{1}{4\pi}AdA\), which corresponds to an extra \(\nu=-1\) IQH layer, the theory describes a transition between states with \(\sigma_{H}=-2/3\) and \(\sigma_{H}=-1\). This FQAH-crystal to QAH-crystal transition also admits another description in terms of bosonic partons, which is a modified version of the FQAH to QAH+CDW transition discussed in Ref. [28], driven by the condensation of 3 'vortex' fields coupled to a U(1)\({}_{3}\) Chern-Simons term. It is worth noting that this modified vortex condensation theory takes the same form as Eq. 15 [50].
### \(\sigma_{H}=-2/3\rightarrow+2\) transition
Another potentially direct transition is between states 1 and 5, i.e. a transition from an FQAH-crystal state with \(\sigma_{H}=-2/3\) to a QAH-crystal state with \(\sigma_{H}=+2\). In the parton construction, this requires changing the state of \(\Phi\) from a \(\nu=-1/2\) Laughlin state to a "superfluid" state. This transition of \(\Phi\) was discussed in Refs. [47] and [51], and it is described by QED with two flavors of Dirac fermions and a CS term at level \(-1\). In our notation, the critical theory of \(\Phi\) is described by a Lagrangian \({\cal L}_{2;-1}\), and the Dirac fermions are charges of the gauge field \(b\), i.e. the dual of the current of \(\Phi\). To describe the transition between states 1 and 5, we need to couple \(b\) to several other gauge fields \(a\), \(a_{i}\) as in Eq. (1). After integrating out the \(a\) and \(a_{i}\), we arrive at the critical theory between states 1 and 5:
\[{\cal L}_{1-5}=\tilde{\cal L}_{2;-1/2}=\] \[\sum_{i=1,2}\bar{\chi}_{i}\gamma\cdot({\rm i}\partial-b)\chi_{i}+ m\bar{\chi}\chi-\frac{1}{8\pi}bdb+\frac{1}{2\pi}Adb. \tag{16}\]
We note that here the Dirac fermion \(\chi_{i}\) is different from the fermions in the previous sections, as it is charged under \(b\) (hence the critical theory \(\tilde{\cal L}_{2;-1/2}\) is different from \({\cal L}_{2;-1/2}\) in Eq. (10), with a different charge assignment). The degeneracy of the two Dirac cones is again protected by \(C_{2y}{\cal T}\) for a stripe order.
Integrating out the fermion \(\chi_{i}\), we obtain the following action
\[{\cal L}_{m}=-\frac{{\rm sgn}(m)}{4\pi}bdb-\frac{1}{8\pi}bdb+ \frac{1}{2\pi}Adb. \tag{17}\]
It is straightforward to verify that, for \(m>0\) (\(m<0\)), the final Hall conductivity is \(\sigma_{H}=-2/3\) (\(\sigma_{H}=+2\)).
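The same "integrate out a gauge field" step recurs throughout this section, and it reduces to one line of algebra: for \(\frac{k}{4\pi}ada+\frac{t}{2\pi}adA+\frac{k_{A}}{4\pi}AdA\), completing the square leaves an \(AdA\) term at level \(k_{A}-t^{2}/k\). A minimal sketch with exact fractions follows (the function name is ours); the sign dictionary is taken from the text itself, where a leftover \(+\frac{1}{4\pi}AdA\) is read as \(\sigma_{H}=-1\) (cf. the discussion below Eq. (14)), so we report minus the induced level.

```python
from fractions import Fraction as F

def induced_level(k, t, kA=F(0)):
    """Level of the (1/4pi) A dA term left after integrating out a from
    (k/4pi) a da + (t/2pi) a dA + (kA/4pi) A dA."""
    return kA - t * t / k

# Eq. (17): k = -sgn(m) - 1/2 and t = 1 for the field b.
for sgn_m in (+1, -1):
    print(sgn_m, -induced_level(F(-sgn_m) - F(1, 2), F(1)))
# prints -2/3 for m > 0 and 2 for m < 0, matching the text; Eqs. (13)-(14)
# check the same way with k = -3/2 (resp. -1/2), t = -1/2, kA = 1/2.
```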
### Transitions involving quantum thermal Hall insulator
Ref. [28] proposed a proximate insulating phase of the \(\sigma_{H}=-2/3\) FQAH state to be one with vanishing Hall response and a neutral topological order (TO) described by the U(1)\({}_{2}\) CS terms. In the current framework, this state can be obtained by putting the bosonic parton \(\Phi\) in a Mott insulator state. Formally, this amounts to eliminating the dual \(b\) of the boson current, i.e. setting \(b=0\), in the construction of the 2/3 state in Eq. (1). Physically, it means that the bosonic sectors are trivially gapped at low energy. Integrating out \(a\) then sets \(a_{1}=-a_{2}\), and one gets a U(1)\({}_{2}\) CS coupling of an internal gauge field, describing the neutral TO, signified nevertheless by a quantized thermal Hall response.
The transition from the quantum thermal Hall insulator to the \(\sigma_{H}=-2/3\) FQAH-crystal or the \(\sigma_{H}=+2\) QAH-crystal can hence be obtained by tuning \(\Phi\) out of the Mott phase to a Laughlin \(-1/2\) state (for state 1), or a superfluid (for state 5), respectively.
The Mott to Laughlin \(-1/2\) transition is realized by condensing the "vortices" of the bosonic partons in the Laughlin \(-1/2\) state [28]. The critical theory describing the transition from state \(1\) to state \(4\) hence reads
\[\mathcal{L}_{1-4}=|(\partial-\mathrm{i}b)\Phi_{v}|^{2}-\frac{2}{4\pi}bdb+\frac{1}{2\pi}Adb\] \[+\frac{1}{2\pi}ad(b-a_{1}-a_{2})+\sum_{i=1,2}\frac{1}{4\pi}a_{i}da_{i}. \tag{18}\]
The first term describes the condensation of the vortices \(\Phi_{v}\), which is indicated by its coupling to \(b\), whose flux equals the boson density. As \(\Phi_{v}\) condenses, the field \(b\) acquires a gap and can be ignored. Hence, one arrives at the quantum thermal Hall insulator with \(\sigma_{H}=0\). An insulator phase of \(\Phi_{v}\) just leaves the Lagrangian for the \(2/3\) state Eq. (1). A similar transition was discussed in Ref. [28], albeit with \(3\) vortex fields enforced by the fractional filling \(2/3\).
The transition from quantum thermal Hall insulator to QAH-crystal with \(\sigma_{H}=+2\) is given by the standard boson Mott-superfluid transition,
\[\mathcal{L}_{4-5}=|(\partial-\mathrm{i}a-\mathrm{i}A)\Phi|^{2}- \frac{1}{2\pi}ad(a_{1}+a_{2})+\sum_{i=1,2}\frac{1}{4\pi}a_{i}da_{i}. \tag{19}\]
When \(\Phi\) is gapped, the remaining two terms describe the neutral topological order. The condensation of \(\Phi\) sets \(a=-A\), and the CS terms of the \(a_{i}\)'s then give a Hall conductivity \(\sigma_{H}=2\).
## IV Summary
In this work we discussed the phase diagram centered around an FQAH-crystal state with \(\sigma_{H}=-2/3\) at filling \(-2/3\), motivated by recent experiments. Various phases and phase transitions can be obtained by tuning the physics of the bosonic and fermionic partons, including a direct transition between the \(\sigma_{H}=-2/3\) state and a trivial insulating state with \(\sigma_{H}=0\) observed in recent experiments. Interestingly, we also find a direct transition between the \(\sigma_{H}=-2/3\) FQAH state and a \(\sigma_{H}=-1\) QAH state.
Our formalism and conclusions can easily be generalized to other FQAH states. For example, if the bosonic parton \(\Phi\) forms a \(\nu=-1/p\) Laughlin state (with even integer \(p\)), and the fermionic parton (or composite fermion) \(f\) fills Chern bands with total Chern number \(C_{cf}\), we would end up with an FQAH state with Hall conductivity
\[\sigma_{H}=\frac{C_{cf}}{1-pC_{cf}}. \tag{20}\]
In particular, when \(p=-2,C_{cf}=1\), i.e. the composite fermions fill a Chern band with \(C_{cf}=1\), one constructs a Laughlin state with \(\sigma_{H}=1/3\). When the CFs go through a transition from \(C_{cf}=1\to 0\), the electronic state transitions from \(\sigma_{H}=1/3\to 0\). The critical theory is similar to that of \(\sigma_{H}=-2/3\to-1\) with the Lagrangian \(\mathcal{L}_{1;-1}\) in Eq. (13), the only difference being an extra integer quantum Hall layer described by \(-1/(4\pi)AdA\). The critical theory for the transition hence reads
\[\mathcal{L}_{1/3\to 0}=\mathcal{L}_{1;-1}-\frac{1}{4\pi}AdA. \tag{21}\]
More states and phase transitions can be constructed by changing the "mean field states" of \(\Phi\) and \(f\). This general construction will be useful to understand the growing number of FQAH states [7] observed in this rapidly developing field.
_Acknowledgement_ We thank Xiaodong Xu for insightful discussions. LF thanks related collaborations with Aidan Reddy, Hart Goldman, Nisarga Paul, Ahmed Abouelkomsan and Emil Bergholtz. XYS thanks collaborations and discussions with T. Senthil and Y-H Zhang. XYS is supported by the Gordon and Betty Moore Foundation EPiQS Initiative through Grant No. GBMF8684 at the Massachusetts Institute of Technology. CMJ is supported by a faculty startup grant at Cornell University. LF is supported by the Air Force Office of Scientific Research (AFOSR) under Award No. FA9550-22-1-0432. CX is supported by the Simons foundation through the Simons Investigator program.
|
2305.04937 | A stopping rule for randomly sampling bipartite networks with fixed
degree sequences | Statistical analysis of bipartite networks frequently requires randomly
sampling from the set of all bipartite networks with the same degree sequence
as an observed network. Trade algorithms offer an efficient way to generate
samples of bipartite networks by incrementally `trading' the positions of some
of their edges. However, it is difficult to know how many such trades are
required to ensure that the sample is random. I propose a stopping rule that
focuses on the distance between sampled networks and the observed network, and
stops performing trades when this distribution stabilizes. Analyses demonstrate
that, for over 650 different degree sequences, using this stopping rule ensures
a random sample with a high probability, and that it is practical for use in
empirical applications. | Zachary P. Neal | 2023-05-08T13:27:57Z | http://arxiv.org/abs/2305.04937v5 | # Mixing time for uniform sampling of bipartite graphs with fixed degrees using the trade algorithm
###### Abstract
Uniform sampling of bipartite graphs and hypergraphs with given degree sequences is necessary for building null models to statistically evaluate their topology. Because these graphs can be represented as binary matrices, the problem is equivalent to uniformly sampling \(r\times c\) binary matrices with fixed row and column sums. The trade algorithm, which includes both the curveball and fastball implementations, is the state-of-the-art for performing such sampling. Its mixing time is currently unknown, although \(5r\) is currently used as a heuristic. In this paper we propose a new distribution-based approach that not only provides an estimation of the mixing time, but also actually returns a sample of matrices that are guaranteed (within a user-chosen error tolerance) to be uniformly randomly sampled. In numerical experiments on matrices that vary by size, fill, and row and column sum distributions, we find that the upper bound on mixing time is at least \(10r\), and that it increases as a function of both \(c\) and the fraction of cells containing a \(1\).
Graph, Markov chain, Mixing time, Network, Randomization
## 1 Introduction
Uniform sampling of bipartite graphs and hypergraphs with given degree sequences is necessary for building null models to statistically evaluate their topology [8; 14; 19]. Because these graphs can be represented as binary matrices, the problem is equivalent to uniformly sampling \(r\times c\) binary matrices with fixed row and column sums. This more general problem arises not only in network science [20], but also in physics [24; 26; 2], mathematics [3], psychometrics [27], and ecology [17]. To state the problem formally: Let \(\mathcal{M}\) be the space of all \(r\times c\) binary matrices \(\mathbf{M}_{1}...\mathbf{M}_{n}\) with row sums \(R_{1}...R_{r}\), column sums \(C_{1}...C_{c}\), and fill \(f=\frac{\sum R}{rc}=\frac{\sum C}{rc}\). How can we randomly sample \(\mathbf{M}\in\mathcal{M}\) with uniform probability?
Many different solutions to this problem have been proposed [21], including algorithms that rely on filling [6; 15; 22], swapping [7], sampling [1; 13] and annealing [5] methods. The current state-of-the-art is the 'trade' algorithm, which includes the curveball [25], fastball [16] and parallel I/O-efficient [11] implementations that vary in their computational details. When using the trade algorithm, it is necessary to know how many trades must be performed to ensure that the resulting matrix \(\mathbf{M}\) is uniformly randomly sampled from \(\mathcal{M}\) (i.e. the mixing time). Although the trade algorithm is known to sample uniformly at random [9] and to be rapidly mixing under certain circumstances [12], only rough heuristics for its mixing time exist [25].
In this paper, we propose a distribution-based method for estimating the trade algorithm's mixing time, and for generating samples of matrices that are nearly guaranteed to be uniformly randomly sampled from \(\mathcal{M}\). We use this method to estimate its mixing time under a range of dimensional and distributional conditions. Our numerical results suggest that the upper bound on mixing time is at least
\(10r\), and that it increases as a function of \(c\) and \(f\), but does not depend on the distributions of \(R\) or \(C\). Although a precise upper bound remains unknown, this distribution-based method provides a way of ensuring that each member of a sample generated using the trade algorithm is uniformly randomly sampled from \(\mathcal{M}\).
## 2 Trade algorithm
### Description
The trade algorithm involves taking a random walk of length \(t\) in the state space \(\mathcal{M}\) from an initial state \(\mathbf{M}_{1}\) to an end state \(\mathbf{M}_{t+1}\). In each step of this random walk, 1s and 0s are randomly swapped between two randomly chosen rows, preserving each column's sum and both rows' sums. This swapping process is known as a 'trade' because it mirrors two children trading baseball cards. It improves on earlier swap methods that exchange, for example, a \(\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)\) submatrix with a \(\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\) submatrix, because a single trade can swap many 1s and 0s. The trade algorithm's random walk is a Markov chain that is known to be finite, irreducible, and aperiodic [9], and therefore to converge on the uniform distribution. This means that, given a sufficient number of steps or trades \(t\), \(\mathbf{M}_{t+1}\) is chosen uniformly at random from \(\mathcal{M}\). However, the number of trades that are required - the mixing time of the trade algorithm - remains unknown.
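To make the trade step concrete, here is a minimal Python sketch of one curveball-style trade on a row-set representation of the matrix (each row stored as the set of columns holding a 1); the function name and representation are ours, and this is an illustration of the move, not the optimized fastball implementation:

```python
import random

def trade(rows):
    """One trade: reshuffle 1s between two random rows. `rows` is a list of
    sets of column indices holding a 1; row and column sums are preserved."""
    i, j = random.sample(range(len(rows)), 2)
    shared = rows[i] & rows[j]                     # columns untouched by the trade
    tradable = list((rows[i] | rows[j]) - shared)  # columns where the rows differ
    random.shuffle(tradable)
    keep_i = len(rows[i]) - len(shared)            # preserves row i's sum
    rows[i] = shared | set(tradable[:keep_i])
    rows[j] = shared | set(tradable[keep_i:])
```

Column sums are preserved because each tradable column still receives exactly one 1 between the two rows; only its owner changes.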
Existing implementations of the trade algorithm differ primarily in their time complexity. For example, the original curveball [25] implementation runs in \(O(n\ log\ n)\), while fastball [16] runs in \(O(n)\), and further improvements are possible through parallelization and I/O-efficiency [11]. Despite these differences, all implementations rely on the same Markov chain process for uniformly randomly sampling \(\mathbf{M}\in\mathcal{M}\). We use the fastball implementation to perform the analyses reported below, however the results are general to any implementation of the trade algorithm.
### Prior work on mixing time
Except for very small \(\mathcal{M}\), computing the exact mixing time of the trade algorithm is intractable [9]. Therefore, prior work has focused on estimating the mixing time via numerical experiments measuring the degree of matrix perturbation. Let the perturbation \(p\) between two matrices \(\mathbf{M}_{a}\) and \(\mathbf{M}_{b}\) be \(p(\mathbf{M}_{a},\mathbf{M}_{b})=\frac{\sum\left|\mathbf{M}_{a}-\mathbf{M}_{ b}\right|}{rc}\). That is, the perturbation is the fraction of entries that differ between the two matrices.
To estimate mixing time, [25] computed \(p(\mathbf{M}_{1},\mathbf{M}_{t+1})\) for a range of \(t\), seeking to determine "the number of swap attempts necessary to maximally perturb a matrix." They observed that for a \(100\times 100\) matrix the perturbation reaches a stable maximum on average at \(t=500\), leading them to speculate that the mixing time is approximately \(5r\) and to use this heuristic in an initial implementation called curveball. This finding was independently replicated by [9] using the same methods in \(10\times 10\) and \(100\times 100\) matrices, and has also been used to compare the mixing times of trade and swap algorithms [10].
This prior work has provided some insight into the mixing time of trade algorithms, but it has two notable limitations. First, prior work has focused on identifying the number of trades needed to _maximally_ perturb a matrix. However, the goal is not to maximally perturb a matrix, but rather to perturb a matrix so that it can be regarded as uniformly randomly sampled from \(\mathcal{M}\). In the language of Markov chains, the mixing time is not the number of steps required to reach a state that is maximally distant from an initial state, but instead is the number of steps required to ensure that a random walk has an equal probability of ending at every state in the space. Importantly, although \(\mathcal{M}\) includes elements that are highly perturbed relative to \(\mathbf{M}_{1}\), it also includes elements that are only slightly perturbed, and
indeed includes \(\mathbf{M}_{1}\) itself. Second, prior work has focused on identifying the number of trades \(t\) when \(p(\mathbf{M}_{1},\mathbf{M}_{t+1})\) 'stabilizes,' but no formal test of stability is conducted. Instead, the estimated mixing time \(t\) has been inferred from visual inspection of a line graph plotting \(p\) against \(t\).
## 3 A distribution-based approach to estimating mixing time
To overcome these limitations, we propose a distribution-based approach to estimating mixing time. This approach focuses not on the perturbation \(p\) between two matrices, but instead on the distribution of perturbations between a reference matrix and all members of a set of matrices.
We first define a theoretical distribution. Let \(P(\mathbf{M},\mathcal{M})=p(\mathbf{M},\mathbf{M}_{1})...p(\mathbf{M}, \mathbf{M}_{n})\), that is, the perturbations between an initial state \(\mathbf{M}\) and all states in the state space \(\mathcal{M}\). \(P\) follows some unknown distribution that characterizes \(\mathcal{M}\) with respect to \(\mathbf{M}\), and which reflects the fact that elements of \(\mathcal{M}\) are located at varying distances from \(\mathbf{M}\).
We now define an empirical distribution. Let \(\mathcal{M}_{s}^{t}\) be a set of \(s\) binary matrices \(\mathbf{M}_{1}^{t}...\mathbf{M}_{s}^{t}\) obtained after performing \(t\) trades on a starting matrix \(\mathbf{M}\). Then let \(P_{s}^{t}(\mathbf{M},\mathcal{M}_{s}^{t})=p(\mathbf{M},\mathbf{M}_{1}^{t})... p(\mathbf{M},\mathbf{M}_{s}^{t})\), that is, the perturbations between \(\mathbf{M}\) and each of \(s\) matrices generated from \(\mathbf{M}\) by the trade algorithm after \(t\) trades.
When no trades have been performed, \(P_{s}^{0}=0...0\) because every matrix in \(\mathcal{M}_{s}^{0}\) is identical to \(\mathbf{M}\). After \(t\) trades, \(P_{s}^{t}\) follows some unknown distribution because each \(\mathbf{M}_{s}^{t}\in\mathcal{M}_{s}^{t}\) is located somewhere in \(\mathcal{M}\) at varying distances from \(\mathbf{M}\).
When a sample of \(s\) matrices is uniformly randomly sampled from \(\mathcal{M}\), then \(P_{s}^{t}\sim P\). Of course, the distribution of \(P\) is unknown. However, we can approximate the number of trades needed to achieve this state (i.e. the mixing time) by observing when the distribution of \(P_{s}^{t}\) stabilizes. Moreover, we can formally test its stability by comparing \(P_{s}^{t}\) to \(P_{s}^{t+z}\) using the Kolmogorov-Smirnov (KS) test for the equality of distributions [18, 23], where \(z\) is the required duration of stability.
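The stopping rule then amounts to trading \(s\) copies of \(\mathbf{M}\) in lockstep and testing the perturbation distribution every \(z\) trades. A sketch follows, reusing the `trade` helper above and `scipy.stats.ks_2samp`; the function names are ours, and this is our reading of the procedure rather than the authors' released R code:

```python
import numpy as np
from scipy.stats import ks_2samp

def to_matrix(rows, c):
    M = np.zeros((len(rows), c), dtype=int)
    for i, cols in enumerate(rows):
        M[i, list(cols)] = 1
    return M

def stable_sample(M, s=500, z=None, alpha=0.05, max_t=10**6):
    """Trade s copies of M until P_s^t stops changing (KS test against the
    distribution observed z trades earlier); returns (t, sampled matrices)."""
    r, c = M.shape
    z = z or r                                   # evaluate every z = r trades
    start = [set(np.flatnonzero(row)) for row in M]
    copies = [[set(cols) for cols in start] for _ in range(s)]
    prev, t = None, 0
    while t < max_t:
        for cp in copies:
            for _ in range(z):
                trade(cp)
        t += z
        curr = np.array([np.abs(to_matrix(cp, c) - M).mean() for cp in copies])
        if prev is not None and ks_2samp(prev, curr).pvalue > alpha:
            return t, [to_matrix(cp, c) for cp in copies]
        prev = curr
    raise RuntimeError("perturbation distribution did not stabilize")
```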
Figure 1A offers a small illustration of this approach. We begin with a \(10\times 10\) matrix \(\mathbf{M}\) where
Figure 1: A distribution-based approach to computing mixing time. (A) The distribution of perturbations between a \(10\times 10\) starting matrix and \(500\) matrices generated by the trade algorithm after \(t\) trades. Each \(p\)-value tests whether a distribution is the same as the previous distribution using the Kolmogorov-Smirnov test. (B) The distribution of observed mixing times over \(500\) replications.
\(f=.5\) and both \(R\) and \(C\) are uniformly distributed. The bottom line shows the distribution of \(P_{500}^{10}\), that is, the distribution of perturbations in 500 matrices generated by the trade algorithm after 10 trades. The associated \(p\)-value (\(p<0.001\)) indicates that it is statistically significantly different from the distribution of \(P_{500}^{0}\). The top line shows the distribution of \(P_{500}^{60}\), that is, the distribution of perturbations in 500 matrices generated by the trade algorithm after 60 trades. The associated \(p\)-value shows that it is _not_ statistically significantly different from the distribution of \(P_{500}^{50}\). This means that after 50 trades the distribution of \(P_{s}^{t}\) has stabilized, and therefore that this \(\mathcal{M}_{500}^{50}\) can be regarded as a set of 500 matrices that have been uniformly randomly sampled from \(\mathcal{M}\).
We call this value of \(t=50\) the _observed_ mixing time for these 500 random walks from this **M**. However, starting from a different initial state, or even just taking different random walks from the same initial state, may have a different mixing time. Therefore, estimating the mixing time requires repeating the process illustrated in Figure 1A. Figure 1B shows the distribution of observed mixing times over 500 replications, each time starting from a \(10\times 10\) matrix where \(f=0.5\) and row and column sums are uniformly distributed. We find that for a matrix with these characteristics the most common mixing time is \(t=50\), which matches the current \(5r\) heuristic [9, 25]. However, we also find that the upper bound on this mixing time is substantially larger (\(t=180\)). While identifying the true upper bound may be important for theoretical reasons, for practical purposes of informing the use of the trade algorithm to generate random samples, the \(95^{\text{th}}\) percentile offers a reasonable alternative. Here, we observe that the upper bound on mixing time, when applied to a \(10\times 10\) matrix with \(f=0.5\), is _usually_ (i.e. 95% of the time) \(t=100\).
There are several advantages to this distribution-based approach. First, because any given \(\textbf{M}\in\mathcal{M}\) may be similar or different from the initial state of the trade algorithm, it focuses on the distribution rather than the maximization of perturbation. Second, it offers a formal statistical test of a sample's uniform randomness. Finally, and most practically, it actually returns a sample of matrices that is guaranteed (within the error of the KS test) to be uniformly randomly sampled from \(\mathcal{M}\).
## 4 Numerical results
Figure 2 shows the results of using this distribution-based approach to estimate the upper bound on mixing time for matrices where the row and column sums have varying distributions (panel A), for maximally sparse matrices of varying dimensions (panel B), and for maximally dense matrices of varying dimensions (panel C). The R code necessary to reproduce these findings is available at [https://osf.io/rmwpz/](https://osf.io/rmwpz/).
In each experimental condition, a sample of \(s=500\) matrices is generated using the trade algorithm. This value ensures that the KS test is fully powered to detect cases where the test's null hypothesis (i.e., that two distributions are the same) cannot be rejected at the \(\alpha=0.05\) level, and therefore reduces the risk of type-II error [4]. The distribution of perturbations is evaluated after every \(z=r\) trades. Choosing a lag parameter based on the matrix's size recognizes that larger matrices require more trades to randomize, and therefore reduces the computational cost of estimating mixing time using this approach by reducing the frequency of computing perturbations and performing the KS test. The estimated upper bound on mixing time is the \(95^{\text{th}}\) percentile of observed mixing times across 500 replications.
Figure 2A estimates the upper bound on mixing time for a \(50\times 50\) matrix where \(f=0.1\) and the row and column sums approximately follow one of four beta distributions: left-tailed (\(\beta(10,1)\)), uniform (\(\beta(1,1)\)), constant (\(\beta(10000,10000)\)), or right-tailed (\(\beta(1,10)\)). The estimated values are similar across all conditions, and there is no discernible pattern to the limited variation. This suggests that row and column sum distributions play a limited role in determining the upper bound on the mixing time of the
trade algorithm.
Figure 2B shows the estimated upper bound on mixing time for maximally sparse matrices that vary in size and that contain the fewest number of \(1\)s to guarantee no empty rows or columns. Therefore \(f\) varies from \(0.005\) for a \(200\times 200\) matrix, to \(0.04\) for matrices where \(min(r,c)=25\). Similarly, Figure 2C shows the estimated upper bound on mixing time for maximally dense matrices that vary in size and where \(f=0.5\). It is clear that the upper bound is always lower when \(r\leqslant c\). Because the trade algorithm can simply be applied to a matrix's transpose when \(r>c\), we focus only on cases where \(r\leqslant c\), which lie on or below the diagonal in both panels.
The values on the diagonal in Figure 2B suggest the upper bound on mixing time is at least \(10r\), which is double the heuristic proposed by [25] for a matrix of arbitrary size and fill. Beyond this minimum, it increases as a function of \(c\) and \(f\). The upper bound we observe in these conditions closely follows \(10r^{1+\frac{f}{23}+\frac{1}{25r}}\). This function is almost certainly overfitted and may not describe out-of-sample conditions, but suggests a possible functional form for the upper bound. In the absence of a precise upper bound on mixing time, the distribution-based approach used in these analyses offers a way to generate samples of matrices that are nearly guaranteed to be uniformly randomly sampled from \(\mathcal{M}\).
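Taking the fitted expression at face value (the caveat about overfitting applies, and the function name is ours), it is easy to evaluate; for instance it roughly reproduces the \(10\times 10\), \(f=0.5\) observation from Figure 1B:

```python
def mixing_bound(r, f):
    """Fitted 95th-percentile mixing time, 10 * r**(1 + f/23 + 1/(25*r))."""
    return 10 * r ** (1 + f / 23 + 1 / (25 * r))

print(round(mixing_bound(10, 0.5)))  # ~106, close to the observed t = 100
```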
## 5 Conclusion
The trade algorithm is the state-of-the-art for uniformly randomly sampling bipartite graphs and hypergraphs with fixed degrees, and analogously sampling binary matrices with fixed row and column sums. However, its use in practice has been limited by the fact that its mixing time is unknown. We have proposed a distribution-based approach to studying its mixing time, and used this approach to estimate its upper bound. A series of numerical experiments suggest that the upper bound on mixing time is at least \(10r\), and increases with the number of columns and the fraction of cells containing a \(1\).
Based on these results, we offer three recommendations for using the trade algorithm to uniformly randomly sample graphs or binary matrices. First, if the matrix is long (i.e., \(r>c\)), apply the trade algorithm to its transpose. Second, perform a minimum of \(10r\) trades, but more if the matrix is wide (i.e., \(c>r\)) or not maximally sparse (i.e., \(f>\frac{max(r,c)}{r\times c}\)). Finally, when an appropriate number of trades is unknown, use the distribution-based approach described in section 3 to generate a sample that is nearly guaranteed to be uniformly randomly sampled from \(\mathcal{M}\).
Figure 2: Estimated upper bound on mixing time of the trade algorithm for (A) \(50\times 50\) matrices where \(f=.1\) and the row and column sum have given distributions, (B) differing size matrices containing the minimum number of uniformly-distributed \(1\)s, and (C) differing size matrices where \(f=0.5\) and the row and column sums are uniformly distributed. Each value is the \(95^{\text{th}}\) percentile, over \(500\) replications, of observed mixing times. |
2303.06276 | A generalization of Zhu's theorem on six-valent integer distance graphs | Given a set $S$ of positive integers, the integer distance graph for $S$ has
the set of integers as its vertex set, where two vertices are adjacent if and
only if the absolute value of their difference lies in $S$. In 2002, Zhu
completely determined the chromatic number of integer distance graphs when $S$
has cardinality $3$. Integer distance graphs can be defined equivalently as
Cayley graphs on the group of integers under addition. In a previous paper, the
authors develop general methods to approach the problem of finding chromatic
numbers of Cayley graphs on abelian groups. To each such graph one associates
an integer matrix. In some cases the chromatic number can be determined
directly from the matrix entries. In particular, the authors completely
determine the chromatic number whenever the matrix is of size $3\times 2$ --
precisely the size of the matrices associated to the graphs studied by Zhu. In
this paper, then, we demonstrate that Zhu's theorem can be recovered as a
special case of the authors' previous results. | Jonathan Cervantes, Mike Krebs | 2023-03-11T01:28:36Z | http://arxiv.org/abs/2303.06276v1 | # A generalization of Zhu's theorem on six-valent integer distance graphs
###### Abstract
Given a set \(S\) of positive integers, the integer distance graph for \(S\) has the set of integers as its vertex set, where two vertices are adjacent if and only if the absolute value of their difference lies in \(S\). In 2002, Zhu completely determined the chromatic number of integer distance graphs when \(S\) has cardinality 3. Integer distance graphs can be defined equivalently as Cayley graphs on the group of integers under addition. In a previous paper, the authors develop general methods to approach the problem of finding chromatic numbers of Cayley graphs on abelian groups. To each such graph one associates an integer matrix. In some cases the chromatic number can be determined directly from the matrix entries. In particular, the authors completely determine the chromatic number whenever the matrix is of size \(3\times 2\) -- precisely the size of the matrices associated to the graphs studied by Zhu. In this paper, then, we demonstrate that Zhu's theorem can be recovered as a special case of the authors' previous results.
_Keywords--_ graph, chromatic number, abelian group, Cayley graph, integer distance graph, cube-like graph, Zhu's theorem
## 1 Introduction
An _integer distance graph_ is a Cayley graph on the group \(\mathbb{Z}\) of integers. In other words, given a set \(S\) of positive integers, we form the graph whose vertex set is \(\mathbb{Z}\) such that two vertices \(x\) and \(y\) are adjacent if and only if \(|x-y|\in S\). (We remark that such graphs are sometimes referred to simply as "distance graphs" in the literature, but as the term
"distance graph" can refer more generally to a graph whose vertex set is a metric space with an edge between each pair of points whose distance lies in some fixed set, to avoid ambiguity we use here the term "integer distance graph.")
Chromatic numbers of integer distance graphs have been widely investigated. We refer the reader to [4] for a survey of this subject and an extensive list of references. In particular Zhu, in [5], determines the chromatic number of all integer distance graphs of the form \(\mathrm{Cay}(\mathbb{Z},\{\pm a,\pm b,\pm c\})\). Moreover, Eggleton, Erdos, and Skilton in [3] prove that if an integer distance graph of finite degree admits a proper \(k\)-coloring, then it admits a periodic proper \(k\)-coloring. They obtain an upper bound on the period but point out that it is quite large and very likely can be reduced considerably.
In [1], the authors develop a general method for dealing with chromatic numbers of Cayley graphs of abelian groups. In [2] this method is summarized as follows: "A connected Cayley graph on an abelian group with a finite generating set \(S\) can be represented by its Heuberger matrix, i.e., an integer matrix whose columns generate the group of relations between members of \(S\). In a companion article, the authors lay the foundation for the use of Heuberger matrices to study chromatic numbers of abelian Cayley graphs." The article [2] goes on to describe its main results: "We call the number of rows in the Heuberger matrix the _dimension_, and the number of columns the _rank_. In this paper, we give precise numerical conditions that completely determine the chromatic number in all cases with dimension \(1\); with rank \(1\); and with dimension \(\leq 3\) and rank \(\leq 2\)." Example 2.4 in [1] gives a general formula for a Heuberger matrix associated to an integer distance graph. When the set \(S\) of positive integers has cardinality \(m\), this matrix has size \(m\times(m-1)\). The integer distance graphs in [5] have \(|S|=3\) and thus have Heuberger matrices of dimension \(3\) and rank \(2\). Hence one ought to be able to recover Zhu's theorem from the results of [2].
The purpose of the present paper is to do just that. We briefly discuss the method of proof. We begin with an integer distance graph formed by a set \(S=\{a,b,c\}\) with \(0<a<b<c\). We then take the matrix of [1, Example 2.4] as a starting point. The results of [2] require a matrix in a particular form. In [1] several matrix transformations are detailed which preserve the underlying graph. Via these transformations we morph the starting matrix into the needed form. The next section provides full details.
Moreover, a close examination of the method of proof yields significantly improved upper bounds for the periods of optimal colorings of such graphs.
This article depends heavily on [1] and [2], which we will refer to frequently. The reader should assume that all notation, terminology, and theorems used but not explained here are explained there.
## 2 Zhu's theorem
For positive integers \(a,b,c\) with \(\gcd(a,b,c)=1\), we define the _Zhu \(\{a,b,c\}\) graph_ as \(\mathrm{Cay}(\mathbb{Z},\{\pm a,\pm b,\pm c\})\).
**Theorem 2.1** ([5, Cor. 2.1]).: _Let \(X\) be a Zhu \(\{a,b,c\}\) graph with \(a\leq b\leq c\). Then_
\[\chi(X)=\begin{cases}2&\text{if $a,b,c$ are all odd}\\ 4&\text{if $a=1,b=2$, and $3|c$}\\ 4&\text{if $a+b=c$, and $a\not\equiv b\pmod{3}$}\\ 3&\text{otherwise}.\end{cases}\]
Note that [5] requires \(a\), \(b\), and \(c\) to be distinct, but we do not. By [1, Example 2.16], Theorem 2.1 holds even when two of the integers \(a,b,c\) are equal.
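Because Theorem 2.1 is a finite case analysis, it translates directly into code; a small sketch follows (the function name is ours, and the \(\gcd(a,b,c)=1\) hypothesis from the definition of a Zhu graph is assumed):

```python
from math import gcd

def zhu_chromatic_number(a, b, c):
    """Chromatic number of Cay(Z, {±a, ±b, ±c}) per Theorem 2.1."""
    assert min(a, b, c) > 0 and gcd(gcd(a, b), c) == 1
    a, b, c = sorted((a, b, c))
    if a % 2 == b % 2 == c % 2 == 1:       # a, b, c all odd
        return 2
    if a == 1 and b == 2 and c % 3 == 0:
        return 4
    if a + b == c and a % 3 != b % 3:
        return 4
    return 3
```

For example, `zhu_chromatic_number(1, 2, 3)` returns 4 (both exceptional cases apply), while `zhu_chromatic_number(1, 2, 4)` returns 3.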
The most difficult part of the proof of Theorem 2.1 is showing that the upper bound of \(3\) for \(\chi(X)\) holds in every "otherwise" case. In this section, we furnish an alternate proof of Zhu's theorem in which those \(3\)-colorings arise by pulling back from Heuberger circulants. As we show, we can think about the accompanying graph homomorphisms either in terms of Heuberger matrices or more directly as reduction modulo a carefully chosen integer.
We now sketch a proof of Zhu's theorem using [2, Theorem 2.14]. Let \(a_{1},a_{2},a_{3}\) be nonzero integers with \(\gcd(a_{1},a_{2},a_{3})=1\), and let \(X\) be the Zhu \(\{|a_{1}|,|a_{2}|,|a_{3}|\}\) graph. It is straightforward to show that given any three integers, there are two of them such that either their sum or their difference is divisible by \(3\): if two of them are congruent modulo \(3\), their difference is divisible by \(3\); otherwise their residues modulo \(3\) are \(0\), \(1\), and \(2\), and the pair with residues \(1\) and \(2\) has sum divisible by \(3\). The set \(\{\pm a_{1},\pm a_{2},\pm a_{3}\}\) is unchanged if we either permute \(a_{1},a_{2},a_{3}\) or else replace \(a_{j}\) with \(-a_{j}\). Consequently we may assume without loss of generality that \(3\mid a_{1}+a_{2}\). Moreover, by transposing \(a_{1}\) and \(a_{2}\), and/or replacing \(a_{1}\) and \(a_{2}\) with their negatives, we may assume that \(-a_{1}\leq a_{2}\) and \(|a_{1}|\leq|a_{2}|\). The purpose of these maneuvers is to find a Heuberger matrix for \(X\) in modified Hermite normal form.
Let \(g_{2}=\gcd(a_{1},a_{2})\). Let \(u_{12},u_{22}\in\mathbb{Z}\) such that
\[a_{1}u_{12}+a_{2}u_{22}=a_{3}g_{2} \tag{1}\]
Recall from [1, Example 2.4] that \(X\) is isomorphic to
\[\begin{pmatrix}\frac{a_{2}}{g_{2}}&-u_{12}\\ -\frac{a_{1}}{g_{2}}&-u_{22}\\ 0&g_{2}\end{pmatrix}^{\text{SACG}}\quad\cong M^{\text{SACG}}\quad\text{ where}\quad M=\begin{pmatrix}g_{2}&0\\ -u_{22}&-\frac{a_{1}}{g_{2}}\\ -u_{12}&\frac{a_{2}}{g_{2}}\end{pmatrix}.\]
Observe that \(M\) has no zero rows. Moreover, note that \(X\) does not have loops. It is straightforward to show that the column sums of \(M\) are both even if and only if \(a_{1},a_{2}\), and \(a_{3}\) are all odd, so by [1, Lemma 2.11] we have that \(\chi(X)=2\) if and only if \(a_{1},a_{2}\), and \(a_{3}\) are all odd. Assume now that \(\chi(X)\neq 2\). Thus by [2, Lemma 2.12] and [2, Theorem 2.14] we have that \(\chi(X)\) equals either \(3\) or \(4\). Take \(a,b,c\) so that \(0<a\leq b\leq c\) and \(\{a,b,c\}=\{|a_{1}|,|a_{2}|,|a_{3}|\}\). It remains to show that \(\chi(X)=4\) if and only if either (i) \(a=1,b=2,\text{ and }3|c\), or else (ii) \(a+b=c,\text{ and }a\not\equiv b\pmod{3}\).
If (i) holds, then by [1, Example 2.4] we have that \(X\) is isomorphic to \(\begin{pmatrix}1&0\\ 0&-1\\ 3(c/3)&2\end{pmatrix}^{\text{SACG}}\), whence we have \(\chi(X)=4\) by [2, Theorem 2.14].
If (ii) holds, then either \(a\) or \(b\) or \(c\) must be divisible by \(3\). If, say, \(3\mid b\), then by [1, Example 2.4] we have that \(X\) is isomorphic to \(\begin{pmatrix}1&0\\ -1&a\\ -1&a+3(k-1)\end{pmatrix}^{\text{SACG}}\) with \(k=(b+3)/3\), whence we have \(\chi(X)=4\) by [2, Theorem 2.14]. Similar arguments give us \(\chi(X)=4\) when \(3\mid a\) or \(3\mid c\).
Conversely, suppose that \(\chi(X)=4\), and we will show that either (i) or (ii) is satisfied. To apply [2, Theorem 2.14], we must first put \(M\) in modified Hermite normal form. The matrix \(M\) satisfies all conditions of [2, Def. 2.11] except the last; this can be rectified with help from the division theorem. Let \(q\) and \(r\) be integers such that
\[-u_{22}=q\left(-\frac{a_{1}}{g_{2}}\right)+r,\text{ where }-\left|\frac{a_{1}}{ g_{2}}\right|<r\leq 0. \tag{2}\]
We assume now that \(-\left|\frac{a_{1}}{2g_{2}}\right|\leq r\), and we leave to the reader the other, similar case where this inequality does not hold. Adding \(-q\) times the second column of \(M\) to the first, we obtain the matrix
\[M_{1}=\begin{pmatrix}g_{2}&0\\ r&-\frac{a_{1}}{g_{2}}\\ -u_{12}-\frac{qa_{2}}{g_{2}}&\frac{a_{2}}{g_{2}}\end{pmatrix}.\]
We have that \(X\) is isomorphic to \(M_{1}^{\text{SACG}}\) and that \(M_{1}\) is in modified Hermite normal form. Thus \(M_{1}\) equals one of the six types of matrices listed in the third statement in [2, Theorem 2.14]. We discuss here only the case where
\[M_{1}=\begin{pmatrix}1&0\\ 0&1\\ 3k&1+3k\end{pmatrix}\]
for some positive integer \(k\), and leave the other five cases for the reader. In this case we have \(g_{2}=1\), \(r=0\), \(a_{1}=-1\), \(a_{2}=1+3k\), and \(-u_{12}-qa_{2}=3k\). From (2) we get that \(u_{22}=-q\). So by (1) we get that \(a_{3}=3k\). From this we see that \(a=1\) and \(b=3k\) and \(c=1+3k\), so condition (ii) is met. \(\Box\)
We have natural graph homomorphisms from Zhu graphs to Heuberger circulants given by reducing modulo an appropriate integer. The next lemma recasts these homomorphisms in terms of Heuberger matrices associated to the corresponding standardized abelian Cayley graphs.
**Lemma 2.2**.: _Let \(a,b,c\) be positive integers such that \(\gcd(a,b,c)=1\) and \(b+c\nmid a\). Then \(C_{b+c}(a,b)\) is a Heuberger
circulant graph. Moreover, let \(X\) and \(Y\) be standardized Cayley graphs defined by_
\[\begin{pmatrix}y_{11}&y_{12}\\ y_{21}&y_{22}\\ y_{31}&y_{32}\end{pmatrix}_{X}^{SACG}\text{ and }\begin{pmatrix}y_{11}&y_{12}\\ y_{21}-y_{31}&y_{22}-y_{32}\end{pmatrix}_{Y}^{SACG}.\]
_Suppose we have an isomorphism between the Zhu \(\{a,b,c\}\) graph and \(X\) given by the map \(\varphi_{X}\colon\mathbb{Z}^{3}\to\mathbb{Z}\) defined by \(\varphi_{X}\colon e_{1}\mapsto a,e_{2}\mapsto b,e_{3}\mapsto c\). Then \(\varphi_{Y}\colon\mathbb{Z}^{2}\to\mathbb{Z}_{b+c}\) defined by \(\varphi_{Y}\colon e_{1}\mapsto a,e_{2}\mapsto b\) gives us an isomorphism between \(Y\) and \(C_{b+c}(a,b)\). Furthermore, the following diagram of graph homomorphisms commutes, where \(\tau_{1}\) is defined by \(e_{1}\mapsto e_{1},e_{2}\mapsto e_{2},e_{3}\mapsto-e_{2}\), and \(\tau_{2}\) is defined by reduction modulo \(b+c\)._
\[\begin{array}{ccc}\begin{pmatrix}y_{11}&y_{12}\\ y_{21}&y_{22}\\ y_{31}&y_{32}\end{pmatrix}_{X}^{SACG}&\xrightarrow{\ \tau_{1}\ }&\begin{pmatrix}y_{11}&y_{12}\\ y_{21}-y_{31}&y_{22}-y_{32}\end{pmatrix}_{Y}^{SACG}\\ \downarrow\varphi_{X}&&\downarrow\varphi_{Y}\\ \mathrm{Cay}(\mathbb{Z},\{\pm a,\pm b,\pm c\})&\xrightarrow{\ \tau_{2}\ }&C_{b+c}(a,b)\end{array}\]
Proof.: The conditions \(\gcd(a,b,c)=1\) and \(b+c\nmid a\) guarantee that \(C_{b+c}(a,b)\) meets the criteria of [2, Def. 2.5].
Let \(M_{X}\) and \(M_{Y}\), respectively, be the above matrices defining the graphs \(X\) and \(Y\). Using the fact that the kernel of \(\varphi_{X}\) equals the \(\mathbb{Z}\)-span of the columns of \(M_{X}\), it is then a routine exercise to show that the kernel of \(\varphi_{Y}\) equals the \(\mathbb{Z}\)-span of the columns of \(M_{Y}\), whence it follows that \(\varphi_{Y}\) is an isomorphism.
Finally, we have that \(\tau_{2}\circ\varphi_{X}(e_{j})=\varphi_{Y}\circ\tau_{1}(e_{j})\) for \(j=1,2,3\); hence the diagram is commutative.
In a nutshell: To reduce the Zhu \(\{a,b,c\}\) graph modulo \(b+c\), we subtract the third row from the second row to obtain the Heuberger circulant \(C_{b+c}(a,b)\). Indeed, the proof of Lemma 2.2 generalizes in a similar fashion to any number of variables.
Of course, we can play the same game with any pair of variables in lieu of \(b\) and \(c\). Moreover, we can reduce by \(b-c\) instead of \(b+c\) by adding the two rows instead of subtracting them.
In [3] a _periodic \(k\)-coloring_ of an integer distance graph \(X\) with _period_\(p\) is a \(k\)-coloring \(c\) of \(X\) such that \(c(n)=c(n+p)\) for all \(n\in\mathbb{Z}\). Equivalently, a periodic \(k\)-coloring of an integer distance graph \(X\) with period \(p\) is a pullback, via the map \(n\mapsto\overline{n}\), of a \(k\)-coloring of a circulant graph of order \(p\). It is proved in [3] that if an integer distance graph \(\operatorname{Cay}(\mathbb{Z},S)\), where \(S\) is finite, has chromatic number \(\chi\), then it has a proper periodic \(\chi\)-coloring. That article provides what the authors describe as an "explicit (but weak)" upper bound of \(qk^{q}\) for the smallest period for such colorings, where \(q=\max S\).
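Checking whether a candidate periodic coloring is proper only requires one period, which makes such experiments easy to script; a small sketch (the function name is ours):

```python
def is_proper_periodic(base, dists):
    """Does n -> base[n % len(base)] properly color Cay(Z, {±d : d in dists})?
    By periodicity it suffices to check one period of vertices."""
    p = len(base)
    return all(base[n % p] != base[(n + d) % p] for n in range(p) for d in dists)

# A proper periodic 4-coloring of the Zhu {1, 2, 3} graph with period 4:
assert is_proper_periodic([0, 1, 2, 3], (1, 2, 3))
```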
We now show that for a Zhu \(\{a,b,c\}\) graph, we can indeed improve this upper bound considerably. For we have
just shown that Theorem 2.1 follows from [2, Theorem 2.14]. In the proof of [2, Theorem 2.14], all colorings are constructed via homomorphisms obtained by collapsing two rows by adding or subtracting them. By Lemma 2.2, such a homomorphism corresponds to reduction modulo the sum or difference of two of \(a,b,c\). Thus we have the following proposition.
**Proposition 2.3**.: _Let \(0<a\leq b\leq c\) be integers. Suppose the Zhu \(\{a,b,c\}\) graph has chromatic number \(\chi\). Then it admits a periodic \(\chi\)-coloring with period \(\leq b+c\)._
### Acknowledgments
The authors wish to thank Daphne Liu for her insights into the history of Zhu's theorem.
|
2307.14385 | Mental-LLM: Leveraging Large Language Models for Mental Health
Prediction via Online Text Data | Advances in large language models (LLMs) have empowered a variety of
applications. However, there is still a significant gap in research when it
comes to understanding and enhancing the capabilities of LLMs in the field of
mental health. In this work, we present a comprehensive evaluation of multiple
LLMs on various mental health prediction tasks via online text data, including
Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4. We conduct a broad range of
experiments, covering zero-shot prompting, few-shot prompting, and instruction
fine-tuning. The results indicate a promising yet limited performance of LLMs
with zero-shot and few-shot prompt designs for mental health tasks. More
importantly, our experiments show that instruction finetuning can significantly
boost the performance of LLMs for all tasks simultaneously. Our best-finetuned
models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of
GPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best of
GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the
state-of-the-art task-specific language model. We also conduct an exploratory
case study on LLMs' capability on mental health reasoning tasks, illustrating
the promising capability of certain models such as GPT-4. We summarize our
findings into a set of action guidelines for potential methods to enhance LLMs'
capability for mental health tasks. Meanwhile, we also emphasize the important
limitations before achieving deployability in real-world mental health
settings, such as known racial and gender bias. We highlight the important
ethical risks accompanying this line of research. | Xuhai Xu, Bingsheng Yao, Yuanzhe Dong, Saadia Gabriel, Hong Yu, James Hendler, Marzyeh Ghassemi, Anind K. Dey, Dakuo Wang | 2023-07-26T06:00:50Z | http://arxiv.org/abs/2307.14385v4 | # Mental-LLM: Leveraging Large Language Models for Mental Health Prediction via Online Text Data
###### Abstract
Advances in large language models (LLMs) have empowered a variety of applications. However, there is still a significant gap in research when it comes to understanding and enhancing the capabilities of LLMs in the field of mental health. In this work, we present the first comprehensive evaluation of multiple LLMs, including Alpaca, Alpaca-LoRA, FLAN-T5, GPT-3.5, and GPT-4, on various mental health prediction tasks via online text data. We conduct a broad range of experiments, covering zero-shot prompting, few-shot prompting, and instruction fine-tuning. The results indicate a promising yet limited performance of LLMs with zero-shot and few-shot prompt designs for the mental health tasks. More importantly, our experiments show that instruction finetuning can significantly boost the performance of LLMs for all tasks simultaneously. Our best-finetuned models, Mental-Alpaca and Mental-FLAN-T5, outperform the best prompt design of GPT-3.5 (25 and 15 times bigger) by 10.9% on balanced accuracy and the best of GPT-4 (250 and 150 times bigger) by 4.8%. They further perform on par with the state-of-the-art task-specific language model. We also conduct an exploratory case study on LLMs' capability on mental health reasoning tasks, illustrating the promising capability of certain models such as GPT-4. We summarize our findings into a set of action guidelines for potential methods to enhance LLMs' capability for mental health tasks. Meanwhile, we also emphasize the important limitations before achieving deployability in real-world mental health settings, such as known racial and gender bias. We highlight the important ethical risks accompanying this line of research.
Mental Health, Large Language Model, Instruction Finetuning |
2302.11423 | The inverse Cox-Ingersoll-Ross process for parsimonious financial price
modeling | We propose a formulation to construct new classes of financial price
processes based on the insight that the key variable driving prices $P$ is the
earning-over-price ratio $\gamma \simeq 1/P$, which we refer to as the earning
yield and is analogous to the yield-to-maturity of an equivalent perpetual
bond. This modeling strategy is illustrated with the choice for real-time
$\gamma$ in the form of the Cox-Ingersoll-Ross (CIR) process, which allows us
to derive analytically many stylised facts of financial prices and returns,
such as the power law distribution of returns, transient super-exponential
bubble behavior, and the fat-tailed distribution of prices before bubbles
burst. Our model sheds new light on rationalizing the excess volatility and the
equity premium puzzles. The model is calibrated to five well-known historical
bubbles in the US and China stock markets via a quasi-maximum likelihood method
with the L-BFGS-B optimization algorithm. Using $\phi$-divergence statistics
adapted to models prescribed in terms of stochastic differential equations, we
show the superiority of the CIR process for $\gamma_t$ against three
alternative models. | Li Lin, Didier Sornette | 2023-02-22T15:02:22Z | http://arxiv.org/abs/2302.11423v2 | # A parsimonious inverse Cox-Ingersoll-Ross process for financial price modeling
###### Abstract
We propose a formulation to construct new classes of financial price processes based on the insight that the key variable driving prices \(P\) is the earning-over-price ratio \(\gamma\simeq 1/P\), which we refer to as the earning yield and is analogous to the yield-to-maturity of an equivalent perpetual bond. This modeling strategy is illustrated with the choice for real-time \(\gamma\) in the form of the Cox-Ingersoll-Ross (CIR) process, which allows us to derive analytically many stylised facts of financial prices and returns, such as the power law distribution of returns, transient super-exponential bubble behavior, and the fat-tailed distribution of prices before bubbles burst. Our model sheds new light on rationalizing the excess volatility and the equity premium puzzles. The model is calibrated to five well-known historical bubbles in the US and China stock markets via a quasi-maximum likelihood method with the L-BFGS-B optimization algorithm. Using \(\phi\)-divergence statistics adapted to models prescribed in terms of stochastic differential equations, we show the superiority of the CIR process for \(\gamma_{t}\) against three alternative models.
**Keywords:** asset pricing, financial risks, financial bubbles, excess volatility, fat tail distribution of returns, equity puzzle, earning yield, earning-over-price
## 1 Introduction
We introduce a new approach to understand the properties of stock market prices, inspired by an apparently trivial decomposition
\[P=\frac{E}{\gamma}\ \,\ \ \mbox{with}\ \gamma:=\frac{E}{P}, \tag{1}\]
where \(P\) denotes the price of a given stock, \(E\) is the corresponding current earnings per share (EPS) and we refer to \(\gamma\) as the earning yield, which is also called the earnings-over-price ratio. Note that \(\gamma\) is the inverse of the widely utilised Price-to-Earnings (PE) ratio. Though less popular than the PE ratio, the earning yield is frequently encountered in financial statement analysis, because it is used as a metric for return on investment (ROI). As its name suggests, the earning yield represents how many monetary units (MU) a company earned for 1 MU invested in the company. The term _earning yield_ may cause confusion for investors, as one might expect a company with a high earning yield to be better than one with a low earning yield. In reality, a company with the same earnings but a smaller earning yield is considered to have higher growth potential and is thus priced higher. In fact, the meaning of earning yield can be understood as (1) the relative importance of current earnings in pricing, and (2) the interest rate that a company issuing stock would face if it issued perpetual bonds instead, under the condition of surrendering growth and retention 1. Understanding these two facets can help disambiguate the meaning of earning yield and dispel any confusion.
Footnote 1: Section 2 will provide a more detailed discussion.
From a dynamic perspective, the expression (1) can be interpreted in two different ways, leading to two distinct viewpoints on what dominates. The first interpretation focuses on the fact that the price \(P\) varies linearly with earnings per share (EPS), given \(1/\gamma\), namely \(P\propto E\), while the second interpretation emphasises that the price \(P\) varies inversely to the earning yield \(\gamma\), namely \(P\propto 1/\gamma\). Given that the EPS is in general slowly varying with low volatility, we adopt the second interpretation and argue that the properties of the price process mainly arise from those of the earning yield. Our proposal is that it is more natural and illuminating to model the _real-time_ \(\gamma\), interpreted as the instantaneous market collective beliefs on earning yield, rather than the _real-time_ price \(P\) itself. Our strategy involves using time-dependent continuous stochastic processes to represent real-time \(\gamma\) and we have discovered that, even with simple processes, various empirically observed stock price and return properties can be rationalized, such as the power law distribution of returns, transient bubble behavior, and the fat-tailed distribution of prices before bubbles burst. Moreover, simple real-time \(\gamma\) dynamics provide a straightforward explanation of the excess volatility and the equity premium puzzles.
Indeed, the structure of equation (1) provides a natural route to account for the excess volatility puzzle identified by Shiller and LeRoy for equity markets,
which refers to the fact that stock prices frequently undergo large changes that are difficult to explain from the generally small changes in prospective corporate earnings or dividends (Shiller, 1981; LeRoy and Porter, 1981; Shiller, 1992; Lyons, 2001; James et al., 2012). Translated using the notations of equation (1), \(E\) moves slowly with a much smaller variance than that of \(P\). This mechanically implies that the interesting or relevant price dynamics lies in the denominator \(\gamma\). The fact that \(\gamma\) is small in general (of the order of a few percent) and is in the denominator provides a natural way to understand structurally the excess volatility puzzle, as relatively small changes (in absolute value) of \(\gamma\) translate into large changes in price \(P\). As a concrete illustration, suppose \(E=10\) monetary units (MU) and \(P=250\) MU, corresponding to \(\gamma=0.04\). Suppose that \(\gamma\) decreases to \(0.03\). Then, the price increases to \(333\), a \(33\%\) change, while \(E\) has remained unchanged. The excess volatility puzzle thus provides a motivation for proposing price processes deriving from those of \(\gamma\). We start from this empirical observation and propose a class of models for the price process of the form (1), so that the large fluctuations of the price can be ascribed to a suitably defined dynamics of real-time \(\gamma\).
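To make the arithmetic of this amplification mechanism fully explicit, here is a minimal numerical illustration in Python, using the hypothetical values just quoted:

```python
# Minimal illustration of the amplification mechanism behind P = E / gamma:
# small absolute moves of the earning yield translate into large relative
# price moves, while the earnings E remain unchanged.
E = 10.0                                   # earnings per share (MU), held fixed
for gamma_old, gamma_new in [(0.04, 0.03), (0.04, 0.05)]:
    P_old, P_new = E / gamma_old, E / gamma_new
    print(f"gamma: {gamma_old} -> {gamma_new} | "
          f"P: {P_old:.0f} -> {P_new:.0f} ({100 * (P_new / P_old - 1):+.1f}%)")
```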
The main model we consider assumes that the real-time \(\gamma\) is not fixed but fluctuates around some \(\gamma^{*}>0\). We show that this leads to an increased long-term return for the investor, which is approximately proportional to the variance of real-time \(\gamma\). Again, this is the interplay between being in the denominator of the price equation and being relatively small with non-negligible excursions or fluctuations that provides a possible concise description of the equity premium puzzle (Mehra and Prescott, 1985, 2003; Kocherlakota, 1996), namely the observation that the compensation for holding risky stocks is abnormally large. In particular, our approach makes clear that models assuming constant discount rates tend to underestimate future price growth compared with models that account for fluctuating mean-reverting discount rates.
Stock prices often exhibit sharp rises, followed by crashes and drawdowns, demonstrating strongly nonlinear or abrupt long-term mean-reversal characteristics. In other words, over time scales from years to a few decades, prices can transiently deviate a lot from their very long-term (logarithmic) trends. However, prices eventually return to the very long term trend associated with the growth rate of the underlying economy over very long time scales (MSCI, Inc., 2010). This very long-term mean-reversion property is the result of the combination of successions of faster price regime growth corrected by large drawdowns (Sornette and Cauwels, 2015). This nonlinear mean-reverting nature of stock prices suggests that the dynamics of real-time \(\gamma\) should follow a mean-reversal process. Ensuring positivity is important for real-world applications, and the simplest mean-reversal process that does so is the CIR process (Cox et al., 1985). Thus, as the first serious incarnation of our proposition to capture the complexity of the price dynamics via the earning yield process, we use the CIR process to model the real-time \(\gamma\). This choice is natural when remembering that the earning yield has the structure of an interest rate, and
the CIR process is one of the most popular interest rate models.
The remainder of this paper is structured as follows. Section 2 presents the mathematical structure of the model and discusses the no-arbitrage condition. In Section 3, we derive various properties of our proposed price dynamics, including detailed statistical results on the distributions of prices and returns, ergodicity, emergent risk premium, and the nature of transient super-exponential growth that may describe the phenomenon of bubbles, which are seen as ubiquitous in financial markets. Section 4 presents our empirical analysis of the model. First, we describe the algorithm used to calibrate the model, which is then applied to five well-known historical bubbles in the US and Chinese stock markets. We also make an empirical comparison of our model to different alternative earning yield processes using several distance diagnostic metrics. Finally, Section 6 concludes.
## 2 Price and discount rate processes
### 2.1 From price to earning yield and back
Defining the earning yield by
\[\gamma:=\frac{E}{P}\, \tag{2}\]
by definition, one can write
\[P=\frac{E}{\gamma}. \tag{3}\]
This describes the driving forces acting on the stock price \(P\) in terms of \(E\), the current earnings per share (EPS) and the earning yield \(\gamma\). The variable \(\gamma\) can be interpreted in two ways. In the first interpretation, it represents the weight of current earnings in pricing the stock. In the second interpretation, \(\gamma\) stands for the interest rate that a company would face in a scenario with no growth and no retention.
* _First interpretation_: The earning yield represents the weight of a company's current earnings in the stock price. Investors take into account not only the current earnings but also the future (discounted) earnings, and the current stock price is derived from the combined contributions of both. In valuing a stock, they weigh the importance of current and future earnings. A lower earning yield signifies that investors place more emphasis on future earnings and/or adopt a smaller discount rate when pricing the stock. This understanding of the earning yield can thus rationalize why a lower earning yield indicates a higher growth potential and/or a lower discount rate.
* _Second interpretation_: The earning yield of a stock is the yield to maturity (YTM) derived from viewing the stock as an equivalent perpetual bond and pricing it using the bond pricing methodology under the no-arbitrage
condition. In other words, the earning yield is an _interest-like_ variable. Let us illustrate this point with the following simple example. Let us assume that the company's earnings are expected to grow at a fixed rate \(g\), the retention ratio remains at \(b\) each year, and the cost of capital for the stock is \(r\). With the current EPS being \(E\), the Gordon-Shapiro stock pricing model [Williams, 1938; Gordon and Shapiro, 1956] or the Miller and Modigliani [1961] model both give
\[P=\sum_{t=1}^{\infty}\frac{E(1-b)^{t}(1+g)^{t}}{(1+r)^{t}}=\frac{E}{\frac{1+r} {(1-b)(1+g)}-1}. \tag{4}\]
With the definition of earning yield, we obtain
\[\gamma=\frac{1+r}{(1-b)(1+g)}-1. \tag{5}\]
Thus, a smaller earning yield implies a larger growth potential and/or smaller discount rate. For a perpetual bond that pays the dividend \(E\) per period, if it is issued at par value based on the stock, what should be its yield-to-maturity? According to eq.(4) and noting that the perpetual bond has par value \(E/\gamma\), the answer to the question is obtained by solving the following equation
\[\frac{E}{\gamma}=\frac{E}{1+\text{YTM}}+\frac{E}{(1+\text{YTM})^{2}}+\cdots+ \frac{E}{(1+\text{YTM})^{n}}+\cdots\, \tag{6}\]
whose solution is YTM\(=\gamma\). This analysis does not imply that a stock can be converted into a perpetual bond. Rather, it implies that, under no-arbitrage conditions, stock (equity) can be equated to bond (debt). In line with the well-known Modigliani and Miller [1958] theory, an adjustment of the capital structure by replacing equity with debt will not affect the value of the company (in the absence of taxes, bankruptcy costs, agency costs, and asymmetric information, and in an efficient market). In this simplified framework, the above analysis suggests that the earning yield of a company having growth potential and the cost of debt (yield to maturity, YTM) of a company that uses earnings as interest to repay its debt are essentially two sides of the same coin. Hence, the earning yield can be further understood as the interest rate the company would face when growth and retention are surrendered. (A small numerical check of the closed form (4)-(5) is sketched below.)
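The numerical check announced above follows; the chosen values of \(E\), \(b\), \(g\) and \(r\) are illustrative assumptions, and the truncated discounted sum should converge to the closed form \(E/\gamma\) of eqs. (4)-(5):

```python
# Sanity check of the Gordon-Shapiro closed form (4)-(5) with illustrative
# parameters: the truncated discounted sum should converge to E / gamma.
E, b, g, r = 10.0, 0.3, 0.02, 0.08            # EPS, retention, growth, cost of capital
x = (1 - b) * (1 + g) / (1 + r)               # per-period factor, must be < 1
P_truncated = sum(E * x**t for t in range(1, 10_000))   # truncated eq. (4)
gamma = (1 + r) / ((1 - b) * (1 + g)) - 1               # earning yield, eq. (5)
print(P_truncated, E / gamma)                 # the two values agree closely
```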
In practice, the earning \(E\) typically changes only on a quarterly or monthly basis, while prices can be recorded to fluctuate at the millisecond time scale. Therefore, the real-time \(\gamma\) (denoted by \(\gamma_{t}\) hereafter) calculated using equation (2) on real-time prices \(P_{t}\), cannot be directly regarded as earning yield, but rather as the _instantaneous collective market belief on earning yield_. Intuitively, for the first interpretation, \(\gamma_{t}\) represents the collective belief on the weight of current earnings in pricing the stock.
Let us elaborate further on \(\gamma_{t}\) using the second interpretation. Consider a company whose earnings follow the growth rate sequence \(\{g_{t}^{1},g_{t}^{2},g_{t}^{3},...,g_{t}^{n},...\}\) expected at time \(t\). Let us assume that the earnings are retained according to the retention rate sequence \(\{b_{t}^{1},b_{t}^{2},b_{t}^{3},...,b_{t}^{n},...\}\), and that the required return of capital in each future period is the sequence \(\{r_{t}^{1},r_{t}^{2},r_{t}^{3},...,r_{t}^{n},...\}\). In addition, the earnings per share expected for period \(t+k\) is denoted \(E_{t+k}^{e}\). According to the stock pricing model of Gordon and Shapiro (1956) or to the Modigliani and Miller (1958) model, the stock price is given by
\[P_{t}=\sum_{k=1}^{\infty}\frac{E_{t+k}^{e}-b_{t}^{k}\cdot E_{t+k}^{e}}{\prod_{i=1}^{k}(1+r_{t}^{i})}=\sum_{k=1}^{\infty}\frac{E_{t}\cdot\prod_{i=1}^{k}(1-b_{t}^{i})(1+g_{t}^{i})}{\prod_{i=1}^{k}(1+r_{t}^{i})}=\sum_{k=1}^{\infty}\frac{E_{t}}{\prod_{i=1}^{k}\frac{1+r_{t}^{i}}{(1-b_{t}^{i})(1+g_{t}^{i})}}\.\]
The last equation can be rewritten in the following form,
\[P_{t}=\frac{E_{t}}{1+\gamma_{t}^{1}}+\frac{E_{t}}{(1+\gamma_{t}^{1})(1+\gamma _{t}^{2})}+\frac{E_{t}}{(1+\gamma_{t}^{1})(1+\gamma_{t}^{2})(1+\gamma_{t}^{3}) }+...\, \tag{7}\]
with the recursion structure,
\[\gamma_{t}^{1} =\frac{1+r_{t}^{1}}{(1-b_{t}^{1})(1+g_{t}^{1})}-1,\] \[\gamma_{t}^{2} =\frac{(1+r_{t}^{1})(1+r_{t}^{2})}{(1-b_{t}^{1})(1-b_{t}^{2})(1+g _{t}^{1})(1+g_{t}^{2})}\cdot\frac{1}{1+\gamma_{t}^{1}}-1,\] \[\vdots\] \[\gamma_{t}^{n} =\prod_{i=1}^{n}\frac{1+r_{t}^{i}}{(1-b_{t}^{i})(1+g_{t}^{i})} \prod_{i=1}^{n-1}\frac{1}{1+\gamma_{t}^{i}}-1,\] \[\vdots\]
Given a total number \(n_{s}\) of stock shares, the total value of the company should be \(V_{t}=P_{t}\,n_{s}\). Let us imagine that the company alters its capital structure by exchanging all its equity with debt, i.e., issuing as many bonds as possible to finance its operations, such that the earnings in each period just cover the required debt payout. This amounts to sacrificing growth and not retaining earnings in the future. According to the Modigliani and Miller (1958) theory, the total liability of the company under this new capital structure is equal to the total value of the company before the exchange of equity into debt, which is \(V_{t}=P_{t}\,n_{s}\). Multiplying both sides of eq. (7) by \(n_{s}\), we have,
\[V_{t}=\frac{n_{s}\,E_{t}}{1+\gamma_{t}^{1}}+\frac{n_{s}\,E_{t}}{(1+\gamma_{t}^{ 1})(1+\gamma_{t}^{2})}+\frac{n_{s}\,E_{t}}{(1+\gamma_{t}^{1})(1+\gamma_{t}^{2 })(1+\gamma_{t}^{3})}+.... \tag{8}\]
The condition of no-arbitrage requires that each term in the sum in the r.h.s. of eq.(8) represents the value of a zero coupon bond maturing at the corresponding horizon indicated by the largest superscript. If the liability comes entirely from issuing perpetual bonds, with yield to maturity \(\gamma_{t}\), the value of the company becomes,
\[V_{t}^{\prime}=\frac{n_{s}E_{t}}{1+\gamma_{t}}+\frac{n_{s}E_{t}}{(1+\gamma_{t}) ^{2}}+\cdots+\frac{n_{s}E_{t}}{(1+\gamma_{t})^{n}}+\cdots\.\]
With \(V_{t}=V_{t}^{\prime}\), this recovers
\[\gamma_{t}=\frac{E_{t}}{P_{t}}\, \tag{9}\]
which is the definition of real-time \(\gamma\). From the equation (9), one arrives at
\[P_{t}=\frac{E_{t}}{\gamma_{t}}. \tag{10}\]
We consider expression (10) as the price process. To simplify the exposition and without significant impact on our results, since the volatility of \(P_{t}\) is mostly driven by \(\gamma_{t}\), we will take \(E_{t}\equiv E\) as a constant. By doing so, the price process becomes specified once the dynamics of \(\gamma_{t}\) is prescribed. Hence, we focus on the evolution of \(\gamma_{t}\), which we term the earning yield process.
### 2.2 Two examples for earning yield processes
#### 2.2.1 Geometric Brownian motion
The simplest process that ensures that \(\gamma_{t}\) remains positive is the geometric Brownian motion (GBM). We can state the following proposition.
**Proposition 1**.: _Given definition (3), let us assume that the earning yield is given by the geometric Brownian motion_
\[\frac{\mathrm{d}\gamma_{t}}{\gamma_{t}}=-(\mu-\sigma^{2})\ \mathrm{d}t-\sigma\ \mathrm{d}W_{t}. \tag{11}\]
_Then, the dynamics of the price is the solution of the following stochastic differential equation (SDE) representing also a geometric Brownian motion:_
\[\frac{\mathrm{d}P_{t}}{P_{t}}=\mu\ \mathrm{d}t+\sigma\ \mathrm{d}W_{t}. \tag{12}\]
**Proof.** We simplify the notations by removing the subscript \(t\). According to Ito's lemma, expression (3) leads to
\[\mathrm{d}P =\frac{\partial P}{\partial\gamma}\mathrm{d}\gamma+\frac{1}{2} \frac{\partial^{2}P}{\partial\gamma^{2}}(\mathrm{d}\gamma)^{2}=-\frac{E}{ \gamma^{2}}\ \mathrm{d}\gamma+\frac{1}{2}\cdot 2\cdot\frac{E}{\gamma^{3}}\ (\mathrm{d} \gamma)^{2}\] \[=-\frac{E}{\gamma^{2}}\cdot[\,-(\mu-\sigma^{2})\gamma\ \mathrm{d}t- \sigma\gamma\ \mathrm{d}W\,]+\frac{E}{\gamma^{3}}\sigma^{2}\gamma^{2}\ \mathrm{d}t\] \[=\mu\frac{E}{\gamma}\ \mathrm{d}t+\sigma\frac{E}{\gamma}\ \mathrm{d}W=\mu P\ \mathrm{d}t+ \sigma P\ \mathrm{d}W\.\]
which recovers (12). Another way to derive this result is obtained by using the explicit solution of (11), which reads
\[\gamma_{t}=\gamma_{0}e^{-(\mu-\frac{1}{2}\sigma^{2})t-\sigma W_{t}}. \tag{13}\]
Inserting in definition (3) gives
\[P_{t}=\frac{E}{\gamma_{t}}=P_{0}e^{(\mu-\frac{1}{2}\sigma^{2})t+\sigma W_{t}}\, \tag{14}\]
where \(P_{0}:=\frac{E}{\gamma_{0}}\). Expression (14) is indeed the solution of equation (12).
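Proposition 1 also lends itself to a quick Monte Carlo check, sketched below with illustrative parameters: simulating \(\gamma_{t}\) through the exact solution (13) and mapping it to \(P_{t}=E/\gamma_{t}\) should recover the log-drift \(\mu-\sigma^{2}/2\) and the volatility \(\sigma\) of the geometric Brownian price (12).

```python
import numpy as np

# Monte Carlo check of Proposition 1 with illustrative parameters: simulate the
# exact GBM solution (13) for gamma_t and verify that P_t = E / gamma_t has the
# log-drift (mu - sigma^2 / 2) and the volatility sigma of the price process (12).
rng = np.random.default_rng(0)
E, gamma_0, mu, sigma, T, n_paths = 10.0, 0.05, 0.04, 0.2, 1.0, 200_000
W_T = rng.normal(0.0, np.sqrt(T), n_paths)                   # terminal Wiener values
gamma_T = gamma_0 * np.exp(-(mu - 0.5 * sigma**2) * T - sigma * W_T)   # eq. (13)
log_ret = np.log((E / gamma_T) / (E / gamma_0))              # log-return of P = E / gamma
print(log_ret.mean(), mu - 0.5 * sigma**2)                   # both close to 0.02
print(log_ret.std(), sigma)                                  # both close to 0.2
```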
As a simple process that ensures positivity, the geometric Brownian motion is widely accepted as a convenient price dynamics benchmark (Samuelson, 2016). However, despite its popularity coming from its simplicity, the geometric Brownian motion fails to account for many financial stylised facts (Cont, 2001; Chakraborti et al., 2011). Proposition 1 provides an interesting interpretation of why the geometric Brownian motion model for prices is financially unrealistic. Indeed, for the price to grow (with a positive drift coefficient \(\mu-\frac{1}{2}\sigma^{2}\)), the corresponding \(\gamma_{t}\) has to have a negative drift coefficient. This implies that the weight of current earnings in the determination of pricing continuously decreases geometrically over time or that the interest rate of corporate financing continuously decreases geometrically over time. This is strongly counterfactual. It is only for the case where \(\mu=\frac{1}{2}\sigma^{2}\) that both \(P_{t}\) and \(\gamma_{t}\) have zero drift.
#### 2.2.2 Cox-Ingersoll-Ross process
Given the inadequacy of the geometric Brownian motion, we now study the case where \(\gamma_{t}\) follows the Cox-Ingersoll-Ross (CIR) process, which was initially proposed to model interest rates (see Cox et al. (1985)):
\[\mathrm{d}\gamma_{t}=-\alpha\ (\,\gamma_{t}-\gamma^{*}\,)\ \mathrm{d}t+\psi \sqrt{\gamma_{t}}\ \mathrm{d}W_{t}. \tag{15}\]
The CIR process has a mean-reversal property, so that \(\gamma_{t}\) tends to fluctuate around a mean value \(\gamma^{*}\) with the amplitudes of its deviations from \(\gamma^{*}\) controlled by parameter \(\psi\). The mean-reversal property of \(\gamma_{t}\) can be rationalized by a behavioral interpretation. The "mean" can be explained as the typical representative belief on earning yield of a stock in the market. The instantaneous collective market belief may not always equate to the market typical representative belief due to two effects. First, short-term agreements may not fully reflect the beliefs of most traders. Second, not all traders are involved in trading at any given time, which can result in a noisy divergence and a transition from the short-run value of \(\gamma_{t}\) to the long-run value \(\gamma^{*}\).
The structure of the stochastic component proportional to \(\sqrt{\gamma_{t}}\) ensures that \(\gamma_{t}\) never crosses \(0\), remaining non-negative at all times. As we shall prove below, intuitively, this property of the CIR process ensures the stationarity of
the price process. The value of \(\gamma^{*}\) can be associated with a fundamental value \(P^{*}:=E/\gamma^{*}\). The use of the CIR process to model the stochastic dynamics of \(\gamma_{t}\) can be justified from the fact that \(\gamma_{t}\) plays the role of interest rate.
Using a process such as (15) inserted in (3) means that prices are discounted at time \(t\) inter-temporally with the same discount rate \(\gamma_{t}\). In other words, at each time \(t\), investors form an anticipation of what is the correct interest rate (similar to the yield-to-maturity or YTM) to apply to future earnings at all future times. Here, \(\gamma_{t}\) stands for the YTM of an equivalent perpetual bond, which is time-varying. The fact that \(\gamma_{t}\) changes with \(t\) expresses the changing anticipation of investors from one period to the next.
This representation should be distinguished from the model in which the term structure of discount rate is made explicit in terms of a set \(\{\widetilde{\gamma}_{t}^{1},\widetilde{\gamma}_{t}^{2},\cdots,\widetilde{ \gamma}_{t}^{n},\cdots\}\) so that the price is given by
\[P_{t}=\frac{E}{1+\widetilde{\gamma}_{t}^{1}}+\frac{E}{(1+\widetilde{\gamma}_{ t}^{1})(1+\widetilde{\gamma}_{t}^{2})}+\frac{E}{(1+\widetilde{\gamma}_{t}^{1})(1+ \widetilde{\gamma}_{t}^{2})(1+\widetilde{\gamma}_{t}^{3})}+... \tag{16}\]
Equation (16), which is similar to eq.(7), expresses the idea that the price \(P_{t}\) is obtained by discounting future earnings with discount rates \(\widetilde{\gamma}_{t}^{n}\) that are anticipated to be varying along the term structure \(1,2,3,...\) and changing randomly from one period \(t\) to the next. Equation (16) differs from (7) in that the former corresponds to a loose pricing model where there may not necessarily be a connection between \(\widetilde{\gamma}_{t}^{k}\) and \(\widetilde{\gamma}_{t}^{j}\), whereas the latter assumes a discount model in which growth rate, retention ratio, and required rate of return are all anticipated and predetermined. An end-member assumption is that the \(\widetilde{\gamma}_{t}^{n}\)'s are identically independently distributed (i.i.d.) according to some probability density function \(p_{\widetilde{\gamma}}(\widetilde{\gamma})\). Then, performing the change of variable \(x_{n}:=\frac{1}{1+\widetilde{\gamma}^{n}}\) and dropping the subscript \(t\), equation (16) becomes
\[P=E\left(x_{1}+x_{1}x_{2}+x_{1}x_{2}x_{3}+x_{1}x_{2}x_{3}x_{4}+...\right)\, \ \ \ \ x_{n}:=\frac{1}{1+\widetilde{\gamma}^{n}}\, \tag{17}\]
where the \(x_{n}\)'s are i.i.d. random variables with probability density function \(p_{x}(x)\). In the sequel, we shall comment on the relationships between model (3) with (15) and model (17).
**Proposition 2**.: _Given definition (3), let us assume that the earning yield is given by_
\[\mathrm{d}\gamma_{t}=\alpha\ (\gamma^{*}-\gamma_{t})\ \mathrm{d}t+\psi\sqrt{ \gamma_{t}}\ \mathrm{d}W_{t}\,\ \ \ \ \ \ \gamma^{*}=\frac{E}{P^{*}}. \tag{18}\]
_Then, the dynamics of the price is the solution of the following stochastic differential equation (SDE)_
\[\frac{\mathrm{d}P_{t}}{P_{t}}=\left[\ \alpha\left(\frac{P^{*}-P_{t}}{P^{*}} \right)+\psi^{2}\frac{P_{t}}{E}\ \right]\mathrm{d}t+\psi\sqrt{\frac{P_{t}}{E}}\ \mathrm{d}W_{t}. \tag{19}\]
The proof is in Appendix A.1.
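A minimal Euler-scheme cross-check of Proposition 2 can be sketched as follows, with illustrative parameter values: the CIR earning yield (18) is simulated and mapped to the price through (3), while the price SDE (19) is discretised directly with the same Brownian increments; for a small time step the two price paths should track each other closely.

```python
import numpy as np

# Cross-check of Proposition 2 (illustrative parameters): evolve gamma_t with an
# Euler scheme for the CIR equation (18), map it to P_t = E / gamma_t, and evolve
# P_t directly through the SDE (19) using the SAME Brownian increments.
rng = np.random.default_rng(2)
E, gamma_star, alpha, psi = 0.1, 0.01, 0.005, 0.001
P_star, dt, n_steps = E / gamma_star, 0.001, 5_000
gamma = 0.05
P = E / gamma                                   # common starting point P_0 = E / gamma_0
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    gamma += alpha * (gamma_star - gamma) * dt + psi * np.sqrt(max(gamma, 0.0)) * dW
    P += P * ((alpha * (P_star - P) / P_star + psi**2 * P / E) * dt
              + psi * np.sqrt(P / E) * dW)
print(E / gamma, P)     # near-identical values, up to the Euler discretisation error
```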
### 2.3 No-arbitrage condition
We show how the proposed price dynamics (19) excludes risk-free arbitrage opportunities. In other words, we formulate a no-arbitrage condition.
Let us assume that the market is made of \(N\) stocks. For each of these stocks, there is at least one derivative (e.g., a European call option) for which it is the underlying asset. We also assume the existence of a unique risk-free asset with a riskless interest rate \(r_{f}\). The price dynamics of the \(i\)th stock price has the same form as equation (19),
\[\frac{\mathrm{d}P_{t}^{(i)}}{P_{t}^{(i)}}=\Bigg{[}\;\alpha^{(i)}\left(\frac{P ^{(i)*}-P_{t}^{(i)}}{P^{(i)*}}\right)+\psi^{(i)2}\;\frac{P_{t}^{(i)}}{E^{(i)}} \;\Bigg{]}\,\mathrm{d}t+\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}\;\mathrm{d }W_{t}^{(i)}\, \tag{20}\]
with \(i=1,2,\cdots,N\).
According to the fundamental theory of asset pricing, the opportunity of risk-free arbitrage for any asset trading on the market is absent if and only if there is at least one stochastic process \(m_{t}\), the so-called stochastic discount factor (SDF), that satisfies the following two conditions (see Harrison and Kreps [1979]; Harrison and Pliska [1981]; Duffie [2010]):
1. \(m_{t}\) is a martingale process, which means that \(\mathbb{E}_{t}(\mathrm{d}m_{t})=0\).
2. The product of \(m_{t}\) and the price discounted by the risk-free rate \(r_{f}\) of any asset is also a martingale process: \(m_{t}\,\widetilde{P}_{t}=\mathbb{E}_{t}(m_{t^{\prime}}\,\widetilde{P}_{t^{\prime}}),t^{\prime}>t\), where \(P_{t}\) is the observed price and \(\widetilde{P}_{t}=e^{-r_{f}t}P_{t}\). This condition is equivalent to \(\frac{\mathbb{E}_{t}[\mathrm{d}(m_{t}P_{t})]}{m_{t}P_{t}}=r_{f}\,\mathrm{d}t\).
By Ito calculus,
\[\frac{\mathbb{E}_{t}[\mathrm{d}(m_{t}P_{t})]}{m_{t}P_{t}}=\frac{\mathbb{E}_{t}(\mathrm{d}m_{t})}{m_{t}}+\frac{\mathbb{E}_{t}(\mathrm{d}P_{t})}{P_{t}}+\frac{\mathbb{E}_{t}(\mathrm{d}m_{t}\,\mathrm{d}P_{t})}{m_{t}P_{t}}. \tag{21}\]
Since \(m_{t}\) is a martingale, the second condition reduces to
\[\mathbb{E}_{t}\left(\frac{\mathrm{d}P_{t}}{P_{t}}\right)+\mathbb{E}_{t}\left( \frac{\mathrm{d}m_{t}}{m_{t}}\frac{\mathrm{d}P_{t}}{P_{t}}\right)=r_{f} \mathrm{d}t. \tag{22}\]
The following proposition states that there is at least one SDF \(m_{t}\) that eliminates any opportunity of risk-free arbitrage for a market comprising \(N\) stocks and their derivatives.
**Proposition 3**.: _Suppose a market contains \(N\) stocks and their derivatives. The price \(P_{t}^{(i)}\) of the \(i\)th stock is the solution of Equation (20) and the price of its derivative \(\xi_{t}^{(i)}\) is determined by a \(C^{2}\) bivariate function of \(P_{t}^{(i)}\) and \(t\), \(f^{(i)}:(P_{t}^{(i)},t)\mapsto\xi_{t}^{(i)}\). Consider the stochastic process \(m_{t}\) given by_
\[\frac{\mathrm{d}m_{t}}{m_{t}}=\vartheta_{t}^{(1)}\mathrm{d}Z_{t}^{(1)}+\vartheta_{t}^{(2)}\mathrm{d}Z_{t}^{(2)}+\cdots+\vartheta_{t}^{(N)}\mathrm{d}Z_{t}^{(N)},\]
_where the following conditions hold:_
\[\mathbb{E}_{t}[\,\mathrm{d}W_{t}^{(i)}\mathrm{d}Z_{t}^{(j)}\,]=\rho^ {(i)}\,\delta_{ij}\mathrm{d}t,\quad i,j=1,\cdots,N,\] \[\vartheta_{t}^{(i)}=\frac{1}{\rho^{(i)}}\left(\frac{[r_{f}-\alpha ^{(i)}]-\psi^{(i)2}}{\psi^{(i)}}\frac{1}{\sqrt{P_{t}^{(i)}E^{(i)}}}+\frac{ \alpha^{(i)}\sqrt{P_{t}^{(i)}E^{(i)}}}{P^{*(i)}\psi^{(i)}}\right)\quad i=1, \cdots,N,\]
_where \(\rho^{(i)}\) is an arbitrary multivariate function of \(N\) independent variables, \(P_{t}^{(i)},i=1,2,\cdots,N\), and \(Z_{t}^{(i)}\), \(W_{t}^{(i)}\) are Gauss-Wiener processes. Then, \(m_{t}\) qualifies as a SDF for the whole market, implying that the no-arbitrage condition holds._
The proof is in Appendix A.2.
It is important to stress that Proposition 3 only demonstrates the existence of a SDF in the market without referring to its uniqueness. Different specifications of \(\rho^{(i)}\) will result in different qualified SDFs.
The proof in Appendix A.2 gives an intuition of the mechanism for the absence of risk-free arbitrage opportunities. Consider a mini-market that comprises only the \(i\)th stock, some derivatives of this stock, and a risk-free bond. The proof takes advantage of the fact that any asset in the mini-market can be replicated from the other assets. This means that each such mini-market can be devoid of risk-free arbitrage opportunities. Given that the entire market comprises \(N\) such mini-markets, it stands to reason that there is no risk-free arbitrage opportunity for the entire market. Meanwhile, it is unsurprising that the universal SDF \(m_{t}\) acts as the combination of the \(N\) local SDF's of each mini-market.
## 3 Properties of the price process (19)
### 3.1 Classification of two different regimes
The properties of the price process (19) derive from those of the CIR process (18). We consider solely the regimes where \(\alpha,\gamma\) and \(\psi\) are strictly positive. Two regimes need to be considered.
1. _Non-explosive nonlinear regime_: for \(2\alpha\gamma^{*}>\psi^{2}\), the CIR process \(\gamma_{t}\) starting from an arbitrary initial positive value remains strictly positive at all times. The condition \(\alpha>0\) then ensures that the CIR process is stationary and ergodic. Correspondingly, this implies that the price process remains finite at all times and is also stationary and ergodic. This is proved in Appendix A.4.
2. _Recurrent explosive bubble regime_: for \(2\alpha\gamma^{*}\leq\psi^{2}\), the CIR process \(\gamma_{t}\) has a strictly positive probability of hitting \(0\) and, when this occurs, the origin is instantaneously reflecting. Let \(t_{c}=\inf\{t:\gamma_{t}=0\}\) be the time of first
hit with the origin. Then, the price \(P_{t}\) has a strictly positive probability of diverging in finite time at some stochastic time \(t_{c}\). We call this regime the recurrent exploding bubble regime, characterised by stochastic finite-time singularities. The price process is also stationary and ergodic at long times.
A number of models of financial bubbles with stochastic finite-time singularities have been previously proposed (Sornette et al., 1996; Johansen et al., 1999, 2000; Sornette and Andersen, 2002; Ide and Sornette, 2002; Andersen and Sornette, 2004; Corsi and Sornette, 2014; Lin et al., 2014; Lin and Sornette, 2013; Lin et al., 2019; Schatz and Sornette, 2020). The model closest to the present work in the exploding bubble regime is (Sornette and Andersen, 2002; Andersen and Sornette, 2004) in which the price is an inverse power of a drifting Brownian motion. The exploding bubble regime generalises these works by considering the richer and more financially relevant CIR process.
For fixed \(\alpha\) and \(\psi\), the condition \(2\alpha\gamma^{*}\leq\psi^{2}\) implies that a too low reference interest rate \(\gamma^{*}\) or a too weak mean reversal coefficient \(\alpha\) spells potential disaster in the form of a looming finite-time singularity. Of course, singularities (almost) never occur in Nature or Society but the dynamics preceding them can develop until a time when the associated excesses trigger other mechanisms to take over and make the system transition to other phases, such as crashes, drawdowns and even recessions. From this perspective, the policies of ultra-low interest rates taken by central banks in response to the great financial crisis can be interpreted as having pushed the reference interest rate \(\gamma^{*}\) too low (below the safety zone threshold \(\psi^{2}/2\alpha\)), thus promoting dangerously extraordinary price appreciations, found in real estate as well as in certain sectors of the economy such as the digital social sector of GAFAM.
Another way to interpret this regime comes from the definition \(\gamma^{*}=E/P^{*}\): if the anchoring price \(P^{*}\) becomes too large compared with the earning \(E\) (i.e., \(P^{*}>H:=2\alpha E/\psi^{2}\)), the price dynamics enters a fundamentally unstable regime with a bubble developing strongly and exploding as a finite-time singularity.
The bifurcation from a stable price regime when \(2\alpha\gamma^{*}>\psi^{2}\) to the explosive regime when \(2\alpha\gamma^{*}\leq\psi^{2}\), just from a change of expectation of the reference price and/or earnings, provides a simple formal rationalisation of the common lore that financial markets are characterised by shifts between stable and unsustainable exuberant regimes. For instance, Bianchi et al. (2022) show that the US economy can be characterised by longer-term regime shifts in asset values that synchronise with shifts in interest rates.
While such strongly explosive price behaviours can sometimes be observed at least transiently during some financial bubbles, most bubbles seem to exhibit weaker explosive regimes (Phillips et al., 2011) but still characterised by transient "super-exponential" growth (Sornette and Cauwels, 2015; Ardila-Alvarez et al., 2021). In the remainder of this work, we thus focus on the first
regime \(2\alpha\gamma^{*}>\psi^{2}\), which exhibits many interesting properties. This condition \(2\alpha\gamma^{*}>\psi^{2}\) means that the volatility should not be too large, or alternatively the strength \(\alpha\) of the mean reverting process as well as the amplitude of the typical interest rate \(\gamma^{*}\) should be sufficiently large.
### 3.2 Transition probability of the price
The price process characterised by Equations (3) and (18) has the following explicit transition density.
**Proposition 4**.: _Defining \(P^{*}:=\dfrac{E}{\gamma^{*}}\) and \(H:=\dfrac{2\alpha E}{\psi^{2}}\), the probability \(f(P_{t},t\mid P_{0})\) to find the price value \(P_{t}\) at time \(t\) given that the price was \(P_{0}\) at time \(0\) is given by_
\[f(P_{t},t\mid P_{0})=c\ e^{-(u+v)}v^{\frac{q}{2}+2}u^{-\frac{q}{2}}\ I_{q}(2 \sqrt{uv})\, \tag{23}\]
_where_
\[c =\dfrac{1-e^{-\alpha t}}{H} \tag{24}\] \[u =\dfrac{H}{P_{0}}\cdot\dfrac{1}{e^{\alpha t}-1}\] (25) \[v =\dfrac{H}{P_{t}}\cdot\dfrac{1}{1-e^{-\alpha t}}\] (26) \[q =\dfrac{H}{P^{*}}-1\, \tag{27}\]
_and \(I_{q}(\cdot)\) is the modified Bessel function of the first kind of order \(q\),_
\[I_{q}(x)=\sum_{k=0}^{\infty}\left(\dfrac{x}{2}\right)^{2k+q}\dfrac{1}{k!\ \Gamma(k+q+1)},\quad x\in\mathbb{R}.\]
The proof of Proposition 4 is given in Appendix A.3.
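The transition density (23)-(27) can be implemented directly, as in the sketch below (illustrative parameters). One practical caveat: \(I_{q}\) overflows for large arguments, so the exponentially scaled Bessel function (with \(I_{q}(x)=\mathrm{ive}(q,x)\,e^{x}\)) is used and the exponentials are combined analytically into \(-(\sqrt{u}-\sqrt{v})^{2}\).

```python
import numpy as np
from scipy.special import ive
from scipy.integrate import simpson

# Transition density (23)-(27) with illustrative parameters; as a sanity check,
# the density should integrate to ~1 over P_t. The scaled Bessel function ive
# avoids the overflow of I_q: exp(-(u+v)) * I_q(2 sqrt(uv)) =
# ive(q, 2 sqrt(uv)) * exp(-(sqrt(u) - sqrt(v))**2).
E, gamma_star, alpha, psi = 0.1, 0.01, 0.005, 0.003
P_star, H = E / gamma_star, 2 * alpha * E / psi**2
q = H / P_star - 1.0

def f(P_t, t, P_0):
    c = (1.0 - np.exp(-alpha * t)) / H                  # eq. (24)
    u = H / (P_0 * (np.exp(alpha * t) - 1.0))           # eq. (25)
    v = H / (P_t * (1.0 - np.exp(-alpha * t)))          # eq. (26)
    return (c * v**(q / 2 + 2) * u**(-q / 2)
            * ive(q, 2.0 * np.sqrt(u * v)) * np.exp(-(np.sqrt(u) - np.sqrt(v))**2))

P_grid = np.linspace(0.2, 30.0, 20_000)
print(simpson(f(P_grid, 50.0, 2.0), x=P_grid))          # should be close to 1
```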
Let us note that the exponent \(q\) remains non-negative for \(P^{*}\leq H\), which is equivalent to \(2\alpha\gamma^{*}\geq\psi^{2}\). This is nothing but the condition that guarantees that the CIR process never touches \(0\), as discussed in section 3.1. In contrast, for \(P^{*}>H\), or \(2\alpha\gamma^{*}<\psi^{2}\), \(q\) becomes negative, and the exponent \(\frac{q}{2}+2\) becomes smaller than \(2\), implying the mathematical divergence of the mean of the prices, as we shall discuss below.
### 3.3 Stationary distribution of the price
The fact that the price process defined by Equation (19) is an ergodic Markovian process derives from the equivalent property for the CIR process \(\gamma_{t}\). The proof of ergodicity is presented in Appendix A.4. There thus exists a stationary distribution \(\pi(P)\), which is the long-time limiting distribution of the transition distribution (23).
**Proposition 5**.: _Let us consider a price process characterised by the stochastic differential equation (SDE) (19). Irrespective of whether \(H\geq P^{*}\) or \(H<P^{*}\) and for an arbitrary \(P_{0}\), the invariant probability density of the price is given by_
\[\pi(P)=\frac{H^{\mu^{*}}}{\Gamma\left(\mu^{*}\right)}e^{-\frac{H}{P}}P^{-(1+\mu^ {*})}\,\ \ \mathrm{with}\ \mu^{*}=H/P^{*}=2\alpha\gamma^{*}/\psi^{2}. \tag{28}\]
_The characteristic function of the price distribution is correspondingly_
\[\varphi_{P}(t)=\mathbb{E}(e^{itP})=\frac{2H^{\frac{H}{2P^{*}}}}{\Gamma\left( \frac{H}{P^{*}}\right)}(-i\,t)^{\frac{H}{2P^{*}}}K_{\frac{H}{P^{*}}}(2\sqrt{ H}\sqrt{-i\,t})\, \tag{29}\]
_where \(K_{m}(\cdot)\) denotes the modified Bessel function of the second kind of order \(m\) (see Witkovsky (2001) for the general derivation)._
The derivation of expression (28) is given in Appendix A.5.
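A quick numerical sanity check of Proposition 5 (illustrative parameters): the density (28) should integrate to one and, for \(\mu^{*}>1\), reproduce the closed-form stationary mean \(P^{*}(1-P^{*}/H)^{-1}\) derived in Proposition 8 below.

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.integrate import quad

# Sanity checks of the stationary density (28) with illustrative parameters:
# normalisation to 1, and stationary mean against P* (1 - P*/H)^(-1) (valid for
# mu* = H / P* > 1, i.e. the non-explosive regime).
E, gamma_star, alpha, psi = 0.1, 0.01, 0.005, 0.003
P_star, H = E / gamma_star, 2 * alpha * E / psi**2
mu_star = H / P_star

pi = lambda P: H**mu_star / Gamma(mu_star) * np.exp(-H / P) * P**(-(1 + mu_star))
print(quad(pi, 0.0, np.inf, limit=200)[0])                   # ~ 1
print(quad(lambda P: P * pi(P), 0.0, np.inf, limit=200)[0],  # numerical mean ...
      P_star / (1.0 - P_star / H))                           # ... vs closed form
```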
Expression (28) shows that the probability density of the price has a power tail \(\pi(P)\simeq 1/P^{1+\mu^{*}}\), with a tail exponent \(\mu^{*}:=H/P^{*}=2\alpha\gamma^{*}/\psi^{2}\). The non-explosive nonlinear regime \((2\alpha\gamma^{*}>\psi^{2};H>P^{*})\) corresponds to \(\mu^{*}>1\) so that the mean of the price exists. In contrast, in the recurrent explosive bubble regime \((2\alpha\gamma^{*}\leq\psi^{2};H<P^{*})\), \(\mu^{*}\leq 1\) and thus the mean of the price is infinite. Thus, the transition from the non-explosive nonlinear regime to the recurrent explosive bubble regime is mirrored by the transition from a price process with finite to infinite mean (see (Sornette, 2004) for a classification of power laws according to the values of the tail exponent \(\mu^{*}\) and its consequences).
For arbitrary very long sequences of \(N\) prices \(\{\,P_{t_{0}},P_{t_{1}},\cdots,P_{t_{N}}\,\}\) at time-points \(\{\,t_{1},t_{2}\ldots,t_{N}\,\}\), these prices are asymptotically and identically distributed random variables with distribution density (28). Thanks to the property of ergodicity, this stationary distribution of prices along time can be converted to an ensemble distribution or cross-sectional distribution (at a fixed time) across all possible parallel price realisations. Applied to a stock market of \(N\) stocks considered statistically equivalent, the asymptotic cross-sectional tail behaviour of stock prices can be described by a power-law distribution with tail exponent \(\mu^{*}\). Kaizoji (2006) pioneered the diagnostic of the existence of global market bubbles via the transition of the power law tail exponent \(\mu^{*}\) for the cross-sectional distribution of stock prices from values larger than 1 to a value converging to 1 close to the end of the Japanese Internet bubble. Within our model framework, the maturation of the Internet bubble can be interpreted as decreases of \(\alpha,\gamma^{*}\) and/or increase of \(\psi\) such that \(P^{*}\) increased to surpass a decreasing \(H\). Given the empirical evidence that, in most financial bubbles, the crash follows a period of lower volatility (Sornette et al., 2018), we infer that \(\psi\) does not increase significantly during a bubble regime. We thus deduce that it is \(\alpha\) and/or \(\gamma^{*}\) that decreased during the maturation of the Internet bubble. In summary, the price process characterised by equations (3) and (18), or equivalently by equation (19), rationalises the curious observation of Kaizoji (2006) that had not until now found a theoretical explanation.
### 3.4 Conditional moments of price and returns
Based on the explicit conditional transition density (23), the first and second conditional moments of the price and cumulative return can be easily calculated.
**Proposition 6**.: _Defining the cumulative return \(R_{t}=\dfrac{P_{t}}{P_{0}}\), for the price process characterised by equations (3) and (18) and for \(H>P^{*}\), the conditional expectation of \(R_{t}\) over the time interval from \(0\) to \(t\) is given by_
\[\mathbb{E}(R_{t}\mid P_{0})=\dfrac{e^{-u}}{c\,q\,P_{0}}\cdot{}_{1}F_{1}(q,q+1, u)\, \tag{30}\]
_where the parameters \(H\),\(c\),\(u\),\(q\) are the same as in Proposition 4, and \({}_{1}F_{1}(r,s,x)\) is a Kummer confluent hypergeometric function defined as_
\[{}_{1}F_{1}(r,s,x)=\frac{\Gamma(s)}{\Gamma(s-r)\Gamma(r)}\int_{0}^{1}e^{xz}z^{r-1}(1-z)^{s-r-1}\,\mathrm{d}z=\frac{\Gamma(s)}{\Gamma(r)}\sum_{n=0}^{\infty}\frac{\Gamma(n+r)}{\Gamma(n+1)\Gamma(n+s)}\cdot x^{n}\.\]
The proof of this result is given in Appendix A.6.
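As a usage sketch, expression (30) can be evaluated with scipy's implementation of the Kummer function; the parameters below are those of figure 1 further down (\(E=0.1\), \(P^{*}=10\), \(P_{0}=2\), \(\alpha=0.005\), \(\psi=0.009\)), so the printed values should trace out the corresponding curve.

```python
import numpy as np
from scipy.special import hyp1f1

# Conditional expected cumulative return (30), evaluated with the Kummer
# confluent hypergeometric function. For very small t the argument u becomes
# large and hyp1f1 can overflow in float64; the times below are safe.
E, P_star, P_0, alpha, psi = 0.1, 10.0, 2.0, 0.005, 0.009
H = 2 * alpha * E / psi**2
q = H / P_star - 1.0

def expected_return(t):
    c = (1.0 - np.exp(-alpha * t)) / H               # eq. (24)
    u = H / (P_0 * (np.exp(alpha * t) - 1.0))        # eq. (25)
    return np.exp(-u) / (c * q * P_0) * hyp1f1(q, q + 1.0, u)

for t in (10.0, 100.0, 1000.0):
    print(t, expected_return(t))                     # rises towards phi * P* / P_0
```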
**Proposition 7**.: _For the bubble process characterised by Equations (3) and (18), for \(H>2P^{*}\) (i.e. \(q>1\)), the conditional variance of \(R_{t}\) is given by_
\[\mathbb{V}ar(R_{t}\mid P_{0})=\dfrac{e^{-u}}{qc^{2}P_{0}^{2}} \left[\dfrac{1}{q-1}\cdot{}_{1}F_{1}(q-1,q+1,u)-\dfrac{1}{qe^{u}}\cdot{}_{1}F_ {1}(q,q+1,u)^{2}\right]. \tag{31}\]
The derivation of this result is given in Appendix A.7. In a nutshell, we have
\[\mathbb{E}(P_{t}^{2}\mid P_{0})=\dfrac{e^{-u}}{c^{2}q(q-1)}\cdot{}_{1}F_{1}(q -1,q+1,u). \tag{32}\]
Using \(\mathbb{V}ar(R_{t}\mid P_{0}):=\mathbb{E}(R_{t}^{2}\mid P_{0})-\mathbb{E}(R_ {t}\mid P_{0})^{2}=\dfrac{1}{P_{0}^{2}}\)\([\)\(\mathbb{E}(P_{t}^{2}\mid P_{0})-\mathbb{E}(P_{t}\mid P_{0})^{2}\ ]\), this allows one to obtain (31).
**Proposition 8**.: _Using expression (28) for the stationary distribution \(\pi(P)\), we obtain_
\[\mathbb{E}(R_{\infty}\mid P_{0})=\dfrac{H\cdot\Gamma\left(\frac{H}{P^{*}}-1 \right)}{P_{0}\cdot\Gamma\left(\frac{H}{P^{*}}\right)}=\dfrac{P^{*}}{P_{0}} \cdot\left(1-\dfrac{P^{*}}{H}\right)^{-1}. \tag{33}\]
_and_
\[\mathbb{V}ar(R_{\infty}\mid P_{0}) =\mathbb{E}(R_{\infty}^{2}\mid P_{0})-\mathbb{E}(R_{\infty}\mid P_{0})^{2}\] \[=\dfrac{P^{*2}}{P_{0}^{2}}\cdot\left(1-\dfrac{P^{*}}{H}\right)^{-1}\left[\left(1-\dfrac{2P^{*}}{H}\right)^{-1}-\left(1-\dfrac{P^{*}}{H}\right)^{-1}\right]. \tag{34}\]
The proofs of these results are given in Appendix A.8.
These two expressions (33) and (34) are valid only for \(H>P^{*}\). For \(H\leq P^{*}\), \(\int_{0}^{\infty}e^{-z}\ z^{\frac{H}{P^{*}}-2}\ \mathrm{d}z\to+\infty\) and \(\mathbb{E}(R_{\infty}\mid P_{0})=\infty\). This is just another way to retrieve the divergence of the mean of a random variable distributed according to a power law tail with exponent \(\mu^{*}<1\)[Sornette, 2004] as discussed above. The long-term expectation of cumulative returns does not exist even if the long-term stationary density exists. The mechanism is simply that, as time passes, larger and larger realisations of \(R_{t}\) are sampled so that the mean does not converge and diverges stochastically. Similarly, when \(H\leq 2P^{*}\), the variance of the long-term return does not exist, as the integral \(\int_{0}^{\infty}e^{-z}\ z^{\frac{H}{P^{*}}-3}\ \mathrm{d}z\to\infty\).
### 3.5 Interpretation of expression (33), emergent risk premium and the equity premium puzzle
In the non-explosive nonlinear regime \(2\alpha\gamma^{*}>\psi^{2}\), the discount rate \(\gamma_{t}\) is fluctuating around \(\gamma^{*}\), mean-reverting stochastically around it so that \(\mathbb{E}(\gamma_{t})=\gamma^{*}\). A naive investor will deduce that the intrinsic value is then given by \(P^{*}=E/\gamma^{*}\), which can be argued to be the equilibrium price representing the final converged overall consensus on earning yield, and thus the anticipated long-term return should be \(\mathbb{E}(R^{*}\mid P_{0})=\mathbb{E}(P^{*}/P_{0}\mid P_{0})=P^{*}/P_{0}\). Another way to formulate this is to naively surmise from expression (3) that the price should be characterised by a mean value \(\mathbb{E}(P_{t})=P^{*}=E/\gamma^{*}\). However, as is well-known, the mean of the inverse is not the inverse of the mean as expression (33) exemplifies. Indeed, the nonlinear (inverse) dependence of the price \(P_{t}\) on the discount rate \(\gamma_{t}\) implies that \(\mathbb{E}(P_{t})=\phi P^{*}\), showing the existence of an amplification factor
\[\phi=\left(1-\frac{P^{*}}{H}\right)^{-1}=\left(1-\frac{\psi^{2}}{2\,\alpha \gamma^{*}}\right)^{-1}\,\ \ \text{for}\ P^{*}<H. \tag{35}\]
Thus, the expected stationary return is equal to the naively anticipated long-term return multiplied by the factor \(\phi\). This leads naturally to define an "emergent risk premium rate" as
\[\rho_{e}:=\ln\phi=-\ln\left(1-P^{*}/H\right)\ \in(0,\infty)\ \ \text{for}\ P^{*}<H. \tag{36}\]
To first order in \(\frac{P^{*}}{H}\), we have \(\rho_{e}\approx\frac{P^{*}}{H}=\frac{\psi^{2}}{2\,\alpha\gamma^{*}}\).
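A minimal worked example of the amplification factor (35) and of the emergent risk premium rate (36), for the parameter values used in figure 1 below; note that the first-order approximation \(\rho_{e}\approx P^{*}/H\) is only accurate when \(P^{*}/H\) is small:

```python
import numpy as np

# Amplification factor (35) and emergent risk premium rate (36) for
# alpha = 0.005 and gamma* = 0.01 (figure-1 values); the first-order
# approximation rho_e ~ P*/H degrades as P*/H grows towards 1.
alpha, gamma_star = 0.005, 0.01
for psi in (0.001, 0.009):
    ratio = psi**2 / (2 * alpha * gamma_star)        # = P* / H
    phi = 1.0 / (1.0 - ratio)                        # eq. (35)
    print(psi, ratio, phi, np.log(phi))              # rho_e = ln(phi)
```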
The volatility amplitude \(\psi\) of \(\gamma_{t}\) is the main positive contribution to \(\phi\) as seen from
\[\frac{\partial}{\partial\psi}\ln\phi=\frac{\partial}{\partial\psi}\ln\left(1-\frac{P^{*}}{H}\right)^{-1}=-\frac{\partial}{\partial\psi}\ln\left(1-\frac{\psi^{2}P^{*}}{2\alpha E}\right)=\frac{\phi\,\psi}{\alpha\gamma^{*}}>0\.\]
Figure 1 shows the dependence of the cumulative return \(\mathbb{E}(R_{t}\mid P_{0})=\mathbb{E}(P_{t}\mid P_{0})/P_{0}\) (equal to the normalised price) as a function of time for several values
of \(\psi\), as obtained from Equation (30). The larger \(\psi\) is, the steeper is the price increase at early times, the larger is the price acceleration at early times (see below our discussion of the super-exponential transient regime) and the larger the asymptotic price at long times. Thus, the larger the amplitude \(\psi\) of the fluctuations of \(\gamma_{t}\), the larger is the emergent risk premium rate.
The fact that \(\gamma_{t}\) is not fixed but fluctuates around \(\gamma^{*}\) leads to an increased long-term return equal to \(\ln\phi\) for the investor. This mechanism provides a possible concise description of the equity premium puzzle (Mehra and Prescott, 1985, 2003; Kocherlakota, 1996), namely the observation that the compensation for holding risky stocks is larger than predicted by a large class of economic models. Indeed, our simple framework suggests that models that assume constant discount rates tend to underestimate future price growth compared with models that account for fluctuating mean-reverting discount rates.
Note that the mechanism for the increased future price growth can be intuitively understood as roughly resulting from combining different discounting factors as in (16) and (17). Indeed, taking the expectation of expression (17) yields \(\mathbb{E}(P)=\frac{E\,\mathbb{E}(x)}{1-\mathbb{E}(x)}\), assuming that the \(x_{n}\)'s are i.i.d. random variables.
Figure 1: Cumulative expected return \(\mathbb{E}(R_{t}\mid P_{0})=\mathbb{E}(P_{t}\mid P_{0})/P_{0}\) as a function of the holding duration for different values of the amplitude \(\psi\) of the volatility of the \(\gamma_{t}\) process. All trajectories are generated by setting \(E=0.1\), \(P^{*}=10\), \(P_{0}=2\), and \(\alpha=0.005\) but with five different \(\psi\) from \(0.001\) to \(0.009\). The smallest value of \(\psi\) is chosen so that \(\psi\sqrt{P/E}\) corresponds to the typical volatility of \(0.5\%\) in a standard financial market, given the typical value of the P/E ratio \(=25\). Correspondingly, \(H=2\alpha E/\psi^{2}\) varies from \(12.3\) to \(1000\) and \(\mathbb{E}(R_{\infty}\mid P_{0})=\phi P^{*}/P_{0}\) takes values from \(5.05\) to \(26.84\).
From the definition \(x:=\frac{1}{1+\gamma}\) (see (17)), \(\frac{1}{1-x}\) is a convex function of \(x\in[0,1)\) and, by Jensen inequality, \(\frac{E\,\mathbb{E}(x)}{1-\mathbb{E}(x)}<\mathbb{E}\left(\frac{E\,x}{1-x}\right)\), where the right-hand side is the expected perpetuity value when the same draw of \(x\) applies to all periods. The gap \(\mathbb{E}\left(\frac{E\,x}{1-x}\right)-\frac{E\,\mathbb{E}(x)}{1-\mathbb{E}(x)}\) is the analog of \(\mathbb{E}(P_{t})-P^{*}=(\phi-1)P^{*}\). The approximate correspondence between model (3) with (15) and model (17) is further reinforced by noting that both lead to a power law tail for the distribution of prices and returns. For model (3), this is shown by Proposition 5 and expression (37). For model (17), this derives from the fact that it is a special case (de Calan et al., 1985) of the general class of Kesten processes with multiplicative and additive stochastic components (Kesten, 1973; Malevergne and Sornette, 2001; Lux and Sornette, 2002; Buraczewski et al., 2016).
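To make the Kesten mechanism behind model (17) tangible, here is a small simulation of the distributional recursion \(P\overset{d}{=}x\,(E+P)\). The two-point law for \(x\) is a deliberately exaggerated assumption, not a calibration: it gives \(x>1\) with positive probability (transiently negative discount rates), which is what generates the power-law upper tail.

```python
import numpy as np

# Simulation of the perpetuity (17) via its backward distributional recursion
# P <- x (E + P) with i.i.d. x. The assumed two-point law for x allows factors
# larger than 1, so the stationary distribution develops a heavy upper tail,
# visible in the strongly stretched high quantiles.
rng = np.random.default_rng(1)
E, n_samples, n_terms = 1.0, 200_000, 300
P = np.zeros(n_samples)
for _ in range(n_terms):
    P = rng.choice([0.5, 1.2], size=n_samples) * (E + P)
print(np.quantile(P, [0.50, 0.99, 0.999, 0.9999]))
```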
### 3.6 Transient super-exponential price dynamics
Even in the non-explosive nonlinear regime \(2\alpha\gamma^{*}>\psi^{2}\) (\(P^{*}<H\)), the price exhibits transient growth regimes that are faster than exponential. A first visual indication is provided in figure 1, where the normalised expected price trajectory, plotted with a logarithmic scale as a function of linear time, exhibits a clearly visible transient convexity up to \(t\approx 200\) for \(\psi=0.009\). Recall that, in a linear-log plot (linear-in-time with logarithmic-in-price scales), exponential growth appears as a straight line. A convex curve in such a linear-log plot grows faster than exponentially, with the instantaneous growth rate (the local tangent to the curve) itself growing. We are interested in this pattern because it has been previously argued to be a characteristic feature of a financial bubble (Johansen and Sornette, 2010; Sornette and Cauwels, 2015; Sornette, 2017; Schatz and Sornette, 2020; Ardila-Alvarez et al., 2021), since it can only be transient and has to correct to a long-term exponential growth with rate dictated by long-term economic growth.
The existence of transient strongly increasing price trajectories reminiscent of financial bubbles can be observed in figure 2, which shows simulated price trajectories for the same parameters as in figure 1. For each value of \(\psi\), \(10\) simulated trajectories are displayed with a specific colour. For better visibility, the top panel depicts the \(20\) price trajectories for the two largest \(\psi\) values, while the bottom panel corresponds to the three smallest values. With again the price plotted along the y-axis in logarithmic scale, one can observe transient growth regimes that are reminiscent of the prices observed during financial bubbles with super-exponential growth.
Figure 2: Simulated price trajectories for the same parameters as in figure 1 and for the five values of \(\psi\) given in the inset. For each value of \(\psi\), 10 simulated trajectories are displayed with a specific colour indicated in the inset. The 20 price trajectories corresponding to the two largest \(\psi\) values are shown in the top panel, and the 30 price trajectories for the three lowest \(\psi\) values are in the lower panel. Note the logarithmic scale used for the y-axis.

Figure 3 is similar to figure 1, showing \(\mathbb{E}(P_{t}\mid P_{0})/P_{0}\) as a function of time but for different initial prices \(P_{0}\). Although the five average price trajectories converge to the same long-term expected price level \(\phi P^{*}\approx 53.5\), they first exhibit a convex growth shape in the linear-log plot, corresponding to a super-exponential transient regime, as previously documented. This convex (super-exponential) growth then transitions to a concave growth, associated with a slowdown of the price increase until convergence to the steady-state level. The smaller the initial price \(P_{0}\), the longer is the duration of the super-exponential regime.
The following proposition specifies the conditions needed for the existence of the super-exponential growth regime and characterises its duration.
**Proposition 9**.: _Let the price process be characterised by the SDE (19) in the non-explosive nonlinear regime \(2\alpha\gamma^{*}>\psi^{2}\) (\(P^{*}<H\)). Given an initial price \(P_{0}\), and with the definitions \(q=\frac{H}{P^{*}}-1\) (27), \(H=\frac{2\alpha E}{\psi^{2}}\) and \(P^{*}=E/\gamma^{*}\), \(g(\alpha)=e^{\alpha}-1\), and \(\Omega=\frac{e^{\frac{H}{P_{0}\,g(\alpha)}}\left(-\frac{H}{P_{0}\,g(\alpha)} \right)^{q}}{\Gamma(q)-\Gamma(q,-\frac{H}{P_{0}\,g(\alpha)})}\,\) under the condition that_
\[-g(\alpha)\,e^{\alpha}P_{0}\,\Omega^{2}+\left[\,e^{\alpha}H+g(\alpha)(1+e^{ \alpha}q)P_{0}\,\right]\Omega-(e^{\alpha}+1)H+P_{0}\,g(\alpha)(q-1)>0\, \tag{37}\]
_then the expected price persists in super-exponential growth from \(t=0\) till at least \(t=1\) and its duration \(t_{c}\) is determined by finding the time that is solution of the following equation:_
\[\frac{\partial^{2}}{\partial t^{2}}\left[\,-\frac{H}{P_{0}}\frac{1}{e^{\alpha t }-1}-\ln(1-e^{-\alpha t})+\ln\frac{H}{q}+\ln{}_{1}F_{1}\left(q,q+1,-\frac{H}{P _{0}}\frac{1}{e^{\alpha t}-1}\right)\,\right]=0. \tag{38}\]
_The choice of the minimum duration of \(1\) is arbitrary and is readily generalisable to any positive value._
Figure 3: The evolution of expected price as a function of the holding duration in a bubble process. The parameters are \(E=0.1,P^{*}=10,\alpha=0.005\) with \(\psi=0.009\) corresponding to the largest values used in figures 1 and 2.
**Proof.** The proof of this proposition is given in Appendix A.9. \(\Box\)
Figure 4 shows the dependence of the duration \(t_{c}\) of the super-exponential regime obtained by solving equation (38) as a function of the initial price \(P_{0}\) at time \(t=0\). The smaller \(P_{0}\) and the smaller \(\alpha\), the longer is the duration of the super-exponential regime. The curve shown in figure 4 for \(\alpha=0.005\) is consistent with the results of figure 3.
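A sketch of how such durations can be obtained numerically is given below (figure-4 parameters with \(\alpha=0.005\), so the output should be comparable to the corresponding curve): instead of manipulating eq. (38) symbolically, we locate the inflection point of \(\ln\mathbb{E}(R_{t}\mid P_{0})\) from (30) by finite differences, after rewriting it with Kummer's transformation \({}_{1}F_{1}(q,q+1,u)=e^{u}\,{}_{1}F_{1}(1,q+1,-u)\) to avoid overflow at small \(t\), where \(u\) is large.

```python
import numpy as np
from scipy.special import hyp1f1
from scipy.optimize import brentq

# Duration t_c of the super-exponential regime: the time at which the second
# time-derivative of ln E(R_t | P_0) changes sign. Kummer's transformation
# 1F1(q, q+1, u) = e^u 1F1(1, q+1, -u) keeps the evaluation overflow-free.
E, psi, P_star, alpha, P_0 = 0.1, 0.009, 10.0, 0.005, 2.0
H = 2 * alpha * E / psi**2
q = H / P_star - 1.0

def log_expected_return(t):
    c = (1.0 - np.exp(-alpha * t)) / H               # eq. (24)
    u = H / (P_0 * (np.exp(alpha * t) - 1.0))        # eq. (25)
    return -np.log(c * q * P_0) + np.log(hyp1f1(1.0, q + 1.0, -u))

def d2(t, h=0.5):                                    # central second difference
    return (log_expected_return(t + h) - 2.0 * log_expected_return(t)
            + log_expected_return(t - h)) / h**2

ts = np.linspace(2.0, 1500.0, 400)
signs = np.sign([d2(t) for t in ts])
i = int(np.where(np.diff(signs) != 0)[0][0])         # first sign change of d2
print(brentq(d2, ts[i], ts[i + 1]))                  # duration t_c
```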
Figure 5 shows the dependence of the duration \(t_{c}\) of the super-exponential regime as a function of the volatility amplitude \(\psi\). It exhibits a concave shape with maximum value obtained for \(\dfrac{\psi^{2}}{2\alpha\gamma^{*}}\approx 0.7\). The results can be explained as follows. When \(H\) decreases towards \(P^{*}\) (\(P^{*}/H\to 1\)), the long-run expected price \(\phi P^{*}\) becomes very large, which leaves ample time for the expected price to converge, so that the duration of the super-exponential growth phase is largely cut. Conversely, when \(H\) is sufficiently large (\(P^{*}/H\to 1/2\)), the expected price converges within quite a tight time, which also leaves little room for a long super-exponential growth phase.
Figure 4: Dependence of the duration \(t_{c}\) of the super-exponential growth regime obtained by solving equation (38) as a function of the initial price \(P_{0}\). The parameters are \(E=0.1\), \(\psi=0.009\), \(P^{*}=10\). The mean-reverting strength \(\alpha\) of the yield process \(\gamma_{t}\) takes values from \(0.005\) to \(0.008\) as indicated in the inset, with their colour codes. We scan values of \(\alpha\) such that condition (37) is valid, ensuring a minimum duration of the super-exponential regime. This corresponds to having \(\alpha\) approximately \(\in[0.005;0.008]\). For instance, taking \(\alpha=0.004\) leads to \(H=9.87<10=P^{*}\). Similarly, for \(\alpha>0.008\), condition (37) is violated.
## 4 Calibration of the model to real data
### 4.1 Quasi-maximum likelihood calibration
The price process (19), which derives from expressions (3) with (18) in Proposition 2, can be calibrated by transforming the empirical price back into its corresponding earning yield (or discount rate) \(\gamma_{t}\), which is then calibrated to the CIR process. For a more stable numerical implementation, instead of the original form (15) or (18), it is preferable to use the following form
\[\mathrm{d}\gamma_{t}=\left(b-\alpha\,\gamma_{t}\right)\mathrm{d}t+\psi\sqrt{ \gamma_{t}}\,\mathrm{d}W_{t}. \tag{39}\]
After the statistical estimates \(\widehat{\alpha}\) and \(\widehat{b}\) are obtained, \(\widehat{\gamma^{*}}\) can be recovered from \(\widehat{b}/\widehat{\alpha}\). In principle, one could implement an exact likelihood inference, since the CIR process has an explicit analytical conditional density (46), i.e. a non-central \(\chi^{2}\) distribution. However, the involved modified Bessel function \(I_{q}(\cdot)\) makes it difficult to calculate the conditional distribution, as it tends to explode, inducing numerical implementation overflows. A solution is to implement the quasi-maximum likelihood approach to take advantage of the Euler discretisation of Equation (39),
\[\gamma_{t+\Delta_{s}}-\gamma_{t}=\left(b-\alpha\,\gamma_{t}\right)\Delta_{s}+ (\psi^{2}\gamma_{t})^{\frac{1}{2}}(W_{t+\Delta_{s}}-W_{t})\,\]
Figure 5: Duration of the transient super-exponential growth regime obtained by solving equation (38) as a function of \(\dfrac{P^{*}}{H}:=\dfrac{\psi^{2}}{2\alpha\gamma^{*}}\). The parameters are \(E=0.1\), \(P_{0}=2\), \(P^{*}=10\). Each curve corresponds to a fixed value of \(\alpha\) from \(0.005\) to \(0.008\) as shown in the inset.
where \(\Delta_{s}\) denotes the time lag between two consecutive observations of the process. As \(W_{t+\Delta_{s}}-W_{t}\sim\mathcal{N}(0,\Delta_{s})\), the approximate transition density is
\[\gamma_{t+\Delta_{s}}\mid\gamma_{t}\sim\mathcal{N}\big(\gamma_{t}+(b-\alpha\,\gamma_{t})\,\Delta_{s},\ \psi^{2}\gamma_{t}\,\Delta_{s}\big)\.\]
Thus, the quasi-log-likelihood for a series \(\boldsymbol{\gamma}:=\{\gamma_{t}\}_{t=0,1,\cdots,n}\) of observed data determined for a specific set of parameters is (neglecting a constant term)
\[\ln\mathcal{L}(\boldsymbol{\gamma},(b,\alpha,\psi))=-\frac{1}{2}\sum_{t=1}^{n}\left\{\ln\psi^{2}\gamma_{t-1}+\frac{1}{\Delta_{s}\,\psi^{2}\gamma_{t-1}}\cdot[\Delta\gamma_{t}-\Delta_{s}(b-\alpha\gamma_{t-1})]^{2}\right\}, \tag{40}\]
where \(\Delta\gamma_{t}=\gamma_{t}-\gamma_{t-1}\). The optimal parameters are those that satisfy
\[\max_{\{b,\alpha,\psi\}}\ln\mathcal{L}(\boldsymbol{\gamma},(b,\alpha,\psi))\.\]
Further, the variance of each estimated parameter can be obtained from the diagonal terms of the inverse of the observed Fisher information matrix,
\[\mathbb{V}ar(\boldsymbol{\hat{\theta}})=\left(-\frac{\partial^{2}\ln\mathcal{L}}{\partial\boldsymbol{\theta}\,\partial\boldsymbol{\theta}^{\top}}\right)^{-1}\bigg{|}_{\boldsymbol{\theta}=\boldsymbol{\hat{\theta}}},\qquad\boldsymbol{\theta}=(b,\alpha,\psi)\.\]
Given a time series of daily prices \(\{P_{t}\}_{t=1,\cdots,n}\) with \(n\) observations, we calculate the corresponding series \(\{\gamma_{t}\}_{t=0,1,\cdots,n}\) using \(\gamma_{t}=E/P_{t}\). Because we focus on empirical time series that have exhibited transient bubble regimes, we assume that the volatility of \(E\) can be neglected and we treat \(E\) as a constant in the calibration. We adopt the L-BFGS-B method to optimise the quasi-log-likelihood. The limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm belongs to the family of quasi-Newton BFGS algorithms, using only a limited amount of computer memory. L-BFGS-B extends L-BFGS to handle simple bound constraints on the parameters [Byrd et al., 1995]. Moreover, since \(\Delta_{s}\to 0\) is a necessary condition for the quasi-maximum likelihood method to yield consistent estimates, we use \(\Delta_{s}=1/252\) (one year of data comprising 252 daily observations) in our implementation.
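A self-contained sketch of this calibration loop is given below on synthetic data (in the real calibration the simulated series is replaced by \(\gamma_{t}=E/P_{t}\) computed from the daily close prices); all numerical values, including the true parameters and the starting point of the optimiser, are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Quasi-maximum likelihood calibration of the CIR form (39) with L-BFGS-B,
# demonstrated on a synthetic Euler-simulated series (illustrative parameters).
rng = np.random.default_rng(3)
dt = 1.0 / 252.0                                     # Delta_s for daily data
b_true, alpha_true, psi_true, n = 0.05, 0.5, 0.1, 2520
gamma = np.empty(n + 1)
gamma[0] = b_true / alpha_true                       # start at gamma* = b / alpha
for t in range(n):                                   # Euler simulation of eq. (39)
    gamma[t + 1] = (gamma[t] + (b_true - alpha_true * gamma[t]) * dt
                    + psi_true * np.sqrt(max(gamma[t], 1e-12) * dt) * rng.normal())

def neg_quasi_loglik(theta):                         # negative of eq. (40), up to constants
    b, alpha, psi = theta
    resid = np.diff(gamma) - dt * (b - alpha * gamma[:-1])
    var = psi**2 * gamma[:-1] * dt                   # Euler conditional variance
    return 0.5 * np.sum(np.log(var) + resid**2 / var)

res = minimize(neg_quasi_loglik, x0=[0.02, 0.2, 0.05], method="L-BFGS-B",
               bounds=[(1e-6, None)] * 3)
b_hat, alpha_hat, psi_hat = res.x
print(b_hat, alpha_hat, psi_hat, b_hat / alpha_hat)  # gamma* recovered as b / alpha
```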
### Description of the five empirical data sets
We calibrate the model for the following five well-known historical bubbles.
**The US Dotcom bubble ending in mid-2000**. It is often called the Internet-Communication-Technology (ICT) bubble and regarded as a textbook-style bubble (Homm and Breitung [2012]; Phillips et al. [2011]). Over 1999, the cumulative return of the internet stock index (represented by NASDAQ) surged with an eightfold multiplication of the price; in contrast, for non-internet
stocks, the return was smaller than 20% [Kaizoji et al., 2015]. However, by mid-2001, a year after the bubble ended, the cumulative sum of the astronomical returns for internet stocks had completely evaporated. From April 11, 2000, the NASDAQ index underwent five days of consecutive sharp drops, with a drawdown of 25.8%. It was the largest drawdown since the index reached the peak of 5132.5 on March 10, 2000. For model calibration, we employ the NASDAQ daily close price as \(P_{t}\) and set April 11, 2000, as the end of the time window, with the corresponding date, April 12, 1999, as the start of the time window. According to the report from the NASDAQ offset market exchange white book, the P/E ratio was approximately 150 in mid-1999. Thus, we set \(\gamma_{0}=\gamma_{t_{\rm start}}=1/150\) to be the initial value of \(\gamma_{t}\) in the calibration window. Hereafter, this bubble is referred to as NASDAQ 2000.
**US stock market bubble ending in October 1987**. This bubble is often identified in retrospect with the most striking drop on October 19, 1987, known as "Black Monday." Much work has been conducted to unravel the origin(s) of the crash [Sornette, 2017]. However, no clear cause has been singled out. Some commentators have ascribed the crash to the overinflated prices from a speculative bubble during the earlier period, which pushed global stock markets into an unsustainable state. The sharp drop on Black Monday was only the grand finale of consecutive market declines, with an impressive cumulative depreciation of 29.6%, which began on October 6, 1987, and lasted for nearly two trading weeks. For model calibration, we employ the S&P 500 daily close price as \(P_{t}\) and set October 6, 1987, as the end of the time window, with the corresponding date, October 7, 1986, as the start of the time window. According to the earnings data from S&P Global, the P/E ratio for the S&P 500 was approximately 6.9 in October 1986. Thus, we set \(\gamma_{0}=\gamma_{t_{\rm start}}=1/6.9\) to be the initial value of \(\gamma_{t}\) in the calibration window. Hereafter, this bubble is referred to as S&P 500 1987.
**US bubble ending in October 1929**. This bubble is famously the herald of the Great Depression. It was preceded by extraordinary growth and prosperity on Wall Street in the late 1920s, with the Dow Jones Industrial Average index (DJIA) peaking at 381 on September 3, 1929. This bubble has been analysed in great detail by Galbraith [2009]. Its development is characterised by many stories of economic prosperity, with the reconstruction boom after the first world war. From October 11, 1929, a series of incredible consecutive drops of the DJIA kicked off. It took only one trading month for the DJIA to fall to 195.4 from 392.4, a huge drawdown of 50%. For model calibration, we employ the DJIA daily close price as \(P_{t}\) and set October 11, 1929, as the end of the time window, with the corresponding date, October 12, 1928, as the start of the time window. As the annual normalised P/E ratio of the DJIA in 1928 is 12.5, we set \(\gamma_{0}=\gamma_{t_{\rm start}}=1/12.5\) to be the initial value of \(\gamma_{t}\) in the calibration window. Hereafter, this bubble is referred to as DJIA 1929.
**Chinese stock market bubble ending at the beginning of 2008**. After
the reform of the split-share structure launched in 2006, the most representative stock index for the market of A-shares (i.e., the Shanghai Stock Exchange Composite [SSEC] index) underwent an exuberant growth. The growth was fuelled by compelling growth stories, confirming from all directions great positive outlooks based on the rapid fundamental growth of the Chinese economy, its sky-rocketing export surpluses inducing enormous reserves and liquidity, the strength of its currency, and the coming Olympic Games offering a path to extraordinary expected prosperity [Lin and Sornette, 2018]. Multiple anecdotal sources strongly posit that Chinese investors changed from prudent to very confident in the future prospects and gains. In 2007, the SSEC peaked at 6124 in October from 2600 in January (corresponding to a relative appreciation of 135.5%) within 10 months. However, after an anodyne rally in the first two weeks of 2008, the SSEC began to slump on January 14, 2008. The following eight months saw the index return to the original 2600 level, capping a breathtaking roller-coaster performance for the bubble. For model calibration, we employ the SSEC daily close price as \(P_{t}\) and set January 14, 2008, as the end of the time window, with the corresponding date, January 15, 2007, as the start of the time window. According to the Census and Economic Information Centre (CEIC) data, the P/E ratio for the SSEC was approximately 20 in January 2007; we set \(\gamma_{0}=\gamma_{t_{\text{start}}}=1/20\) to be the initial value of \(\gamma_{t}\) in the calibration window. Hereafter, this bubble is referred to as SSEC 2008.
**Chinese stock market bubble ending in mid-2015**. This bubble started around mid-2014, inducing an approximate growth of 150% in a year, and crashed from mid-2015. The origin of this bubble is often considered to be quite different from SSEC 2008. It is thought to result from investors utilising strong leverage that led to an amplification and disconnection between the price and the realities of economic activity and corporate earnings. Indeed, the Chinese government encouraged small retail investors to join in investing in the stock market, and around 7% of China's population took advantage of the easy access to credit for investment purposes [Sornette et al., 2015]. However, as a remarkable feature, the roller-coaster price performance of this bubble is quite similar to that of SSEC 2008. For model calibration, we employ the SSEC daily close price as \(P_{t}\) and set June 30, 2015, (which opens a consecutive seven-day drop, the longest losing streak since the peak of the bubble) as the end of the time window, with the corresponding date, July 1, 2014, as the start of the time window. According to the CEIC data, the P/E ratio for the SSEC was approximately 10 in July 2014. Thus, we set \(\gamma_{0}=\gamma_{t_{\text{start}}}=1/10\) to be the initial value of \(\gamma_{t}\) in the calibration window. Hereafter, this bubble is referred to as SSEC 2015.
Figure 6: Calibration of the model for five historical bubbles. The left panels show the stock index prices for each case. The calibration window for each case is delineated by the two dotted vertical lines. For each bubble, the estimation algorithm is implemented in a time window of one year ending just before the crash following the bubble. The right panels present the corresponding time series of \(\gamma_{t}\) in the calibration window of one year ending just before each crash. The horizontal green dashed lines indicate the estimated \(\widehat{P^{*}}=E/\widehat{\gamma^{*}}\) and \(\widehat{\gamma^{*}}\) respectively in the left and right panels. The red dotted-dashed line represents the estimated \(\phi P^{*}\).
### Results of the calibration on the five empirical bubble examples
For each time series of \(\gamma_{t}\), we choose \(\{b=0,\alpha=0,\psi=0\}\) as the lower bound and the sufficiently large values \(\{b=100,\alpha=100,\psi=100\}\) as the upper bound of the parameter space to implement the L-BFGS-B algorithm. The initial values for the optimisation are all set to \(\{b=0.01,\alpha=0.01,\psi=0.01\}\) for the above five bubble cases. After \(\widehat{b},\widehat{\alpha},\widehat{\psi}\) have been obtained, we derive \(\widehat{\gamma^{*}}=\widehat{b}/\widehat{\alpha}\). Further, \(\widehat{P^{*}}=E/\widehat{\gamma^{*}}\), \(\widehat{\phi}=[1-\widehat{\psi}^{2}/(2\,\widehat{\alpha}\widehat{\gamma^{*}})]^{-1}\), and \(\widehat{P^{\dagger}}=\widehat{\phi}\widehat{P^{*}}\) can be sequentially calculated. Table 1 lists all calibrated parameters, where \(t_{\text{start}}\) and \(t_{\text{end}}\) respectively give the start and end dates of the calibration time window, and \(P_{\text{max}}\) gives the maximum price of the stock index within the calibration time window. \(\ln\mathcal{L}\) denotes the quasi-maximum log-likelihood for the empirical \(\gamma_{t}\) over the time window based on the estimated parameters.
As shown in table 1, \(\widehat{\alpha}\) and \(\widehat{\psi}\) are respectively in the typical ranges of \(0.005\sim 0.02\) and \(0.002\sim 0.004\) for all bubbles. Moreover, the estimated \(\widehat{P^{*}}\) are not far from the peak prices \(P_{\text{max}}\), except for SSEC 2015. This suggests that, before the final burst, the first four bubbles had almost reached maturation. SSEC 2015 seems to have collapsed too early, before the instantaneous collective market belief on the earning yield \(\gamma_{t}\) had time to reach the equilibrium state corresponding to \(\widehat{\gamma^{*}}\), leading \(P_{\text{max}}\) to remain much smaller than \(P^{*}\). This exception can be rationalised by the observation that the Chinese real estate market and the overall economy cooled significantly while this bubble was developing. As the real estate sector largely drove the Chinese economy in the first five years of the 2010s, the cooling of the real estate market could have dragged down the expected growth rate of the whole Chinese economy. It acted as an external pressure causing a regime shift for \(\gamma^{*}\) in mid-2015, stopping the bubble from growing further.
The left panels of Figure 6 show the price trajectories for each of the five stock indices studied here, in time windows showing clearly the ascending bubble regimes followed by the crashes and following drawdowns. The right panels show the corresponding earning yields \(\gamma_{t}\). The calibrated \(\widehat{P^{*}}\) are found close to the peak of the prices, except for SSEC 2015. It is interesting to note that \(\widehat{P^{*}}\) and \(\widehat{P^{\dagger}}=\widehat{\phi}\widehat{P^{*}}\) are almost identical for S&P500 1987 and Dow Jones 1929, indicating that the emergent risk premium \(\ln\phi\) for these bubbles is rather small. As table 1 shows, this might be ascribed to different reasons. For S&P500 1987, the small \(\phi\) came from a large \(\gamma^{*}\) (recall Equation (35)) that reflects a large expected equilibrium yield in that bubble. For Dow Jones 1929, the small \(\phi\) results from the large value of \(\alpha\), which controls the typical time scale for \(\gamma_{t}\) to converge to \(\gamma^{*}\).
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & NASDAQ 2000 & S\&P500 1987 & Dow Jones 1929 & SSEC 2008 & SSEC 2015 \\ \hline \(t_{\text{start}}\) & 1999/04/12 & 1986/10/06 & 1928/10/12 & 2007/01/15 & 2014/07/01 \\ \(t_{\text{end}}\) & 2000/04/11 & 1987/10/07 & 1929/10/11 & 2008/01/14 & 2015/06/30 \\ \(P_{\text{max}}\) & 5132.5 & 337.9 & 381.1 & 6092.1 & 5166.35 \\ \(\widehat{b}\) & \(1.51\times 10^{-5}\) \((3.23\times 10^{-5})\) & \(8.17\times 10^{-4}\) \((1.24\times 10^{-4})\) & \(0.0012\) \((5.89\times 10^{-4})\) & \(2.38\times 10^{-4}\) \((2.04\times 10^{-4})\) & \(1.63\times 10^{-4}\) \((2.65\times 10^{-4})\) \\ \(\widehat{\alpha}\) & \(0.0044\) \((0.0059)\) & \(0.0081\) \((0.0086)\) & \(0.0194\) \((0.0092)\) & \(0.0099\) \((0.0061)\) & \(0.0054\) \((0.0039)\) \\ \(\widehat{\psi}\) & \(0.0016\) \((8.81\times 10^{-8})\) & \(0.0033\) \((3.49\times 10^{-7})\) & \(0.0037\) \((4.48\times 10^{-7})\) & \(0.0042\) \((5.86\times 10^{-7})\) & \(0.0044\) \((6.70\times 10^{-7})\) \\ \(\widehat{\gamma^{*}}\) & 0.0034 & 0.101 & 0.062 & 0.024 & 0.030 \\ \(\widehat{\phi}\) & 1.093 & 1.007 & 1.006 & 1.038 & 1.064 \\ \(\widehat{H}\) & \(5.94\times 10^{5}\) & \(5.04\times 10^{5}\) & \(5.61\times 10^{5}\) & \(1.59\times 10^{6}\) & \(1.12\times 10^{6}\) \\ \(\widehat{P^{*}}\) & 5051.3 & 338.2 & 337.4 & 5810.4 & 6711.6 \\ \(\widehat{P^{\dagger}}\) & 5520.6 & 340.5 & 339.4 & 6030.7 & 7140.1 \\ \(2\ln\mathcal{L}\) & \(-3916.5\) & \(-2706.0\) & \(-2773.0\) & \(-2809.0\) & \(-2620.0\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Parameters of model (19), which derives from expressions (3) with (18) in Proposition 2, obtained by calibrating five historical bubbles via the quasi-maximum likelihood estimation method. The standard errors calculated from the Fisher information matrix are given in parentheses; \(\widehat{\gamma^{*}}=\widehat{b}/\widehat{\alpha}\), \(\widehat{\phi}\), \(\widehat{P^{*}}\) and \(\widehat{P^{\dagger}}=\widehat{\phi}\widehat{P^{*}}\) are derived from the estimates as described in the text.
### Model comparison to test the goodness of fit of the price process (19)
All the above calibrations and corresponding analyses have been performed under the assumption that the yield process \(\gamma_{t}\) truly obeys the CIR process in the form (39). Recall that \(\gamma_{t}\) is obtained from the observed price process by using (3). Indeed, if the price follows equation (19), then \(\gamma_{t}\) follows equation (18) and vice-versa, according to Proposition 2.
It remains to be shown empirically whether the dynamics characterised by eq. (39) for \(\gamma_{t}\) fit the data better than alternative yield processes. We now turn to this question by developing tests based on \(\phi\)-divergence statistics specifically adapted to models prescribed in terms of stochastic differential equations (SDE) (Iacus, 2009; Pardo, 2018). In a nutshell, this approach tests whether there is a statistically significant difference between an alternative model and the reference model (null hypothesis) in their explanatory power for a given dataset, where the difference between the two (approximated) parametric models is measured in terms of certain \(\phi\)-divergence measures.
Let us denote by \(\boldsymbol{\theta}=(\boldsymbol{\theta}^{\prime},\boldsymbol{\theta}^{{}^{\prime\prime}})\) the parameter vector that contains all parameters of the following generic SDE describing an ergodic diffusion process \(X_{t}\)
\[\mathrm{d}X_{t}=\mu(\boldsymbol{\theta}^{\prime},X_{t})\,\mathrm{d}t+\sigma( \boldsymbol{\theta}^{{}^{\prime\prime}},X_{t})\,\mathrm{d}W_{t}, \tag{41}\]
where the parameter vectors \(\boldsymbol{\theta}^{\prime}\) and \(\boldsymbol{\theta}^{{}^{\prime\prime}}\) have dimensions \(p\) and \(q\), respectively. Let \(f(X,\boldsymbol{\theta})\) be the probability density for the observations \(X\). Let us assume that the model under testing is calibrated with parameters \(\boldsymbol{\theta}_{0}\), while the calibration of the alternative model yields parameters \(\widehat{\boldsymbol{\theta}}\). The \(\phi\)-divergence measure between the model \(f(X,\boldsymbol{\theta}_{0})\) (null hypothesis) and \(f(X,\widehat{\boldsymbol{\theta}})\) is defined as the following expected function of likelihood ratios,
\[D_{\phi}(\widehat{\boldsymbol{\theta}},\boldsymbol{\theta}_{0})=\mathbb{E}_{ \boldsymbol{\theta}_{0}}\left[\,\phi\left(\frac{f(X,\widehat{\boldsymbol{ \theta}})}{f(X,\boldsymbol{\theta}_{0})}\right)\,\right]\, \tag{42}\]
where \(\phi(x)\) is a convex continuous function defined on \([0,+\infty)\) with \(\phi(1)=\phi^{\prime}(1)=0\). Different choices of \(\phi(x)\) allow the above divergence to recover a large body of well-known distance measures between two probability densities. For example,
* for \(\phi(x)=x(\ln x-1)+1\), \(D_{\phi}\) recovers the Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951) \(D_{KL}(f(X,\widehat{\boldsymbol{\theta}})||f(X,\boldsymbol{\theta}_{0}))\);
* for \(\phi(x)=\left(\frac{x-1}{x+1}\right)^{2}\), \(D_{\phi}\) recovers the Balakrishnan-Sanghvi (BS) divergence (Balakrishnan and Sanghvi, 1968), which equals \(\int\left(\frac{f(X,\widehat{\boldsymbol{\theta}})-f(X,\boldsymbol{\theta}_{0})}{f(X,\widehat{\boldsymbol{\theta}})+f(X,\boldsymbol{\theta}_{0})}\right)^{2}f(X,\boldsymbol{\theta}_{0})\,\mathrm{d}X\);
* for \(\phi(x)=-2x^{\frac{1}{2}}+2x+1\), \(D_{\phi}\) recovers the Rathie-Kannappan (RK) divergence (Rathie and Kannappan, 1972), which is proportional to \[\int\left[1-\left(\frac{f(X,\widehat{\mathbf{\theta}})}{f(X,\mathbf{\theta}_{0})}\right)^{-\frac{1}{2}}\right]f(X,\mathbf{\theta}_{0})\,\mathrm{d}X.\]
Given a sample of \(n\) observations and some asymptotically efficient estimator, such as the quasi-log-likelihood estimator \(\widehat{\mathbf{\theta}}\), De Gregorio and Iacus (2013) have proven that the \(\phi\)-divergence statistic \(T_{\phi,n}=2nD_{\phi,n}^{\mathrm{EXP}}(\widehat{\mathbf{\theta}},\mathbf{\theta}_{0})\) is asymptotically distribution-free under \(H_{0}:\mathbf{\theta}=\mathbf{\theta}_{0}\) versus \(H_{1}:\mathbf{\theta}\neq\mathbf{\theta}_{0}\), and
\[T_{\phi,n}\stackrel{{ d}}{{\longrightarrow}}\chi_{p+q}^{2},\quad \text{when }n\to\infty,\Delta_{s}\to 0.\]
\(D_{\phi,n}^{\mathrm{EXP}}(\widehat{\mathbf{\theta}},\mathbf{\theta}_{0})\) is the empirical version for the theoretical \(\phi\)-divergence measure (42), which reads,
\[D_{\phi,n}^{\mathrm{EXP}}(\widehat{\mathbf{\theta}},\mathbf{\theta}_{0})=\frac{1}{n} \sum_{t=1}^{t=n}\phi\left(\frac{p_{t}(\widehat{\mathbf{\theta}})}{p_{t}(\mathbf{\theta }_{0})}\right)\,\]
where \(p_{t}(\mathbf{\theta})=\exp\left(-\frac{1}{2}\left\{\ln\sigma^{2}(\mathbf{\theta}^{{}^{\prime\prime}},X_{t-1})+\frac{1}{\Delta_{s}\,\sigma^{2}(\mathbf{\theta}^{{}^{\prime\prime}},X_{t-1})}\cdot[\Delta X_{t}-\Delta_{s}\,\mu(\mathbf{\theta}^{{}^{\prime}},X_{t-1})]^{2}\right\}\right)\) is the Gaussian quasi-transition density of the Euler scheme, up to a multiplicative constant that cancels in the likelihood ratios.
To implement the test for a given time series \(\{\gamma_{t}\}_{t=1}^{t=n}\), we first estimate \(\widehat{\mathbf{\theta}}\) for the chosen alternative model by maximizing its corresponding quasi-log-likelihood. The second step is to calculate the above \(\phi\)-divergence statistic \(T_{\phi,n}\), taking \(\mathbf{\theta}_{0}\) to be the parameters of the model under testing. Then, given the significance level \(\alpha\), we reject \(H_{0}\) if \(T_{\phi,n}>c_{\alpha}\), where \(c_{\alpha}\) is the \(1-\alpha\) quantile of the limiting \(\chi_{p+q}^{2}\) distribution.
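To make the procedure concrete, here is a minimal Python sketch of the statistic \(T_{\phi,n}\) (a sketch under our own naming; `mu` and `sigma2` are callables implementing the drift and squared diffusion of the generic SDE (41), the log of \(p_{t}\) is computed only up to an additive constant that cancels in the likelihood ratio, and, following the usage in this paper, the alternative model supplies \(\widehat{\boldsymbol{\theta}}\) while the null model supplies \(\boldsymbol{\theta}_{0}\)):

```python
import numpy as np
from scipy.stats import chi2

def quasi_log_p(x, dt, mu, sigma2, theta):
    """log p_t(theta) for each observed transition, up to an additive constant."""
    x_prev = x[:-1]
    dx = np.diff(x)
    s2 = sigma2(theta, x_prev)
    return -0.5 * (np.log(s2) + (dx - mu(theta, x_prev) * dt) ** 2 / (dt * s2))

def phi_kl(r):
    """phi(x) = x(ln x - 1) + 1, the Kullback-Leibler choice."""
    return r * (np.log(r) - 1.0) + 1.0

def divergence_test(x, dt, alt, theta_hat, null, theta_0, df, phi=phi_kl):
    """Return T_{phi,n} and its asymptotic chi^2 p-value; alt and null are
    (mu, sigma2) pairs of callables, and df = p + q."""
    ratio = np.exp(quasi_log_p(x, dt, *alt, theta_hat)
                   - quasi_log_p(x, dt, *null, theta_0))
    T = 2.0 * np.sum(phi(ratio))        # equals 2n * D^{EXP}_{phi,n}
    return T, chi2.sf(T, df)
```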
For the \(\gamma_{t}\) time series obtained for each of the five historical bubbles studied above, we compare the CIR model to the following three alternative models:
* Brownian Motion (BM): \(\mathrm{d}\gamma_{t}=b\,\mathrm{d}t+\psi\,\mathrm{d}W_{t}\),
* Geometric Brownian Motion (GBM): \(\mathrm{d}\gamma_{t}=-\alpha\gamma_{t}\,\mathrm{d}t+\psi\gamma_{t}\,\mathrm{d}W_{t}\),
* CKLS process (CKLS): \(\mathrm{d}\gamma_{t}=\left(b-\alpha\gamma_{t}\right)\mathrm{d}t+\psi\gamma_{t} ^{v}\,\mathrm{d}W_{t}\).
The BM model is the simplest stochastic model, in which the increment of \(\gamma_{t}\) only has a constant downward trend \(b\) without any mean-reverting effect. The GBM model is another possible model for \(\gamma_{t}\). Assuming that the stock price \(P_{t}\) obeys a geometric Brownian motion with drift \(\mu\) and diffusion \(\sigma\), taken as a classical benchmark, this implies that \(\gamma_{t}=E/P_{t}\) also follows a geometric Brownian motion with drift parameter \(\alpha=(\mu-\sigma^{2})\) and diffusion \(\psi=\sigma\), as reported in Proposition 1. The CKLS process is also well known in financial applications, in particular to model short-term interest rates (see (Chan et al., 1992)). The CIR process is the special case of the CKLS process obtained for \(v=0.5\). Hence, choosing the CKLS as an alternative model can help determine whether a diffusion exponent different from the square root of \(\gamma_{t}\) has more explanatory power.
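For reference, the drift and squared-diffusion callables of these candidate dynamics, in the form expected by the `divergence_test` sketch above, could be written as follows (our own illustrative encoding; the CIR case is CKLS with \(v=0.5\)):

```python
# Parameter layouts (our convention): BM th=(b, psi); GBM th=(alpha, psi);
# CIR th=(b, alpha, psi); CKLS th=(b, alpha, psi, v).
bm   = (lambda th, x: th[0] + 0.0 * x,     lambda th, x: th[1] ** 2 + 0.0 * x)
gbm  = (lambda th, x: -th[0] * x,          lambda th, x: (th[1] * x) ** 2)
cir  = (lambda th, x: th[0] - th[1] * x,   lambda th, x: th[2] ** 2 * x)
ckls = (lambda th, x: th[0] - th[1] * x,   lambda th, x: th[2] ** 2 * x ** (2.0 * th[3]))

# Hypothetical usage, testing the CIR null against the CKLS alternative:
# T, pval = divergence_test(gamma, 1.0 / 252.0, ckls, theta_ckls_hat,
#                           cir, theta_cir_hat, df=4)   # df = p + q
```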
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & NASDAQ 2000 & S\&P500 1987 & Dow Jones 1929 & SSEC 2008 & SSEC 2015 \\ \hline & \multicolumn{4}{c}{Panel A: Kullback-Leibler Divergence} \\ \hline BM & 6.76 & 2.79 & 3.80 & 6.83 & 15.3 \\ & (0.149) & (0.593) & (0.434) & (0.145) & (0.004)\({}^{**}\) \\ GBM & 8.48 & 3.59 & 5.35 & \(1.36\times 10^{2}\) & 11.0 \\ & (0.076) & (0.464) & (0.254) & (\(<0.001\))\({}^{***}\) & (0.02)\({}^{*}\) \\ CKLS & 0.01 & 0.74 & 9.28 & \(1.05\times 10^{3}\) & 25.1 \\ & (\(>\)0.999) & (0.946) & (0.054) & (\(<0.001\))\({}^{***}\) & (\(<0.001\))\({}^{***}\) \\ \hline & \multicolumn{4}{c}{Panel B: Balakrishnan-Sanghvi Divergence} \\ \hline BM & 4.20 & 1.50 & 1.99 & 4.36 & 4.81 \\ & (0.379) & (0.827) & (0.738) & (0.359) & (0.308) \\ GBM & 5.99 & 1.29 & 2.67 & 5.99 & 7.29 \\ & (0.200) & (0.863) & (0.615) & (0.200) & (0.121) \\ CKLS & \(3.30\times 10^{-3}\) & 0.32 & 4.63 & 9.03 & 4.89 \\ & (\(>\)0.999) & (0.988) & (0.360) & (0.060) & (0.299) \\ \hline & \multicolumn{4}{c}{Panel C: Rathie-Kannappan Divergence} \\ \hline BM & 3.66 & 1.42 & 1.92 & 3.77 & 6.87 \\ & (0.453) & (0.840) & (0.750) & (0.438) & (0.143) \\ GBM & 4.95 & 1.65 & 2.67 & 40.3 & 6.19 \\ & (0.293) & (0.799) & (0.614) & (\(<0.001\))\({}^{***}\) & (0.185) \\ CKLS & 3.36 & 0.36 & 4.65 & \(2.3\times 10^{2}\) & 10.2 \\ & (\(>\)0.999) & (0.986) & (0.325) & (\(<0.001\))\({}^{***}\) & (0.372) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Results of the \(\phi\)-divergence tests of the hypothesis that \(\gamma_{t}\) follows the CIR process, for five historical bubbles. In each bubble case, the time series of \(\gamma_{t}\) is obtained from the stock index price over the time window with \(t_{\text{start}}\) and \(t_{\text{end}}\) listed in table 1, and the corresponding parameters \(\boldsymbol{\theta}_{0}\) of \(H_{0}\) are set to the estimated values \(\hat{b},\hat{\alpha},\hat{\psi}\). Panels A, B and C respectively give the results of the tests for the three \(\phi\)-divergence measures. Each row lists the values of the \(T_{\phi,n}\) statistic and the p-value (below, within parentheses) for the \(\widehat{\boldsymbol{\theta}}\) estimated by maximisation of the quasi-log-likelihood of the corresponding alternative model.
We calculate the statistic \(T_{\phi,n}\) for each of three different \(\phi(x)\) and then perform the corresponding tests. The three \(\phi(x)\) correspond respectively to the KL divergence, BS divergence and RK divergence listed above.
As shown in table 2, the tests for NASDAQ 2000, S&P500 1987 and Dow Jones 1929 do not reject the null \(H_{0}\) that \(\gamma_{t}\) obeys the CIR process, regardless of the alternative model considered and for all \(\phi\)-divergence test statistics. For these three time series, the test results thus clearly support our proposed CIR model for the dynamics of \(\gamma_{t}\) given by eq. (39).
However, for the two Chinese stock market bubbles, the three \(\phi\)-divergence tests give different results. For the case of SSEC 2008, only the test based on the BS divergence suggests that the CIR process is more competent than the other processes at describing the empirical \(\gamma_{t}\). In contrast, the KL and RK divergence tests suggest a sharp rejection of \(H_{0}\) in favour of the GBM model or the more general CKLS process with an exponent \(v\) different from 0.5. For SSEC 2015, both the BS and RK divergence tests indicate that \(H_{0}\) should not be rejected, while the test based on the KL divergence suggests the opposite result. Note, however, that in this case the null \(H_{0}\) is not strongly rejected with respect to the alternative BM and GBM models.
Overall, the above tests imply that the proposed model might be better suited to characterize US than Chinese stock markets. However, the comparison between the competing models for the three different divergence measures for SSEC 2015 and SSEC 2008 also suggests that our model could be applicable to Chinese stock markets.
## 5 Conclusion
We have introduced a new formulation to understand the properties of stock market prices, which is based on interpreting the earning-over-price, called here the earning yield process \(\gamma_{t}\), as the key variable driving prices. \(\gamma_{t}\) has been argued to represent the instantaneous collective belief on the weight of current earnings in pricing the stock. In a second interpretation, we have proposed that the earning yield of a company is analogous to the yield-to-maturity of an equivalent perpetual bond and \(\gamma_{t}\) stands for the market instantaneous collective belief on such yield-to-maturity.
Our main proposal has been to focus on this \(\gamma_{t}\), in essence the inverse of the price (\(\gamma_{t}\simeq 1/P_{t}\)), as a more suitable variable to rationalise the many stylised facts of financial prices and returns. We have illustrated the power of the choice of the \(\gamma_{t}\simeq 1/P_{t}\) as the _right variable_ to model financial prices by exploring the properties resulting from perhaps the simplest choice for \(\gamma_{t}\) in the form of the CIR process. This choice was made for simplicity and because the CIR process is the prevailing one to characterise short-term interest properties, and the earning yield has been argued to be an interest-like variable.
One of the merits of the CIR model for \(\gamma_{t}\) stems from the fact that we have been able to derive explicit analytical solutions for many properties of
the resulting price process. In particular, we have obtained the analytical expression for the transition density of the price process, expressed in terms of the modified Bessel function of the first kind. We have obtained the analytical expressions for the first and second moments of prices and cumulative returns, expressed in terms of the Kummer confluent hypergeometric function. We have obtained the ensemble average of the cumulative price. We have shown that the inverse CIR process of the price is an ergodic Markovian process. We have identified two regimes: (i) the non-explosive nonlinear regime, valid for \(2\alpha\gamma^{*}>\psi^{2}\), in which the price process remains bounded at all times and is also stationary and ergodic; (ii) the recurrent explosive bubble regime, for \(2\alpha\gamma^{*}\leq\psi^{2}\), in which there is a strictly positive probability for the price to diverge in finite time and then rebound immediately to a finite value. This implies that the price can transiently explode to infinity. This behavior provides a natural model for explosive financial bubbles. But even in the non-explosive nonlinear regime, we demonstrated that the price exhibits transient super-exponential behavior that has been argued to be a characteristic signature of financial bubbles. Our model provides a very simple framework, which is quite remarkable in its ability to rationalise empirical observations that have not hitherto received explanations, such as the transition of the power law tail exponent for the cross-sectional distribution of stock prices from values larger than 1 to a value converging to 1 close to the end of the Japanese Internet bubble.
We have developed a quasi-maximum likelihood method with the L-BFGS-B optimisation algorithm to calibrate the model to five well-known historical bubbles in the US and Chinese stock markets. The estimated \(\alpha\) and \(\psi\) parameters for the five bubbles fall within the typical intervals consistent with the numerical analysis. Furthermore, for the five bubbles, the perceived intrinsic value \(P^{*}\) has been estimated to be close to the peak price in the calibration window, except for the second Chinese bubble. This implies that estimating \(P^{*}\) can help determine the degree of maturation of a bubble and should aid in constructing early-warning systems for bubble collapse.
Finally, let us reflect briefly in more general terms on the profound implications of our proposition that the inverse of the price (\(\gamma_{t}\simeq 1/P_{t}\)) is the "right" financial variable to model. In the systematic scientific investigation of the world, a key component is identifying the right variable to model, which involves selecting the key factors that are relevant to the phenomenon being studied and developing a model that accurately captures their interactions and relationships. Identifying the right variable to model is a critical step in the scientific process and can have a significant impact on the accuracy and usefulness of the resulting models. This translates to the quest of finding the "right" variable that makes the theory simple and insightful. In the present case, we have proposed that the "right" variable is not the price but its inverse. In the physical sciences, it has been documented many times that observable variables are not necessarily the "right" variables in terms of which the theory is simple.
We think that this strategy introduced in the present work can open up a large new set of insights in financial economics.
## Appendix A Proofs
### Proof of Proposition 2
**Proof.** The proof is constructed by directly showing that the original process defined by expressions (3) and (18) does satisfy SDE (19). For ease of notation, the subscript \(t\) is omitted. According to the Ito lemma, (3) leads to
\[\mathrm{d}P =\frac{\partial P}{\partial\gamma}\mathrm{d}\gamma+\frac{1}{2} \frac{\partial^{2}P}{\partial\gamma^{2}}(\mathrm{d}\gamma)^{2}\] \[=-\frac{E}{\gamma^{2}}\ \mathrm{d}\gamma+\frac{1}{2}\cdot 2 \cdot\frac{E}{\gamma^{3}}\ (\mathrm{d}\gamma)^{2}\] \[=-\frac{E}{\gamma}\cdot\frac{1}{\gamma}[\ -\alpha(\gamma- \gamma^{*})\ \mathrm{d}t+\psi\sqrt{\gamma}\ \mathrm{d}W\ ]+\frac{E}{\gamma}\cdot\frac{1}{\gamma^{2}}\cdot\psi^{2} \gamma\ \mathrm{d}t\] \[=P\left[\alpha\left(1-\frac{\gamma^{*}}{\gamma}\right)+\frac{ \psi^{2}}{\gamma}\right]\ \mathrm{d}t-P\psi\sqrt{\frac{1}{\gamma}}\ \mathrm{d}W\]
Therefore,
\[\frac{\mathrm{d}P}{P} =\left[\alpha\left(1-\frac{\gamma^{*}}{\gamma}\right)+\frac{ \psi^{2}}{\gamma}\right]\ \mathrm{d}t-\psi\sqrt{\frac{1}{\gamma}}\ \mathrm{d}W\] \[\text{replace }\gamma\text{ by }\frac{E}{P}\text{ and then replace }\gamma^{*}\text{ by }\frac{E}{P^{*}}\] \[\frac{\mathrm{d}P}{P} =\left[\ \alpha\left(\frac{P^{*}-P}{P^{*}}\right)+\psi^{2}\frac{P}{E} \ \right]\mathrm{d}t-\psi\sqrt{\frac{P}{E}}\ \mathrm{d}W\] \[\text{according to the symmetry of }\mathrm{d}W_{t}\] \[\frac{\mathrm{d}P}{P} =\left[\ \alpha\left(\frac{P^{*}-P}{P^{*}}\right)+\psi^{2}\frac{P}{E} \ \right]\mathrm{d}t+\psi\sqrt{\frac{P}{E}}\ \mathrm{d}W\.\]
\(\square\)
### Proof of Proposition 3
**Proof.** The proposition is proven by checking that \(m_{t}\) satisfies the two conditions for qualifying as a stochastic discount factor (SDF).
Firstly, it is easy to see that \(\mathbb{E}_{t}(\mathrm{d}m_{t})=0\). As \(Z_{t}^{(i)}\) is a Gauss-Wiener process, \(\mathbb{E}_{t}(m_{t}\vartheta_{t}^{(i)}\mathrm{d}Z_{t}^{(i)})=m_{t}\vartheta_{t}^{(i)}\cdot\mathbb{E}_{t}(\mathrm{d}Z_{t}^{(i)})=0\). Hence, \(\mathbb{E}_{t}(\mathrm{d}m_{t})=\sum_{i=1}^{N}\mathbb{E}_{t}(m_{t}\vartheta_{t}^{(i)}\mathrm{d}Z_{t}^{(i)})=0\).
We then check whether the prices of an arbitrary stock and its derivative satisfy the second condition.
For stock \(i\), we have
\[\mathbb{E}_{t}\left(\frac{\mathrm{d}m_{t}}{m_{t}}\frac{\mathrm{d}P_{t}^{(i)}}{P_{t}^{(i)}}\right) =\mathbb{E}_{t}\left(\vartheta_{t}^{(i)}\mathrm{d}Z_{t}^{(i)}\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}\mathrm{d}W_{t}^{(i)}+\sum_{j\neq i}\vartheta_{t}^{(j)}\mathrm{d}Z_{t}^{(j)}\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}\mathrm{d}W_{t}^{(i)}\right)\] \[=\vartheta_{t}^{(i)}\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}\rho^{(i)}\delta_{ii}\mathrm{d}t+\sum_{j\neq i}\vartheta_{t}^{(j)}\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}\rho^{(i)}\delta_{ij}\mathrm{d}t\] \[=\vartheta_{t}^{(i)}\rho^{(i)}\cdot\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}\,\mathrm{d}t,\] \[\mathbb{E}_{t}\left(\frac{\mathrm{d}P_{t}^{(i)}}{P_{t}^{(i)}}\right) =\left[\alpha^{(i)}+\left(\frac{\psi^{(i)2}}{E^{(i)}}-\frac{\alpha^{(i)}}{P^{*(i)}}\right)P_{t}^{(i)}\right]\mathrm{d}t.\]
If the following equality holds
\[\vartheta_{t}^{(i)}\rho^{(i)}=\left[\,r_{f}-\alpha^{(i)}\left(\frac{P^{*(i)}-P_{t}^{(i)}}{P^{*(i)}}\right)-\psi^{(i)2}\,\frac{P_{t}^{(i)}}{E^{(i)}}\,\right]\bigg{/}\left(\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}\right)\,\]
then the second condition in Equation (22) for all stocks is also satisfied.
We then check the conditions for the prices of derivatives. For ease of notation, we denote
\[\mu_{t}^{(i)} :=\mu(P_{t}^{(i)})=\left[\ \alpha^{(i)}\left(\frac{P^{(i)*}-P_{t}^{(i) }}{P^{(i)*}}\right)+\psi^{(i)2}\ \frac{P_{t}^{(i)}}{E^{(i)}}\ \right],\] \[\sigma_{t}^{(i)} :=\sigma(P_{t}^{(i)})=\psi^{(i)}\sqrt{\frac{P_{t}^{(i)}}{E^{(i)}}}.\]
According to the Ito formula, we have,
\[\mathrm{d}\xi_{t}^{(i)}=\left[\frac{\partial\xi_{t}^{(i)}}{\partial t}+\frac{ \partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}\mu_{t}^{(i)}P_{t}^{(i)}+\frac{1}{ 2}\frac{\partial^{2}\xi_{t}^{(i)}}{\partial P_{t}^{(i)2}}\sigma_{t}^{(i)2}P_{t }^{(i)2}\right]\mathrm{d}t+\frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}P _{t}^{(i)}\sigma_{t}^{(i)}\mathrm{d}W_{t}^{(i)}. \tag{43}\]
At any time \(t\), if one takes \(1\) share of the derivative \(\xi_{t}^{(i)}\) and simultaneously shorts \(\frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}\) shares of the stock at the price \(P_{t}^{(i)}\), the diffusion term is neutralised, and the portfolio must be risk-free. Further, given the no-arbitrage condition that any risk-free portfolio in the market must grow at the risk-free rate \(r_{f}\), we get
\[\left[\frac{\partial\xi_{t}^{(i)}}{\partial t}+\frac{\partial\xi_{t}^{(i)}}{ \partial P_{t}^{(i)}}\mu_{t}^{(i)}P_{t}^{(i)}+\frac{1}{2}\frac{\partial^{2} \xi_{t}^{(i)}}{\partial P_{t}^{(i)2}}\sigma_{t}^{(i)2}P_{t}^{(i)2}\right]- \frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}\mu_{t}^{(i)}P_{t}^{(i)}=r_{f }\left(\xi_{t}^{(i)}-\frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}P_{t}^ {(i)}\right).\]
Moreover,
\[\frac{\partial\xi_{t}^{(i)}}{\partial t}+\frac{\partial\xi_{t}^{(i)}}{ \partial P_{t}^{(i)}}r_{f}P_{t}^{(i)}+\frac{1}{2}\frac{\partial^{2}\xi_{t}^{(i)} }{\partial P_{t}^{(i)2}}\sigma_{t}^{(i)2}P_{t}^{(i)2}=r_{f}\xi_{t}^{(i)}. \tag{44}\]
This equation can be considered the fundamental pricing equation for derivatives. Thus,
\[\mathbb{E}_{t}(\mathrm{d}\xi_{t}^{(i)}) =\left[\frac{\partial\xi_{t}^{(i)}}{\partial t}+\frac{\partial\xi_ {t}^{(i)}}{\partial P_{t}^{(i)}}\mu_{t}^{(i)}P_{t}^{(i)}+\frac{1}{2}\frac{ \partial^{2}\xi_{t}^{(i)}}{\partial P_{t}^{(i)2}}\sigma_{t}^{(i)2}P_{t}^{(i)2 }\right]\mathrm{d}t\] \[\mathbb{E}_{t}\left(\frac{\mathrm{d}m_{t}}{m_{t}}\mathrm{d}\xi_{t }^{(i)}\right) =\mathbb{E}_{t}\left(\vartheta_{t}^{(i)}\mathrm{d}Z_{t}^{(i)} \frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}P_{t}^{(i)}\sigma_{t}^{(i)} \mathrm{d}W_{t}^{(i)}+\sum_{j\neq i}\vartheta_{t}^{(j)}\mathrm{d}Z_{t}^{(j)} \frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}P_{t}^{(i)}\sigma_{t}^{(i)} \mathrm{d}W_{t}^{(i)}\right)\] \[=\vartheta_{t}^{(i)}\frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{ (i)}}P_{t}^{(i)}\sigma_{t}^{(i)}\rho^{(i)}\delta_{ii}\mathrm{d}t+\sum_{j\neq i }\vartheta_{t}^{(j)}\frac{\partial\xi_{t}^{(i)}}{\partial P_{t}^{(i)}}P_{t}^ {(i)}\sigma_{t}^{(i)}\rho^{(i)}\delta_{ij}\mathrm{d}t\] \[=\vartheta_{t}^{(i)}\rho^{(i)}\cdot\frac{\partial\xi_{t}^{(i)}}{ \partial P_{t}^{(i)}}P_{t}^{(i)}\sigma_{t}^{(i)}\mathrm{d}t.\]
It is important to note that \(\vartheta_{t}^{(i)}\rho^{(i)}=\frac{r_{f}-\mu_{t}^{(i)}}{\sigma_{t}^{(i)}}\). Hence,
\[\mathbb{E}_{t}(\mathrm{d}\xi_{t}^{(i)})+\mathbb{E}_{t}\left(\frac{\mathrm{d}m _{t}}{m_{t}}\mathrm{d}\xi_{t}^{(i)}\right)=\left[\frac{\partial\xi_{t}^{(i)}}{ \partial t}+\frac{\partial\xi_{t}^{(i)}}{\partial P}r_{f}P_{t}^{(i)}+\frac{1} {2}\frac{\partial^{2}\xi_{t}^{(i)}}{\partial P_{t}^{(i)2}}\sigma_{t}^{(i)2}P_{ t}^{(i)2}\right]\mathrm{d}t.\]
Recall that, from Equation (44), the right-hand side of the above equation is equal to \(r_{f}\xi_{t}^{(i)}\mathrm{d}t\). Therefore,
\[\mathbb{E}_{t}\left(\frac{\mathrm{d}\xi_{t}^{(i)}}{\xi_{t}^{(i)}}\right)+ \mathbb{E}_{t}\left(\frac{\mathrm{d}m_{t}}{m_{t}}\frac{\mathrm{d}\xi_{t}^{(i) }}{\xi_{t}^{(i)}}\right)=r_{f}\mathrm{d}t\.\]
### Proof of Proposition 4
Starting from expression (2) relating \(\gamma_{t}\) to \(P_{t}\), the transition probability \(f(P_{t},t\mid P_{0})\) for \(P_{t}\) is obtained from the transition probability \(f^{CIR}(\gamma_{t},t\mid\gamma_{0})\) for \(\gamma_{t}\) via the standard condition of the conservation of probability under a change of variable, which yields
\[f(P_{t},t\mid P_{0})=f^{CIR}(\gamma_{t},t\mid\gamma_{0})\cdot\frac{E}{P_{t}^{ 2}}=f^{CIR}\left(\frac{E}{P_{t}},t\mid\frac{E}{P_{0}}\right)\cdot\frac{E}{P_{t }^{2}}. \tag{45}\]
The well-known CIR conditional transition density is given by
\[f^{CIR}(\gamma_{t},t\mid\gamma_{0})=d\;e^{-(u+v)}\left(\frac{v}{u}\right)^{\frac{q}{2}}I_{q}(2\sqrt{uv}), \tag{46}\]
where
\[d =\frac{2\alpha}{\psi^{2}(1-e^{-\alpha t})}, q =\frac{2\alpha\gamma^{*}}{\psi^{2}}-1\] \[u =d\,\gamma_{0}\,e^{-\alpha t}, v =d\,\gamma_{t}\.\]
Substituting \(\gamma^{*}=\dfrac{E}{P^{*}}\), \(\gamma_{0}=\dfrac{E}{P_{0}}\), and \(\gamma_{t}=\dfrac{E}{P_{t}}\) into \(q,u,v\), respectively, we obtain
\[q =\dfrac{2\alpha E}{\psi^{2}P^{*}}-1=\dfrac{H}{P^{*}}-1\] \[u =\dfrac{2\alpha E\ e^{-\alpha t}}{\psi^{2}P_{0}\ (1-e^{-\alpha t})}= \dfrac{H}{P_{0}}\cdot\dfrac{1}{e^{\alpha t}-1}\] \[v =\dfrac{2\alpha E}{\psi^{2}(1-e^{-\alpha t})P_{t}}=\dfrac{H}{P_{t }}\cdot\dfrac{1}{1-e^{-\alpha t}}.\]
From Equations (45) and (46), we have
\[f(P_{t},t\mid P_{0}) =\dfrac{2\alpha}{\psi^{2}(1-e^{-\alpha t})}\ e^{-(u+v)}\left(\dfrac{v}{u}\right)^{\frac{q}{2}}I_{q}(2\sqrt{uv})\cdot\dfrac{E}{P_{t}^{2}}\] \[=\dfrac{2\alpha E}{\psi^{2}(1-e^{-\alpha t})}\ e^{-(u+v)}v^{\frac{q}{2}}\cdot\dfrac{1}{P_{t}^{2}}\cdot u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\] \[=\dfrac{H}{1-e^{-\alpha t}}\ e^{-(u+v)}v^{\frac{q}{2}+2}\cdot\left(\dfrac{H}{1-e^{-\alpha t}}\right)^{-2}\cdot u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\] \[=c\ e^{-(u+v)}v^{\frac{q}{2}+2}u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv}).\]
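Equivalently, since \(2d\,\gamma_{t}\) given \(\gamma_{0}\) follows a non-central \(\chi^{2}\) law with \(2(q+1)\) degrees of freedom and non-centrality \(2u\), the price density above can be cross-checked numerically against `scipy.stats.ncx2`. The following is a minimal sketch of that consistency check, with illustrative parameter values of our own choosing:

```python
import numpy as np
from scipy.special import iv
from scipy.stats import ncx2

# Illustrative parameters (our own choice): alpha, gamma*, psi, E, gamma_0, t
alpha, gstar, psi, E, g0, t = 0.01, 0.005, 0.003, 20.0, 0.004, 50.0
H = 2.0 * alpha * E / psi**2
d = 2.0 * alpha / (psi**2 * (1.0 - np.exp(-alpha * t)))
q = 2.0 * alpha * gstar / psi**2 - 1.0
u = d * g0 * np.exp(-alpha * t)
c = (1.0 - np.exp(-alpha * t)) / H

P = np.array([2000.0, 4000.0, 8000.0])
v = d * E / P

# Proposition 4 density in price space
f_prop4 = c * np.exp(-(u + v)) * v**(q / 2 + 2) * u**(-q / 2) * iv(q, 2 * np.sqrt(u * v))

# Same density via the non-central chi-square representation of the CIR law:
# f(P) = f_CIR(E/P) * E/P^2, with f_CIR(g) = 2d * ncx2.pdf(2d*g, 2(q+1), 2u)
f_ncx2 = ncx2.pdf(2 * d * E / P, df=2 * (q + 1), nc=2 * u) * 2 * d * E / P**2

print(np.max(np.abs(f_prop4 / f_ncx2 - 1.0)))   # ~ 1e-12: the two forms agree
```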
### Proof of the ergodicity of the price process (19)
The analysis follows Iacus (2009) and Cherny and Engelbert (2005), examining the integral convergence conditions of the so-called _scale measure_ and _speed measure_. Specifically, consider a Markovian process in the form of a stochastic differential equation (SDE)
\[\mathrm{d}X_{t}=b(X_{t})\ \mathrm{d}t+\sigma(X_{t})\ \mathrm{d}W\,\]
the scale measure and speed measure are defined respectively as
\[\rho(x) =e^{-\int_{a}^{x}\frac{2b(y)}{\sigma^{2}(y)}\mathrm{d}y}\,\qquad\mathrm{arbitrary}\ \ a\geq 0\ \ \mathrm{and}\ \ a<x\] \[m(x) =\dfrac{1}{\sigma^{2}(x)\rho(x)}\.\]
If \(\int_{0}^{\infty}m(x)\ \mathrm{d}x<\infty\), the above process is ergodic inside the domain \([\,0,\infty\,]\). Moreover, if
\[\int_{0}^{a}\rho(x)\mathrm{d}x=\infty,\quad\int_{a}^{\infty}\rho(x)\mathrm{d} x=\infty,\qquad\mathrm{arbitrary}\ \ a\geq 0\ \ \mathrm{and}\ \ a<\infty,\]
the process can never reach \(0\) or \(\infty\) with a positive probability. However, if one or both integrals converge, the corresponding boundary \(0\) or \(\infty\) is reached and is instantaneously reflecting.
For the price process specified by Equation (19), the scale measure (taking \(a=1\)) is
\[\rho(x)=\exp\left\{-\int_{1}^{x}\frac{2(\alpha+Ay)y}{By^{3}}\mathrm{d}y\right\}=e ^{-\frac{2\alpha}{B}}\cdot e^{\frac{2\alpha}{B}\frac{1}{x}}x^{-\frac{2A}{B}},\]
where \(A=\frac{\psi^{2}}{E}-\frac{\alpha}{P^{*}}\) and \(B=\frac{\psi^{2}}{E}\) with \(A\in(-\frac{\alpha}{P^{*}},\,B),\ B\in(0,\infty)\). Thus,
\[m(x)=\frac{e^{\frac{2\alpha}{B}}}{B}\cdot e^{-\frac{2\alpha}{B}\frac{1}{x}}x^{ \frac{2A}{B}-3}\.\]
Then,
\[\int_{0}^{\infty}m(x)\ \mathrm{d}x =\frac{e^{\frac{2\alpha}{B}}}{B}\int_{0}^{\infty}e^{-\frac{2 \alpha}{B}\frac{1}{x}}x^{\frac{2A}{B}-3}\ \mathrm{d}x\] \[=\frac{e^{\frac{2\alpha}{B}}}{B}\int_{0}^{\infty}e^{-\frac{2 \alpha}{B}\cdot y}y^{-\frac{2A}{B}+1}\ \mathrm{d}y\qquad\text{with }y=\frac{1}{x}\] \[=\frac{e^{\frac{2\alpha}{B}}}{B}\int_{0}^{\infty}e^{-z}\left( \frac{B}{2\alpha}\right)^{1-\frac{2A}{B}}\cdot z^{-\frac{2A}{B}+1}\cdot\frac{B }{2\alpha}\ \mathrm{d}z\qquad\text{with }z=\frac{2\alpha}{B}y\] \[=\frac{e^{\frac{2\alpha}{B}}}{B}\left(\frac{B}{2\alpha}\right)^{ 2-\frac{2A}{B}}\Gamma(2-\frac{2A}{B})<\infty\.\]
Therefore, the price process is ergodic in the domain \([\,0,+\infty\,]\). Since the price process is the reciprocal of the CIR process, this result is intimately related to the ergodicity of the CIR process. Additionally,
\[\int_{1}^{\infty}\rho(x)\mathrm{d}x =e^{-\frac{2\alpha}{B}}\int_{1}^{\infty}e^{\frac{2\alpha}{B}\frac {1}{x}}x^{-\frac{2A}{B}}\ \mathrm{d}x\] \[=e^{-\frac{2\alpha}{B}}\cdot\left(\frac{2\alpha}{B}\right)^{1- \frac{2A}{B}}\int_{0}^{\frac{2\alpha}{B}}e^{y}\ y^{-2+\frac{2A}{B}}\ \mathrm{d}y\qquad\text{with }y= \frac{2\alpha}{Bx}\] \[=e^{-\frac{2\alpha}{B}}\cdot\left(\frac{2\alpha}{B}\right)^{1- \frac{2A}{B}}\cdot(-1)^{-2+\frac{2A}{B}}\int_{-\frac{2\alpha}{B}}^{0}e^{-z}z^ {-2+\frac{2A}{B}}\ \mathrm{d}z\qquad\text{with }z=-y\] \[=(-1)^{-2+\frac{2A}{B}}e^{-\frac{2\alpha}{B}}\cdot\left(\frac{2 \alpha}{B}\right)^{1-\frac{2A}{B}}\left(\int_{-\frac{2\alpha}{B}}^{\infty}e^{- z}z^{-2+\frac{2A}{B}}\ \mathrm{d}z-\int_{0}^{\infty}e^{-z}z^{-2+\frac{2A}{B}}\ \mathrm{d}z\right)\] \[=\begin{cases}C_{1}\left[\,\Gamma(-1+\frac{2A}{B},-\frac{2\alpha }{B})-\Gamma(-1+\frac{2A}{B})\,\right]<\infty,\qquad-1+\frac{2A}{B}>0,\\ \infty\end{cases}\]
where \(C_{1}:=(-1)^{-2+\frac{2A}{B}}e^{-\frac{2\alpha}{B}}\cdot\left(\frac{2\alpha} {B}\right)^{1-\frac{2A}{B}}\). The integral \(\int_{1}^{\infty}\rho(x)\mathrm{d}x\) converges when the condition \(-1+\frac{2A}{B}>0\) is satisfied. This condition can be finally
translated into \(\frac{2\alpha E}{\psi^{2}}<P^{*}\), i.e., \(H<P^{*}\). However,
\[\int_{0}^{1}\rho(x)\mathrm{d}x =e^{-\frac{2\alpha}{B}}\int_{0}^{1}e^{\frac{2\alpha}{B}\frac{1}{x}} x^{-\frac{2A}{B}}\ \mathrm{d}x\] \[=e^{-\frac{2\alpha}{B}}\cdot\left(\frac{2\alpha}{B}\right)^{1- \frac{2A}{B}}\int_{1}^{\infty}e^{y}\ y^{-2(1-\frac{A}{B})}\ \mathrm{d}y\qquad\text{with }y= \frac{2\alpha}{Bx}\] \[=\infty\.\]
Hence, the price process is ergodic and can never reach _zero_. When \(H<P^{*}\), the process has a strictly positive probability of exploding to infinity in finite time. When this happens, the price is instantaneously reflected from infinity. Otherwise (\(H>P^{*}\)), the price does not reach infinity in finite time.
### Proof of Proposition 5
**Proof.** As the price process that is solution of Equation (19) is ergodic, it has a unique stationary distribution, which is the limiting distribution of its long-term evolution. Using the definitions (24-27), irrespective of whether \(H\geq P^{*}\) or \(H<P^{*}\) and for an arbitrary \(P_{0}\), we always have
\[\pi(P) =\lim_{t\to\infty}f(P_{t},t\mid P_{0})\] \[=\lim_{t\to\infty}c\ e^{-(u+v)}v^{\frac{q}{2}+2}u^{-\frac{q}{2}} \ I_{q}(2\sqrt{uv})\] \[=\lim_{t\to\infty}c\ e^{-(u+v)}v^{\frac{q}{2}+2}u^{-\frac{q}{2}} \ \sum_{n=0}^{\infty}\left(\frac{2\sqrt{uv}}{2}\right)^{2n+q}\frac{1}{n!\ \Gamma(n+q+1)}\] \[=\lim_{t\to\infty}c\ e^{-(u+v)}v^{q+2}\ \sum_{n=0}^{\infty}\frac{(uv)^{n}}{n!\ \Gamma(n+q+1)}\]
Notice that \(\ c=\frac{1-e^{-\alpha t}}{H}\stackrel{{ t\to\infty}}{{ \longrightarrow}}\frac{1}{H}\), \(v=\frac{1}{cP}\stackrel{{ t\to\infty}}{{\longrightarrow}}\frac{H }{P}\), and \(u\sim\frac{1}{e^{\alpha t}-1}\stackrel{{ t\to\infty}}{{ \longrightarrow}}0\), then
\[\pi(P) =\frac{1}{H}e^{-\frac{H}{P}}\left(\frac{H}{P}\right)^{q+2}\frac{1 }{\Gamma(q+1)}\] \[\text{Replacing }q\text{ by }\frac{H}{P^{*}}-1,\text{ this yields}\] \[=\frac{H^{\frac{H}{P^{*}}}}{\Gamma\left(\frac{H}{P^{*}}\right)}e^ {-\frac{H}{P}}P^{-(\frac{H}{P^{*}}+1)}\.\]
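In other words, \(\pi(P)\) is exactly the inverse-gamma density with shape \(H/P^{*}\) and scale \(H\), which can be checked against `scipy.stats.invgamma`. The snippet below is a sketch with illustrative numbers of our own choosing (the stationary mean exists only for \(H>P^{*}\), consistently with Proposition 8):

```python
import numpy as np
from scipy.special import gamma as Gamma
from scipy.stats import invgamma

H, P_star = 44444.0, 4000.0            # illustrative values with H > P*
a = H / P_star                         # inverse-gamma shape parameter
P = np.linspace(1000.0, 20000.0, 5)

pi_prop5 = H**a / Gamma(a) * np.exp(-H / P) * P ** (-(a + 1.0))
pi_scipy = invgamma.pdf(P, a, scale=H)
print(np.max(np.abs(pi_prop5 / pi_scipy - 1.0)))   # ~ 0: the densities coincide

# Stationary mean H/(a-1) = P* (1 - P*/H)^{-1}, the quantity used in Proposition 8
print(invgamma.mean(a, scale=H), P_star / (1.0 - P_star / H))
```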
### Proof of Proposition 6
Using the definitions (24-27),
\[\mathbb{E}(P_{t}\mid P_{0}) =\int_{0}^{\infty}P_{t}\cdot f(P_{t},t\mid P_{0})\ \mathrm{d}P_{t}\] \[=\int_{0}^{\infty}P_{t}\cdot c\cdot e^{-(u+v)}\ v^{\frac{q}{2}+2} \ u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\ \mathrm{d}P_{t}\] Note that \[v=\frac{1}{cP_{t}}\] and performing the change of variable \[\mathrm{d}P_{t}=-cP_{t}^{2}\ \mathrm{d}v\] \[=-\int_{\infty}^{0}(cP_{t})^{2}P_{t}\cdot e^{-(u+v)}\ v^{\frac{q} {2}+2}\ u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\ \mathrm{d}v\] \[=\int_{0}^{\infty}(cP_{t})^{2}P_{t}\cdot e^{-(u+v)}\ v^{\frac{q} {2}-1}\cdot\left(\frac{1}{cP_{t}}\right)^{3}\cdot u^{-\frac{q}{2}}\ I_{q}(2 \sqrt{uv})\ \mathrm{d}v\] \[=\int_{0}^{\infty}\frac{1}{c}\cdot e^{-(u+v)}\ v^{\frac{q}{2}-1} \ u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\ \mathrm{d}v\] \[=\int_{0}^{\infty}\frac{1}{c}\cdot e^{-(u+v)}\ v^{\frac{q}{2}-1} \ u^{-\frac{q}{2}}\ \sum_{n=0}^{\infty}\left(\frac{2\sqrt{uv}}{2}\right)^{2n+q} \frac{1}{n!\ \Gamma(n+q+1)}\ \mathrm{d}v\] \[=\int_{0}^{\infty}\frac{1}{c}\cdot e^{-(u+v)}\ v^{\frac{q}{2}-1} \ u^{-\frac{q}{2}}\ (uv)^{\frac{q}{2}}\sum_{n=0}^{\infty}\frac{(uv)^{n}}{\Gamma(n+1) \Gamma(n+q+1)}\ \mathrm{d}v\] \[=\frac{e^{-u}}{c}\sum_{n=0}^{\infty}\left(\frac{u^{n}}{\Gamma(n+ 1)\Gamma(n+1+q)}\cdot\int_{0}^{\infty}e^{-v}v^{q-1+n}\ \mathrm{d}v\right)\] \[=\frac{e^{-u}}{c}\sum_{n=0}^{\infty}\frac{\Gamma(n+q)}{\Gamma(n+ 1)\Gamma(n+1+q)}\cdot u^{n}\] \[=\frac{e^{-u}}{c}\cdot\frac{\Gamma(q)}{\Gamma(q+1)}\cdot{}_{1}F_ {1}(q,q+1,u)\] \[=\frac{e^{-u}}{c\ q}\cdot{}_{1}F_{1}(q,q+1,u)\]
Therefore, \(\mathbb{E}(R_{t}\mid P_{0})=\mathbb{E}\left(\frac{P_{t}}{P_{0}}\mid P_{0} \right)=\frac{e^{-u}}{cqP_{0}}\cdot_{1}F_{1}(q,q+1,u)\). \(\Box\)
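This closed form can be validated by brute force. The sketch below (with parameters of our own choosing) compares \(e^{-u}/(cqP_{0})\cdot{}_{1}F_{1}(q,q+1,u)\) with the Monte Carlo average of \(P_{t}/P_{0}\) obtained from an Euler simulation of the yield process (39):

```python
import numpy as np
from scipy.special import hyp1f1

rng = np.random.default_rng(0)
alpha, gstar, psi, E, g0, t = 0.01, 0.005, 0.003, 20.0, 0.004, 50.0
b, P0 = alpha * gstar, E / g0

# Closed form of Proposition 6
H = 2.0 * alpha * E / psi**2
c = (1.0 - np.exp(-alpha * t)) / H
q = 2.0 * alpha * gstar / psi**2 - 1.0
u = (H / P0) / (np.exp(alpha * t) - 1.0)
mean_exact = np.exp(-u) / (c * q * P0) * hyp1f1(q, q + 1.0, u)

# Euler-Maruyama simulation of d(gamma) = (b - alpha*gamma) dt + psi*sqrt(gamma) dW
n_paths, n_steps = 100_000, 1_000
dt = t / n_steps
g = np.full(n_paths, g0)
for _ in range(n_steps):
    g += (b - alpha * g) * dt \
         + psi * np.sqrt(np.maximum(g, 0.0) * dt) * rng.standard_normal(n_paths)
    g = np.maximum(g, 1e-12)           # floor at zero to keep the scheme well defined
mean_mc = np.mean(E / g) / P0
print(mean_exact, mean_mc)             # should agree to within Monte Carlo error
```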
### Proof of Proposition 7
Based on the explicit conditional transition density of Equation (23) and definitions (24-27), we have
\[\mathbb{E}(P_{t}^{2}\mid P_{0}) =\int_{0}^{\infty}P_{t}^{2}\cdot f(P_{t},t\mid P_{0})\ \mathrm{d}P_{t}\] \[=\int_{0}^{\infty}P_{t}^{2}\cdot c\cdot e^{-(u+v)}\ v^{\frac{q}{2}+2}\ u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\ \mathrm{d}P_{t}\] \[=-\int_{\infty}^{0}(cP_{t})^{2}P_{t}^{2}\cdot e^{-(u+v)}\ v^{\frac{q}{2}+2}\ u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\ \mathrm{d}v\] \[=\int_{0}^{\infty}(cP_{t})^{2}P_{t}^{2}\cdot e^{-(u+v)}\ v^{\frac{q}{2}-2}\cdot\left(\frac{1}{cP_{t}}\right)^{4}\cdot u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\ \mathrm{d}v\] \[=\int_{0}^{\infty}\frac{1}{c^{2}}\cdot e^{-(u+v)}\ v^{\frac{q}{2}-2}\ u^{-\frac{q}{2}}\ I_{q}(2\sqrt{uv})\ \mathrm{d}v\] \[=\int_{0}^{\infty}\frac{1}{c^{2}}\cdot e^{-(u+v)}\ v^{\frac{q}{2}-2}\ u^{-\frac{q}{2}}\ \sum_{n=0}^{\infty}\left(\frac{2\sqrt{uv}}{2}\right)^{2n+q}\frac{1}{n!\ \Gamma(n+q+1)}\ \mathrm{d}v\] \[=\int_{0}^{\infty}\frac{1}{c^{2}}\cdot e^{-(u+v)}\ v^{\frac{q}{2}-2}\ u^{-\frac{q}{2}}\ (uv)^{\frac{q}{2}}\sum_{n=0}^{\infty}\frac{(uv)^{n}}{\Gamma(n+1)\Gamma(n+q+1)}\ \mathrm{d}v\] \[=\frac{e^{-u}}{c^{2}}\sum_{n=0}^{\infty}\left(\frac{u^{n}}{\Gamma(n+1)\Gamma(n+1+q)}\cdot\int_{0}^{\infty}e^{-v}v^{q-2+n}\ \mathrm{d}v\right)\] \[=\frac{e^{-u}}{c^{2}}\sum_{n=0}^{\infty}\frac{\Gamma(n+q-1)}{\Gamma(n+1)\Gamma(n+1+q)}\cdot u^{n}\] \[=\frac{e^{-u}}{c^{2}}\cdot\frac{\Gamma(q-1)}{\Gamma(q+1)}\cdot{}_{1}F_{1}(q-1,q+1,u)\] \[=\frac{e^{-u}}{c^{2}\ q(q-1)}\cdot{}_{1}F_{1}(q-1,q+1,u)\]
### Proof of Proposition 8
\[\mathbb{E}(R_{\infty}\mid P_{0}) =\frac{\mathbb{E}(P_{\infty})}{P_{0}}=\frac{1}{P_{0}}\int_{0}^{ \infty}P\cdot\pi(P)\ \mathrm{d}P\] \[=\frac{1}{P_{0}}\int_{0}^{\infty}P\frac{H^{\frac{H}{P^{*}}}}{ \Gamma\left(\frac{H}{P^{*}}\right)}e^{-\frac{H}{P}}P^{-(\frac{H}{P^{*}}+1)}\ \mathrm{d}P\] \[=\frac{1}{P_{0}\cdot\Gamma\left(\frac{H}{P^{*}}\right)}\int_{0}^{ \infty}e^{-\frac{H}{P}}\left(\frac{H}{P}\right)^{\frac{H}{P^{*}}}\ \mathrm{d}P\] \[\text{Defining }z=\frac{H}{P}\text{ and applying change of variable }\mathrm{d}P=-\frac{P^{2}}{H}\mathrm{d}z\] \[=\frac{H}{P_{0}\cdot\Gamma\left(\frac{H}{P^{*}}\right)}\int_{0}^ {\infty}e^{-z}\ z^{\frac{H}{P^{*}}-2}\ \mathrm{d}z\] \[\mathbb{E}(R_{\infty}\mid P_{0}) =\frac{H\cdot\Gamma\left(\frac{H}{P^{*}}-1\right)}{P_{0}\cdot \Gamma\left(\frac{H}{P^{*}}\right)}=\frac{P^{*}}{P_{0}}\cdot\left(1-\frac{P^{ *}}{H}\right)^{-1}. \tag{47}\]
A similar derivation holds for the variance of the cumulative return.
\[\mathbb{E}(R_{\infty}^{2}\mid P_{0}) =\frac{1}{P_{0}^{2}}\int_{0}^{\infty}P^{2}\cdot\pi(P)\ \mathrm{d}P\] \[=\frac{1}{P_{0}^{2}\cdot\Gamma\left(\frac{H}{P^{*}}\right)}\int_{ 0}^{\infty}e^{-\frac{H}{P}}\left(\frac{H}{P}\right)^{\frac{H}{P^{*}}}P\ \mathrm{d}P\] \[=\frac{H^{2}}{P_{0}^{2}\cdot\Gamma\left(\frac{H}{P^{*}}\right)} \int_{0}^{\infty}e^{-z}\ z^{\frac{H}{P^{*}}-3}\ \mathrm{d}z\] \[=\frac{H^{2}\cdot\Gamma\left(\frac{H}{P^{*}}-2\right)}{P_{0}^{2} \cdot\Gamma\left(\frac{H}{P^{*}}\right)}=\frac{P^{*2}}{P_{0}^{2}}\cdot\left( 1-\frac{2P^{*}}{H}\right)^{-1}\left(1-\frac{P^{*}}{H}\right)^{-1}. \tag{48}\]
Hence, we have
\[\mathbb{V}ar(R_{\infty}\mid P_{0}) =\mathbb{E}(R_{\infty}^{2}\mid P_{0})-\mathbb{E}(R_{\infty}\mid P_{0})^{2}\] \[=\frac{P^{*2}}{P_{0}^{2}}\cdot\left(1-\frac{P^{*}}{H}\right)^{-1}\left[\left(1-\frac{2P^{*}}{H}\right)^{-1}-\left(1-\frac{P^{*}}{H}\right)^{-1}\right]. \tag{49}\]
### Proof of proposition 9
For the super-exponential growth to persist at least until \(t=1\), given that the price started from \(P_{0}\) at \(t=0\), the following conditions must hold:
\[\frac{\partial}{\partial t}\left.\ln\mathbb{E}(P_{t}\mid P_{0}) \right|_{t=1}>0\,\] \[\lim_{t\to 1^{-}}\frac{\partial^{2}}{\partial t^{2}}\left.\ln \mathbb{E}(P_{t}\mid P_{0})>0\.\]
The first order condition (that the expected price grows) is always satisfied. We thus focus on the second order condition. Using expression (30) for \(\mathbb{E}(R_{t}\mid P_{0})\), a direct calculation for the second-order derivative yields
\[\lim_{t\to 1^{-}}\frac{\partial^{2}}{\partial t^{2}}\,\ln \mathbb{E}(P_{t}\mid P_{0})=\lim_{t\to 1^{-}}\frac{\partial^{2}}{\partial t^{2}}\,\ln \mathbb{E}(R_{t}\mid P_{0})\] \[=-\bigg{\{}\,e^{\alpha}\alpha^{2}\bigg{[}e^{\alpha+\frac{2H}{(e^{ \alpha}-1)P_{0}}}(e^{\alpha}-1)P_{0}\left(\frac{H}{P_{0}-e^{\alpha}P_{0}}\right) ^{2q}\] \[\qquad\qquad+\left(\Gamma(q)-\Gamma\left(q,\frac{H}{P_{0}-e^{ \alpha}P_{0}}\right)\right)^{2}\big{(}(1+e^{\alpha})H+(e^{\alpha}-1)(q-1)P_{0}\big{)}\] \[\qquad\qquad+e^{\frac{H}{(e^{\alpha}-1)P_{0}}}\left(-\Gamma(q)+ \Gamma\left(q,\frac{H}{P_{0}-e^{\alpha}P_{0}}\right)\right)\left(\frac{H}{P_{0 }-e^{\alpha}P_{0}}\right)^{q}\] \[\qquad\qquad(e^{\alpha}H+(e^{\alpha}-1)(e^{\alpha}q+1)P_{0}) \bigg{]}\bigg{\}}\bigg{/}\big{[}\,(e^{\alpha}-1)\left(\Gamma(q)-\Gamma\left( q,-\frac{H}{(e^{\alpha}-1)P_{0}}\right)\right)^{2}\big{]}\.\]
Thus, given that \(e^{\alpha}\alpha^{2}>0\) and \((e^{\alpha}-1)>0\), the second order condition \(\lim_{t\to 1^{-}}\frac{\partial^{2}}{\partial t^{2}}\,\ln\mathbb{E}(P_{t}\mid P_{0})>0\) is equivalent to condition (37).
The end of the super-exponential growth regime occurs at the time \(t_{c}\) when the log-price exhibits an inflexion point determined by the condition
\[\frac{\partial^{2}}{\partial t^{2}}\,\ln\mathbb{E}(P_{t}\mid P_{0})|_{t=t_{c} }=0\.\]
Using expression (30) for \(\mathbb{E}(R_{t}\mid P_{0})\), the above condition leads to Equation (38). \(\Box\)
|
2308.04713 | On the gauge dependence of scalar induced secondary gravitational waves
during radiation and matter domination eras | We revisit the vital issue of gauge dependence in the scalar-induced
secondary gravitational waves (SIGWs), focusing on the radiation domination
(RD) and matter domination (MD) eras. The energy density spectrum is the main
physical observable in such induced gravitational waves. For various gauge
choices, there has been a divergence in the energy density,
$\Omega_{\text{GW}}$, of SIGWs. We calculate SIGWs in different gauges to
quantify this divergence to address the gauge-dependent problem. In our
previous studies, we had found that the energy density diverges in the
polynomial power of conformal time (e.g., $\eta^6$ in uniform density gauge).
We try to fix this discrepancy by adding a counter-term that removes the
fictitious terms in secondary tensor perturbations. We graphically compare the
calculations in various gauges and also comment on the physical origin of the
observed gauge dependence. | Arshad Ali, Ya-Peng Hu, Mudassar Sabir, Taotao Sui | 2023-08-09T05:11:53Z | http://arxiv.org/abs/2308.04713v1 | On the gauge dependence of scalar induced secondary gravitational waves during radiation and matter domination eras
###### Abstract
We revisit the vital issue of gauge dependence in the scalar-induced secondary gravitational waves (SIGWs), focusing on the radiation domination (RD) and matter domination (MD) eras. The energy density spectrum is the main physical observable in such induced gravitational waves. For various gauge choices, there has been a divergence in the energy density, \(\Omega_{\rm GW}\), of SIGWs. We calculate SIGWs in different gauges to quantify this divergence to address the gauge-dependent problem. In our previous studies, we had found that the energy density diverges in the polynomial power of conformal time (e.g., \(\eta^{6}\) in uniform density gauge). We try to fix this discrepancy by adding a counter-term that removes the fictitious terms in secondary tensor perturbations. We graphically compare the calculations in various gauges and also comment on the physical origin of the observed gauge dependence.
**scalar induced gravitational waves, gauge transformation, cosmology**
## 1 Introduction
In 2015, Advanced LIGO inaugurated the first network of advanced detectors, significantly more sensitive to GWs [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. The typical cosmological GW sources for studying the very early universe include phase transitions, which lead to collisions of bubbles or the formation of cosmic strings, resonances during reheating, so-called primordial GWs (quantum fluctuations during inflation), and GWs induced by large primordial fluctuations [11]. Scalar induced gravitational waves (SIGWs) are produced when large primordial fluctuations re-enter the horizon sometime between inflation and Big Bang Nucleosynthesis, which makes them a promising probe. Because we do not have any direct evidence of the universe's content or expansion history at that time, SIGWs allow access to the latter stages of inflation and carry information on the content of the primordial universe. Future data on the primordial power spectrum acquired through SIGWs will complement those from other probes, such as spectral distortions [12, 13], in the multimessenger cosmology epoch [14].
In addition to SIGWs, there are also GWs with cosmological origins, including primordial GWs derived from inflation and GWs generated from a cosmic phase transition [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37]. Even though the primordial GWs are too small to be observed by third-generation ground-based and space-based GW observatories, SIGWs can have peak frequencies as low as nanohertz or millihertz, making them detectable by future space-based GW observatories, including LISA [38, 39], TianQin [40] and Taiji [41], and PTA observations like SKA [42, 43]. Tomita [44] was the first to note that density fluctuations can induce GWs. Later, in a dust-dominated universe, they were rediscovered in refs. [45, 46] when studying second-order cosmological perturbations.
In contrast to the first-order perturbations, the second-order tensor perturbations induced by scalar perturbations are usually considered to have gauge dependence. Therefore, the secondary SIGWs may differ depending on the gauge choice [47, 48, 49, 50, 51, 52, 53, 54, 55, 56], despite there being many gauge-invariant tensor perturbations at second order [57, 52, 58, 59, 60, 61, 62, 63, 64, 65, 66]. For these reasons, we must determine the secondary tensor perturbations in different gauges. However, SIGW production was typically discussed using the Poisson gauge [15, 33, 36, 37]. Thus, it is vital to examine SIGWs in other gauges. In RD, the energy densities of SIGWs in the Poisson, the TT, and the uniform curvature gauges were found to be identical [50, 51, 52]. The energy density of SIGWs in the TT gauge during RD and MD was examined in [50]. For a general background, the SIGWs have been computed in the Poisson, comoving, and uniform curvature gauges for RD with \(w=1/3\) and MD with \(w=0\) [49].
Moreover, SIGWs can be measured by analyzing the energy density spectrum [67]. The gauge dependence of the SIGW spectrum has been investigated in refs. [50, 51, 52]. The calculations in various gauges do not coincide and have given rise to confusing statements in the literature [48, 49, 52, 53, 68, 69, 70, 71]. In the context of scalar-induced secondary tensor perturbations, the tensor perturbations can be divided into two parts. On the one side, there are freely propagating tensor perturbations that obey the equation of motion without any source. These tensor perturbations are widely considered gravitational waves, and their time dependence can be read off as \(h_{ij}\propto\sin(k\eta)\,\mathrm{or}\,\cos(k\eta)\). Despite coupling with scalar perturbations at production, they eventually decouple and propagate freely. Once decoupled, they are independent of the scalar perturbations and no longer depend on the gauge.
On the other side, the secondary tensor perturbations couple to the scalar perturbations, which contain them until they decouple. Since the scalar perturbations control these tensor perturbations, their time dependence inherits that of the scalar perturbations. It is to be noted that the gauge dependence appears only in this sort of tensor perturbation. In many references, these tensor perturbations are also called gravitational waves. In contrast, to distinguish the two kinds of tensor perturbations in this paper, we refer exclusively to the freely propagating tensor perturbations as free gravitational waves.
In ref. [52], it was claimed that the power spectrum of the energy density of the freely propagating secondary tensor perturbations in the TT gauge is reduced compared with that in the Poisson gauge. Nevertheless, as mentioned above, one does not expect the induced secondary GWs to depend on the gauge choice. The gauge independence of the induced secondary GWs can also be anticipated from the coincidence of the GWs calculated in the Poisson gauge and the flat gauge in ref. [49] (see the case of \(w>0\) there).
In this study, we reconsider the subhorizon SIGWs in different popular gauges. In particular, we focus on a detailed study of the second-order tensor perturbations generated by linear scalar perturbations in an expanding spacetime containing either RD or MD. More specifically, the paper aims to address the important issue of the gauge dependence of such tensor perturbations. This problem is highly relevant, as it is still actively discussed in the literature and, so far as we are aware, has not been properly solved yet. Recent studies, such as refs. [48, 49, 52, 53, 69], address this problem, but they do not yield a physically meaningful conclusion on whether the secondary tensor perturbations induced by (the quadratic of) the first-order scalar perturbations are gauge invariant. Besides, there have been discrepancies among the previous studies of SIGWs [48, 49, 52, 53, 57, 61, 62, 63, 65, 69].
Furthermore, some of these studies extracted oscillating terms \(\sin x\) and \(\cos x\), which are common to all gauges and represent the physically meaningful contributions to SIGWs. The case of MD, for instance, is less convincing from a physical point of view: in the previous literature, it is assumed that only freely propagating tensor perturbations contribute to SIGWs during MD. The studies mentioned above are thus incomplete during RD and MD, and it is crucial to eliminate the terms that cause the discrepancy among the different popular gauges.
In our proposal, we try to fix the discrepancies that occur in the previous studies. Following refs. [57, 62, 65, 71, 72, 73], we call the unphysical source terms fictitious terms, and we introduce a counter term that removes them, instead of removing them by hand, in the secondary tensor perturbations. Working only with gauge-invariant variables, following refs. [57, 62, 71, 72, 73], we thereby get rid of the gauge-dependent fictitious terms. We carefully revisit the gauge-dependence problem in SIGWs by explicitly calculating \(\Omega_{\rm GW}\) in seven different gauges. For the sub-horizon modes, the observable energy density spectrum of the SIGWs is then the same in all seven gauges, in contrast to refs. [48, 49, 52, 53, 69]. In addition, we make a clear distinction between scalar-induced secondary tensor perturbations and SIGWs because of the mixing and coupling of tensor and scalar perturbations. We identify the oscillating terms \(\sin x\) and \(\cos x\) in the scalar-induced secondary tensor perturbations as SIGWs during RD and MD; this physical interpretation is what singles out the SIGWs in the derivation of the secondary tensor perturbations in different gauges.
In particular, in removing the discrepancies in the gauge dependence of SIGWs, we find that the observable \(\Omega_{\rm GW}\) is actually gauge independent in RD and MD. We show that all the kernel functions lead to the same gauge-independent result, so \(\Omega_{\rm GW}\) is identical in all the proposed gauge fixings. Hence the energy density \(\Omega_{\rm GW}\) of SIGWs converges in the late-time limit (\(x\gg 1\)). This indicates that the physical behavior of the observable \(\Omega_{\rm GW}\) is the same in the various gauges. Moreover, SIGWs may explain the signal detected by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) [74, 75].
The article is organized as follows. In sect. 2, we recapitulate the formalism for scalar-induced secondary GWs and the kernel functions; here we introduce the counter term that eliminates the extra scalar terms causing the discrepancies among different gauges during RD and MD. In sect. 3, we explicitly evaluate the kernel functions in various popular gauge choices during RD and MD. In sect. 4, we analyze the physical behavior of the kernel functions for the sub-horizon modes to evaluate the observable energy density; in particular, we compare the transfer functions (which may cause discrepancies in some gauges) and the kernel functions in different gauges. Finally, we present our discussion and concluding remarks in sect. 5.
## 2 SIGWs in the RD and MD eras
In this section, we analyze the secondary tensor perturbations induced by first-order scalar perturbations in an expanding spacetime containing either pure RD or MD. It is widely discussed in the literature that first-order tensor perturbations are gauge-independent. However, the secondary tensor perturbations may be gauge dependent [47, 48, 49, 50, 51, 52, 53, 54, 55, 56], while several gauge-invariant secondary tensor perturbations can be constructed in a particular gauge [57, 58, 59, 60, 61, 62, 63, 64, 65, 66]. Although SIGWs are usually discussed using the typically chosen Poisson gauge, we need to study SIGWs in other popular gauges.
Armed with the results for SIGWs in the Poisson gauge [36], we introduce a counter term in the secondary tensor perturbations. In the following sections, we analyze whether the scalar-induced secondary GWs are gauge invariant or gauge dependent by investigating the energy density \(\Omega_{\rm GW}\) of SIGWs in various gauges.

We first give the general formula for calculating SIGWs without specifying the background or the gauge. To discuss SIGWs as the stochastic GW background, we consider the general perturbed metric around the FLRW background:
\[\begin{split} g_{00}=&-a^{2}(1+2\phi),\\ g_{0i}=& 2a^{2}\partial_{i}B,\\ g_{ij}=& a^{2}\delta_{ij}+a^{2}\left(\frac{1}{2}h_{ ij}^{\rm TT}-2\delta_{ij}\psi+2\partial_{i}\partial_{j}E\right),\end{split} \tag{1}\]
where \(a(\eta)\) is the scale factor of the universe, the scalar perturbations \(\phi\), \(\psi\), \(B\), and \(E\) are of first order, and the transverse-traceless part \(h_{ij}^{\rm TT}\) is the second-order tensor mode with \(h_{ii}^{\rm TT}=0\) and \(\partial_{i}h_{ij}^{\rm TT}=0\).
To eliminate the fictitious terms in the secondary tensor perturbations, following refs. [65, 72], we use gauge-invariant variables by introducing the counter term \(\Xi_{kl}\) in the secondary tensor perturbation:
\[\tilde{h}_{ij}^{\rm TT}=h_{ij}^{\rm TT}+\mathcal{T}_{ij}^{kl}\Xi_{kl}, \tag{2}\]
where, in the second term on the right-hand side of eq. (2), \(\mathcal{T}_{ij}^{lm}=\Lambda_{i}^{l}\Lambda_{j}^{m}-\Lambda_{ij}\Lambda^{lm}/2\) is the projection tensor used to extract the transverse, trace-free part of a tensor, and \(\Xi_{kl}\) is defined as:
\[\Xi_{kl}=-2\Big{(}4E\partial_{k}\partial_{l}\phi+\partial_{s}E\,\partial_{s}\partial_{k}\partial_{l}E-(\partial_{0}E-B)\partial_{k}\partial_{l}(\partial_{0}E-B)\Big{)}. \tag{3}\]
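As a quick consistency check on this construction, the action of \(\mathcal{T}_{ij}^{lm}\) is easy to verify numerically: in Fourier space \(\Lambda_{ij}=\delta_{ij}-k_{i}k_{j}/k^{2}\), and the projected tensor must come out transverse and traceless. A minimal sketch (ours, for illustration only; not code from the references):

```python
import numpy as np

def tt_projector(k):
    """Build T_{ij}^{lm} = Lam_i^l Lam_j^m - Lam_ij Lam^lm / 2 for one Fourier mode k,
    with Lam_ij = delta_ij - k_i k_j / k^2."""
    khat = k / np.linalg.norm(k)
    lam = np.eye(3) - np.outer(khat, khat)
    return (np.einsum('il,jm->ijlm', lam, lam)
            - 0.5 * np.einsum('ij,lm->ijlm', lam, lam))

k = np.array([0.3, -1.2, 0.7])
S = np.random.rand(3, 3)
S = 0.5 * (S + S.T)                       # a symmetric source, playing the role of s_lm(k)

S_tt = np.einsum('ijlm,lm->ij', tt_projector(k), S)

print(np.trace(S_tt))                     # ~ 0 : traceless
print(S_tt @ k)                           # ~ 0 : transverse, k^j S^TT_ij = 0
```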
In the previous studies [49, 52, 53, 69], \(h_{ij}\) has been treated as gauge dependent and found to diverge. In contrast, here we use \(\tilde{h}_{ij}\), obtained by adding the gauge-dependent counter term \(\Xi_{kl}\), which in principle ensures the gauge independence of \(\tilde{h}_{ij}\). Later, we show graphically that this counter term removes the divergence. For the explicit expression of \(\Xi_{kl}\), see e.g. refs. [55, 62, 65].
In the following evaluation of SIGWs, we first briefly review the well-known results for \(h_{ij}^{\rm TT}\). We consider that the generation of SIGWs starts long before horizon reentry. After perturbing Einstein's equation \(G_{\mu\nu}=8\pi GT_{\mu\nu}\) up to second order, we get [65]
\[\tilde{h}_{ij}^{\rm TT\prime\prime}+2\mathcal{H}\tilde{h}_{ij}^{\rm TT\prime} -\nabla^{2}\tilde{h}_{ij}^{\rm TT}=-4\mathcal{T}_{ij}^{lm}s_{lm}, \tag{4}\]
where \(s_{ij}\) is the source, which is given by refs. [53, 69]:
\[\begin{split}\mathcal{T}_{ij}^{lm}s_{lm}&=\partial_{i}\psi\partial_{j}\psi+\partial_{i}\phi\partial_{j}\phi-\partial_{i}\partial_{j}\sigma\left(\phi^{\prime}+\psi^{\prime}-\nabla^{2}\sigma\right)\\ &\quad+\left(\partial_{i}\psi^{\prime}\partial_{j}\sigma+\partial_{j}\psi^{\prime}\partial_{i}\sigma\right)-\partial_{i}\partial_{k}\sigma\,\partial_{j}\partial_{k}\sigma\\ &\quad+2\partial_{i}\partial_{j}\psi\left(\phi+\psi\right)-8\pi Ga^{2}(\rho_{0}+P_{0})\partial_{i}\delta V\partial_{j}\delta V\\ &\quad-2\partial_{i}\partial_{j}\psi\nabla^{2}E+2\partial_{i}\partial_{j}E\left(\psi^{\prime\prime}+2\mathcal{H}\psi^{\prime}-\nabla^{2}\psi\right)\\ &\quad-\partial_{i}\partial_{k}E^{\prime}\partial_{j}\partial_{k}E^{\prime}+\partial_{i}\partial_{k}\partial_{l}E\,\partial_{j}\partial_{k}\partial_{l}E\\ &\quad+2\left(\partial_{j}\partial_{k}\psi\partial_{i}\partial_{k}E+\partial_{i}\partial_{k}\psi\partial_{j}\partial_{k}E\right)\\ &\quad-2\mathcal{H}(\partial_{i}\psi\partial_{j}E^{\prime}+\partial_{j}\psi\partial_{i}E^{\prime})-\left(\partial_{i}\psi^{\prime}\partial_{j}E^{\prime}+\partial_{j}\psi^{\prime}\partial_{i}E^{\prime}\right)\\ &\quad-\left(\partial_{i}\psi\partial_{j}E^{\prime\prime}+\partial_{j}\psi\partial_{i}E^{\prime\prime}\right)+2\partial_{i}\partial_{j}E^{\prime}\psi^{\prime}\\ &\quad+\partial_{i}\partial_{j}\partial_{k}E\,\partial_{k}\left(E^{\prime\prime}+2\mathcal{H}E^{\prime}-\nabla^{2}E\right),\end{split} \tag{5}\]
where \(\sigma=E^{\prime}-B\) is the shear potential, the anisotropic stress tensor \(\Gamma_{ij}\) of the matter fluid is considered to be zero.
With the gauge choice \(E=0\), the above source (5) reduces to the form given in refs. [48, 52, 70] with negligible anisotropic stress. Generally, we should use eq. (5) itself; in particular, we must keep all the terms containing \(E\) in the different gauges during RD and MD. In Fourier space, the tensor \(h_{ij}^{\rm TT}\) can be expanded in the plus \(\epsilon_{ij}^{+}\) and cross \(\epsilon_{ij}^{\times}\) polarization tensors as follows [36, 53, 69]:
\[h_{ij}^{\rm TT}(\mathbf{x},\eta)=\int\frac{{\rm d}^{3}k}{(2\pi)^{3/2}}{\rm e}^{{\rm i}\mathbf{k}\cdot\mathbf{x}}[h_{\mathbf{k}}^{+}(\eta)\epsilon_{ij}^{+}+h_{\mathbf{k}}^{\times}(\eta)\epsilon_{ij}^{\times}]. \tag{6}\]
Next, we define the projection tensor for the source \(s_{lm}(\mathbf{x},\eta)\) in the Fourier space as:
\[\mathcal{T}_{ij}^{lm}s_{lm}=\int\frac{{\rm d}^{3}k}{(2\pi)^{3/2}}{\rm e}^{{\rm i}\mathbf{k}\cdot\mathbf{x}}[\epsilon_{ij}^{+}\epsilon^{+lm}+\epsilon_{ij}^{\times}\epsilon^{\times lm}]s_{lm}(\mathbf{k},\eta), \tag{7}\]
we now find the solution to eq. (4) for \(\epsilon_{ij}^{+}\) as:
\[h^{+}(\mathbf{k},\eta)=4\int\frac{{\rm d}^{3}p}{(2\pi)^{3/2}}\epsilon^{+ij}p_{i}p_{j}\zeta(\mathbf{p})\zeta(\mathbf{k}-\mathbf{p})\frac{1}{k^{2}}I(u,v,x), \tag{8}\]
where \(x=k\eta\), \(u=p/k\), \(v=|\mathbf{k}-\mathbf{p}|/k\), and \(\zeta=\psi+\mathcal{H}\delta\rho/\rho_{0}^{\prime}\) is the primordial curvature perturbation. One can assume equal contributions from the two polarization tensors in Fourier space; we therefore use one polarization to evaluate the energy density and double it to obtain the total. In the above expression (8), \(I(u,v,x)\) is the kernel function, given by [21, 26, 28, 36]
\[I(u,v,x)=\int_{0}^{x}{\rm d}\bar{x}\frac{a(\bar{\eta})}{a(\eta)}kG_{k}(\eta, \bar{\eta})f(u,v,\bar{x}), \tag{9}\]
where \(f(u,v,x)\) is associated with \(S_{\mathbf{k}}^{+}=\epsilon^{+ij}s_{ij}(\mathbf{k},\eta)\) as follows:

\[S_{\mathbf{k}}^{+}(\eta)=\int\frac{{\rm d}^{3}p}{(2\pi)^{3/2}}\zeta(\mathbf{p})\zeta(\mathbf{k}-\mathbf{p})\epsilon^{+ij}p_{i}p_{j}f(u,v,x). \tag{10}\]
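For concreteness, the kernel (9) is a one-dimensional quadrature once the background is fixed. The sketch below evaluates it for RD in the Poisson gauge, assuming the standard RD relations \(a(\bar{\eta})/a(\eta)=\bar{x}/x\) and \(kG_{k}(\eta,\bar{\eta})=\sin(x-\bar{x})\), together with the RD transfer function \(T_{\rm P}(y)=\frac{9}{y^{2}}\big(\frac{\sin(y/\sqrt{3})}{y/\sqrt{3}}-\cos(y/\sqrt{3})\big)\) and the Kohri-Terada form of the Poisson-gauge source; overall normalization prefactors are convention dependent and do not affect the late-time scaling checked here.

```python
import numpy as np

SQRT3 = np.sqrt(3.0)

def T_P(y):
    """RD Poisson-gauge transfer function, T_P -> 1 on super-horizon scales."""
    z = y / SQRT3
    return 9.0 / y**2 * (np.sin(z) / z - np.cos(z))

def dT_P(y, h=1e-5):
    """dT_P/dy by central differences (adequate for illustration)."""
    return (T_P(y + h) - T_P(y - h)) / (2.0 * h)

def f_RD(u, v, x):
    """Poisson-gauge RD source (Kohri-Terada form, normalization conventions aside)."""
    A = T_P(u * x) + u * x * dT_P(u * x)
    B = T_P(v * x) + v * x * dT_P(v * x)
    return 2.0 * T_P(u * x) * T_P(v * x) + A * B

def I_RD(u, v, x, n=100000):
    """Kernel of eq. (9) in RD: I = int_0^x dxb (xb/x) sin(x - xb) f(u, v, xb)."""
    xb = np.linspace(1e-4, x, n)
    integrand = (xb / x) * np.sin(x - xb) * f_RD(u, v, xb)
    return np.sum(integrand) * (xb[1] - xb[0])   # simple Riemann sum

# late-time check: x * I stays bounded, i.e. I(u, v, x -> infinity) ~ 1/x
for x in (50.0, 100.0, 200.0):
    print(x, x * I_RD(1.0, 1.0, x))
```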
Eqs. (4)-(10) are derived in a semi-analytic way [36, 53, 69]. In the following, we obtain the explicit expressions of the source function \(f(u,v,x)\), which will be used in the subsequent sections to derive the kernel functions in seven different gauges. For computational simplicity, the source function \(f(u,v,x)\) in eq. (10) can be symmetrized under the exchange \(u\leftrightarrow v\) as:
\[f(u,v,x)=\frac{1}{2}(\tilde{f}(u,v,x)+\tilde{f}(v,u,x)), \tag{11}\]
where
\[\begin{split}\tilde{f}(u,v,x)&=T_{\psi}(ux)T_{\psi}(vx)+T_{\phi}(ux)T_{\phi}(vx)\\ &\quad-\frac{v}{u}T_{\sigma}(ux)\left[T_{\phi}^{*}(vx)+T_{\psi}^{*}(vx)+T_{\sigma}(vx)\right]\\ &\quad-2\frac{u}{v}T_{\psi}^{*}(ux)T_{\sigma}(vx)-\frac{1-u^{2}-v^{2}}{2uv}T_{\sigma}(ux)T_{\sigma}(vx)\\ &\quad+2T_{\psi}(ux)T_{\phi}(vx)+\frac{2}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\\ &\quad\times\left[kuT_{\psi}^{*}(ux)+\mathcal{H}T_{\phi}(ux)\right]\left[kvT_{\psi}^{*}(vx)+\mathcal{H}T_{\phi}(vx)\right]\\ &\quad+2\frac{u^{2}}{v^{2}}T_{E}(vx)\left[T_{\psi}^{**}(ux)+\frac{2\mathcal{H}}{ku}T_{\psi}^{*}(ux)+T_{\psi}(ux)\right]\\ &\quad+2T_{\phi}(ux)T_{E}(vx)-\frac{1-u^{2}-v^{2}}{2uv}T_{E}^{*}(ux)T_{E}^{*}(vx)\\ &\quad-\left(\frac{1-u^{2}-v^{2}}{2uv}\right)^{2}T_{E}(ux)T_{E}(vx)\\ &\quad+4\frac{u}{v}T_{\psi}^{*}(ux)T_{E}^{*}(vx)+2T_{\phi}(ux)T_{E}^{*}(vx)\\ &\quad+\frac{4\mathcal{H}}{kv}T_{\psi}(ux)T_{E}^{*}(vx)-\frac{1-u^{2}-v^{2}}{2uv}T_{E}(ux)\\ &\quad\times\left[T_{E}^{**}(vx)+\frac{2\mathcal{H}}{kv}T_{E}^{*}(vx)+T_{E}(vx)\right],\end{split} \tag{12}\]
and \(T^{*}(y)=\mathrm{d}T(y)/\mathrm{d}y\). The power spectrum of SIGWs can then be written as [53]:
\[\mathcal{P}_{h}(k,x)=4\int_{0}^{\infty}{\rm d}u\int_{|1-u|}^{1+u}{\rm d}v\left[\frac{4u^{2}-(1+u^{2}-v^{2})^{2}}{4uv}\right]^{2}\times I^{2}(u,v,x)\mathcal{P}_{\zeta}(uk)\mathcal{P}_{\zeta}(vk), \tag{13}\]
where \(\mathcal{P}_{\zeta}\) is the primordial scalar power spectrum, and the GW energy density \(\rho_{\text{GW}}(\eta)=\int\mathrm{d}\ln k\rho_{\text{GW}}(\eta,k)\) can be evaluated as [76]:
\[\rho_{\text{GW}}=\frac{M_{\rm Pl}^{2}}{16a^{2}}\left\langle\overline{h_{ij,k}h_{ij,k}}\right\rangle, \tag{14}\]
where the over-line denotes the oscillation average. In general, one can write the fraction of the energy density of SIGWs as [29, 36]:
\[\Omega_{\rm GW}\left(\eta,k\right)=\frac{\mathrm{d}\rho_{\text{GW}}}{\rho_{c}\mathrm{d}\ln k}=\frac{1}{24}\left(\frac{k}{\mathcal{H}(\eta)}\right)^{2}\overline{\mathcal{P}_{h}\left(k,\eta\right)}, \tag{15}\]
where \(\rho_{c}=3H^{2}/8\pi G\) denotes the critical energy density of the universe, and \(\mathcal{P}_{h}\) is defined by:
\[\left\langle h_{\mathbf{k}_{1}}^{s_{1}}(\eta)h_{\mathbf{k}_{2}}^{s_{2}}(\eta)\right\rangle=\frac{2\pi^{2}}{k_{1}^{3}}\delta_{s_{1}s_{2}}\delta^{3}(\mathbf{\emph{k}}_{1}+\mathbf{\emph{k}}_{2})\mathcal{P}_{h}(k_{1},\eta),\ s_{i}=+,\times. \tag{16}\]
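Given a late-time averaged kernel, eqs. (13)-(16) reduce to a two-dimensional quadrature. For a monochromatic spectrum \(\mathcal{P}_{\zeta}(k)=A_{\zeta}\,\delta(\ln k-\ln k_{*})\), the integral collapses to the single point \(u=v=k_{*}/k\). A minimal sketch, assuming the standard oscillation-averaged late-time RD kernel (which, per this work, coincides with the gauge-independent one); the function names are ours:

```python
import numpy as np

def avg_I2_times_x2(u, v):
    """Late-time oscillation average of the RD kernel, x^2 * <I^2>."""
    s = u**2 + v**2 - 3.0
    log_term = np.log(abs((3.0 - (u + v)**2) / (3.0 - (u - v)**2)))
    res = (-4.0 * u * v + s * log_term)**2
    if u + v > np.sqrt(3.0):                 # resonant contribution
        res += np.pi**2 * s**2
    return 0.5 * (3.0 * s / (4.0 * u**3 * v**3))**2 * res

def omega_gw(k_over_kstar, A=1.0):
    """Omega_GW(k) in RD for P_zeta = A * delta(ln k - ln k*), via eqs. (13) and (15)."""
    u = 1.0 / k_over_kstar                   # u = v = k*/k after the delta functions
    if u < 0.5:                              # momentum conservation requires k < 2 k*
        return 0.0
    proj = ((4.0 * u**2 - 1.0) / (4.0 * u**2))**2
    return (1.0 / 24.0) * 4.0 * A**2 * u**2 * proj * avg_I2_times_x2(u, u)

for r in (0.5, 1.0, 1.1, 1.9):
    print(r, omega_gw(r))
```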
Next, we analyze the secondary tensor perturbations in the various gauges during RD and MD. Here, we only need the first-order scalar transformation parameters \(\alpha\) and \(\beta\); we do not consider the second-order coordinate transformation, because the transformation of the tensor modes does not depend on the transformation parameters of the same order. Thus the secondary tensor perturbation transforms, with the counter term, as [29, 53, 69]:
\[\tilde{h}_{ij}^{\text{TT}}\to h_{ij}^{\text{TT}}+\chi_{ij}^{\text{TT}}+\Xi_{ ij}^{\text{TT}}, \tag{17}\]
where
\[\begin{split}\chi_{ij}^{\text{TT}}(\mathbf{x},\eta)&=\mathcal{T}_{ij}^{lm}\chi_{lm}\\ &=\int\frac{\mathrm{d}^{3}k}{(2\pi)^{3/2}}\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{x}}[\chi^{+}(\mathbf{k},\eta)\epsilon_{ij}^{+}+\chi^{\times}(\mathbf{k},\eta)\epsilon_{ij}^{\times}],\end{split} \tag{18}\]
\[\Xi_{ij}^{\text{TT}}(\mathbf{x},\eta)=-\int\frac{\mathrm{d}^{3}k}{(2\pi)^{3/2}}\mathrm{e}^{\mathrm{i}\mathbf{k}\cdot\mathbf{x}}[\Xi^{+}(\mathbf{k},\eta)\epsilon_{ij}^{+}+\Xi^{\times}(\mathbf{k},\eta)\epsilon_{ij}^{\times}], \tag{19}\]
\[\begin{split}\chi^{+}(\mathbf{k},\eta)&=-\int\frac{\mathrm{d}^{3}p}{(2\pi)^{3/2}}\epsilon^{+ij}p_{i}p_{j}\bigg{(}4\alpha(\mathbf{p})\sigma(\mathbf{k}-\mathbf{p})+\frac{16}{\eta}\alpha(\mathbf{p})\\ &\quad\times[E(\mathbf{k}-\mathbf{p})+\beta(\mathbf{k}-\mathbf{p})]+\mathbf{p}\cdot(\mathbf{k}-\mathbf{p})\beta(\mathbf{p})[4E(\mathbf{k}-\mathbf{p})\\ &\quad+2\beta(\mathbf{k}-\mathbf{p})]-8\psi(\mathbf{p})\beta(\mathbf{k}-\mathbf{p})+2\alpha(\mathbf{p})\alpha(\mathbf{k}-\mathbf{p})\bigg{)}\\ &=4\int\frac{\mathrm{d}^{3}p}{(2\pi)^{3/2}}\epsilon^{+ij}p_{i}p_{j}\zeta(\mathbf{p})\zeta(\mathbf{k}-\mathbf{p})\frac{1}{k^{2}}I_{\chi}(u,v,x),\end{split} \tag{20}\]
and
\[\Xi^{+}(\mathbf{k},\eta) =-\int\frac{\mathrm{d}^{3}p}{(2\pi)^{3/2}}\epsilon^{+ij}p_{i}p_{ j}3(1+w)/(5+3w)\] \[\quad\times\zeta(\mathbf{p})\zeta(\mathbf{k}-\mathbf{p})\frac{1}{k^{2}}I_{ \Xi}(u,v,x), \tag{21}\]
where \(I_{\chi}(u,v,x)\) takes the form of
\[\begin{split}I_{\chi}(u,v,x)&=-\frac{1}{9uv}\bigg{[}2T_{\alpha}(ux)T_{\sigma}(vx)\\ &\quad+2T_{\alpha}(vx)T_{\sigma}(ux)+2T_{\alpha}(ux)T_{\alpha}(vx)\\ &\quad-4\left(\frac{u}{v}T_{\phi}(ux)T_{\beta}(vx)+\frac{v}{u}T_{\phi}(vx)T_{\beta}(ux)\right)\\ &\quad+\frac{1-u^{2}-v^{2}}{uv}[T_{\beta}(ux)T_{E}(vx)\\ &\quad+T_{\beta}(vx)T_{E}(ux)+T_{\beta}(ux)T_{\beta}(vx)]\\ &\quad+4\frac{\mathcal{H}}{k}\left(\frac{1}{v}T_{\alpha}(ux)T_{E}(vx)+\frac{1}{u}T_{E}(ux)T_{\alpha}(vx)\right.\\ &\quad\left.+\frac{1}{v}T_{\alpha}(ux)T_{\beta}(vx)+\frac{1}{u}T_{\beta}(ux)T_{\alpha}(vx)\right)\bigg{]}\,,\end{split} \tag{22}\]
while the kernel function \(I_{\Xi}\left(u,v,x\right)\) takes the form of
\[\begin{split}I_{\Xi}\left(u,v,x\right)&=-2\Big{(}-\frac{2}{v^{2}}T_{E}\left(vx\right)T_{\phi}(ux)-\frac{2}{u^{2}}T_{\phi}\left(vx\right)T_{E}(ux)\\ &\quad+\frac{2}{uv}T_{B}\left(vx\right)T_{B}(ux)+\left(\frac{1}{v^{2}}T_{E}^{\prime}\left(vx\right)-\frac{1}{v}T_{B}\left(vx\right)\right)\\ &\quad\times\left(\frac{1}{u^{2}}T_{E}^{\prime}(ux)-\frac{1}{u}T_{B}(ux)\right)\\ &\quad+\frac{u\mathcal{H}}{vk^{2}}T_{E}\left(vx\right)T_{E}(ux)\Big{)}. \end{split} \tag{23}\]
Here we symmetrized the kernel function \(I_{\chi}(u,v,x)\) under \(u\leftrightarrow v\). It is worth mentioning that the transformed secondary tensor perturbations have expressions in the form of the first-order scalar coordinate transformation. With the gauge transformation (17) and the result for SIGWs in the Poisson gauge, it is easy to accomplish the (semi)analytic derivation of the SIGWs in any chosen gauge without conducting the complicated calculations in that gauge. Combining eqs. (6), (8), (17), (18), (20), and (23) one obtains the gauge transformation of SIGWs as follows:
\[\tilde{h}_{\mathbf{k}}^{+}\to h_{\mathbf{k}}^{+}+\chi_{\mathbf{k}}^{+}+\Xi_{\mathbf{k}}^{+}\] \[=4\int\frac{\mathrm{d}^{3}p}{(2\pi)^{3/2}}\epsilon^{+ij}(\mathbf{k})p_{ i}p_{j}\zeta(\mathbf{p})\zeta(\mathbf{k}-\mathbf{p})\] \[\quad\times\frac{1}{k^{2}}\left[I(u,v,x)+I_{\chi}(u,v,x)+I_{\Xi}(u,v,x)\right], \tag{24}\]
and for the perturbations, one can use the transfer functions \(T(x)\) as follows:
\[\alpha(\mathbf{k},x) =\frac{3(1+w)}{5+3w}\zeta(\mathbf{k})\frac{1}{k}T_{\alpha}(x), \tag{25}\] \[\beta(\mathbf{k},x) =\frac{3(1+w)}{5+3w}\zeta(\mathbf{k})\frac{1}{k^{2}}T_{\beta}(x),\] (26) \[\sigma(\mathbf{k},x) =\frac{3(1+w)}{5+3w}\zeta(\mathbf{k})\frac{1}{k}T_{\sigma}(x), \tag{27}\]
\[E(\mathbf{k},x)=\frac{3(1+w)}{5+3w}\zeta(\mathbf{k})\frac{1}{k^{2}}T_{E}(x), \tag{28}\]
\[B(\mathbf{k},x)=\frac{3(1+w)}{5+3w}\zeta(\mathbf{k})\frac{1}{k}T_{B}(x), \tag{29}\]
\[\psi(\mathbf{k},x)=\frac{3(1+w)}{5+3w}\zeta(\mathbf{k})T_{\psi}(x), \tag{30}\]
\[\phi(\mathbf{k},x)=\frac{3(1+w)}{5+3w}\zeta(\mathbf{k})T_{\phi}(x). \tag{31}\]
The above gauge transformation (24) is our main result for studying the gauge transformation of SIGWs in general. From the expression (24), one can either transform the solution or show how the power spectrum of SIGWs can be transformed under the gauge transformation. For instance, with the solution in the Poisson gauge, one can get the solution in any gauge, according to the following transformation:
\[I(u,v,x)\to I_{\tilde{h}}(u,v,x)=I(u,v,x)+I_{\chi}(u,v,x)+I_{\Xi}(u,v,x). \tag{32}\]
Here, during RD, we have
\[\begin{split}I_{\mathrm{RD},\chi}(u,v,x)&=-\frac{1}{9uv}\left[-4\left(\frac{u}{v}T_{\mathrm{P}}(ux)T_{\beta}(vx)+\frac{v}{u}T_{\mathrm{P}}(vx)T_{\beta}(ux)\right)\right.\\ &\quad+2T_{\alpha}(ux)T_{\alpha}(vx)+\frac{4}{k\eta}\left(\frac{1}{v}T_{\alpha}(ux)T_{\beta}(vx)+\frac{1}{u}T_{\beta}(ux)T_{\alpha}(vx)\right)\\ &\quad\left.+\frac{1-u^{2}-v^{2}}{uv}T_{\beta}(ux)T_{\beta}(vx)\right],\end{split} \tag{33}\]
\[\begin{split}I_{\mathrm{RD},\tilde{h}}(u,v,x)&=-\frac{1}{9uv}\left[-4\left(\frac{u}{v}T_{\mathrm{P}}(ux)T_{\beta}(vx)+\frac{v}{u}T_{\mathrm{P}}(vx)T_{\beta}(ux)\right)\right.\\ &\quad+2T_{\alpha}(ux)T_{\alpha}(vx)+\frac{4}{k\eta}\left(\frac{1}{v}T_{\alpha}(ux)T_{\beta}(vx)+\frac{1}{u}T_{\beta}(ux)T_{\alpha}(vx)\right)\\ &\quad\left.+\frac{1-u^{2}-v^{2}}{uv}T_{\beta}(ux)T_{\beta}(vx)+18T_{B}\left(vx\right)T_{B}(ux)\right],\end{split} \tag{34}\]
while for MD we have
\[\begin{split}I_{\mathrm{MD},\chi}(u,v,x)&=-\frac{9}{100uv}\left[2T_{\alpha}(ux)T_{\alpha}(vx)+u^{2}v^{2}T_{\beta}(ux)T_{\beta}(vx)\right.\\ &\quad-4\left(\frac{u}{v}T_{\mathrm{P}}(ux)T_{\beta}(vx)+\frac{v}{u}T_{\mathrm{P}}(vx)T_{\beta}(ux)\right)\\ &\quad+\frac{8}{x}\left(\frac{1}{v}T_{\alpha}(ux)T_{\beta}(vx)+\frac{1}{u}T_{\beta}(ux)T_{\alpha}(vx)\right)\\ &\quad\left.-\frac{u^{2}+v^{2}}{uv}T_{\beta}(ux)T_{\beta}(vx)\right],\end{split} \tag{35}\]
\[\begin{split}I_{\mathrm{MD},\tilde{h}}(u,v,x)&=-\frac{9}{100uv}\left[2T_{\alpha}(ux)T_{\alpha}(vx)+u^{2}v^{2}T_{\beta}(ux)T_{\beta}(vx)\right.\\ &\quad-4\left(\frac{u}{v}T_{\mathrm{P}}(ux)T_{\beta}(vx)+\frac{v}{u}T_{\mathrm{P}}(vx)T_{\beta}(ux)\right)\\ &\quad+\frac{8}{x}\left(\frac{1}{v}T_{\alpha}(ux)T_{\beta}(vx)+\frac{1}{u}T_{\beta}(ux)T_{\alpha}(vx)\right)\\ &\quad\left.-\frac{u^{2}+v^{2}}{uv}T_{\beta}(ux)T_{\beta}(vx)+\frac{200}{uv}T_{B}\left(vx\right)T_{B}(ux)\right].\end{split} \tag{36}\]
The above expressions (33) and (35) are obtained by substituting the Poisson-gauge transfer functions \(T_{\sigma}=T_{E}=0\) and \(T_{\phi}=T_{\mathrm{P}}\) into eq. (22). The gauge transformation from the Poisson gauge to the other gauges then fixes the transfer functions \(T_{\alpha}\) and \(T_{\beta}\), respectively. In the following, we explicitly use eq. (32) to evaluate the kernel function in any given gauge.
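Operationally, eq. (32) is just pointwise addition of kernels: given the Poisson-gauge kernel and the two transformation kernels of a target gauge, the gauge-independent kernel follows. A trivial sketch (the callables are hypothetical placeholders):

```python
def gauge_independent_kernel(I_poisson, I_chi, I_xi):
    """Compose eq. (32): I_htilde(u, v, x) = I(u, v, x) + I_chi(u, v, x) + I_Xi(u, v, x).
    All three arguments are callables of (u, v, x) supplied by the chosen gauge."""
    return lambda u, v, x: I_poisson(u, v, x) + I_chi(u, v, x) + I_xi(u, v, x)
```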
## 3 Results of the Kernel Functions during RD and MD
This section presents the results of the kernel functions by evaluating the transfer functions of the metric perturbations in seven different gauges. These transfer functions describe the evolution of the density perturbations on subhorizon scales. One can see the gauge (in)dependence by evaluating the kernel function \(I(u,v,x)\).
### Poisson gauge
In this subsection, we consider the standard Poisson gauge, \(B=E=0\). Here we use the Bardeen potentials \(\phi_{\mathrm{P}}=\psi_{\mathrm{P}}=\Phi=\Psi\) [77]. In the RD universe, we have \(\Phi=\Psi=2\zeta/3\) on superhorizon scales. Moreover, the counter perturbation \(\Xi_{kl}\) vanishes in the Poisson gauge, i.e., \(\Xi_{kl}=0\) [65]. Following ref. [53], one can calculate the analytical expression of \(I(u,v,x)\) explicitly at late times, \(x\gg 1\), as follows:
\[\begin{split}I_{\mathrm{RD,P}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)\\ &\quad-\frac{3}{u^{3}v^{3}x^{4}}\left(6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}\right.\\ &\quad\left.-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\right)\\ &\quad\times\left[\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right.\right.\\ &\quad\left.\left.-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]+\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\right)\sin x\right.\\ &\quad\left.+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\right].\end{split} \tag{37}\]
The subscript "P" indicates evaluation in the Poisson gauge. The evolution of \(I_{\mathrm{P}}^{2}(u,v,x)\) with \(u=v=1\) and \(u=v=0.1\) is shown in Figures 1 and 2, respectively. It is to be noted that \(I_{\mathrm{P}}(u,v,x\rightarrow\infty)\propto x^{-1}\), and \(\Omega_{\mathrm{GW}}(k,x\rightarrow\infty)\) is a constant: SIGWs behave as free radiation deep inside the horizon.
In MD (\(w=0\)), the Bardeen potentials are \(\Phi=\Psi=3\zeta/5\). In this gauge, the kernel function can be expressed explicitly as:
\[I_{\text{MD,P}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{6}{5}. \tag{38}\]
As \(I_{\text{P}}(u,v,x\rightarrow\infty)=6/5\), eq. (13) shows that the tensor power spectrum \(\mathcal{P}_{h}\) is constant at \(x\gg 1\), and the energy density \(\Omega_{\text{GW}}\) is proportional to \(x^{2}\); hence \(\Omega_{\text{GW}}(k,x\rightarrow\infty)\propto a\) if we use eq. (14). According to eq. (8), however, the constant term \(6/5\) in eq. (38) contributes only a constant to \(h_{\mathbf{k}}\); consequently, its contribution to \(h_{\mathbf{k}}^{\prime}\), and hence to the energy density, vanishes. It means that we should use the definition (15) to determine \(\Omega_{\text{GW}}\); otherwise, the constant \(6/5\) is mistakenly counted if we use eq. (14). Accordingly, the constant \(6/5\) in eq. (38) makes no contribution to the energy density \(\Omega_{\text{GW}}\): it does not represent a wave solution, and the GWs come from the terms oscillating as \(\sin x\) and \(\cos x\). Dropping the constant \(6/5\), one finds \(I_{\text{P}}(x\rightarrow\infty)\propto\cos x/x^{2}=\cos x/a\), which leads to \(\Omega_{\text{GW}}\propto a^{-1}\) and \(\rho_{\text{GW}}\propto a^{-4}\); this behaves, as one would expect, like radiation in the MD era. Only the terms oscillating as \(\sin x\) and \(\cos x\) provide evidence for SIGWs.
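The statement that the constant in eq. (38) carries no energy can be checked directly: only the oscillating part survives in \(h_{\mathbf{k}}^{\prime}\), since differentiation removes the constant. A small numerical illustration of this (our sketch):

```python
import numpy as np

def I_MD(x, with_const=True):
    """MD Poisson-gauge kernel of eq. (38), optionally dropping the constant 6/5."""
    osc = 18.0 * (x * np.cos(x) - np.sin(x)) / (5.0 * x**3)
    return osc + (6.0 / 5.0 if with_const else 0.0)

x = np.linspace(200.0, 220.0, 20001)        # a late-time window, x >> 1
dx = x[1] - x[0]

for with_const in (True, False):
    I = I_MD(x, with_const)
    dI = np.gradient(I, dx)                 # proportional to h'
    # <I^2> is dominated by the constant, but <(dI/dx)^2> is unchanged by it
    print(with_const, np.mean(I**2), np.mean(dI**2))
```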
Since the energy density \(\Omega_{\text{GW}}\) of SIGWs is uniquely determined by the investigation of the kernel functions in the different gauge choices, the energy density spectrum evaluated in the gauge-independent framework takes the same form as that examined in the Poisson gauge during RD and MD.
### TT gauge
The TT gauge is defined by \(\phi=B=0\). The kernel function \(I(u,v,x)\) depends linearly on the source function \(f(u,v,x)\), as shown in eq. (9). In this gauge, we find:
\[\begin{split}I_{\text{RD, TT}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\times\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\sin x+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\\ &\quad\times\left(-6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\right.\\ &\quad\left.\times\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\right)-\frac{9}{u^{2}v^{2}x^{2}}\left((1-u^{2}-v^{2})x^{2}\left[\mathrm{Ci}\left(\frac{ux}{\sqrt{3}}\right)+\mathcal{C}-\ln\frac{ux}{\sqrt{3}}-\frac{\sin(ux/\sqrt{3})}{ux/\sqrt{3}}\right]\times\left[\mathrm{Ci}\left(\frac{vx}{\sqrt{3}}\right)+\mathcal{C}-\ln\frac{vx}{\sqrt{3}}-\frac{\sin(vx/\sqrt{3})}{vx/\sqrt{3}}\right]\right.\\ &\quad+2\left[\frac{\sin(ux/\sqrt{3})}{ux/\sqrt{3}}-1\right]\left[\frac{\sin(vx/\sqrt{3})}{vx/\sqrt{3}}-1\right]+4\left[-\mathrm{Ci}\left(\frac{ux}{\sqrt{3}}\right)-\mathcal{C}+\ln\frac{ux}{\sqrt{3}}+\frac{\sin(ux/\sqrt{3})}{ux/\sqrt{3}}\right]\left[1-\cos\frac{vx}{\sqrt{3}}\right]\\ &\quad\left.+4\left[-\mathrm{Ci}\left(\frac{vx}{\sqrt{3}}\right)-\mathcal{C}+\ln\frac{vx}{\sqrt{3}}+\frac{\sin(vx/\sqrt{3})}{vx/\sqrt{3}}\right]\left[1-\cos\frac{ux}{\sqrt{3}}\right]\right)\times\left[\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right.\right.\\ &\quad\left.\left.-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]\right)\sin x+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\right].\end{split} \tag{39}\]
As mentioned above, expression (39) is obtained by using the values of the transfer functions in this gauge. As we show in Figure 1, they lead to a discrepancy between the Poisson gauge and the TT gauge. In refs. [48, 49, 52, 53, 69], the secondary tensor perturbations generated by the quadratic combination of a linear scalar-type cosmological perturbation are widely investigated.
Nevertheless, most previous studies are based on the Poisson gauge without proper justification, and there it is shown that the secondary induced tensor perturbations are generically gauge dependent. Therefore, by using the transformation with the counter term, the kernel function of the gauge-independent SIGWs in eq. (32) is obtained to be

\[\begin{split}I_{\text{RD, }\tilde{h},\,\text{TT}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)-\frac{3}{u^{3}v^{3}x^{4}}\Bigg{(}6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}\\ &\quad+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\Bigg{)}\\ &\quad\times\Bigg{[}\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]+\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\right)\sin x\\ &\quad+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\Bigg{]}\,. \end{split} \tag{40}\]
The counter term in the TT gauge is non-trivial. As we can see, the terms in the first two lines of expression (39) coincide with eq. (40); these are the freely propagating tensor perturbations, i.e. the free GWs, and it is these oscillating terms that contribute to the energy density \(\Omega_{\text{GW}}\) of SIGWs [50]. The evolution of the divergent kernel function \(I_{\text{RD, TT}}^{2}(u,v,x)\) with \(u=v=1\) and \(u=v=0.1\) is shown in Figure 1, and that of the gauge-independent kernel \(I_{\tilde{h},\,\text{RD, TT}}^{2}(u,v,x)\), which determines the energy density of SIGWs, is shown in Figure 2.
We finally obtain the kernel function in the MD universe from eq. (35) as follows:
\[I_{\text{MD, TT}}(u,v,x)= \frac{18(x\cos x-\sin x)}{5x^{3}}\] \[+\frac{2400x^{3}+5x^{5}\big{(}{-88+(-1+u^{2}+v^{2})x^{2}}\big{)}} {2000x^{3}}. \tag{41}\]
During MD, by using eq. (32), we can evaluate the gauge-independent kernel function in the TT gauge as:
\[I_{\text{MD, }\tilde{h},\,\text{TT}}(u,v,x)=\frac{18(x\cos x-\sin x)+30x^{3}}{5x^{3}}. \tag{42}\]
It is to be noted that the first term of expression (41) is the same as that of eq. (42). Only the oscillating terms \(\sin x\) and \(\cos x\) contribute to SIGWs and exhibit the physical behavior of their energy density; the rest are fictitious terms. The evolution of the kernel function (41) is presented in Figures 3 and 4, and the evolution of the gauge-independent kernel function (42) is shown in Figures 5 and 6, respectively.
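The contrast between eqs. (41) and (42) is immediate numerically: the fictitious part of (41) grows like \(x^{4}\), while the gauge-independent kernel (42) stays bounded. A direct evaluation of the two expressions (our sketch):

```python
import numpy as np

def I_MD_TT(u, v, x):
    """Divergent TT-gauge MD kernel, eq. (41)."""
    osc = 18.0 * (x * np.cos(x) - np.sin(x)) / (5.0 * x**3)
    fict = (2400.0 * x**3 + 5.0 * x**5 * (-88.0 + (-1.0 + u**2 + v**2) * x**2)) / (2000.0 * x**3)
    return osc + fict

def I_MD_TT_tilde(x):
    """Gauge-independent TT-gauge MD kernel, eq. (42)."""
    return (18.0 * (x * np.cos(x) - np.sin(x)) + 30.0 * x**3) / (5.0 * x**3)

for x in (10.0, 100.0, 1000.0):
    print(x, I_MD_TT(1.0, 1.0, x), I_MD_TT_tilde(x))
```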
### Comoving orthogonal gauge
Let us consider the comoving orthogonal gauge, defined by \(\delta V=B=0\). Using the results of the background equations, after some algebraic calculations we obtain \(I(u,v,x)\) in the comoving orthogonal gauge as:
\[\begin{split}I_{\text{RD, CO}}\left(u,v,x\right)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\times\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\sin x\\ &\quad+\frac{3}{4u^{3}v^{3}x^{4}}\left[3C^{2}uv(u^{2}+v^{2}-1)x^{4}-2\sqrt{3}Cv(5u^{2}+3v^{2}-3)x^{3}\sin\frac{ux}{\sqrt{3}}-2\sqrt{3}Cu(3u^{2}+5v^{2}-3)x^{3}\sin\frac{vx}{\sqrt{3}}\right.\\ &\quad-2[36-18(2u^{2}+2v^{2}-1)x^{2}+u^{2}v^{2}x^{4}]\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+3uvx^{2}[-8+(u^{2}+v^{2}-1)x^{2}]\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}\\ &\quad+vx\cos\frac{vx}{\sqrt{3}}\left(3Cu(u^{2}+v^{2}-1)x^{3}-2\sqrt{3}[-12+(7u^{2}+3v^{2}-3)x^{2}]\sin\frac{ux}{\sqrt{3}}\right)\\ &\quad\left.+ux\cos\frac{ux}{\sqrt{3}}\left(3Cv(u^{2}+v^{2}-1)x^{3}-2\sqrt{3}[-12+(3u^{2}+7v^{2}-3)x^{2}]\sin\frac{vx}{\sqrt{3}}\right)\right]\\ &\quad+\frac{3}{u^{3}v^{3}x^{4}}\left(-6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\right.\\ &\quad\left.+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\right)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\times\left[\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right.\right.\\ &\quad\left.\left.-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]\right)\sin x+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\right].\end{split} \tag{43}\]
In the above expression (43), there exist some extra terms that do not contribute to SIGWs and cause a divergence, as shown in Figure 1. We resolve this by using the counter term of the aforementioned expression (24): being second order in perturbation theory, the induced tensor perturbations then become generically gauge independent, in contrast to refs. [48, 49, 53, 69], as shown in Figure 2.
Therefore, by using the transformation of a counter term, the kernel function of the gauge independent SIGWs in eq. (32) is obtained to be
\[\begin{split}I_{\text{RD, }\tilde{h},\,\text{CO}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)-\frac{3}{u^{3}v^{3}x^{4}}\Bigg{(}6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}\\ &\quad+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\Bigg{)}\\ &\quad\times\Bigg{[}\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]+\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\right)\sin x\\ &\quad+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\Bigg{]}\,. \end{split} \tag{44}\]
The counter term in the comoving orthogonal gauge is non-trivial. As we can see, the terms in the first two lines of expression (43) coincide with eq. (44); these are the freely propagating tensor perturbations, i.e. the free GWs, and it is these oscillating terms that contribute to the energy density \(\Omega_{\text{GW}}\) of SIGWs [50]. The evolution of the divergent kernel function \(I_{\text{RD, CO}}^{2}(u,v,x)\) with \(u=v=1\) and \(u=v=0.1\) is shown in Figure 1, and that of the gauge-independent kernel \(I_{\tilde{h},\,\text{RD, CO}}^{2}(u,v,x)\), which determines the energy density of SIGWs, is shown in Figure 2.
In the MD universe, to calculate the kernel function, we use eq. (35) and get
\[I_{\text{MD, CO}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}\] \[+\frac{2400x^{3}+5x^{5}\Big{(}-88+(-1+u^{2}+v^{2})x^{2}\Big{)}}{ 2000x^{3}}. \tag{45}\]
During MD, by using eq. (32), we can evaluate the gauge-independent kernel function in the comoving orthogonal gauge as:
\[I_{\text{MD, }\tilde{h},\,\text{CO}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{6}{5}. \tag{46}\]
Interestingly, one can see that the above expression (45) coincides with eq. (41), and its first term is identical to the first term of eq. (46). Only the oscillating terms \(\sin x\) and \(\cos x\) contribute to SIGWs and exhibit the physical behavior of their energy density; the rest are fictitious. The evolution of the kernel function (45) at late times (\(x\gg 1\)) is presented in Figures 3 and 4. In addition, the constant term in eq. (46) does not describe SIGWs, so it makes no contribution if we consider physical SIGWs; we also showed that constant tensor perturbations in the Poisson gauge do not contribute to the energy density of GWs even though they appear in the integration kernel, which supports our proposal. The evolution of the gauge-independent kernel function (46) is shown in Figures 5 and 6, respectively.
### Uniform curvature gauge
The uniform curvature gauge is defined by \(\psi=E=0\). Using the results of the background equations in eq. (9), after some algebraic manipulations the kernel in this gauge is given by
\[I_{\text{RD, UC}}(u,v,x)=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3 }v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^ {3}v^{3}x}\times\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\sin x\] \[+\frac{3}{4u^{3}v^{3}x^{4}}\left[-24\left(-ux\cos\frac{ux}{\sqrt {3}}+\sqrt{3}\sin\frac{ux}{\sqrt{3}}\right)\left(-vx\cos\frac{vx}{\sqrt{3}}+ \sqrt{3}\sin\frac{vx}{\sqrt{3}}\right)-4\left(+6ux\cos\frac{ux}{\sqrt{3}} \left(-vx\cos\frac{vx}{\sqrt{3}}+\sqrt{3}\sin\frac{vx}{\sqrt{3}}\right)\right.\right.\] \[\left.\left.-3\sin\frac{ux}{\sqrt{3}}\left(-2\sqrt{3}vx\cos\frac{ vx}{\sqrt{3}}+(6+(u^{2}+v^{2}-3)x^{2})\sin\frac{vx}{\sqrt{3}}\right)\right)-6uvx^{2} \cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}+6\sqrt{3}ux\cos\frac{ux}{\sqrt {3}}\sin\frac{vx}{\sqrt{3}}\right.\]
\[+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\Bigg{]}+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\]
\[\times\left[\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]\right)\sin x\right.\]
\[\left.+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\right]. \tag{47}\]
The above expression (47) contains some fictitious terms that cause a divergence. To fix this discrepancy, we show that, being second order in perturbation theory, such induced tensor perturbations are generically gauge independent, in contrast to refs. [48, 49, 53, 69], as shown in Figure 2.
Therefore, by using the transformation of a counter term, the kernel function of the gauge independent SIGWs in eq. (32) is obtained to be
\[\begin{split}I_{\mathrm{RD,\,\tilde{h},\,UC}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)-\frac{3}{u^{3}v^{3}x^{4}}\Bigg{(}6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}\\ &\quad+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\Bigg{)}\\ &\quad\times\Bigg{[}\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]+\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\right)\sin x\\ &\quad+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\Bigg{]}\,. \end{split} \tag{48}\]
The counter term in the uniform curvature gauge is non-trivial. As we can see, several terms in expression (47) coincide with those in eq. (48); these are the freely propagating tensor perturbations, i.e. the free GWs, and it is these oscillating terms that contribute to the energy density \(\Omega_{\mathrm{GW}}\) of SIGWs [50]. The evolution of the divergent kernel function \(I_{\mathrm{RD,\,UC}}^{2}(u,v,x)\) with \(u=v=1\) and \(u=v=0.1\) is shown in Figure 1, and that of the gauge-independent kernel \(I_{\tilde{h},\,\mathrm{RD,\,UC}}^{2}(u,v,x)\), which determines the energy density of SIGWs, is shown in Figure 2.
In MD, we substitute the results of the transfer functions into the kernel function (35), and we get
\[I_{\mathrm{MD,\,UC}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{-45x^{5}+6x ^{3}}{1000x^{3}}. \tag{49}\]
In the MD era, by using eq. (32), we can evaluate the gauge-independent kernel function in the uniform curvature gauge as:
\[I_{\mathrm{MD,\,\tilde{h},\,UC}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+6/5. \tag{50}\]
The above expression (49) has some extra terms that do not contribute to SIGWs and cause a divergence. Including the counter term removes the divergent terms and yields the gauge-independent kernel function (50). Only the oscillating terms \(\sin x\) and \(\cos x\) contribute to SIGWs and exhibit the physical behavior of their energy density; the rest are fictitious. The divergent behavior of the kernel in the subhorizon limit is shown in Figures 1 and 2 with \(u=v=1\) and \(u=v=0.1\), respectively, while the evolution of the kernel function (49) at late times (\(x\gg 1\)) is shown in Figures 3 and 4. In addition, the constant term \(6/5\) in eq. (50) does not describe SIGWs, so it makes no contribution if we consider physical SIGWs; we also showed that constant tensor perturbations in the Poisson gauge do not contribute to the energy density of GWs even though they appear in the integration kernel, which supports our proposal. The evolution of the gauge-independent kernel function (50) at the late-time limit (\(x\gg 1\)) is shown in Figures 5 and 6, respectively.
### Total matter gauge
Next, the total matter gauge is defined by \(\delta V=E=0\). Using the background equations in the kernel function (9), after some calculation one can compute the kernel function \(I_{\text{RD, TM}}(u,v,x)\) as:
\[\begin{split}I_{\text{RD, TM}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\times\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\sin x\\ &\quad+\frac{1}{4u^{3}v^{3}x^{4}}\times\left(-2\left(6ux\cos\frac{ux}{\sqrt{3}}+\sqrt{3}(u^{2}x^{2}-6)\sin\frac{ux}{\sqrt{3}}\right)\times\left(6vx\cos\frac{vx}{\sqrt{3}}+\sqrt{3}(v^{2}x^{2}-6)\sin\frac{vx}{\sqrt{3}}\right)\right.\\ &\quad\left.-12\left[6ux\cos\frac{ux}{\sqrt{3}}\left(-vx\cos\frac{vx}{\sqrt{3}}+\sqrt{3}\sin\frac{vx}{\sqrt{3}}\right)-3\sin\frac{ux}{\sqrt{3}}\left(-2\sqrt{3}vx\cos\frac{vx}{\sqrt{3}}+[6+(u^{2}+v^{2}-3)x^{2}]\sin\frac{vx}{\sqrt{3}}\right)\right]\right)\\ &\quad+\frac{3}{4u^{3}v^{3}x}\left(-\frac{4}{x^{3}}\left(-6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-3(6+(u^{2}+v^{2}-3)x^{2})\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\right)\right.\\ &\quad+3(u^{2}+v^{2}-3)x^{3}\left[\sin x\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]+\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\right)\right.\\ &\quad\left.+\cos x\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\right]\\ &\quad+(u^{2}+v^{2}-3)^{2}\times\left[\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]\right)\sin x\right.\\ &\quad\left.\left.+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\right]\right). \end{split} \tag{51}\]
It is to be noted that the above expression (51) differs from the kernel in the Poisson gauge by some extra terms. The behavior of the evolution of (51) in the subhorizon limit is shown in Figure 1 with \(u=v=1\) and \(u=v=0.1\). Here, we resolve these discrepancies by using the counter term of the aforementioned expression (24): being second order in perturbation theory, the induced tensor perturbations then become generically gauge independent, in contrast to refs. [48, 49, 53, 69], as shown in Figure 2 with \(u=v=1\) and \(u=v=0.1\).
Therefore, by using the transformation of a counter term, the kernel function of the gauge independent SIGWs in eq. (32) is obtained to be
\[\begin{split}I_{\text{RD, }\tilde{h},\,\text{TM}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)-\frac{3}{u^{3}v^{3}x^{4}}\left(6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}\right.\\ &\quad\left.+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\right)\\ &\quad\times\left[\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]+\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\right)\sin x\right.\\ &\quad\left.+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\right]. \end{split} \tag{52}\]
The counter term in the total matter gauge is non-trivial. One can see that expression (51) contains extra terms relative to eq. (52), and these cause the discrepancies in the total matter gauge during RD. The freely propagating tensor perturbations, i.e. the free GWs, are the oscillating terms that contribute to the energy density \(\Omega_{\text{GW}}\) of SIGWs [50]. The evolution of the divergent kernel function \(I_{\text{RD, TM}}^{2}(u,v,x)\) with \(u=v=1\) and \(u=v=0.1\) is shown in Figure 1, and that of the gauge-independent kernel \(I_{\tilde{h},\,\text{RD, TM}}^{2}(u,v,x)\), which determines the energy density of SIGWs, is shown in Figure 2.
In the MD universe, to evaluate the kernel function, we use eq. (35) and find
\[I_{\text{MD, TM}}(u,v,x)=\,\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{-5x^{5}+6x^{3 }}{2500x^{3}}. \tag{53}\]
Now in the MD era, by using eq. (36), we can evaluate the gauge-independent kernel function in the total matter gauge as:
\[I_{\text{MD, }\tilde{h},\,\text{TM}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{6}{5}. \tag{54}\]
Here, one can see that the first term of expression (53) is identical to the first term of eq. (54). Only the oscillating terms \(\sin x\) and \(\cos x\) contribute to SIGWs and exhibit the physical behavior of their energy density; the rest are fictitious. The evolution of the kernel function (53) at late times (\(x\gg 1\)) is presented in Figures 3 and 4. In addition, the constant term in eq. (54) does not describe SIGWs, so it makes no contribution if we consider physical SIGWs; we also showed that constant tensor perturbations in the Poisson gauge do not contribute to the energy density of GWs even though they appear in the integration kernel, which supports our proposal. The evolution of the gauge-independent kernel function (54) is shown in Figures 5 and 6, respectively.
### Uniform density gauge
The uniform density gauge is defined by \(\delta\rho=E=0\). Evaluating the kernel function in this gauge, after some straightforward calculations we have
\[\begin{split}I_{\text{RD, UD}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\sin x\\ &\quad+\frac{1}{4u^{3}v^{3}x^{4}}\left[2ux\left(u^{2}x^{2}-6\right)\cos\frac{ux}{\sqrt{3}}\left(vx\left(v^{2}x^{2}-6\right)\cos\frac{vx}{\sqrt{3}}-2\sqrt{3}\left(v^{2}x^{2}-3\right)\sin\frac{vx}{\sqrt{3}}\right)\right.\\ &\quad\left.-4\left(u^{2}x^{2}-3\right)\sin\frac{ux}{\sqrt{3}}\times\left(\cdots\right)\right]+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\left[\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]\right.\right.\\ &\quad\left.\left.-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]\right)\sin x+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]\right.\right.\\ &\quad\left.\left.+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\right]. \end{split} \tag{55}\]

Therefore, by using the transformation with the counter term, the gauge-independent kernel function in eq. (32) is obtained to be

\[\begin{split}I_{\text{RD, }\tilde{h},\,\text{UD}}(u,v,x)&=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)-\frac{3}{u^{3}v^{3}x^{4}}\Bigg{(}6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}\\ &\quad+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\Bigg{)}\\ &\quad\times\Bigg{[}\left(\mathrm{Ci}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]+\mathrm{Ci}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]-\mathrm{Ci}\left[\left|1-\frac{u+v}{\sqrt{3}}\right|x\right]+\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\right)\sin x\\ &\quad+\left(-\mathrm{Si}\left[\left(1+\frac{u-v}{\sqrt{3}}\right)x\right]-\mathrm{Si}\left[\left(1+\frac{v-u}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1-\frac{u+v}{\sqrt{3}}\right)x\right]+\mathrm{Si}\left[\left(1+\frac{u+v}{\sqrt{3}}\right)x\right]\right)\cos x\Bigg{]}\,. \end{split} \tag{56}\]
The counter-term in the uniform density gauge is non-trivial. Besides, the evolution of the kernel (55) shows that it differs from that in the Poisson gauge and contains a divergence that breaks down perturbation theory [69, 53]. However, we fix this discrepancy by using the transformation (17) with the counter term \(\Xi_{kl}\), and find the gauge-independent kernel function (56) in the uniform density gauge. In addition, if only the freely propagating tensor perturbations were retained, i.e., those representing the free GWs, then only these free oscillating terms would contribute to the energy density \(\Omega_{\text{GW}}\) of SIGWs [50]. The evolution of the divergent kernel function \(I_{\text{RD, UD}}^{2}(u,v,x)\) with \(u=v=1\) and \(u=v=0.1\) is shown in Figure 1, and that of the gauge-independent kernel \(I_{\tilde{h},\,\text{RD, UD}}^{2}(u,v,x)\) entering the energy density of SIGWs is shown in Figure 2.
In the MD universe, from eq. (35), we find the kernel function as follows:
\[I_{\text{MD, UD}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{-x^{5}(12+u^{2}x^{2})(12+v^{2}x^{2})+216000x^{3}}{36000x^{3}}. \tag{57}\]
During MD, by using eq. (32), we can evaluate the gauge-independent kernel function in the uniform density gauge as:
\[I_{\text{MD},\,\tilde{h},\,\text{UD}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{6}{5}. \tag{58}\]
Here, we can see that the first term of expression (57) is identical to the oscillating term of eq. (58). Only the oscillating terms \(\sin x\) and \(\cos x\) contribute to SIGWs and capture the physical behavior of their energy density; the rest are fictitious. The evolution of the kernel function (57) at late times (\(x\gg 1\)) is presented in Figures 3 and 4. In addition, the constant term in eq. (58) does not represent SIGWs, so it makes no contribution when only physical SIGWs are considered. On the other hand, we also showed that constant tensor perturbations in the Poisson gauge do not contribute to the energy density of GWs even though they appear in the integration kernel, which supports our proposal. Moreover, the evolution of the gauge-independent kernel function (58) is shown in Figures 5 and 6.
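Again, a short check of the late-time behavior (ours, derived from eq. (57)):

\[\frac{-x^{5}(12+u^{2}x^{2})(12+v^{2}x^{2})+216000x^{3}}{36000x^{3}}=-\frac{x^{2}(12+u^{2}x^{2})(12+v^{2}x^{2})}{36000}+6\xrightarrow{x\gg 1}-\frac{u^{2}v^{2}x^{6}}{36000},\]

so the non-oscillating remainder of eq. (57) diverges as \(x^{6}\), even more strongly than in the total matter gauge, whereas eq. (58) again approaches \(6/5\).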
### Uniform expansion gauge
Finally, we consider the uniform expansion gauge, defined by \(3(\mathcal{H}\phi+\psi^{\prime})+k^{2}\sigma=0,E=0\). From eq. (33), we can calculate the kernel function in this gauge. After some algebraic manipulations, we get
\[I_{\text{RD, UE}}(u,v,x)=-\frac{3}{u^{3}v^{3}x^{4}}\left((ux)^{3}v\sin x+u(vx)^{3}\sin x-3uvx^{3}\sin x\right)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\times\ln\left[\left|\frac{3-(u+v)^{2}}{3-(u-v)^{2}}\right|\right]\sin x\] \[+\frac{9}{u^{3}v^{3}x^{4}\left(u^{2}x^{2}+6\right)\left(v^{2}x^{2}+6\right)}\Big[u^{2}v^{2}x^{4}\sin\Big(\frac{ux}{\sqrt{3}}\Big)\sin\Big(\frac{vx}{\sqrt{3}}\Big)+12\sqrt{3}u^{2}vx^{3}\sin\Big(\frac{ux}{\sqrt{3}}\Big)-36u^{2}x^{2}\sin\Big(\frac{ux}{\sqrt{3}}\Big)\sin\Big(\frac{vx}{\sqrt{3}}\Big)\] \[+12\sqrt{3}u^{2}x^{3}\cos\Big(\frac{ux}{\sqrt{3}}\Big)\sin\Big(\frac{vx}{\sqrt{3}}\Big)-6v^{2}x^{2}\sin\Big(\frac{ux}{\sqrt{3}}\Big)\sin\Big(\frac{vx}{\sqrt{3}}\Big)+72uvx^{2}\cos\Big(\frac{ux}{\sqrt{3}}\Big)\cos\Big(\frac{vx}{\sqrt{3}}\Big)+116\sin\Big(\frac{ux}{\sqrt{3}}\Big)\sin\Big(\frac{vx}{\sqrt{3}}\Big)\] \[-72\sqrt{3}vx\sin\Big(\frac{ux}{\sqrt{3}}\Big)\cos\Big(\frac{vx}{\sqrt{3}}\Big)-72\sqrt{3}ux\cos\Big(\frac{ux}{\sqrt{3}}\Big)\sin\Big(\frac{vx}{\sqrt{3}}\Big)\Big]-\frac{3}{u^{3}v^{3}x^{4}}\Big(-6uvx^{2}\cos\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}+6\sqrt{3}ux\cos\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\] \[+6\sqrt{3}vx\sin\frac{ux}{\sqrt{3}}\cos\frac{vx}{\sqrt{3}}-18\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}+(u^{2}+v^{2}-3)x^{2}\sin\frac{ux}{\sqrt{3}}\sin\frac{vx}{\sqrt{3}}\Big)+\frac{3(u^{2}+v^{2}-3)^{2}}{4u^{3}v^{3}x}\times\Big[\Big(\text{Ci}\Big[\Big(1+\frac{u-v}{\sqrt{3}}\Big)x\Big]\] \[+\text{Ci}\Big[\Big(1+\frac{v-u}{\sqrt{3}}\Big)x\Big]-\text{Ci}\Big[\Big(1+\frac{u+v}{\sqrt{3}}\Big)x\Big]-\text{Ci}\Big[\Big|1-\frac{u+v}{\sqrt{3}}\Big|x\Big]\Big)\sin x+\Big(-\text{Si}\Big[\Big(1+\frac{u-v}{\sqrt{3}}\Big)x\Big]-\text{Si}\Big[\Big(1+\frac{v-u}{\sqrt{3}}\Big)x\Big]\] \[+\text{Si}\Big[\Big(1-\frac{u+v}{\sqrt{3}}\Big)x\Big]+\text{Si}\Big[\Big(1+\frac{u+v}{\sqrt{3}}\Big)x\Big]\Big)\cos x\Big]. \tag{59}\]
Similar to the discussion in the previous subsection, we find the kernel (59) in the uniform expansion gauge, whose evolution is shown in Figure 1. Furthermore, we present a resolution by making use of the counter term given in expression (24). We show that, despite being second order in perturbations, such induced tensor perturbations are generically gauge independent, in contrast to refs. [48, 49, 53, 69], as shown in Figure 2. Therefore, by using the counter-term transformation, the gauge-independent kernel function (60) of SIGWs follows from eq. (32).
The counter-term in the uniform expansion gauge is non-trivial. Besides, one can see that the terms in the first two lines of expression (59) coincide with those of eq. (60). If only the freely propagating tensor perturbations were retained, i.e., those representing the free GWs, then only these free oscillating terms would contribute to the energy density \(\Omega_{\text{GW}}\) of SIGWs [50]. The evolution of the divergent kernel function \(I_{\text{RD, UE}}^{2}(u,v,x)\) with \(u=v=1\) and \(u=v=0.1\) is shown in Figure 1, and that of the gauge-independent kernel \(I_{\tilde{h},\,\text{RD, UE}}^{2}(u,v,x)\) entering the energy density of SIGWs is shown in Figure 2.
In the MD universe, from eq. (35), one can obtain the kernel function as:
\[I_{\text{MD, UE}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{-810x^{5}+150x^{3}(u^{2}x^{2}+18)(v^{2}x^{2}+18)}{125x^{3}(u^{2}x^{2}+18)(v^{2}x^{2}+18)}. \tag{61}\]
For MD, with the use of eq. (32), one can calculate the gauge-independent kernel function in the uniform expansion gauge as:
\[I_{\text{MD},\,\tilde{h},\,\text{UE}}(u,v,x)=\frac{18(x\cos x-\sin x)}{5x^{3}}+\frac{6}{5}. \tag{62}\]
Here, one can see that the first term of expression (61) is identical to the first term of eq. (62). Only the oscillating terms \(\sin x\) and \(\cos x\) contribute to SIGWs and capture the physical behavior of their energy density; the rest are fictitious. The evolution of the kernel function (61) at late times (\(x\gg 1\)) is presented in Figures 3 and 4. In addition, the constant term in eq. (62) does not represent SIGWs, so it makes no contribution when only physical SIGWs are considered. On the other hand, we also showed that constant tensor perturbations in the Poisson gauge do not contribute to the energy density of GWs even though they appear in the integration kernel, which supports our proposal. Moreover, the evolution of the gauge-independent kernel function (62) is shown in Figures 5 and 6.
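In contrast to the previous two gauges, a short check of the late-time behavior (ours, derived from eq. (61)) shows convergence:

\[\frac{-810x^{5}+150x^{3}(u^{2}x^{2}+18)(v^{2}x^{2}+18)}{125x^{3}(u^{2}x^{2}+18)(v^{2}x^{2}+18)}=\frac{6}{5}-\frac{810x^{2}}{125(u^{2}x^{2}+18)(v^{2}x^{2}+18)}\xrightarrow{x\gg 1}\frac{6}{5},\]

so the non-oscillating remainder of eq. (61) converges to \(6/5\) on its own, consistent with \(I_{\rm MD,UE}\) remaining constant in Figures 3 and 5.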
To examine the energy density \(\Omega_{\text{GW}}\) of SIGWs in various gauge choices, note that the secondary scalar-induced tensor perturbations take two forms. One kind consists of freely propagating tensor perturbations, oscillating like \(\sin(k\eta)\) or \(\cos(k\eta)\); this kind contributes to SIGWs. The other kind contains terms other than \(\sin(k\eta)\) or \(\cos(k\eta)\) and does not contribute to SIGWs. In the following, with this identification of SIGWs, we examine the different gauge choices and confirm, by comparing the kernel functions, whether SIGWs are gauge (in)dependent.
## 4 Comparison among the kernel functions in various gauges
This section compares the kernel functions in the Poisson gauge, the TT gauge, the comoving orthogonal gauge, the uniform curvature gauge, the total matter gauge, the uniform density gauge, and the uniform expansion gauge during RD and MD. We show that they all lead to the same gauge-independent kernel functions. Thus, the energy density \(\Omega_{\text{GW}}\) of SIGWs should be the same in all seven gauge choices. To be precise, in the following subsections, we show the evolution of the resulting kernel functions in Figures 1 and 2 for RD, and in Figures 3-6 for MD, respectively.
### Comparison among the kernel functions during RD
First, during RD, we compare the behavior of \(I(u,v,x)\) obtained in each gauge and their differences for finite values of \(x\). Examples of the time evolution of the kernel functions for the sets \(u=v=1\) and \(u=1\), \(v=0.1\) in the seven gauges are shown in Figures 1 and 2. In these figures, \(x=k\eta\) can be regarded as the time parameter, in units in which the horizon entry of the tensor perturbation occurs at \(x=1\).
Figure 1 compares the evolution of \(I(u,v,x)\) in the Poisson and the six other gauges. As seen in this figure, before the source perturbations enter the horizon (\(x\ll 1\)), the induced perturbations remain almost constant, but later start oscillating in growing or decaying modes. At late times (\(x\gg 1\)), all of the secondary perturbations in Figure 1 oscillate, with amplitudes that either grow or decay. In particular, Figure 1 shows that \(I_{\rm RD}(u,v,x)\) of the tensor perturbations is divergent as \(x\to\infty\) in the six other gauges, while that in the Poisson gauge tends to converge. A related result was presented in refs. [48, 49, 53], but, as we argue, it is flawed during RD; it also indicates that the tensor perturbations are gauge dependent, whereas the literature holds that the physically observable energy density should not be gauge dependent.
Therefore, we have fixed this discrepancy by introducing a counter term in eq. (2) and found a gauge-independent kernel function, in contrast to refs. [48, 49, 53]. In addition, the counter terms in the TT gauge, the comoving orthogonal gauge, the uniform curvature gauge, the total matter gauge, the uniform density gauge, and the uniform expansion gauge are not trivial. Figure 2 shows the evolution of the gauge-independent kernels \(I_{\tilde{h},\,{\rm RD}}(u,v,x)\) in the Poisson and the six other gauges. After removing the discrepancy, we find that the physical observable \(\Omega_{\rm GW}\) is gauge independent.
### Comparison among the kernels during MD
We now turn to the comparison of the kernel functions in MD. First, we show the evolution of the kernels with the constant term \(6/5\) in Figures 3 and 4. In Figure 3, we show the evolution of \(I_{\rm MD}(u,v,x)\) at \(u=v=1\), and at \(u=1\), \(v=0.1\), respectively.
In the left panel, we can see that the kernel functions \(I_{\rm MD,CO}\), \(I_{\rm MD,UC}\), \(I_{\rm MD,TM}\), and \(I_{\rm MD,UD}\) start to grow, while \(I_{\rm MD,P}\), \(I_{\rm MD,TT}\), and \(I_{\rm MD,UE}\) approach constants as the secondary induced perturbations enter the horizon (\(x\simeq 1\)). In the right panel, one can see that only \(I_{\rm MD,P}\) and \(I_{\rm MD,UE}\) approach constants, while the others start to grow as the secondary induced perturbations enter the horizon.
From these two panels, it can be seen that the values at \(x\gg 1\) do not depend strongly on \(u\) and \(v\), except for \(I_{\rm MD,CO}\) and \(I_{\rm MD,TT}\), which behave differently for \(v=1\) and \(v=0.1\), both before the source perturbations enter the horizon (\(x\ll 1\)) and at late times (\(x\gg 1\)). We can see that the behavior of these kernels is not the same as in the RD era. From these observations, one can deduce that the behaviors of the secondary perturbations induced by the first-order perturbations at \(x\gg 1\) are distinct, except in the Poisson and uniform expansion gauges.
Moreover, in Figure 4, after discarding the single power-law terms (of the form \(x^{n}\) or \(1/x^{n}\)) in the six other gauges, as in the Poisson gauge, we get the same behavior in the evolution of the kernel functions in all seven gauges, namely a constant behavior for \(x\gg 1\). We deduce that the behavior of the secondary perturbations at \(x\gg 1\) is the same, and all the kernels are almost constant as \(x\to\infty\). As discussed in the previous subsection, these behaviors are not identical to those in an RD universe, because here we included the constant terms in each gauge, which do not represent GW oscillations, as shown in the previous subsection.
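As an independent sanity check (ours, not part of the original analysis; all function names are illustrative), the following minimal Python sketch evaluates the MD kernels of eqs. (53), (57) and (61), together with the common gauge-independent kernel of eqs. (54), (58) and (62), at increasingly large \(x\):

```python
import numpy as np

def I_MD_TM(u, v, x):   # eq. (53), total matter gauge
    return 18 * (x * np.cos(x) - np.sin(x)) / (5 * x**3) \
        + (-5 * x**5 + 6 * x**3) / (2500 * x**3)

def I_MD_UD(u, v, x):   # eq. (57), uniform density gauge
    return 18 * (x * np.cos(x) - np.sin(x)) / (5 * x**3) \
        + (-x**5 * (12 + u**2 * x**2) * (12 + v**2 * x**2)
           + 216000 * x**3) / (36000 * x**3)

def I_MD_UE(u, v, x):   # eq. (61), uniform expansion gauge
    return 18 * (x * np.cos(x) - np.sin(x)) / (5 * x**3) \
        + (-810 * x**5 + 150 * x**3 * (u**2 * x**2 + 18) * (v**2 * x**2 + 18)) \
        / (125 * x**3 * (u**2 * x**2 + 18) * (v**2 * x**2 + 18))

def I_MD_gi(x):         # eqs. (54), (58), (62): gauge-independent kernel
    return 18 * (x * np.cos(x) - np.sin(x)) / (5 * x**3) + 6 / 5

for x in (1e2, 1e3, 1e4):
    print(f"x={x:.0e}  TM={I_MD_TM(1, 1, x):+.3e}  UD={I_MD_UD(1, 1, x):+.3e}  "
          f"UE={I_MD_UE(1, 1, x):+.3e}  gauge-indep={I_MD_gi(x):+.3f}")
```

Consistent with Figures 3-6, the total matter and uniform density kernels grow without bound, whereas the uniform expansion and gauge-independent kernels settle to \(6/5\).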
Finally, we compare the kernels after removing the factor \(6/5\). Specifically, we show the evolution of the resulting kernel functions in Figures 5 and 6. In the left panel of Figure 5, it is to be noted that the kernel functions of the tensor perturbations in the total matter gauge, the comoving orthogonal gauge, the uniform curvature gauge, and the uniform density gauge start to grow around horizon entry (\(x\simeq 1\)) and tend to diverge as \(x\to\infty\), while those in the Poisson, the TT, and the uniform expansion gauges converge in the late-time limit (\(x\gg 1\)). In the MD era, we also used eq. (2) and found gauge-independent kernel functions \(I_{\tilde{h},\,\mathrm{MD}}(u,v,x)\), in contrast to refs. [48, 49, 69].
## 5 Discussion and concluding remarks
In this paper, we have reconsidered the SIGWs in the late-time limit in different popular gauges. In particular, we have dealt with the second-order tensor perturbations generated by the linear scalar perturbations in an expanding spacetime dominated by either radiation or matter. We have addressed the discrepancies of previous studies by introducing a counter term (3) to remove the fictitious terms in the secondary tensor perturbations. We have shown that the late-time (i.e., observable) GWs investigated in seven different gauges coincide with each other, in contrast to refs. [48, 49, 52, 53, 69]. In this work, we have explicitly evaluated the gauge-independent kernel functions, which uniquely determine the energy density of SIGWs in the different gauges. Moreover, the evolution of the transfer functions is also presented.
On the other hand, according to refs. [48, 49, 52, 53, 69], the secondary tensor perturbations could differ across the seven gauges, even in the subhorizon limit. One can find that the difference between the Poisson gauge and the six other gauges comes from extra terms like \(\cos\left(\frac{ux}{\sqrt{3}}\right)\) or \(\sin\left(\frac{vx}{\sqrt{3}}\right)\), and \(x^{n}\) or \(1/x^{n}\); see the specific expressions of the different kernels in refs. [48, 49, 52, 53, 69]. This indicates that the gauge dependence reflected in the different kernels occurs in the secondary tensor perturbations coupled to the scalar perturbations, not in the GWs. More precisely, in the gauge-independent framework in the RD phase, the discrepancy appearing in different gauges in refs. [48, 49, 52, 53] is eliminated. Consequently, we have found the gauge-independent kernel functions, which uniquely determine the same energy density \(\Omega_{\rm GW}\) of SIGWs in all seven gauges.
It is to be noted that the situation is different for the secondary tensor perturbations induced by scalar perturbations in the late MD phase. In this case, the scalar perturbations continue to accompany the secondary tensor perturbations in the subhorizon limit, even in the Poisson gauge, because of the growing matter perturbations. This kind of secondary tensor perturbation can be larger than the GWs induced during the RD phase on large scales [37]. That is why the secondary tensor perturbations easily become gauge dependent during the MD phase. For example, ref. [69] shows that the kernel function \(I(u,v,x)\) in other gauges differs from that in the Poisson gauge even at late times. However, it should be noted that this kind of secondary tensor perturbation is not a gravitational wave. In this work, we remove these discrepancies by introducing the counter term, and we also find the gauge-independent energy density \(\Omega_{\rm GW}\) of SIGWs.
Observationally, the secondary tensor perturbations are usually assumed to be GWs. However, the observational sensitivity for these GWs will be distinct from that of conventional GWs. The secondary induced tensor perturbations may explain the signal observed by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav) [74, 75].
Finally, it is important to recognize that our analysis can be extended in various ways. In particular, we explore SIGWs further to examine the physics of the early Universe in the
Figure 5: (Color online) The evolution of \(|I_{\rm MD,P}|\), \(|I_{\rm MD,TT}|\), \(|I_{\rm MD,CO}|\), \(|I_{\rm MD,UC}|\), \(|I_{\rm MD,TM}|\), \(|I_{\rm MD,UD}|\), and \(|I_{\rm MD,UE}|\) in the different gauges, several of which are divergent. Here, we show the kernels after removing the factor \(6/5\). Left: we take \(u=v=1\). Right: we let \(u=1\) and \(v=0.1\).
inflationary scenario and the associated PBHs. Further, we need to study the gauge dependence of SIGWs to establish a fully gauge-invariant formulation. Moreover, we will explore the waveform of the energy density \(\Omega_{\rm GW}\) and examine its relationship with the scalar power spectrum \(\mathcal{P}_{\zeta}^{2}(k)\) during RD and MD.
_Arshad Ali and Mudassar Sabir are thankful to Professor Yangui Gong for many inspiring discussions and collaborations on related topics. This work was supported by the National Natural Science Foundation of China (Grant Nos. 12175105, 12147175, 12247170, 11575083, and 11565017), the Top-notch Academic Programs Project of Jiangsu Higher Education Institutions (TAPP)._
|
2310.14992 | Bayesian Regression Markets | Although machine learning tasks are highly sensitive to the quality of input
data, relevant datasets can often be challenging for firms to acquire,
especially when held privately by a variety of owners. For instance, if these
owners are competitors in a downstream market, they may be reluctant to share
information. Focusing on supervised learning for regression tasks, we develop a
regression market to provide a monetary incentive for data sharing. Our
mechanism adopts a Bayesian framework, allowing us to consider a more general
class of regression tasks. We present a thorough exploration of the market
properties, and show that similar proposals in literature expose the market
agents to sizeable financial risks, which can be mitigated in our setup. | Thomas Falconer, Jalal Kazempour, Pierre Pinson | 2023-10-23T14:45:51Z | http://arxiv.org/abs/2310.14992v3 | # Bayesian Regression Markets
###### Abstract
Machine learning tasks are highly sensitive to the quality of the data used as input. Yet, it is often challenging for firms to obtain adequate datasets, since data is naturally distributed amongst owners who, in practice, may be competitors in a downstream market and reluctant to share information. Focusing on supervised learning for regression tasks, we develop a _regression market_ to provide a monetary incentive for data sharing. Our proposed mechanism adopts a Bayesian framework, allowing us to consider a more general class of regression tasks. We present a thorough exploration of the market properties, and show that similar proposals in the current literature expose the market agents to sizeable financial risks, which can be mitigated in our probabilistic setting.
## 1 Introduction
Data is the lifeblood of machine learning, yet for many firms, obtaining datasets of sufficient quality remains a challenge, with data naturally distributed amongst owners with heterogeneous characteristics (e.g., privacy preferences). This has motivated several developments in the field of collaborative analytics, also known as federated learning (Figure 1(a)), where models are trained on local servers without the need for data centralization, thereby preserving privacy and distributing the computational burden (Kairouz et al., 2019). However, this framework provides only an _incentive-free_ means for data sharing, relying on the critical assumption that owners are willing to collaborate (i.e., by sharing their private information) altruistically. This rather strong assumption may be violated if owners are competitors in a downstream market environment (Gal-Or, 1985). Consequently, a fruitful area of research has emerged that proposes to instead _commoditize_ data within a market-based framework, where compensation (e.g., remuneration) can be used as an incentive for collaboration (Bergemann and Bonatti, 2019).
Information economics has been a prominent concept in game theory literature since the 1980s (Gal-Or, 1985), with early works focused on incentive-free data sharing, both publicly (Morris and Shin, 2002) and within local information channels (Dahleh et al., 2016). Over the last decade, data monetization has been an increasingly discussed topic, for which the first proposals considered _data markets_ (Figure 1(b)), allowing buyers to purchase raw data from sellers through bilateral transactions (Rasouli and Jordan, 2021). Whilst this offers a seemingly practical way to acquire data from others, the value of the data to the buyer typically depends on the _analytics task_ at hand, hence pricing raw data in these markets is difficult (Cong et al., 2022), especially in the context of privacy-preservation (Acemoglu et al., 2022).
Figure 1: Schematic illustration of existing frameworks for data sharing with multiple buyers and sellers, where each figure depicts a building block consisting of a single interaction. The blue, red and green arrows indicate computational, information and monetary transactions between the buyer and the seller, respectively.
Instead, one can acknowledge that the reason a firm may procure data in the first place is often to enhance capabilities in some analytics task. Rather than viewing the value of data as an intrinsic property, it can instead be a function of any enhancement in said capabilities it can provide (Agarwal et al., 2019). This leads to opportunities for combining the notion of data markets with collaborative analytics to form _analytics markets_ (Figure 1(c)), whereby the buyer owns an analytics task (e.g., a machine learning model) and seeks to enhance its capabilities (e.g., predictive performance) by harnessing the data, and possibly the compute, of the sellers. The buyer pays for the overall capability enhancement, for which each seller is compensated based on its marginal contribution.
As in Pinson et al. (2022), we are interested in applications in which the analytics task describes a regression model along with the process for inference used for training (i.e., our attention centers on _regression markets_). This builds on current literature concerning data elicitation from strategic (Dekel et al., 2010) and privacy-sensitive (Cummings et al., 2015) owners. In this context, owners of regression models seek to enhance predictive performance, for which they have a private valuation (e.g., their value of forecast accuracy in a downstream decision-making process). Their public bids, which may not equal private valuations, are then used to set the price. Sellers propose their own data as features and are remunerated based on their marginal contributions to the improved model-fitting. The market revenue is therefore a function of both the market price and the overall enhancement in predictive performance.
In related work, these _regression tasks_ are often introduced from a frequentist perspective, which seems to contradict current trends towards probabilistic forecasting across many industries (Gneiting and Katzfuss, 2014). This would instead favour probabilistic regression models capable of directly providing the distribution of the target signal, rather than merely point-estimating a particular characteristic thereof (e.g., the expected value). Even though frequentist methods can indeed be used for probabilistic forecasting (e.g., by interpolating point-estimated quantiles), disregarding uncertainty in parameter estimation can yield overly confident predictions that misrepresent the true level of variability in the data. Accordingly, there is an incentive to adopt _Bayesian inference_, a principled framework for modelling parameter uncertainty that, in fact, subsumes many frequentist regression methods, thereby providing richer and more nuanced information about future outcomes.
Our contribution is the development of a regression market that enables Bayesian methods for regression, allowing us to consider a more general class of probabilistic regression tasks. We provide a thorough exploration of the market properties with a focus on _fair_ allocation of market revenue. We further show that frequentist-based
mechanism designs in the current literature expose the market agents to considerable financial risks, which we mitigate by re-formulating the value of a feature in terms of the information gain it provides.
The remainder of the paper reads as follows: Section 2 introduces the market agents and the design of our proposed market mechanism. Section 3 assesses the theoretical market properties of our proposal and presents methods for mitigating financial risks exhibited by the agents. Section 4 and Section 5 illustrate our findings through a set of simulation-based and real-world case studies, respectively. Finally, Section 6 gathers our conclusions and perspectives for future work.
## 2 Preliminaries
Our proposed mechanism is intended to be hosted on a platform capable of handling both the analytical (e.g., parameter inference) and market-based (e.g., revenue allocation) components together in tandem. As the market will comprise multiple agents, we define a _transaction_ as an exchange between a single _central agent_ (i.e., a buyer) and multiple _support agents_ (i.e., sellers), at a particular point in time, whereby the central agent seeks to enhance the predictive performance of a _regression task_, for which the support agents propose their own data as input features. Whilst this definition preserves the capacity for parallel transactions, we assume each is independent, thereby disregarding data exclusivity, wherein the same data can only be sold a finite number of times (Cao et al., 2017), as well as any externalities this may exert (Agarwal et al., 2020).
Although there may be multiple sellers, the enhancement in performance received by the central agent is perceived to be a function of the complete set of information available. We hence view a transaction as being between the central agent and a _single_ monopolistic support agent, a single agent with access to the complete set of features and only one item for sale, specifically the available _loss reduction_. The private valuation is assumed to equal the public bid (i.e., the valuation for a marginal improvement in model fitting). The central agent is then allocated the full performance enhancement offered by the monopolistic support agent, and the payment collected is a function of these two values. One can view this as a specification of the mechanism proposed in Agarwal et al. (2019), where the monopolistic support agent offers several possible performance enhancements, each representing varying degrees of obfuscation of the true data, characterized by the discrepancy between the bid of the central agent and the market price. Since we assume the market price to be exogenous, our work is concerned specifically with the regression analysis and subsequent revenue allocation, as opposed
to the pricing mechanism.
### Market Agents
Let \(\mathcal{A}\) denote the set of market agents, one of which \(c\in\mathcal{A}\) is the central agent seeking to enhance their forecasts. The remaining agents \(a\in\mathcal{A}_{-c}\) are support agents that propose data as input features, whereby \(\mathcal{A}_{-c}=\mathcal{A}\setminus\{c\}\). The central agent is characterized by an interest in a particular stochastic process \(\{Y_{t}\}\), defined as a set of successive random variables \(Y_{t}\) indexed over discretized time steps \(t\). Eventually, a time-series \(\{y_{t}\}\) is observed, comprising realizations from \(\{Y_{t}\}\) (i.e., one per time step). Instead of assuming that a particular characteristic of \(Y_{t}\) is sought (e.g., the expected value, a specific quantile, etc.), we rather model the entire distribution, albeit conditioned on the observed data; the characteristic extracted by the central agent is simply treated as some downstream decision-making process.
We write \(\mathbf{x}_{\mathcal{I},t}\) as the vector of input features at time \(t\), indexed by the ordered set \(\mathcal{I}\). Each agent \(a\in\mathcal{A}\) owns a subset \(\mathcal{I}_{a}\subseteq\mathcal{I}\) of indices, such that the features are distributed amongst the market agents as follows: the central agent \(c\) owns the subset \(\mathcal{I}_{c}\subset\mathcal{I}\). Each support agent \(a\in\mathcal{A}_{-c}\) also owns a subset, with indices \(\mathcal{I}_{a}\subset\mathcal{I}\), such that \(|\mathcal{I}_{c}|+\sum_{a\in\mathcal{A}_{-c}}|\mathcal{I}_{a}|=|\mathcal{I}|\). We write \(\mathcal{I}_{-c}\) as the set of indices for features owned only by the support agents.
Since the data is observed at successive time steps, we let \(\mathbf{x}_{t}=[x_{1,t},\ldots,x_{|\mathcal{I}|,t}]^{\top}\) be the vector of values for all features at time \(t\). When only a particular subset of features \(\mathcal{C}\subseteq\mathcal{I}\) is used, we add an index for the set itself, such that the vector of values for features in \(\mathcal{C}\) at time \(t\) is denoted by \(\mathbf{x}_{\mathcal{C},t}\). We write \(\mathcal{D}_{\mathcal{C},t}=\{\mathbf{x}_{\mathcal{C},t^{\prime}},y_{t^{\prime}}\}_{\forall t^{\prime}\leq t}\) to be the set of input-output pairs for a particular subset of features observed over a set of discrete time indices \(t^{\prime}\in\{1,\ldots,t\}\) up until time \(t\).
### Regression Task
To instigate a transaction, the central agent first posts a regression task to the market platform, which describes the particular model for which they seek to enhance predictive performance. We consider the problem of interpolating through data (i.e., the observations \(\{y_{t}\}\)) under the assumption that the target signal is subject to noise, whilst the input features are noise-free. Let us define an interpolant as a mapping \(f\) between a subset of features \(\mathbf{x}_{\mathcal{C},t}\) and a real-valued scalar, which may, for instance, represent the expected value of the target signal conditioned on the inputs such that
\[f:\mathbf{x}_{\mathcal{C},t}\in\mathbb{R}^{|\mathcal{C}|}\mapsto\mathbb{E}[Y_{t}\,|\,\mathbf{x}_{\mathcal{C},t}]\in\mathbb{R},\quad\forall t,\;\forall\mathcal{C}. \tag{1}\]
We focus solely on parametric regression, and further limit ourselves to functions that can be expressed as linear in their coefficients, with a view to preserve convexity and later guarantee certain market properties. The simplest regression models within this class are those which are also linear functions of the input features, and hence exhibit limited flexibility. We can however obtain a richer class of models by considering linear combinations of a fixed set of nonlinear functions (i.e., basis functions). These models maintain linearity with respect to the parameters, whilst facilitating nonlinearity with respect to the input features. Let \(\mathbf{w}_{\mathcal{C}}\in\mathbb{R}^{|\mathcal{C}|}\) be a vector of coefficients that is used to parameterize the mapping in (1), which, for notational brevity, we assume to be part of a general set of free parameters \(\Theta_{\mathcal{C}}\) that shall be inferred from data. We write \(\psi(\mathbf{x}_{\mathcal{C},t})\) to be the vector of basis functions specified by the central agent, such that the linear interpolant is given by
\[f(\mathbf{x}_{\mathcal{C},t},\mathbf{w}_{\mathcal{C}})=\mathbf{w}_{\mathcal{C} }^{\top}\psi(\mathbf{x}_{\mathcal{C},t}),\quad\forall t,\;\forall\mathcal{C}, \tag{2}\]
where we assume that the vector of basis functions under consideration invariably incorporates a dummy basis function (i.e., \(\psi_{0}(\mathbf{x}_{\mathcal{C},t})=1\), \(\forall t\)) which is included as part of the feature set owned by the central agent. We note that in general the central agent need not own any feature themselves, in which case only this dummy term is provided and all predictive performance is supplied by the features owned by support agents.
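As a concrete illustration of eq. (2), a minimal sketch of one such basis expansion (ours; the Gaussian radial basis functions, their centres, and the weights are illustrative choices, not prescriptions of the market design):

```python
import numpy as np

def design_vector(x, centres, scale=1.0):
    """Basis expansion psi(x) of eq. (2): a dummy basis psi_0 = 1
    followed by Gaussian radial basis functions around fixed centres."""
    sq_dists = ((x[None, :] - centres) ** 2).sum(axis=1)
    rbf = np.exp(-0.5 * sq_dists / scale**2)
    return np.concatenate(([1.0], rbf))

# Linear-in-parameters interpolant f(x, w) = w^T psi(x), nonlinear in x.
centres = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])  # illustrative
w = np.array([0.2, 1.0, -0.5, 0.3])                        # |centres| + 1 weights
x_t = np.array([0.5, -0.2])
f_t = w @ design_vector(x_t, centres)
```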
From a probabilistic perspective, it is favourable to describe the entire target distribution, rather than simply a particular characteristic thereof, in effort to express the uncertainty in the target signal for each value of the input features. We model the target variable as a deviation from the deterministic mapping in (2) under a zero-mean additive noise process, the parameters of which are also held in \(\Theta_{\mathcal{C}}\). In frequentist regression analyses, the free parameters in \(\Theta_{\mathcal{C}}\) would be treated as unknown yet fixed quantities, with the observed data perceived as random samples from an underlying stochastic process. Hence, any uncertainties in the parameter estimates (e.g., sampling variability, measurement error, misspecification, etc.) are disregarded.
In contrast, Bayesian inference treats the parameters themselves as random variables and aims to infer their distribution by incorporating prior beliefs, which are updated as new data is observed. Let \(\mathcal{H}\) denote a hypothesis, a set of fixed assumptions that restricts the space of possible regression models. The hypothesis contains the vector of basis functions, as well as the functional forms of two probability distributions, namely a prior distribution (i.e., plausible parameter values) and a likelihood (i.e., the probability of the data conditioned on the parameters). The regression task posted to the market platform by the central agent at time \(t\) is therefore fully described by a hypothesis and the observed data.
### Market Clearing
Each support agent posts their feature(s) to the platform in the hope of receiving monetary compensation, albeit without knowing the value of their data _a priori_. We suppose each support agent is willing to accept any nonnegative payment if their data is deemed useful. However, we acknowledge that in practice, support agents may indeed prefer to condition their participation on a minimum payment to, for instance, reflect privacy costs (Acquisti et al., 2016; Han et al., 2022b). Whilst the intention is to fairly allocate revenue amongst features based on marginal contributions to the overall improvement in some objective, in reality, certain features may worsen predictive performance and have a negative impact on the central agent, hence we make the following assumption.
**Assumption 1**: _Given the specified hypothesis, a feature selection process (e.g., cross-validation or marginal likelihood optimization) is performed by the market operator a priori (i.e., before market clearing), such that only features capable of imposing a nonnegative impact on the objective are considered._
For discussions on conventional feature selection problems, cross-validation in a Bayesian context and methods for marginal likelihood optimization, the reader is referred to Guyon and Elisseeff (2003), Watanabe and Opper (2010) and Fong and Holmes (2020), respectively. Once the entire set of required market inputs have been received, the market operator is tasked with clearing the market. In addition to feature selection, this procedure involves several steps, namely parameter inference, performance evaluation, payment collection and revenue allocation.
#### 2.3.1 Parameter Inference
Based on all of the observations up until time \(t\), we can summarize our updated beliefs about the parameters through the posterior distribution, which, by virtue of Bayes theorem, is proportional to the product of the likelihood and the prior such that
\[p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t})\propto p(\mathcal{D}_{ \mathcal{C},t}|\Theta_{\mathcal{C}})p(\Theta_{\mathcal{C}}),\quad\forall t,\; \forall\mathcal{C}. \tag{3}\]
For an arbitrary choice of prior, the posterior may not be available in closed-form, requiring methods for approximate Bayesian inference (e.g., Monte-Carlo integration) to be employed. However, for a known functional form of the likelihood, priors that are conjugate can result in posteriors with tractable, well-known densities. Although we can indeed use the entire set of observations to evaluate the posterior (i.e., batch inference), it may be more appropriate to allow the moments of this distribution to vary in time, thereby accounting for nonstationarities in any of the underlying processes that
can lead to concept drift. In a Bayesian treatment of linear regression, batch inference can be viewed as a specification of a more general _online learning_ problem, whereby the parameters are updated in a recursive manner. To see this, we re-write the expression in (3) as a series of sequential updates such that
\[p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t})\propto p(\mathcal{D}_{\mathcal{C},t}|\Theta_{\mathcal{C}})p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t-1}),\quad\forall t,\,\forall\mathcal{C}, \tag{4a}\] \[=p(\mathcal{D}_{\mathcal{C},t}|\Theta_{\mathcal{C}})\left[p(\Theta_{\mathcal{C}})\prod_{t^{\prime}<t}p(\mathcal{D}_{\mathcal{C},t^{\prime}}|\Theta_{\mathcal{C}})\right],\quad\forall t,\,\forall\mathcal{C}. \tag{4b}\]
To place greater weight on more recent data, we can augment this update step to use exponential forgetting, where the importance given to past information decreases exponentially. This generally translates to the idea of likelihood flattening, whereby we reformulate (4b) as a trade-off between the posterior at the previous time step and the original prior (i.e., before any data had been observed), thereby emulating a loss in belief with respect to the historic estimates (Peterka, 1981). This trade-off between the two distributions can be framed as the problem of finding the probability density function with minimum expected Kullback-Leibler (KL) divergence (i.e., relative entropy) between them (Kulhavy and Zarrop, 1993), which has a unique solution enabling us to replace the prior at time \(t\) in (4a) with the following:
\[p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t-1},\tau) =\operatorname*{argmin}_{p^{\ast}}\,\tau\,D_{\mathrm{KL}}\,(p^{ \ast}\|p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t-1}))+(1-\tau)\,D_{ \mathrm{KL}}\,(p^{\ast}\|p(\Theta_{\mathcal{C}})), \forall t, \tag{5a}\] \[\propto p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t-1})^{ \tau}p(\Theta_{\mathcal{C}})^{1-\tau}, \forall t, \tag{5b}\]
where the variable \(p^{\ast}\) denotes the resultant density function, \(D_{\mathrm{KL}}(\cdot\|\cdot)\in\mathbb{R}_{+}\) is the KL divergence and the parameter \(\tau\in[0,1]\) is analogous to the forgetting factor in time-weighted Least-Squares fitting (Vahidi et al., 2005). Observe that, as \(\tau\mapsto 1\), the prior information available at time \(t\) becomes identical to the posterior information at the previous time step as in (4b), emulating batch learning, whereas when \(\tau=0\), all of the previous information is _forgotten_ and we resort to the original (i.e., flat) prior. For convenience, we treat \(\tau\) as a time-invariant hyperparameter, however for a full Bayesian treatment one could also infer its value jointly, together with \(\Theta_{\mathcal{C}}\).
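As an illustration, a minimal sketch of one recursion step, assuming a Gaussian likelihood with known noise precision \(\beta\) and a Gaussian prior (cf. Assumption 4 below); since the geometric mixture in (5b) of two Gaussians is again Gaussian, its precision and precision-weighted mean interpolate linearly, which is what the sketch exploits (variable names are ours):

```python
import numpy as np

def forgetting_update(m_prev, S_prev, m0, S0, psi, y, beta, tau):
    """One step of eqs. (4a) and (5b) for Gaussian prior/posterior:
    flatten the previous posterior towards the original prior N(m0, S0)
    with forgetting factor tau, then condition on the new pair (psi, y)."""
    P_prev, P0 = np.linalg.inv(S_prev), np.linalg.inv(S0)
    # Geometric mixture of Gaussians: natural parameters interpolate linearly.
    P_prior = tau * P_prev + (1.0 - tau) * P0
    eta_prior = tau * P_prev @ m_prev + (1.0 - tau) * P0 @ m0
    # Conjugate update with the Gaussian likelihood N(y | w^T psi, 1/beta).
    P_post = P_prior + beta * np.outer(psi, psi)
    S_post = np.linalg.inv(P_post)
    m_post = S_post @ (eta_prior + beta * psi * y)
    return m_post, S_post

# Toy usage: online learning of a 2-parameter model with tau = 0.98.
rng = np.random.default_rng(1)
m, S = np.zeros(2), np.eye(2)                       # original (flat) prior
m0, S0 = m.copy(), S.copy()
for t in range(200):
    psi = np.array([1.0, rng.normal()])             # dummy basis + one feature
    y = 0.5 + 2.0 * psi[1] + rng.normal(scale=0.3)  # noisy target
    m, S = forgetting_update(m, S, m0, S0, psi, y, beta=1 / 0.3**2, tau=0.98)
```

Note that setting `tau=1.0` recovers batch inference, while `tau=0.0` discards all past observations, as discussed above.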
#### 2.3.2 Performance Evaluation
Given a set of observations up until time \(t\), we can evaluate the performance of a specific model (i.e., subset of features) by making a prediction for a time step \(t^{\ast}\), conditioned on the observed input features. For now, we consider the general case where \(t^{\ast}\) is an arbitrary time step to account for both in-sample (i.e., \(t^{\ast}\leq t\)) and out-of-sample (i.e., \(t^{\ast}>t\)) situations. In Bayesian regression analyses, a _prediction_ is typically defined to be
the computation of the posterior predictive distribution, derived by integrating out the parameters using the convolution of the likelihood with the posterior, given by
\[p(y_{t^{*}}|\mathbf{x}_{\mathcal{C},t^{*}},\mathcal{D}_{\mathcal{C},t})=\int p(y_{t^{*}}|\mathbf{x}_{\mathcal{C},t^{*}},\mathcal{D}_{\mathcal{C},t},\Theta_{\mathcal{C}})p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t})d\Theta_{\mathcal{C}},\quad\forall\mathcal{C}, \tag{6}\]
which for brevity we hereafter write as \(p(y_{t^{*}}|\mathbf{x}_{\mathcal{C},t^{*}})\), omitting the training dataset. In order to evaluate predictive performance, we define an objective function \(\ell\). If a model describing a particular characteristic of \(Y_{t}\) is sought, then this objective function could be set as a direct function of the residuals (i.e., by extracting the corresponding point from the predictive distribution). However, as we intend to provide the entire predictive distribution, we can generally define \(\ell\) as a function of the predictive likelihood (i.e., \(\ell_{\mathcal{C},t^{*}}:p(y_{t^{*}}|\mathbf{x}_{\mathcal{C},t^{*}})\mapsto\mathbb{R}\)), assuming the following.
**Assumption 2**: _The mapping \(\ell\) is a negatively-oriented strictly proper scoring rule. Accordingly, it holds that: (i) for any two models, the one that provides the more accurate description of the data will render a lower score; and (ii) the lowest score is uniquely obtained when the prediction converges to the true distribution._
In the context of online exponential forgetting, evaluating \(\ell\) at each time step can be perceived as a recursive and adaptive time-varying estimator of its expected value; adaptive in the sense that a greater weight is placed on more recent data. Hence, the in-sample estimate of \(\mathbb{E}[\ell]\) for a particular subset of features at time \(t\) with respect to (6) can be described by the following recursion:
\[\mathbb{E}[\ell_{\mathcal{C}}]_{t}=(1-\tau)\ell_{\mathcal{C},t}+\tau\,\mathbb{E}[\ell_{\mathcal{C}}]_{t-1},\quad\forall t,\,\forall\mathcal{C}. \tag{7}\]
To consider the case of out-of-sample evaluation (e.g., if \(t\) is the next available time step), we simply replace \(\mathcal{D}_{\mathcal{C},t}\) in (7) with \(\mathcal{D}_{\mathcal{C},t-1}\) (i.e., the most recent observations). It should be noted that for Bayesian model comparison, one would generally prefer to consider the _marginal_ likelihood instead, which quantifies the joint probability of the data, thereby penalizing implausible over-parameterized models that may generalize poorly (MacKay, 1992). However, we consider the predictive likelihood sufficient for evaluation given we are solely concerned with predictive performance.
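Under the conjugate Gaussian setting later adopted in Assumption 4 (with known noise precision \(\beta\)), the posterior predictive (6) is Gaussian with mean \(\mathbf{m}^{\top}\psi(\mathbf{x})\) and variance \(\beta^{-1}+\psi(\mathbf{x})^{\top}\mathbf{S}\,\psi(\mathbf{x})\). A minimal sketch of the resulting log-score and its recursive estimator (7) (ours; the negative log predictive likelihood used as \(\ell\) anticipates eq. (12), and variable names are illustrative):

```python
import numpy as np

def predictive_nll(psi, y, m, S, beta):
    """Negative log of the Gaussian posterior predictive (6):
    y | x ~ N(m^T psi, 1/beta + psi^T S psi)."""
    mu = m @ psi
    var = 1.0 / beta + psi @ S @ psi
    return 0.5 * (np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

def score_update(score_prev, psi, y, m, S, beta, tau):
    """One step of the adaptive time-varying estimator in eq. (7)."""
    return (1 - tau) * predictive_nll(psi, y, m, S, beta) + tau * score_prev

# Toy usage with a 2-dimensional basis vector.
m, S, beta = np.zeros(2), np.eye(2), 4.0
psi, y = np.array([1.0, 0.3]), 0.7
score = score_update(0.0, psi, y, m, S, beta, tau=0.98)
```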
#### 2.3.3 Payment Collection
Once the predictive performance of the complete set of input features has been evaluated, the payment can be collected from the central agent. As well as a regression task, our market requires the central agent to post to the platform their public bid, denoted by \(\lambda\in\mathbb{R}_{+}\), which represents an exogenous linear mapping between a unit improvement in \(\ell\) and the corresponding downstream monetary reward that would be earned. The _market price_ is solely dependent on this valuation, which instills the ideology that the value of data is derived from the enhanced predictive performance it provides, rather than the raw data itself. We do acknowledge the weakness of this linearity assumption, as in practice, \(\lambda\) may be, for instance, a logarithmic function of the central agent's revenue (i.e., further reductions in \(\ell\) may provide diminishing returns), albeit with exponential costs for the support agents. Nevertheless, we leave it as future work to explore the optimal functional form of \(\lambda\). Lastly, the total market revenue at time \(t\) is equal to the payment collected from the central agent, denoted \(\pi_{c,t}\), which is a function of \(\lambda\), as well as the overall improvement in the objective, such that
\[\pi_{c,t}=\lambda\left(\mathbb{E}[\ell_{\mathcal{I}_{c}}]_{t}-\mathbb{E}[\ell_ {\mathcal{I}}]_{t}\right),\quad\forall t. \tag{8}\]
#### 2.3.4 Revenue Allocation
Once the market has been cleared, the natural question that follows is: _how can we fairly allocate market revenue amongst support agents?_ To answer this question, several auction-based setups have been proposed, for both welfare-maximizing and revenue-maximizing mechanisms, pertaining to topics such as privacy preservation (Koutsopoulos et al., 2015), data exclusivity (Cao et al., 2017) and the negative externalities exhibited by the market agents (Agarwal et al., 2020). Other methods bear upon interpretability in machine learning, adopting well-established solution concepts (namely, semivalues) for the problem of attribution in cooperative game theory to allocate revenue amongst support agents directly (Dubey et al., 1981). The benefit of this approach is that these solution concepts are generally characterized by a collection of axioms that yield desirable market properties by design (Ghorbani and Zou, 2019), specifically: symmetry, efficiency, null-player and additivity. For a definition of these axioms, the reader is referred to Chalkiadakis et al. (2011).
If we frame features as players and their interactions as a cooperative game, the semivalue of a feature can be defined as its expected marginal contribution towards a set of other features, weighted solely based on the size of the sets. For many applications, the semivalue of choice is the _Shapley value_ (Shapley, 1997), the unique value that satisfies all of the four axioms stated previously. Given the set \(\mathcal{I}_{-c}\) of indices corresponding to features owned by the support agents, let \(v:\mathcal{C}\in\mathcal{P}(\mathcal{I}_{-c})\mapsto\mathbb{R}\) be a characteristic function that maps the power set \(\mathcal{P}(\mathcal{I}_{-c})\) of all features with indices in \(\mathcal{I}_{-c}\) to a real-valued scalar, where the set \(\mathcal{C}\) denotes a coalition in the cooperative game. If we further let \(\mathcal{C}^{\prime}=\mathcal{C}\cup\mathcal{I}_{c}\), for all \(\mathcal{C}\subseteq\mathcal{I}_{-c}\), be the union of the set of indices owned by the central agent and a
particular subset of indices owned by the support agents, the Shapley value is given by
\[\phi_{i,t}=\sum_{\mathcal{C}\in\mathcal{P}(\mathcal{I}_{-c}\setminus\{i\})}\frac{ |\mathcal{C}|!(|\mathcal{I}_{-c}|-|\mathcal{C}|-1)!}{|\mathcal{I}_{-c}|!}\,m_{t }(\{i\},\mathcal{C}^{\prime}),\quad\forall i\in\mathcal{I}_{-c},\,\forall t, \tag{9}\]
where \(m_{t}(\{i\},\mathcal{C})\) describes the marginal contribution of the \(i\)-th feature to coalition \(\mathcal{C}\), conventionally defined as \(m_{t}(\{i\},\mathcal{C})=v_{t}(\mathcal{C})-v_{t}(\mathcal{C}\cup\{i\})\) with respect to the characteristic function. The weight in this discrete expectation assigned to each coalition is defined as such to avoid unnecessary calculations of the marginal contribution of the \(i\)-th feature to permutations of the same coalition, which would have equal value by virtue of the symmetry axiom. For instance, \(m_{t}(\{i\},\{j,k\})\equiv m_{t}(\{i\},\{k,j\})\), \(\forall(j,k)\in\mathcal{I}_{-c}\setminus\{i\},\ j\neq k\), thus it is computationally favourable to avoid making this calculation twice.
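For small feature sets, eq. (9) can be evaluated exactly by enumerating the power set. A minimal sketch (ours; the characteristic function below is a stand-in for \(v_{t}(\mathcal{C})=\mathbb{E}[\ell_{\mathcal{C}^{\prime}}]_{t}\), with the central agent's own features absorbed into the base loss):

```python
from itertools import combinations
from math import factorial

def shapley(features, v):
    """Exact Shapley values per eq. (9). `features` indexes the support
    agents' features; `v` maps a frozenset of indices to an expected loss."""
    n = len(features)
    phi = {}
    for i in features:
        rest = [j for j in features if j != i]
        total = 0.0
        for size in range(n):
            for C in combinations(rest, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                # Marginal contribution: loss reduction from adding feature i.
                total += w * (v(frozenset(C)) - v(frozenset(C) | {i}))
        phi[i] = total
    return phi

# Placeholder characteristic function: loss shrinks with each added feature.
base_loss, gains = 1.0, {1: 0.3, 2: 0.2, 3: 0.1}
v = lambda C: base_loss - sum(gains[i] for i in C)
phi = shapley([1, 2, 3], v)
assert abs(sum(phi.values()) - (v(frozenset()) - v(frozenset({1, 2, 3})))) < 1e-12
```

The final assertion illustrates the efficiency axiom, which underpins the budget balance property established in Section 3.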
We acknowledge that there is indeed a rich collection of semivalues to choose from, for example the _Banzhaf value_ (Lehrer, 1988), which is an unweighted average of the marginal contribution of a feature towards coalitions of other features, satisfying all but the efficiency axiom, as well as the _Leave-one-out value_, a simple Vickrey-Clarke-Groves mechanism which attributes each feature its marginal contribution towards the grand coalition. The particular choice of semivalue thus depends on the desired properties of the market. For instance, whilst the Banzhaf value violates the efficiency axiom, it may offer greater robustness to malicious behaviour, whereby a support agent replicates its data, acting under multiple identities to maximize revenue (Han et al., 2022). Likewise, although simple to implement, the Leave-one-out value may fall short when features are not independent and when the regression model is non-separable or nonlinear. We choose to adopt the Shapley value due to its appealing uniqueness in satisfying the four axioms, the first two of which are often used as criteria for fairness (van den Brink, 2002).
The use of semivalues in this context is, however, not straightforward in general, as there exist several methods for representing a machine learning model as a cooperative game (Covert et al., 2021), each with causal nuances that may be suited to particular contexts (Chen et al., 2020; Janzing et al., 2020). To avoid taking causality into consideration, we hereby make the following simplifying assumption.
**Assumption 3**: _Any two features available in the market are statistically independent (e.g., potentially as a result of Assumption 1), that is, \(p(x_{i,t}|x_{j,t})=p(x_{i,t})\ \forall(i,j)\in\mathcal{I},\ i\neq j,\forall t\)._
We do acknowledge that this is a particularly strong assumption, and encourage an exploration of the causal effects amongst correlated features within our Bayesian framework, and the implications for the market design therein, as future work. This Shapley-based attribution policy can then be used to allocate the market revenue amongst the support agents. First, note that since our estimator of the expected objective varies with time (i.e., in an online learning environment), the attributions are time-varying too, as is the payment of the central agent described in (8). In line with (7), the estimated _expected_ Shapley value at time \(t\) is given by
\[\mathbb{E}[\phi_{i}]_{t}=(1-\tau)\phi_{i,t}+\tau\mathbb{E}[\phi_{i}]_{t-1},\quad \forall i\in\mathcal{I}_{-c},\,\forall t. \tag{10}\]
Given we evaluate (10) for each feature, the overall payment received by each support agent is simply given by
\[\pi_{a,t}=\sum_{i\in\mathcal{I}_{a}}\lambda\,\mathbb{E}[\phi_{i}]_{t},\quad \forall a\in\mathcal{A}_{-c},\,\forall t. \tag{11}\]
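Putting eqs. (10) and (11) together, a minimal sketch of the per-transaction revenue allocation step (ours; agent identifiers, feature indices and numerical values are purely illustrative):

```python
def allocate(phi_t, ephi_prev, agent_features, lam, tau):
    """Smooth the per-feature Shapley values as in eq. (10), then pay each
    support agent lambda times the sum over its own features, eq. (11)."""
    ephi = {i: (1 - tau) * phi_t[i] + tau * ephi_prev[i] for i in phi_t}
    payments = {a: lam * sum(ephi[i] for i in feats)
                for a, feats in agent_features.items()}
    return ephi, payments

# Toy usage: two support agents owning features {1, 2} and {3}.
ephi, pay = allocate(phi_t={1: 0.30, 2: 0.20, 3: 0.10},
                     ephi_prev={1: 0.25, 2: 0.25, 3: 0.05},
                     agent_features={"a1": [1, 2], "a2": [3]},
                     lam=10.0, tau=0.98)
```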
### Market Stages
Finally, we need to consider the fact that in practice, machine learning pipelines are typically divided into in-sample (i.e., training) and out-of-sample (i.e., testing) stages. The first stage involves parameter estimation using observed input-output pairs, which we accomplish using Bayesian inference to derive the posterior distribution. At the second stage, a trained model is used for genuine forecasting on previously unseen data, testing its capacity to generalize beyond the training set.
Given these two stages are distinct, it is necessary to differentiate between in-sample and out-of-sample data valuation. Not only is the in-sample value of a feature merely an estimate of its actual value towards genuine forecasting, but the central agent's valuation for an in-sample improvement in the objective will be correlated with the out-of-sample performance of the model, as any downstream decision-making processes typically occur at this stage. We therefore adopt the two-stage regression market model proposed in Pinson et al. (2022), that is, the value of a feature is assessed based on marginal contributions to both the in-sample and out-of-sample estimates of \(\mathbb{E}[\ell]\), albeit in separate transactions.
## 3 Market Properties
Since we have assumed the market price to equal the public bid of the central agent, the remaining design decision that will influence the properties of the market is related to the Shapley value-based attribution policy, specifically the choice of characteristic function used to value a particular coalition of features. Whilst Shapley values are emerging as the _de facto_ tool for interpreting predictions from complex machine learning models (Guyon and Elisseeff, 2010; Sundararajan and Najmi, 2020; Tsai et al., 2023), their application within a probabilistic context is not yet as well understood. Difficulties arise from the need to compare predictions obtained from different subsets of input
features, which can be less straightforward when the model output is a probability distribution as opposed to a scalar. In general, at a given time \(t\) we set the characteristic function to be equal to the current estimate of the expected value of the objective as in (7), such that we write the characteristic function as \(v_{t}(\mathcal{C})=\mathbb{E}[\ell_{\mathcal{C}}]_{t}\), hence the design decision is not the characteristic function itself _per se_, but the particular functional form of \(\ell\), which recall maps the predictive likelihood to a real value.
In this section we introduce the following market designs: (i) \(\mathcal{M}^{\text{MLE}}_{\text{NLL}}\) -- a frequentist framework based on maximum likelihood estimation (MLE) which values features using the negative logarithm of the posterior predictive likelihood (NLL), (ii) \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\) -- the analogue of \(\mathcal{M}^{\text{MLE}}_{\text{NLL}}\) now in a Bayesian linear regression (BLR) framework, and (iii) \(\mathcal{M}^{\text{BLR}}_{\text{KL}-v}\) and \(\mathcal{M}^{\text{BLR}}_{\text{KL}-m}\) -- BLR frameworks that instead value features based on the information gain they provide, measured using the KL divergence.
### Likelihood-based Allocation
In related work, such as Agarwal et al. (2019), emphasis is placed on frequentist regression analyses, whereby a point-estimate of the model parameters is obtained. As for our case, this too typically involves modelling the target signal as a deviation from (2) under an additive noise process. One can indeed still then obtain probabilistic forecasts, for instance, using maximum likelihood estimation; if a Gaussian likelihood function is assumed, the maximum likelihood estimate of the coefficients of the interpolant describe the conditional mean, whilst the estimated variance expresses the noise, or the unexplained variability, in the target. The characteristic function can then simply be set to the expected value of the NLL function, or even some arbitrary negatively-oriented convex function of the residuals when adopting a fully deterministic framework (e.g., Least-Squares). We denote this frequentist market design by \(\mathcal{M}^{\text{MLE}}_{\text{NLL}}\). In this section, we shall analyze the market properties obtained by extending this idea to its Bayesian analogue.
In our Bayesian treatment of regression analyses, we have access to the joint posterior distribution, which represents plausible values for the free parameters. Accordingly, the revenue allocation derived from using any random sample from this distribution could be considered plausible with respect to the frequentist design. However, for a central agent that partakes in risk-informed decision-making downstream, a natural incentive arises to provide the most nuanced representation of uncertainty, that is, the predictive distribution derived by marginalizing over the entire space of parameters. A reasonable candidate for the characteristic function is therefore simply the NLL, which now incorporates the uncertainty in the parameter estimates via (6) (with \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\) denoting the corresponding market design), such that
\[\ell_{\mathcal{C},t}=-\log(p(y_{t}|\mathbf{x}_{\mathcal{C},t})),\quad\forall t,\, \forall\mathcal{C}. \tag{12}\]
For this definition to satisfy Assumption 2, it is required that the posterior predictive likelihood is log-concave. Many common probability distributions are indeed log-concave, and could therefore be utilized easily in a frequentist (i.e., maximum likelihood) framework. However, in order to avoid approximation errors inherent to general Bayesian inference, it is necessary for the posterior to be available in closed-form, thus the prior should be conjugate to the likelihood, and the space of such conjugate pairs that leads to a log-concave posterior predictive likelihood is limited. Therefore, to adhere to this, as well as for mathematical convenience, we further assume the following, and leave the exploration of alternative hypotheses to future work.
**Assumption 4**: _The specified hypothesis \(\mathcal{H}\) comprises a Gaussian likelihood function along with a conjugate uninformative Gaussian prior._
Note that, whilst this assumption is restrictive, it is in fact common in practice (i.e., it is a byproduct of simply using a quadratic function of the residuals as the objective in frequentist regression), and still permits non-Gaussian data generating processes, but merely induces misspecifications in such a case.
It is also worth highlighting the tangible benefits to the central agent of merely facilitating the transition from frequentist to Bayesian regression analyses. Not only does the additional element of predictive uncertainty provide richer and more nuanced information about future outcomes, but maximum likelihood estimation also has a tendency to yield implausibly overparameterized models that generalize poorly to out-of-sample analyses. This is especially true when the number of training observations is limited, since increasing model complexity inevitably results in a _better_ in-sample fit (i.e., overfitting). In contrast, Bayesian methods inherently embody _Occam's razor_ (i.e., a proclivity towards simplicity) by exploiting prior knowledge that induces regularization without the need for ad-hoc penalty terms, thereby facilitating well-calibrated uncertainty estimates using training data alone, without any hold-out data analysis, which can be both computationally expensive and wasteful of valuable observations.
We shall now explore the key properties obtained in our proposed extension towards a Bayesian regression market mechanism. These properties are derived from the axioms that characterize the semivalue, all four of which are satisfied by the Shapley value. We first present the properties that we categorize as _universal_, those which are guaranteed to be satisfied under all circumstances.
**Theorem 1**: _Our proposed framework for Bayesian regression markets based on Shapley allocation yields the following universal market properties._
1. _Symmetry -- two features_ \(x_{i,t}\) _and_ \(x_{j,t}\) _with equal marginal contribution to any coalition receive the same attribution, that is,_ \(\forall\mathcal{C}\subseteq\mathcal{I}_{-c}\setminus\{i,j\}:v_{t}(\mathcal{C}\cup\{i\})\equiv v_{t}(\mathcal{C}\cup\{j\})\mapsto\phi_{i,t}\equiv\phi_{j,t},\ \forall(i,j)\in\mathcal{I}_{-c},\ i\neq j,\ \forall t\)_._
2. _Linearity -- for any two features_ \(x_{i,t}\) _and_ \(x_{j,t}\)_, their joint contribution to a particular coalition of other features is equal to the sum of their marginal contributions, that is,_ \(v_{t}(\mathcal{C}\cup\{i\})+v_{t}(\mathcal{C}\cup\{j\})=v_{t}(\mathcal{C}\cup\{i,j\}),\ \forall\mathcal{C}\subseteq\mathcal{I}_{-c}\setminus\{i,j\},\ \forall t\)_._
3. _Budget balance -- the payment of the central agent is equal to the sum of revenues received by the support agents, that is,_ \(\pi_{c,t}\equiv\sum_{a\in\mathcal{A}_{-c}}\pi_{a,t},\ \forall t\)_._
_Proof_: _Omitted, since each universal property follows directly from the semivalue axioms satisfied by the Shapley value._
Symmetry and linearity are inherited directly from the corresponding axioms; symmetry assures attributions are invariant to permutation of indices, equivalent to the anonymity property in Lambert et al. (2008), whilst linearity removes any incentive for a support agent to strategically package their features, ensuring that revenue remains consistent regardless of whether the features are offered individually or as a bundle. Similarly, budget balance is a byproduct of the efficiency axiom, which states that the total attribution allocated to all features should sum to the value of the grand coalition, that is, \(v_{t}(\mathcal{I})=\sum_{i\in\mathcal{I}_{-c}}\phi_{i,t},\ \forall t\). Accordingly, given the definitions in (8) and (11), it holds universally that the total sum of the revenues of each of the support agents equals the payment collected from the central agent.
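To illustrate how such an allocation is computed, the following sketch evaluates exact Shapley values by enumerating all coalitions; the callable `value` stands in for whichever characteristic function \(v_t\) the market adopts, and the exponential enumeration is viable only for the small feature sets considered in this paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(features, value):
    # Exact Shapley attribution: phi_i is the weighted average marginal
    # contribution of feature i over all coalitions of the other features.
    n = len(features)
    phi = {}
    for i in features:
        others = [j for j in features if j != i]
        total = 0.0
        for r in range(n):
            for C in combinations(others, r):
                C = frozenset(C)
                w = factorial(len(C)) * factorial(n - len(C) - 1) / factorial(n)
                total += w * (value(C | {i}) - value(C))
        phi[i] = total
    return phi
```

The efficiency axiom can be checked directly: `sum(phi.values())` equals `value(frozenset(features)) - value(frozenset())`, which mirrors budget balance whenever the empty coalition has zero value.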
In addition to these universally held market properties, our proposed Bayesian regression market mechanism further obtains a collection of properties that we hereafter refer to as _asymptotic_, those which can only be guaranteed up to sampling uncertainty, and as such hold less generally.
**Theorem 2**: _Our proposed framework for Bayesian regression markets based on Shapley allocation yields the following asymptotic market properties._
1. _Individual rationality -- support agents have a weak preference for participating in the market rather than not participating, that is,_ \(\pi_{a,t}\geq 0,\forall a\in\mathcal{A}_{-c},\ \forall t\)_._
2. _Zero-element -- a support agent that provides no feature, or only provides features with zero marginal contribution to all coalitions of other features, should receive no payment, that is,_ \(\forall\mathcal{C}\subseteq\mathcal{I}_{-c}:v_{t}(\mathcal{C}\cup\{i\})\equiv v_{t}(\mathcal{C}),\ \forall i\in\mathcal{I}_{a}\mapsto\pi_{a,t}=0,\ \forall t\)_._
3. _Truthfulness -- support agents receive their maximum potential payment when reporting their true data, that is,_ \(v_{t}(\mathcal{C};\mathbf{x}_{\mathcal{C},t})\geq v_{t}(\mathcal{C};\mathbf{x}_{\mathcal{C},t}+\boldsymbol{\eta}_{t})\)_,_ \(\forall\mathcal{C}\subseteq\mathcal{I}_{-c}\)_,_ \(\forall t\)_, where_ \(\boldsymbol{\eta}_{t}\) _represents noise added to the original feature._
_Proof: Individual rationality follows directly from Assumption 1 and zero-element follows directly from the null-player axiom of semivalues satisfied by the Shapley value. For a proof of truthfulness, see Appendix A._
For the sake of illustration, suppose for now that given a particular transaction, our posterior estimates are such that they are indistinguishable from the Dirac measure (i.e., a point mass) around the _true_ parameter values. As such, our estimate of the expected value of the objective also converges to the _true_ value in this case, and the asymptotic properties can instead be considered universal. Individual rationality would follow directly from Assumption 1, since given \(\phi_{i,t}\geq 0\), \(\forall i\in\mathcal{I}_{-c}\), \(\forall t\), it readily follows from definitions (8) and (11) that payments can only be nonnegative. Similarly, the zero-element property, inherited from the null-player axiom, would hold by design: if no feature is reported to the market then trivially no revenue is allocated, and if instead the true coefficient associated with a feature is zero, so too would be the associated revenue.
Finally, truthfulness ensures incentive compatibility, that is, there is an incentive for support agents to report their true feature data. We assume that if a support agent is to provide an untruthful report of their data, they do so through the addition of centred noise with finite variance. Building on Assumption 3, noise added to a particular feature is uncorrelated with noise added to any other, and conditionally independent of the target given the feature.
**Corollary 3**: _Following Assumptions 2, 3 and 4 the revenue of each of the support agents exhibits a unique maximum when each reports their true feature data. **Proof:** See Appendix A._
Given Corollary 3, if one or more of the features are reported untruthfully, the mean of the posterior is equivalent to the coefficient vector obtained by minimizing the in-sample sum-of-squares error with the addition of a quadratic regularization term. This can be perceived as an implementation of Ridge regression (Hoerl and Kennard, 1970), which can be derived in a Bayesian setting using the original features in combination with an informative prior. The result is a shrinkage of the regression coefficients and an endogeneity bias, which reduce the variability in the predictive distribution induced by parameter uncertainty, all the while reducing the associated revenue. However, even in this idealistic case wherein the true posterior, and hence the true expected loss, is known, these properties can only be guaranteed in-sample, and may not generalize to
the out-of-sample market stage, especially for nonstationary processes. For instance, on the subject of truthfulness, whilst untruthful reports emulate regularization thereby lessening in-sample likelihood, this in turn may lead to a reduction in overfitting, thereby potentially improving out-of-sample performance. These issues pertain to the rich field of generalization in machine learning, for which bounds can typically only be attained under strict assumptions about the data generating processes (Mohri et al., 2018). We leave a thorough examination of the generalization characteristics of these market properties to future work.
However, we must still acknowledge that in practice the true posterior is unknown, that is, only an in-sample estimate of its moments is available. In consequence, the asymptotic market properties cannot generally be guaranteed even in-sample. To make certain these properties hold at least up to sampling uncertainty, we assume that the specified hypothesis is such that, as more data is observed, the posterior distribution converges almost surely to the Dirac measure around the maximum likelihood estimate of the parameter values, that is
\[D_{\text{KL}}(p(\Theta_{\mathcal{C}}|\mathcal{D}_{\mathcal{C},t})\,\|\,\delta(\Theta_{\mathcal{C}}^{\star}))\xrightarrow{t\to\infty}0,\quad\forall\mathcal{C}, \tag{13}\]
where \(\delta(\cdot)\) is the probability density function of the Dirac delta distribution and \(\Theta^{\star}\) is the maximum likelihood estimate of the parameters. This assumption implies asymptotic consistency of models that are well-specified. Whilst in practice model misspecification is inevitable, concentration around the maximum likelihood estimate is sufficient to guarantee the properties in Theorem 2 hold up to sampling uncertainty.
However, since these properties theoretically only hold in expectation, it is likely that they will be violated in a single shot of the market. We note that violation of the asymptotic market properties imparts no negative impact on the central agent with respect to predictive performance, provided the objective is minimized given the observed data. The same cannot be said for the support agents, who may be exposed to considerable financial risks, especially when a limited number of observations are available, where sub-optimal estimates of the moments of the posterior distribution could lead to massively distorted payments. This issue would be exacerbated in the out-of-sample market stage, for which the in-sample estimate of the posterior may be less efficient. Seeking to account for these risks, we explore alternative formulations of the characteristic functions in Section 3.2.
Lastly, we want to address a few additional properties of similar market mechanisms proposed in related work. In particular, Lambert et al. (2008) introduce _normality_ in the context of wagering mechanisms, which in fact holds universally in our setup, albeit reliant upon Assumption 3, translating to: the relative revenue of a particular support
agent increases either when the absolute importance of their feature(s) increases, or when the absolute importance of another support agent's feature(s) decreases. The same authors also introduce _sybilproofness_ and _monotonicity_, which are not deemed relevant to our setup. Another property frequently discussed in literature is that of _robustness to replication_, which states that no support agent should be able to increase their revenue simply by replicating their data - a crucial property to consider since data can in theory be replicated at zero marginal cost. Whilst several mechanism designs have been proposed to satisfy this property (e.g., Agarwal et al. (2019), Ohrimenko et al. (2019), Han et al. (2022)), its satisfaction generally comes at a cost, for instance the proposal in Agarwal et al. (2019) sacrifices budget balance and remains exposed to spiteful agents (i.e., those which are interested in minimizing the revenue of the other agents as well as in maximizing their own profits). Therefore, data replication, and robustness thereto, remains an open challenge; we leave exploration of this topic in relation to our setup as future work.
### Risk Exposure Reduction
In an effort to reduce the financial risks borne by the support agents, we explore alternative methods for valuing coalitions within our Shapley value-based attribution policy. Our approach is somewhat inspired by recent works concerning multi-class classification, whereby the model output is instead a discrete probability distribution. In this setting, Covert et al. (2020) demonstrate that model comparison can be conceptualized as the relative mutual information. However, this requires explicit computation of the joint distribution over the observed data, which may be intractable when dealing with continuous distributions, necessitating expensive approximation (Kraskov et al., 2004).
A compelling variation was presented in Agussurja et al. (2022), wherein rather than focusing on predictive performance, multiple data owners instead seek to perform joint inference of a set of parameters using their combined datasets. The value assigned to a particular subset of input features is then given by the information gain on the _true_ parameters measured using the KL divergence of the joint posterior from a common prior. This is, however, not immediately applicable to our setup, as we are indeed interested not only in learning the parameters, but in compensating support agents based on their contribution to overall predictive performance. In addition, the posterior distribution is shown to assign infinite density to the _true_ parameters in the limit. As a result, the Shapley values, and subsequent revenue allocations, converge to infinity given a fixed valuation, \(\lambda\). That being said, we can instead utilize the information gain by considering the posterior predictive distributions, which inherently encapsulate
the utility of the features in relation to predictive performance. In the following, we derive two methods for utilizing the KL divergence in our setup, demonstrating the implications on the market properties.
#### 3.2.1 Marginal Contribution
Under Assumption 1, each of the features available to the market is considered weakly informative, therefore the addition of any one feature to a coalition at worst will not impact predictive performance. Hence, we can express the marginal contribution of a feature to a particular coalition as the additional information that it provides, that is, the KL divergence from the predictive distribution _without_ to the predictive distribution _with_ the particular feature \((\mathcal{M}^{\text{BLR}}_{\text{KL}-m})\), such that
\[m_{t}(\{i\},\mathcal{C})=\mathbb{E}[D_{\text{KL}}(p(y_{t}|\mathbf{x}_{\mathcal{C}\cup\{i\},t})\,\|\,p(y_{t}|\mathbf{x}_{\mathcal{C},t}))],\quad\forall i,\;\forall\mathcal{C}. \tag{14}\]
Here we remove the original characteristic function altogether and replace it with a function that maps the predictive distribution of both coalitions to a real-valued scalar. Given Assumption 4, we can generally formulate the KL divergence as the expected value of the logarithm of the Radon-Nikodym derivative, since any two univariate Gaussian distributions satisfy absolute continuity.
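Since, under Assumption 4, every posterior predictive distribution is univariate Gaussian, the divergence in (14) is available in closed form; below is a minimal sketch, with function names of our own choosing.

```python
import numpy as np

def kl_gaussian(mu1, var1, mu2, var2):
    # Closed-form KL(N(mu1, var1) || N(mu2, var2)); absolute continuity
    # holds for any two univariate Gaussians with positive variance.
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def marginal_contribution_kl_m(pred_with, pred_without):
    # Marginal contribution per (14): the expected KL divergence from the
    # predictive without feature i to the predictive with it. Each argument
    # is a sequence of (mean, variance) pairs, one pair per observation.
    return np.mean([kl_gaussian(m1, v1, m0, v0)
                    for (m1, v1), (m0, v0) in zip(pred_with, pred_without)])
```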
**Corollary 4**: _The definition in (14) yields payments asymptotically equivalent to those obtained with \(\ell\) set as the NLL, as in our original definition. **Proof**: See Appendix B._
Despite this asymptotic equivalence, the impact of using the KL divergence as described in (14) becomes apparent when the number of observations is limited; the resultant revenue allocations will be less volatile, reducing risk exposure of the support agents. This results from the fact that the KL divergence accounts only for the relative entropy, thereby providing a more robust comparison by considering the overall information held within the distributions rather than the specific observations of the target signal, which can be distorted by outliers.
**Theorem 5**: _Replacing the expression for the marginal contribution with the definition in (14) alters the market properties in Theorems 1 and 2 as follows: individual rationality becomes a universally held property at the expense of budget balance violation, whilst the remnant properties remain unchanged. **Proof**: Individual rationality follows directly from Gibbs' inequality. For a proof of the loss of budget balance, see Appendix C._
The KL divergence satisfies Gibbs' inequality, which states that relative entropy is always nonnegative. Hence under Assumption 1, any allocations will be weakly positive and individual rationality will hold universally by design. However, reducing
the definition of marginal contribution to a single inseparable expression removes the telescoping sum structure of the original Shapley value formulation, which in turn leads to a violation of the efficiency axiom and hence budget balance. For brevity, we shall omit a proof for the remaining properties by virtue of similarity to Theorems 1 and 2. Although budget balance is violated, the universal satisfaction of individual rationality theoretically removes the most severe financial risks exhibited by the support agents, as they are guaranteed a nonnegative revenue. We see this as a similar trade-off exhibited in Agarwal et al. (2019) in pursuit of robustness to replication, that is, the addition of financial security is simply paid for by the market.
#### 3.2.2 Characteristic Function
If violation of budget balance is impractical, one can instead use the KL divergence in a manner that more closely resembles that presented in Agussurja et al. (2022), whereby the value of a coalition is defined as the information gain relative to a common prior; however, instead of considering the prior and posterior parameter distributions, we set the common prior to be the predictive distribution of the central agent, such that
\[v_{t}(\mathcal{C})=\mathbb{E}[D_{\text{KL}}(p(y_{t}|\mathbf{x}_{\mathcal{C},t})\,\|\,p(y_{t}|\mathbf{x}_{\mathcal{I}_{c},t}))],\quad\forall t,\;\forall\mathcal{C}. \tag{15}\]
Now we have instead only re-defined the characteristic function \((\mathcal{M}_{\text{KL}-v}^{\text{BLR}})\) such that the marginal contribution is still given by \(m_{t}(\{i\},\mathcal{C})=v_{t}(\mathcal{C})-v_{t}(\mathcal{C}\cup\{i\})\).
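A corresponding sketch for this design, reusing `kl_gaussian` from above; the dictionary `predictives`, mapping each coalition to the per-observation moments of its predictive distribution, is a hypothetical container introduced purely for illustration.

```python
import numpy as np

def coalition_value_kl_v(pred_coalition, pred_central):
    # Characteristic function per (15): expected information gain of the
    # coalition's predictive over the central agent's own predictive.
    return np.mean([kl_gaussian(mc, vc, m0, v0)
                    for (mc, vc), (m0, v0) in zip(pred_coalition, pred_central)])

def marginal_contribution_kl_v(C, i, predictives):
    # Retains the telescoping structure: m_t({i}, C) = v_t(C) - v_t(C u {i}).
    central = predictives[frozenset()]  # predictive using only x_{I_c}
    return (coalition_value_kl_v(predictives[C], central)
            - coalition_value_kl_v(predictives[C | frozenset({i})], central))
```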
**Theorem 6**: _Valuing a coalition as described in (15) yields payments asymptotically equivalent to those obtained with \(\ell\) set as the NLL, as in our original definition. **Proof**: See Appendix D._
**Corollary 7**: _Valuing a coalition as described in (15) preserves the market properties in Theorems 1 and 2. **Proof**: Omitted due to similarity to the proofs of those theorems._
Since we retain the telescoping sum structure of the Shapley value, budget balance is reinstated as a universal property. However, individual rationality is reduced back to an asymptotic property. This follows from the fact that the marginal contribution now involves the subtraction of expected KL divergences, for which Gibbs' inequality no longer applies. That being said, this design should still provide us with less volatile allocations relative to \(\mathcal{M}_{\text{NLL}}^{\text{MLE}}\) when a limited number of observations are available. Hence, one should still expect reductions in risk exposure for the support agents, the extent of which will be studied through a series of simulation studies in Section 4.
### Summary of Market Designs
Apart from extending from frequentist to Bayesian regression analyses, the proposed market designs differ solely in their marginal contribution formulation. Since we can write the definition in (8) as \(\pi_{c,t}=\lambda\,m_{t}(\mathcal{I}_{-c},\mathcal{I}_{c}),\forall t\), both the payment of the central agent _and_ the revenue allocation are affected by the difference in the functional form of \(m_{t}\), the extent of which will also be studied in Section 4. To end this section, we provide a summary of the different formulations in Table 1.
## 4 Simulation Studies
To illustrate our findings, we shall now present a collection of scenarios and simulation-based case studies2. To emphasize the versatility of our proposed Bayesian regression market design, we devote particular attention to four distinct setups, each representing an additional layer of complexity to emulate real-world intricacies. It is important to note that these setups provide simplified representations of real-world situations, merely for the purpose of demonstration. We explore compounding effects of likelihood misspecification, specifically with respect to both the interpolated function and the intrinsic noise in the target signal.
Footnote 2: Our code is publicly available at: [https://github.com/tdfalc/regression-markets](https://github.com/tdfalc/regression-markets)
In each of the simulation-based case studies, the central agent seeks to model a target variable \(Y_{t}\) using their own feature \(x_{1,t}\) and the relevant features available in the market, each owned by a unique support agent, namely \(x_{2,t}\) and \(x_{3,t}\), such that the modelled likelihood is an independent Gaussian stochastic process with finite precision \(\xi_{Y_{t}}\), with the linear interpolant for the grand coalition given by \(f(\mathbf{x}_{t},\mathbf{w})=w_{0}+w_{1}x_{1,t}+w_{2}x_{2,t}+w_{3}x_{3,t},\ \forall t\).
\begin{table}
\begin{tabular}{l l} \hline \hline Market Design & Formulation of marginal contribution: \(m_{t}(\{i\},\mathcal{C}),\forall t,\,\forall i,\,\forall\mathcal{C}\) \\ \hline (i) \(\mathcal{M}_{\mathrm{NLL}}^{\mathrm{MLE}}\) & \(\mathds{E}[-\log(p(y_{t}|\mathbf{x}_{\mathcal{C},t},\Theta_{\mathcal{C}}^{*})) ]-\mathds{E}[-\log(p(y_{t}|\mathbf{x}_{\mathcal{C}\cup\{i\},t},\Theta_{ \mathcal{C}\cup\{i\}}^{*}))]\) \\ (ii) \(\mathcal{M}_{\mathrm{NLL}}^{\mathrm{BLR}}\) & \(\mathds{E}[-\log(p(y_{t}|\mathbf{x}_{\mathcal{C},t}))]-\mathds{E}[-\log(p(y_{t }|\mathbf{x}_{\mathcal{C}\cup\{i\},t}))]\) \\ (iii) \(\mathcal{M}_{\mathrm{KL}-m}^{\mathrm{BLR}}\) & \(\mathds{E}[D_{\mathrm{KL}}(p(y_{t}|\mathbf{x}_{\mathcal{C}\cup\{i\},t})\|p(y_ {t}|\mathbf{x}_{\mathcal{C},t}))]\) \\ (iv) \(\mathcal{M}_{\mathrm{KL}-v}^{\mathrm{BLR}}\) & \(\mathds{E}[D_{\mathrm{KL}}(p(y_{t}|\mathbf{x}_{\mathcal{C},t})\|p(y_{t}| \mathbf{x}_{\mathcal{I}_{c},t}))]-\mathds{E}[D_{\mathrm{KL}}(p(y_{t}|\mathbf{ x}_{\mathcal{C}\cup\{i\},t})\|p(y_{t}|\mathbf{x}_{\mathcal{I}_{c},t}))]\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Marginal contribution formulation for each of the market designs introduced in Section 3.
The various setups differ solely in the model of the likelihood, as follows (a sketch of the corresponding generating processes is given after this list):
1. **Baseline --** The likelihood is well specified with respect to the _true_ data generating process, given by \(p(y_{t}|\mathbf{x}_{t},\mathbf{w})=\mathcal{N}(f(\mathbf{x}_{t},\mathbf{w}), \xi_{Y_{t}})\).
2. **Interpolant --** The interpolant is misspecified such that we write the _true_ mean of the likelihood as \(f(\mathbf{x}_{t},\mathbf{w})=\mathbf{w}^{\top}(\mathbf{x}_{t}\odot\mathbf{x}_ {t})\), \(\forall t\), where \(\odot\) denotes the Hadamard product.
3. **Noise --** Further to the misspecified interpolant, the Gaussian noise assumption is incorrect, with the _true_ process given by a Student's t-distribution with two degrees of freedom.
4. **Heteroskedasticity --** The non-Gaussian noise is heteroskedastic, such that at each time step it is multiplied by \(x_{2,t}^{2}\).
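The sketch below draws data from each of the four processes; the standard-normal features, the seed, and the exact scaling of the Student's t noise are our own assumptions, as the text fixes only the structure described in the list above.

```python
import numpy as np

def generate_data(setup, w, noise_prec, T, seed=0):
    # Draw T observations from one of the four case-study processes:
    # 'baseline', 'interpolant', 'noise', or 'heteroskedasticity'.
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(T), rng.normal(size=(T, 3))])  # bias, x1..x3
    mean = X @ w if setup == "baseline" else (X * X) @ w  # Hadamard square
    scale = np.sqrt(1.0 / noise_prec)
    if setup in ("baseline", "interpolant"):
        eps = rng.normal(scale=scale, size=T)
    else:
        eps = scale * rng.standard_t(df=2, size=T)  # heavy-tailed noise
        if setup == "heteroskedasticity":
            eps *= X[:, 2] ** 2  # noise scaled by x_{2,t}^2
    return X, mean + eps
```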
### In-sample Market
We begin with a demonstration of the link between the Bayesian learning procedure and the subsequent market revenue allocation, using the in-sample stage of the \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\) market as a case study. For simplicity, we emulate batch inference (i.e., \(\tau=1\)) and consider only the _Baseline_ setup. We let the _true_ coefficients be \(\mathbf{w}=[-0.11,0.31,0.08,0.65]^{\top}\), and let the precision of the noise in the target signal be constant for all time steps, treated as a hyperparameter with \(\xi_{Y_{t}}=1.23\), \(\forall t\). We further set the valuation of the central agent to \(\lambda=0.01\) EUR per time step and per unit improvement in \(\ell\). We consider a single run of the market for increasing sample sizes, specifically 4, 10 and 40, recording the posterior moments, the predictive performance and the market revenue allocations for each. The results are shown in Figure 2.
In Figure 2(a), we see that, as one would expect, increasing the number of observations improves the estimation of the posterior, eventually centering around the _true_ coefficient values. In Figure 2(b), we present the NLL distribution for the in-sample observations. As the sample size increases, the improved posterior facilitates better capturing of the additional information provided by the features of the support agents, resulting in considerably enhanced predictive performance. The central agent indeed must pay for such improvements, highlighted by the additional revenue earned by the support agents, presented in Figure 2(c). In the case of only 4 samples, we see that the predictive performance in fact decreases with the additional features, yielding small but negative revenues for the support agents. This emphasizes the importance of Assumption 1, as a
prior feature selection process could remove these features so that individual rationality is preserved.
### Uncertainty Quantification
Next we illustrate our four considered setups, highlighting the benefit to the central agent of merely facilitating Bayesian regression analyses. We set the _true_ parameters to \(\mathbf{w}=[-0.1,0.3,0.8,-0.4]^{\top}\) and \(\xi_{Y_{t}}=0.5\), \(\forall t\). We again emulate batch inference and run a Monte-Carlo simulation whereby we clear the market \(10^{3}\) times for several different sample sizes (i.e., numbers of in-sample observations) and record the expected NLL
Figure 2: In-sample market with increasing batch size. The dashed lines in (a) highlight the _true_ coefficient values. The histogram in (b) shows the in-sample NLL distribution. The horizontal bars in (c) are the cumulative revenues given the value of each datapoint provided.
for \(10^{3}\) out-of-sample observations for each. This is carried out using both maximum likelihood estimation and Bayesian regression analyses.
Figure 3 shows the empirical average of the percentage improvement in the objective value for the Bayesian regression model. Observe that the improvement is most significant across all setups when the sample size is relatively small, as the additional piece of uncertainty in the parameter estimates plays a greater role in the predictive distribution, increasing the predictive likelihood. Then, as the sample size increases, the parameter estimates converge in accordance with (13). Furthermore, as the additional layers of complexity are introduced, the benefit of incorporating parameter uncertainty increases considerably. These improvements attained by converting to a Bayesian framework indicate a better calibration of uncertainty, enriching the information used by the central agent for risk-informed decision-making downstream.
### Convergence Analysis
Now we present an empirical study of the in-sample asymptotic convergence for our various market designs. Let the _true_ coefficients and noise precision be given by \(\mathbf{w}=[-0.1,0.8,0.7,-0.9]^{\top}\) and \(\xi_{Y_{t}}=1.0\), \(\forall t\), respectively, focusing here solely on the _Baseline_ setup, since asymptotic convergence depends not on the _true_ data generating processes,
Figure 3: Empirical average of the percentage improvement in the NLL ratio for BLR relative to MLE, plotted as a function of sample size.
but rather on the set of modelling assumptions. A similar Monte-Carlo simulation is performed, recording the in-sample Shapley values for each run, the results of which are presented in Figure 4.
Looking first at Figure 4(a), we see that with a small sample size, the frequentist market design assigns a larger contribution to the features compared with those using Bayesian regression; however, these values indeed converge asymptotically, in line with the theory. This discrepancy is likely due to the greater reduction in the in-sample objective provided by the maximum likelihood estimate, which is of course prone to overfitting. In Figure 4(b), we see that although the \(\mathcal{M}^{\text{BLR}}_{\text{KL}-v}\) market yields a total allocation (i.e., the sum of the Shapley values divided by the value of the grand coalition) analogous to the likelihood-based markets, the \(\mathcal{M}^{\text{BLR}}_{\text{KL}-m}\) market renders a surplus in revenue when the sample size is small. This demonstrates the trade-off incurred by virtue of the now universally held individual rationality property; in other words, budget balance is no longer guaranteed, even during the in-sample stage. That being said, this problem indeed resolves with an increasing number of observations as the Shapley values converge.
Figure 4: Empirical average of (a) expected Shapley values and (b) the expected total allocation, for each market design, plotted as a function of sample size. Dashed and solid lines in (a) correspond to features \(x_{2,t}\) and \(x_{3,t}\), respectively. In (b), the red and green lines are hidden behind the blue line, given budget balance is a universally held property in each of these markets.
### Risk Exposure
We now turn our attention to the finances of the support agents, which we assess by computing both the expected value of the revenue, \(\int\pi_{a,t}p(\pi_{a,t})d\pi_{a,t}\), \(\forall t\), and the expected shortfall (i.e., conditional value at risk), \(-1/\alpha\int_{\pi_{a,t}\leq q_{\alpha}(\pi_{a,t})}\pi_{a,t}p(\pi_{a,t})d\pi_{a,t}\), \(\forall t\), for all \(a\in\mathcal{A}_{-c}\), where \(q_{\alpha}(\cdot)\) is the quantile with nominal level \(\alpha\). We present empirical estimations of these values for a case study where we again clear the market for a new sample of data \(10^{3}\) times and record the revenue of each support agent, with the _true_ coefficients set to \(\mathbf{w}=[0.1,-0.5,0.0,0.7]^{\top}\), with noise precision \(\xi_{Y_{t}}=0.67\), \(\forall t\). We additionally set \(\lambda=0.03\) EUR per time step and per unit improvement in \(\ell\) for both the in-sample and out-of-sample stages. We use a simple sub-sampling method to derive the corresponding two-sided confidence intervals of both the expected value and expected shortfall of the revenue with a 95% confidence level. We run this simulation for each market design, as well as for each of the misspecification setups, with \(10^{3}\) in-sample and out-of-sample observations.
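Both risk measures can be estimated empirically from the Monte-Carlo revenue samples; the tail-averaging estimator below is one standard choice, shown here as a sketch.

```python
import numpy as np

def revenue_risk(revenues, alpha=0.05):
    # Empirical expected value and expected shortfall (conditional value at
    # risk): the negated mean of the revenues at or below the alpha-quantile.
    revenues = np.asarray(revenues)
    q = np.quantile(revenues, alpha)
    return revenues.mean(), -revenues[revenues <= q].mean()
```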
In Figure 5, we plot the revenue of the support agent who owns \(x_{2,t}\). Considering first Figure 5(a), observe that the expected value of the revenue is relatively consistent across all market designs for each setup. However, for each additional layer of complexity, the expected shortfall is positive for the \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\) market, increasing by almost two orders of magnitude in the latter setups. For the market designs based on the KL divergence, the expected shortfall remains somewhat constant around zero, highlighting the sizeable
Figure 5: Two-sided confidence intervals with a 95% confidence level for both the expected value and expected shortfall of the revenue received by the owner of \(x_{2,t}\), with quantile parameter \(\alpha=0.05\), for each setup, namely _Baseline_ (\(\circ\)), _Interpolant_ (\(\diamond\)), _Noise_ (\(\blacktriangleright\)) and _Heteroskedasticity_ (\(\square\)).
reductions in risk exposure possible by using the KL divergence instead of the NLL to allocate revenue. As we saw previously, the \(\mathcal{M}^{\text{MLE}}_{\text{NLL}}\) market design seemingly performs better than its Bayesian counterpart during the in-sample stage. However, if we look now at Figure 5(b), we see this indeed does not generalize out-of-sample.
For this small number of observations, the estimated moments of the posterior distribution are more likely to be sub-optimal, and hence the predictive likelihood is more volatile. In consequence, even the expected value of the out-of-sample revenue becomes more negative with each additional layer of complexity for the likelihood-based market designs, meaning that the individual rationality property is violated even in expectation. The out-of-sample revenues in the \(\mathcal{M}^{\text{MLE}}_{\text{NLL}}\) market are now worse than for \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\), demonstrating that the superior performance of the \(\mathcal{M}^{\text{MLE}}_{\text{NLL}}\) market in-sample was indeed simply a result of overfitting. In contrast, the expected value of the revenue is relatively consistent with those in-sample for both KL divergence-based markets.
Considering the risk, the expected shortfall for \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\) and \(\mathcal{M}^{\text{MLE}}_{\text{NLL}}\) increased by several orders of magnitude at the out-of-sample stage. Interestingly, one can now observe the consequence of re-instating budget balance by re-defining the characteristic function instead of the marginal contribution. Specifically, whilst for the \(\mathcal{M}^{\text{BLR}}_{\text{KL}-m}\) market design individual rationality holds universally, the expected shortfall in \(\mathcal{M}^{\text{BLR}}_{\text{KL}-v}\) has drifted positive. That being said, the risks are generally considerably lower than for \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\), suggesting that there is still merit to this approach if budget balance is essential.
#### 4.4.1 Sensitivity Analysis: Sample Size
Using the same experimental procedure, we now consider the sensitivity of these findings to the number of in-sample observations, the results of which are shown in Figure 6. Note that the volumetric revenues are normalized here to account for the different sample sizes. In general, both the expected value and the expected shortfall of the revenue converge for all market designs. This was expected given the asymptotic convergence of the Shapley values shown previously. However, what is of note here is that out-of-sample, the expected shortfall associated with the likelihood-based market design takes considerably longer to converge, with substantial risk levels even for the larger sample sizes, notwithstanding that this is merely the _Baseline_ setup. This emphasizes that although the majority of the risk exposure manifests in the out-of-sample market stage, using the KL divergence to allocate revenue can mitigate this significantly.
#### 4.4.2 Sensitivity Analysis: Coefficient Magnitude
We now consider the sensitivity to the magnitude of the _true_ coefficient associated with the feature owned by the first support agent, by re-running the simulation with different
values for \(w_{2}\), all the while keeping the remaining coefficients constant. The results are shown in Figure 7. Still we see a lesser degree of inconsistency between the in-sample and out-of-sample payments for the market designs based on the KL divergence, both in terms of expected returns and risk, with considerably greater financial risk for \(\mathcal{M}^{\text{BLR}}_{\text{NLL}}\) during the out-of-sample stage. We emphasize again that this is even for the simple _Baseline_ setup.
We also see that for each of the market designs, the expected value and the expected shortfall of the revenue appear quadratic in \(|w_{2}|\). In fact, this should be of no surprise -- given Assumptions 3 and 4, one can readily show that in a Bayesian framework, both the expected value and variance of the Shapley value for a feature are a quadratic function of its contribution to the prediction (Falconer et al., 2023). To be self-contained, we provide a proof in Appendix E. One could argue this to have fairness implications, since in theory more informative features are exposed to a greater extent of financial uncertainty, however this is out-of-scope and we hereby leave a more thorough exploration of this phenomenon as future work. In addition, we note that since this would allow us to analytically describe the expected shortfall, having more consistent results between stages, such as with the KL divergence market designs, could enable us to provide the support agents with a qualitative _a priori_ upper bound on the risk (i.e., before clearing the market), even out-of-sample.
Figure 6: Two-sided confidence intervals with 95% confidence level for both the expected returns (solid lines) and expected shortfall (dashed lines), with quantile parameter \(\alpha=0.05\), for the first support agent considering the _Baseline_ setup and plotted as a function of sample size.
### Nonstationary Processes
Until this point, we have assumed only batch inference (i.e., \(\tau=1\)), however in Section 2.3 we showed theoretically that this is merely a specification of the more general online Bayesian inference problem, which facilitates time-varying posterior moments. For the final simulation-based case study, let us consider a nonstationary data generating process, wherein the parameters initially take on the values \(\mathbf{w}=[0.0,-0.2,0.1,0.3]^{\top}\), with noise precision \(\xi_{Y_{t}}=0.98\), \(\forall t\).
For simplicity, we only let the coefficient associated with \(x_{2,t}\) vary with time, with the rest constant. To illustrate the effect of likelihood flattening, we consider two cases, where: (i) \(w_{2}\) decreases linearly, and (ii) \(w_{2}\) exhibits a discontinuity, each representing increasingly complex processes to capture with respect to their stationary analogue. We carry out a Monte-Carlo simulation whereby we record the empirical average of the parameter estimates at each time step with various values for \(\tau\), the results of which are presented in Figure 8. Of course, for the previous time-invariant cases, there would be no advantage to using likelihood flattening since the coefficients are stationary. For the more complex cases, as \(\tau\to 1\), our posterior beliefs decay more gradually, but as \(\tau\) is reduced, we are able to better track the coefficient values, albeit with increased variance due to the fact that more weight is given to the flat prior.
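A sketch of one plausible implementation of such a flattened update for the conjugate Gaussian model, under the assumption that tempering the posterior by \(\tau\) amounts to scaling its precision matrix; the precise scheme is the one defined in Section 2.3.

```python
import numpy as np

def flattened_update(m, P, x, y, noise_prec, tau):
    # One step of online Bayesian linear regression with likelihood
    # flattening: the previous posterior (mean m, precision P) is tempered
    # by tau before the conjugate update with the new observation (x, y).
    # tau = 1 recovers batch inference; tau < 1 discounts older data.
    P_prior = tau * P  # flattening deflates the accumulated precision
    P_new = P_prior + noise_prec * np.outer(x, x)
    m_new = np.linalg.solve(P_new, P_prior @ m + noise_prec * y * x)
    return m_new, P_new
```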
As the discontinuity in Figure 8(b) represents a more extreme case of the smooth temporal evolution in Figure 8(a), we use this as a case study for further analysis. We run
Figure 7: Expected returns (solid lines) and expected shortfall (dashed lines), with quantile parameter \(\alpha=0.05\), for the first support agent considering the _Baseline_ setup and plotted as a function of \(w_{2}\).
a Monte-Carlo simulation whereby we fix \(\tau=0.94\) in order to better track the coefficient, albeit at the expense of increased variance. We re-run the entire online market clearing procedure \(10^{3}\) times, each time tracking the temporal evolution of market revenue over \(10^{3}\) time steps. We set \(\lambda=0.95\) EUR per time step and per unit improvement in \(\ell\) for both stages. Again, we carry out this simulation for each of the proposed Bayesian regression market designs, considering only the _Baseline_ setup, the results of which are
Figure 8: Temporal evolution of the empirical average of the estimated value for \(w_{2}\). The estimates for the remaining parameters are omitted for clarity.
Figure 9: Cumulative empirical averages of the expected value of the revenue, with quantile parameter \(\alpha=0.05\), for the first support agent considering the _Baseline_ setup.
shown in Figure 9.
Given the use of likelihood flattening, each of the market designs is able to capture the step-change in the _true_ coefficient value. However, the extent of likelihood flattening required reduces the effective window size of observations, emulating a consistently small sample size even as more observations arrive. As a result, the likelihood-based market exhibits poor generalization to the out-of-sample stage, as we have seen before. In fact, even though the true coefficient is \(w_{2}=0.1\) before the step change, the expected value of the revenue is less than 0, resulting in a negative cumulative revenue in the first half of the simulation, and hence overall the agent earns considerably less in this market. In contrast, we see that both the expected value and expected shortfall of the revenue in both the \(\mathcal{M}_{\text{KL}-m}^{\text{BLR}}\) and \(\mathcal{M}_{\text{KL}-v}^{\text{BLR}}\) markets remain relatively consistent with those observed in-sample.
## 5 Real-world Application
We round off our experimental analysis by verifying the applicability of our proposal in a real-world setting. We make use of an open source dataset, namely the _Pan-European Climate Database_, as detailed in Koivisto and Leon (2022). This dataset consists of hourly average solar irradiance values by country in Europe, obtained by simulating the output from south-facing solar photovoltaic (PV) modules across several intra-country regions, using meteorological data. Although this data is not exactly _real_, it effectively captures the spatio-temporal aspects of solar irradiance across the continent, with the benefit of not being contaminated with any spurious data points, as can often be the case with real-world datasets.
Suppose that the electricity system operator in each country seeks to forecast its own country's average generation from solar PV modules, with a view to subsequently estimating electricity demand and determining balancing resource requirements. For illustration, we consider six countries, namely the United Kingdom (UK), Belgium (BE), Austria (AT), Greece (GR), Cyprus (CY) and Turkey (TR), each of which is assumed to enter the regression market to enhance their respective forecasts. For simplicity, we focus on a 1-hour forecasting horizon (i.e., nowcasting) using only linear basis functions, though both longer latency periods and more complex models could be considered.
We extract data that spans a two-year period from the start of 2018 to the end of 2019, with an hourly resolution. Suppose that each of the six countries takes a turn in assuming the role of the central agent, in parallel transactions. We use a simple Auto-Regressive with eXogenous input model with a maximum of one lag for each feature. For solar energy, forecasting with lags simultaneously captures temporal correlations at
particular locations and any indirect spatial correlations between neighboring locations, resulting from the natural development of cloud coverage and the movement of the sun. We present the rolling average of the raw irradiance values in Figure 10(a), which highlights the seasonality of generation from solar PV modules, peaking during the summer months as expected. Similarly, by plotting the hourly average irradiance in Figure 10(b), one can observe the spatial correlations such that at any given time, the actual generation in the more Easterly countries could be indicative of what is to come in Western Europe later in the day.
For each forecast, we model the likelihood as an independent Gaussian stochastic process with finite precision, similar to the framework described in Section 4. We consider an online setting such that over the entire two-year period, at each time step (i.e., one hour interval), when a new observation of the target signal is collected, the forecast issued at the previous time step is used for out-of-sample market clearing, whilst at the same time, the posterior is updated and the in-sample market is cleared, and a forecast for the next time step is subsequently made. We set \(\tau=0.998\) and assume the valuation of each central agent to be \(\lambda=50\) EUR and \(\lambda=150\) EUR per time step and per unit improvement in \(\ell\) for the in-sample and out-of-sample stages, respectively, to reflect the costs of balancing resources. With each country set as the central agent, we record the predictive performance and cumulative revenues of the remaining countries across both stages over the entire two-year span using the \(\mathcal{M}_{\mathrm{KL}-v}^{\mathrm{BLR}}\) market design.
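Schematically, the online protocol just described can be sketched as below; `flattened_update` is the tempered update sketched earlier, while `clear_market` is a hypothetical stand-in for the Shapley-based clearing machinery of Section 3.

```python
import numpy as np

def run_online_market(y, X, tau=0.998, noise_prec=1.0):
    # At each hour: score the forecast issued at the previous step in the
    # out-of-sample market, update the posterior with flattening, clear the
    # in-sample market, then issue a forecast for the next step.
    d = X.shape[1]
    m, P = np.zeros(d), 1e-6 * np.eye(d)
    prev_forecast = None
    for t in range(len(y) - 1):
        if prev_forecast is not None:
            clear_market(prev_forecast, y[t], stage="out-of-sample")
        m, P = flattened_update(m, P, X[t], y[t], noise_prec, tau)
        clear_market(X[t] @ m, y[t], stage="in-sample")
        prev_forecast = X[t + 1] @ m  # lagged features for t+1 known at t
    return m, P
```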
Let us first consider the improvements in predictive performance with respect to
Figure 10: The rolling average and hourly average solar irradiance observed in each country during the two-year time period of 2018–2019.
the NLL exhibited by each of the countries when assuming the role of the central agent. We show the average quarterly results in Table 2. In general, we observe a seasonality in the objective equivalent to that of the irradiance itself, such that smaller enhancements in predictive performance are exhibited during the first and final quarters, since there is less potential for improving predictive performance when irradiance is low. Both the United Kingdom and Greece receive the greatest improvements, with Cyprus and Turkey the smallest, the latter of which is likely due to the fact that these countries are further East, and thus less able to exploit the spatial correlations depicted in Figure 10(b). We also note that the distribution of performance improvements amongst the countries is fairly similar between the in-sample and out-of-sample stages, which suggests any nonstationarities, as well as the time-varying objective estimates, are smooth, and hence the in-sample posterior is a relatively efficient estimator for use out-of-sample at the next time step.
In Figure 11, we present the smoothed evolution of the revenues across both the in-sample and out-of-sample market stages. We see that, similar to the objective estimates, the allocation is by no means constant with time, such that the revenues of each agent are typically lower over the winter months and increase throughout the rest of the year. The value of each observation therefore also reflects the seasonality observed in the generation from solar PV modules. We also see the spatio-temporal dynamics of solar irradiance, as countries to the East of the central agent, particularly those nearby or with high nominal generation, contribute most to the uplift. The revenues received by the remaining countries when either Cyprus or Turkey assume the role of the central agent are relatively small, in accordance with the results in Table 2. Lastly, we note that the revenues earned by some of the countries over the entire two-year period are substantial, for instance with Greece as the central agent, the system operator in Cyprus
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Country} & \multicolumn{4}{c}{In-sample} & \multicolumn{4}{c}{Out-of-sample} \\ & Q1 & Q2 & Q3 & Q4 & Q1 & Q2 & Q3 & Q4 \\ \hline UK & 0.40 & 2.24 & 2.24 & 0.37 & 0.39 & 2.32 & 2.45 & 0.36 \\ BE & 0.34 & 1.58 & 1.59 & 0.51 & 0.33 & 1.60 & 1.61 & 0.50 \\ AT & 0.66 & 1.77 & 1.47 & 0.72 & 0.65 & 1.81 & 1.49 & 0.72 \\ GR & 0.73 & 2.11 & 2.40 & 0.82 & 0.74 & 2.15 & 2.44 & 0.81 \\ CY & 0.44 & 1.05 & 1.20 & 0.56 & 0.43 & 1.05 & 1.21 & 0.55 \\ TR & 0.43 & 1.00 & 1.35 & 0.65 & 0.42 & 1.00 & 1.36 & 0.64 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Fractional improvement in the NLL ratio as a result of participating in the regression market, averaged over each calendar quarter, for both in-sample and out-of-sample market stages.
earns approximately \(1.2\times 10^{6}\) EUR, representing an average unit value of around 70 EUR per observation shared.
## 6 Conclusions
Data-driven firms that employ predictive analytics (e.g., machine learning) often lack access to adequate sources of data. Whilst sharing data amongst others could bring potential advantages, many firms remain hesitant to do so, predominantly due to privacy concerns and the fear of losing a competitive edge, rather than the practical complexities involved in establishing data-sharing pipelines. Analytics markets, or in our case _regression markets_, offer a possible solution to this, wherein data is commoditized with respect to the particular analytics task at hand, providing incentives for information exchange through remuneration.
In this paper, we proposed a mechanism design for a regression market that facilitates a generalized approach to forecasting, one based on Bayesian regression analyses. As a result, we provide the buyer with richer and more nuanced information about future outcomes, offering better calibration of uncertainty to be used for risk-informed decision-making downstream. We first introduced what we posed as the Bayesian analogue of recent frequentist-based proposals, but showed that this market design, akin to those in the current literature, exposes the support agents to considerable financial risks, especially when a limited number of observations are available or when the data generating processes are nonstationary. In these settings, sub-optimal estimates of the posterior distribution led to sizeable expected losses, especially during the out-of-sample market stage, for which the in-sample estimates of the posterior moments are less efficient.
To mitigate these risks, we proposed to re-formulate the value of a feature in terms of the information gain it provides. In particular, we derived two alternative definitions of the marginal contribution of a feature towards a set of other features using the Kullback-Leibler divergence, the first of which could guarantee individual rationality universally (i.e., no support agent would be allocated negative revenue). However, there is of course no free lunch, as this came at the expense of budget balance. Nevertheless, we showed that in both cases using the KL divergence was able to provide more robust revenue allocations by alleviating the financial risks to which the support agents were exposed, even at the out-of-sample market stage.
Possible directions for future work could include extending the concepts of our proposal to a broader class of machine learning models, such as (i) non-convex regression, which will have implications on market property guarantees; (ii) non-Gaussian hypotheses, which may require approximation bounds; and lastly (iii) alternative modelling paradigms, for instance, classification, unsupervised learning, or data-driven optimization problems in general.
Figure 11: Smoothed evolution of total revenue per time step made by each of the six countries to the remaining five whilst assuming the role of the central agent.
On a broader note, there are still many unanswered questions in relation to the complexities of treating data as a commodity. For instance, in practice, datasets cover different spatial and temporal horizons, and may become (un-)available to the market at different times. Accordingly, aggregating real-world datasets in an online fashion may not be straightforward and may require revision of fundamental concepts in online learning and mechanism design. Additionally, much of the current literature relies on the assumption that the valuation of the central agent is both linear and easily conceivable with respect to the objective function, which may not be true if the downstream decision-making process is complex or in the face of externalities (e.g., whether or not competing firms also get access to the data may affect the valuation). Support agents may also have reservations about sharing their information, for instance due to privacy concerns or conflicts of interest. This, as well as the physical costs of collecting and storing data, may require a minimum revenue to be obtained. Lastly, if firms that share data are indeed competitors in a downstream market, one may be interested in whether, by enabling better use of information, the analytics market is beneficial to social welfare, and whether those that lose competitive advantage by sharing their information are adequately compensated.
|
2305.01697 | How Many Clues To Give? A Bilevel Formulation For The Minimum Sudoku
Clue Problem | It has been shown that any 9 by 9 Sudoku puzzle must contain at least 17
clues to have a unique solution. This paper investigates the more specific
question: given a particular completed Sudoku grid, what is the minimum number
of clues in any puzzle whose unique solution is the given grid? We call this
problem the Minimum Sudoku Clue Problem (MSCP). We formulate MSCP as a binary
bilevel linear program, present a class of globally valid inequalities, and
provide a computational study on 50 MSCP instances of 9 by 9 Sudoku grids.
Using a general bilevel solver, we solve 95% of instances to optimality, and
show that the solution process benefits from the addition of a moderate amount
of inequalities. Finally, we extend the proposed model to other combinatorial
problems in which uniqueness of the solution is of interest. | Gennesaret Tjusila, Mathieu Besançon, Mark Turner, Thorsten Koch | 2023-05-02T18:05:31Z | http://arxiv.org/abs/2305.01697v1 | # How Many Clues To Give? A Bilevel Formulation For The Minimum Sudoku Clue Problem
###### Abstract
It has been shown that any 9 by 9 Sudoku puzzle must contain at least 17 clues to have a unique solution. This paper investigates the more specific question: given a particular completed Sudoku grid, what is the minimum number of clues in any puzzle whose unique solution is the given grid? We call this problem the Minimum Sudoku Clue Problem (MSCP). We formulate MSCP as a binary bilevel linear program, present a class of globally valid inequalities, and provide a computational study on 50 MSCP instances of 9 by 9 Sudoku grids. Using a general bilevel solver, we solve 95% of instances to optimality, and show that the solution process benefits from the addition of a moderate amount of inequalities. Finally, we extend the proposed model to other combinatorial problems in which uniqueness of the solution is of interest.
## 1 Introduction
The Sudoku puzzle first appeared in the May 1979 edition of _Dell Pencil Puzzle and Word Games_[5]. Given a perfect-square integer \(n\), the puzzle is given on an \(n\times n\) grid divided into \(n\) subgrids, each of size \(\sqrt{n}\times\sqrt{n}\). As input, some cells are already filled with numbers between \(1\) and \(n\). The goal of the puzzle is to fill the rest of the cells such that each number between \(1\) and \(n\) appears exactly once in each row, column, and subgrid. An example of a Sudoku puzzle along with its solution is given in Figure 1. For most Sudoku puzzles, uniqueness of the solution is a desirable property. We call such puzzles _valid_. It is fairly easy to construct examples of \(9\times 9\) Sudoku puzzles with \(77\) clues and multiple solutions (such as removing the entries marked in green in Figure 1(b)). One can also observe that any puzzle with at least \(78\) clues will always have a unique solution.
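To make the notion of a _valid_ puzzle concrete, the following sketch counts the completions of a \(9\times 9\) grid by backtracking, stopping as soon as a second solution is found; a puzzle is valid exactly when the count is one. This is a didactic routine of our own, not the method of [25].

```python
def count_solutions(grid, limit=2):
    # Count completions of a 9x9 grid (0 marks an empty cell), stopping
    # early once `limit` solutions are found. Valid puzzle <=> count == 1.
    def ok(r, c, k):
        if any(grid[r][j] == k for j in range(9)):
            return False
        if any(grid[i][c] == k for i in range(9)):
            return False
        br, bc = 3 * (r // 3), 3 * (c // 3)
        return all(grid[br + i][bc + j] != k
                   for i in range(3) for j in range(3))

    def solve(pos):
        if pos == 81:
            return 1
        r, c = divmod(pos, 9)
        if grid[r][c]:
            return solve(pos + 1)
        total = 0
        for k in range(1, 10):
            if ok(r, c, k):
                grid[r][c] = k
                total += solve(pos + 1)
                grid[r][c] = 0
                if total >= limit:
                    break
        return total

    return solve(0)
```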
A natural question that arises is: what is the minimum number of clues that a valid puzzle can have? It is shown in [25] that the answer to this question is 17 clues. But what if the puzzle designer already has a solution grid in mind? This motivates the Minimum Sudoku Clue Problem (MSCP): what is the minimum number of clues on any valid puzzle for a given Sudoku grid?
In this paper, we make four key contributions. First, we formulate the MSCP as a binary bilevel linear program, allowing the use of generic integer bilevel methods and solvers, which to the best of our knowledge is a first in the literature. Second, we present unavoidable set inequalities, a set of globally valid inequalities, which we add at the start of the solving process to improve solver performance. Third, we provide computational results over a set of Sudoku grids to show the viability of our approach. Finally, we generalize our model to other problems which fulfill some assumption in the Fewest Clue Problem (FCP) class introduced in [6]. We note that this paper is an extension of the first author's thesis work [31].
## 2 Related Work
The problem of counting the total number of \(n\times n\) Sudoku grids is an open problem. For \(n=9\), it was shown in [11] that the number of Sudoku grids is around \(6.671\times 10^{21}\). A natural upper bound arises by considering that
Sudoku grids are a subset of Latin squares with additional subgrid constraints. The enumeration of Latin squares has been extensively studied in the literature [26], which has motivated similar studies for Sudoku [1; 17]. Many of these grids are equivalent under transformations such as relabeling of digits and rotations. We call the lexicographically smallest Sudoku grid that is equivalent to a given grid under these transformations the _minlex form_ of the Sudoku grid [22]. Taking these transformations into account, the number of \(9\times 9\) Sudoku grids is reduced to around \(5.47\times 10^{9}\) _essentially different_ grids [29]. While our work focuses on the minimum number of clues for a _given_ Sudoku grid, the minimum number of clues for _any_ Sudoku grid has been shown to be \(17\) through a computer-assisted proof [25]. A list of nearly \(50000\) Sudoku puzzles with \(17\) clues has been collected by Gordon Royle [27]. This collection covers only a fraction of the possible number of Sudoku grids, strongly suggesting that most Sudoku grids do not have a \(17\)-clue valid puzzle. Lower bounds on the number of clues for \(4\times 4\) grids have been derived through an algebraic process in [14] by encoding the combinatorial problem as a polynomial and analyzing its structure. Research in this direction for \(9\times 9\) grids has focused on analyzing the underlying graph structure of the Sudoku grid [4; 23] and characterizing valid Sudoku puzzles using formal logic [24].
Finding a solution to a general \(n\times n\) Sudoku puzzle is ASP-complete, which implies NP-completeness of the decision problem as well as \(\sharp\)P-completeness of counting the solutions [32]. However, practical methods for solving Sudoku puzzles of size \(9\times 9\) exist in the literature [3]. There has also been recent research in making algorithms that solve Sudoku puzzles explainable for humans [2]. Given a Sudoku grid, the decision problem "is there a setting of at most \(k\) clues such that the only solution is the given grid?" is a member of the "Fewest Clue Problem" (FCP) class of problems and has been shown to be \(\Sigma^{P}_{2}\)-complete [6]. Mixed-integer bilevel linear programming has also been shown in [7; 20] to be \(\Sigma^{P}_{2}\)-complete. Therefore, transforming MSCP into a binary bilevel linear program retains the same complexity but allows for a general solving method.
To the best of our knowledge, all existing software libraries for solving MSCP are problem-specific and created by the Sudoku community, see [9] for an example. The software uses pattern-matching algorithms to quickly find so-called _unavoidable sets_, as described in [25]. An unavoidable set is defined as a set of cells whose contents, if removed, will result in an invalid Sudoku puzzle. Examples of such sets are the cells marked in green, red, or blue in Figure 1(b). Given a set of unavoidable sets \(S\), we call a set of cells \(H\) a _hitting set_ if for every unavoidable set in \(S\) at least one cell is contained in \(H\). Once a large enough set of unavoidable sets has been generated, one can enumerate over all hitting sets of these unavoidable sets, starting from ones with minimal cardinality, until a valid puzzle is found. Although in theory enumerating unavoidable sets is expensive, specialized algorithms are often fast in practice owing to additional problem-specific methods, e.g., exploiting equivalence classes of Sudoku grids. We also highlight that enumeration of hitting sets, in particular minimal hitting sets, is an active area of research [15]. In contrast to existing software, our work uses a general mathematical optimization approach to solve MSCP. We will use integer linear programming models to find unavoidable sets and generate valid inequalities to speed up the bilevel-solving process.
## 3 Integer Bilevel Linear Formulations of Minimum Sudoku Clue Problem
We now formulate the MSCP for a Sudoku grid of size \(n\times n\), where \(n\) is a square number. Let \(x_{ijk}\) be binary decision variables, where \(i,j,k\in[n]:=\{1,\ldots,n\}\). The variable \(x_{ijk}\) takes value one if cell \((i,j)\) has entry \(k\) and
Figure 1: A Sudoku puzzle along with its solution and unavoidable sets
zero otherwise. The variables construct an \(n\times n\) Sudoku grid if they satisfy
\[\begin{aligned}
\sum_{k=1}^{n}x_{ijk}&=1,&&\forall\;i,j\in[n]&&(G0)\\
\sum_{j=1}^{n}x_{ijk}&=1,&&\forall\;i,k\in[n]&&(G1)\\
\sum_{i=1}^{n}x_{ijk}&=1,&&\forall\;j,k\in[n]&&(G2)\\
\sum_{i=sp-s+1}^{sp}\;\sum_{j=sq-s+1}^{sq}x_{ijk}&=1,&&\forall\;p,q\in[s]\;\text{and}\;k\in[n]&&(G3)
\end{aligned}\]
where \(s:=\sqrt{n}\). This is the standard Sudoku integer linear program (ILP) formulation found in the literature, see [18, 21].
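As a concrete illustration, the following is a minimal sketch of this ILP in Python with `gurobipy` (the ILP solver we use in Section 5); the function and variable names are ours and hypothetical, not part of the paper's code.

```python
import gurobipy as gp
from gurobipy import GRB

def sudoku_ilp_model(n):
    """Build a Gurobi model with variables x[i,j,k] and constraints
    (G0)-(G3); indices are 0-based, index k encodes digit k+1."""
    s = int(round(n ** 0.5))
    m = gp.Model("sudoku")
    x = m.addVars(n, n, n, vtype=GRB.BINARY, name="x")
    # (G0): every cell holds exactly one digit
    m.addConstrs((x.sum(i, j, "*") == 1 for i in range(n) for j in range(n)))
    # (G1): every digit appears exactly once per row
    m.addConstrs((x.sum(i, "*", k) == 1 for i in range(n) for k in range(n)))
    # (G2): every digit appears exactly once per column
    m.addConstrs((x.sum("*", j, k) == 1 for j in range(n) for k in range(n)))
    # (G3): every digit appears exactly once per s-by-s subgrid
    m.addConstrs(
        (gp.quicksum(x[i, j, k]
                     for i in range(p * s, (p + 1) * s)
                     for j in range(q * s, (q + 1) * s)) == 1
         for p in range(s) for q in range(s) for k in range(n)))
    return m, x
```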
Let \(G\) be a Sudoku grid given as an \(n\times n\) matrix with entries in \([n]\). The leader problem of our binary bilevel linear program will act as a "puzzle setter", and determine which entries of the Sudoku grid are given as clues. The follower problem will act as an "adversary" that tries to find a solution different from the given Sudoku grid. Concretely, our model is as follows
\[\begin{aligned}
\min_{x,y,z}\;&\sum_{i=1}^{n}\sum_{j=1}^{n}y_{ij}\\
\text{s.t.}\;&z=1&&(V1)\\
&y_{ij}\in\{0,1\},\quad\forall\;i,j\in[n]\\
&(x,z)\in S(y)
\end{aligned}\]
where \(S(y)\) is the set of optimum solutions to the \(y\)-parameterized follower problem
\[\begin{aligned}
\min_{x,z}\;&z\\
\text{s.t.}\;&(G0)-(G3)\\
&x_{ijG_{ij}}\geq y_{ij},\quad\forall\;i,j\in[n]&&(F1)\\
&\sum_{i=1}^{n}\sum_{j=1}^{n}x_{ijG_{ij}}-z\leq n^{2}-1&&(N1)\\
&x_{ijk},z\in\{0,1\},\quad\forall\;i,j,k\in[n].
\end{aligned}\]
The leader decision variable, \(y_{ij}\), determines whether the entry of a cell \((i,j)\) is given to the follower problem as a clue. The objective function of the leader problem is the number of clues given. Constraint \((F1)\) requires the follower problem to adhere to these given clues. Constraint \((N1)\) requires that the Sudoku grid defined by the set of decision variables \(x_{ijk}\) with \(i,j,k\in[n]\) is different from \(G\). This constraint can be relaxed by setting \(z\) to one and taking a penalty. The intuition of the leader constraint \((V1)\) is as follows: The objective of the follower is to minimize this penalty. If the puzzle determined by the leader problem has multiple solutions, the follower can find a feasible solution with a penalty of zero. However, if this is not possible, then the puzzle determined by the leader is a valid puzzle and the only option the follower has is to take the penalty.
Finally, we highlight that the high-point relaxation always admits a trivial optimal solution: set \(z=1\), \(y_{ij}=0\) for all \(i,j\in[n]\), and let \(x\) be any Sudoku grid not equal to \(G\), obtained by permuting digits for instance. This weakness of the relaxation suggests the hardness of the bilevel problem.
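The follower's role can be made concrete with a small validity check: fixing \(z=0\), a clue set is valid exactly when constraints \((G0)\)-\((G3)\), \((F1)\), and the no-good constraint \((N1)\) admit no feasible point. The sketch below reuses the hypothetical `sudoku_ilp_model` helper from above and is our illustration, not the paper's implementation.

```python
def is_valid_puzzle(G, clues, n=9):
    """Return True if the clue cells force G as the unique solution.
    G is an n x n list of lists with entries 1..n; `clues` is a set of
    0-based (i, j) cells revealed to the follower."""
    m, x = sudoku_ilp_model(n)
    # (F1): the follower must respect the given clues
    for (i, j) in clues:
        m.addConstr(x[i, j, G[i][j] - 1] == 1)
    # (N1) with z = 0: the follower must produce a grid different from G
    m.addConstr(gp.quicksum(x[i, j, G[i][j] - 1]
                            for i in range(n) for j in range(n)) <= n * n - 1)
    m.Params.OutputFlag = 0
    m.optimize()
    # infeasible <=> no alternative solution exists <=> the puzzle is valid
    return m.Status == GRB.INFEASIBLE
```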
## 4 Strengthening The Bilevel Formulation Through Valid Inequalities
Consider the Sudoku grid given in Figure 1(b). We can swap the \(3\)'s and \(8\)'s in the green-marked cells to get a new Sudoku grid \(G^{\prime}\) that has the same entries except for the cells marked in green. Thus, any valid puzzle \(P\) must have at least one clue in one of the green-marked cells. Similarly, we observe that it is possible to change the entries of
cells marked in blue or red. Thus, there must also be at least one clue in the cells marked red and one clue in the cells marked blue. We call a set of cells \(U\) an _unavoidable set_ for a Sudoku grid \(G\) if there exists a Sudoku grid \(G^{\prime}\neq G\) that differs from \(G\) only on cells in \(U\). We call an unavoidable set _minimally unavoidable_ if it contains no proper subset that is again unavoidable. In what follows, we represent Sudoku grids as \(n\times n\) matrices with entries from \([n]\) and Sudoku puzzles as \(n\times n\) matrices with entries from \([n]\cup\{0\}\), where \(0\) marks an empty cell.
**Proposition 1**.: _Let \(G\) be a grid and \(P\) a puzzle such that \(G\) is a solution of \(P\). Then \(P\) is a valid puzzle if and only if, for every minimally unavoidable set \(U\) of \(G\), there exists a cell \((i,j)\in U\) that is given as a clue, i.e., \(P_{ij}\neq 0\)._
Proof.: To show sufficiency, suppose that there exists a minimally unavoidable set \(U\) such that \(P_{ij}=0\) for all cells \((i,j)\in U\). By definition there exists a Sudoku grid \(G^{\prime}\neq G\) which differs from \(G\) only in the entries of cells that are in \(U\). Since \(P_{ij}=0\) for all cells \((i,j)\in U\), \(G^{\prime}\) is also a solution of \(P\). Thus \(P\) is not a valid puzzle.
To show necessity, suppose that \(P\) is not a valid puzzle and there exists a Sudoku grid \(G^{\prime}\) which is a solution of \(P\) and \(G^{\prime}\neq G\). We define
\[U:=\{(i,j)\in\{1,\ldots,n\}^{2}\mid G^{\prime}_{ij}\neq G_{ij}\}\]
as the set of cells whose entry in \(G\) is different from its entry in \(G^{\prime}\). By construction, \(U\) is an unavoidable set and \(P_{ij}=0\) for all \((i,j)\in U\), as otherwise the entries of \(G\) and \(G^{\prime}\) would be identical there (both being solutions of \(P\)). If \(U\) is minimally unavoidable then we are done; otherwise, a proper subset of \(U\) is again unavoidable. Since \(U\) is finite, we can iterate the process until we end up with a minimally unavoidable set.
**Corollary 2**.: _Let \(G\) be a Sudoku grid and \(U\) be an arbitrary minimal unavoidable set. Then, the inequality_
\[\sum_{(i,j)\in U}y_{ij}\geq 1\] (U)
_is a globally valid inequality for the leader of our bilevel program. We call this inequality the **unavoidable set inequality** corresponding to \(U\)._
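The unavoidable set inequalities also yield a cheap lower bound on MSCP on their own: a minimum hitting set of any family of unavoidable sets can be no larger than an optimal clue set. A minimal sketch follows (our code and naming, with `gurobipy` as before):

```python
def hitting_set_lower_bound(unavoidable_sets, n=9):
    """Minimum-cardinality hitting set of the given unavoidable sets.
    By Corollary 2 every valid puzzle satisfies all inequalities (U),
    so the optimal value is a lower bound on the minimum clue number."""
    m = gp.Model("hitting_set")
    y = m.addVars(n, n, vtype=GRB.BINARY, name="y")
    for U in unavoidable_sets:
        m.addConstr(gp.quicksum(y[i, j] for (i, j) in U) >= 1)  # (U)
    m.setObjective(y.sum(), GRB.MINIMIZE)
    m.Params.OutputFlag = 0
    m.optimize()
    return int(round(m.ObjVal))
```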
We give a method to generate the set of unavoidable sets \(\mathcal{U}\). Let \(m\in\mathbb{N}\) with \(m\geq 1\). Consider the \(m\)-parameterized integer linear program,
\[\begin{aligned}
\min_{x}\;&0\\
\text{s.t.}\;&(G0)-(G3)\\
&\sum_{i=1}^{n}\sum_{j=1}^{n}x_{ijG_{ij}}=n^{2}-m&&(D1)\\
&x_{ijk}\in\{0,1\},\quad\forall\,i,j,k\in[n].
\end{aligned}\]
The integer linear program gives us a Sudoku grid \(G^{\prime}\) which differs from \(G\) in exactly \(m\) entries. We get that
\[U:=\{(i,j)\in\{1,\ldots,n\}^{2}\mid G_{ij}\neq G^{\prime}_{ij}\}.\]
is an unavoidable set of \(G\) by construction. We start with \(m=1\) and repeatedly solve the ILP, adding in each iteration the no-good cut constraint
\[\sum_{(i,j)\in U}x_{ijG_{ij}}\geq 1\] (N2)
which bars the ILP from returning any \(G^{\prime}\) whose associated unavoidable set is a superset of \(U\). The ILP will thus return a different unavoidable set of size \(m\) in each iteration. Once all unavoidable sets of size \(m\) have been generated, we move on to \(m+1\). Note that we could have equivalently formulated this as a minimization problem.
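A compact sketch of this enumeration loop is given below (our illustrative code, rebuilding the model in each iteration for clarity; it again assumes the hypothetical `sudoku_ilp_model` helper):

```python
def generate_unavoidable_sets(G, target, n=9):
    """Enumerate minimal unavoidable sets of G by solving (D1) with
    no-good cuts (N2), increasing m whenever the ILP turns infeasible."""
    found, m_size = [], 1
    while len(found) < target and m_size <= n * n:
        model, x = sudoku_ilp_model(n)
        model.Params.OutputFlag = 0
        # (D1): the new grid differs from G in exactly m_size cells
        model.addConstr(gp.quicksum(x[i, j, G[i][j] - 1]
                                    for i in range(n) for j in range(n))
                        == n * n - m_size)
        # (N2): exclude supersets of previously found unavoidable sets
        for U in found:
            model.addConstr(gp.quicksum(x[i, j, G[i][j] - 1]
                                        for (i, j) in U) >= 1)
        model.optimize()
        if model.Status == GRB.INFEASIBLE:
            m_size += 1  # all sets of size m_size found; move on
            continue
        # cells where the solution deviates from G form an unavoidable set
        found.append([(i, j) for i in range(n) for j in range(n)
                      if x[i, j, G[i][j] - 1].X < 0.5])
    return found
```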
**Proposition 3**.: _For the procedure described above, the following holds:_

(i) _at each iteration, the resulting unavoidable set is always a minimally unavoidable set;_

(ii) _repeating the procedure eventually yields all minimal unavoidable sets._
Proof.: To show (i), let \(\bar{U}\) be an unavoidable set that is not minimal and \(U\subset\bar{U}\) a minimal unavoidable set with \(m:=|U|\). When generating all unavoidable sets of size \(m\), a no-good cut for \(U\) will also be added to the formulation. Thus, any \(G^{\prime}\) which generates \(\bar{U}\) will be infeasible because \(U\subset\bar{U}\). We get (ii) by construction.
## 5 Computational Results
In this section, we investigate the performance of our models for solving MSCP over \(50\) instances of \(9\times 9\) Sudoku grids. All of our computations ran on a single thread of an Intel Xeon E5-2630V4 2.2 GHz. A wall-clock time limit of 4 days and a memory limit of 16 GB were used for each run. The algorithm to generate unavoidable set cuts uses Gurobi 9.5.1 [16] as an ILP solver, and we use the bilevel solver from [12] to solve the main model, where the authors granted us a license upon request. The solver uses CPLEX 12.7 [19] to solve linear programming relaxations. The code used for this section along with the computational results can be found at https://github.com/gtjusila/minimum-sudoku.
The 50 instances are split into two groups of 25. The first group is randomly selected from a list of Sudoku puzzles with 17 clues [28]. The second group is randomly selected from the list of Sudoku puzzles with a difficulty rating of more than \(11\) (the maximum difficulty rating being \(12\)) maintained by the new Sudoku players forum\({}^{2}\). The known puzzles for all instances in this second group contain more than \(20\) clues each. To get a diverse instance set, we also ensure that we select Sudokus with different minlex forms [22]. To convert the Sudoku grids to minlex form, we use the code from [8].
Footnote 2: http://forum.enjoysudoku.com/the-hardest-Sudokus-new-thread-t6539-600.html#p277835
First, we evaluate the performance of the unavoidable set generating procedure. For each of our \(9\times 9\) instances, we generate \(5000\) minimal unavoidable sets. In all instances, we generate all minimal unavoidable sets of size \(16\) or less. We observed no unavoidable sets of sizes \(5\) and \(7\), which leads us to conjecture that none exist for any instance. In 39 out of 50 instances, we generated all minimal unavoidable sets of size less than or equal to 17. We plot the geometric mean of the time needed to generate the \(k\)th unavoidable set over all \(50\) of our instances in Figure 2(a). We see that, generally, the time needed to generate an unavoidable set increases as \(k\) gets larger. An interesting feature of the figure is the periodic peaks. Looking deeper into the results of individual instances, we see that as we try to enumerate all minimal unavoidable sets of size \(n\in\mathbb{N}\), the time increases in each iteration. This is expected, as in each iteration there are fewer and fewer minimal unavoidable sets of size \(n\) available, and thus they become increasingly hard to find. To visualize this effect, we computed the average number of unavoidable sets of size less than or equal to \(n\) for \(n=11,\ldots,17\) (for the instances in which not all unavoidable sets of size \(17\) have been found, we assume the number of unavoidable sets of size less than \(17\) to be \(5000\)) and drew them as vertical lines in Figure 2(a). One can think of these lines as the average point where an instance switches from searching for unavoidable sets of size \(n\) to size \(n+1\). The leftmost line represents \(n=11\).
Figure 2(a) does not capture how extreme these peaks can be. To see this effect, we provide the frequency distribution of the generation time of unavoidable sets in Table 1. Though the majority of the minimal unavoidable sets (\(94.10\%\)) can be generated in less than 1 minute, some minimal unavoidable sets are very hard to find, with the longest taking nearly 3 hours.
Lastly, it is important to remember that we are not obliged to generate all unavoidable sets since their sole function is to help reduce the feasible region of our bilevel program and improve performance. For this reason, we find it
Figure 2: Experiment Results For Cut Generation
helpful to plot the average number of cuts generated as a function of time. To do this, for each instance \(I\) and each \(n\in[5000]\), we calculate the cumulative time our model takes to generate \(n\) unavoidable sets of instance \(I\). We then take the geometric mean of the cumulative time for each \(n\) over all the instances and plot the result as a function of \(n\). The resulting plot is shown in Figure 2(b). The figure reiterates that generating unavoidable sets is quicker in the beginning and shows how it becomes more difficult over time. It takes less than \(20000\) seconds to generate the first \(2000\) unavoidable sets and nearly \(40000\) seconds to generate the next \(2000\).
We now test the effect of unavoidable set inequalities on our model by varying the number of inequalities used. For our initial analysis, we do not take into account the time needed to generate the unavoidable sets. We test 500, 1000, 3000, and 5000 unavoidable set inequalities, where we use the first \(n\) inequalities generated by our unavoidable set generating algorithm. A summary of the optimization results is shown in Table 2. \(45\) out of the \(50\) instances of size \(9\times 9\) were solved to optimality in at least one solver setting. Interestingly, all instances in the \(17\)-clue puzzle group were solved to optimality in at least one solver setting, and they generally solve faster than the instances without a known \(17\)-clue puzzle; see Figure 4.
We plot the resulting performance profile [10] in Figure 3(a). We observe that adding too few or too many inequalities results in slower optimization times. Nearly \(60\%\) of the instances solve fastest on models that use \(1000\) unavoidable set inequalities, followed by slightly under \(20\%\) of instances that solve fastest on models that use \(500\) unavoidable set inequalities. This is further supported by the fact that the largest number of instances is solved to optimality when using \(1000\) unavoidable set inequalities. Looking deeper into the node-level data presented in Table 3, we see that too few inequalities result in a huge increase in the number of nodes processed to prove optimality, while too many inequalities result in a huge decrease in node throughput. The best choice is therefore likely to be in the middle.
Finally, we take into account the time needed to generate the inequalities. Note that we only need to compare the models with \(500\) and \(1000\) inequalities, since the \(1000\)-inequality models outperform the \(3000\)- and \(5000\)-inequality cut
| generation time [s] | # of unavoidable sets |
|---:|---:|
| \(\leq 1\) | 30763 |
| \(1-10\) | 136009 |
| \(10-30\) | 42389 |
| \(30-60\) | 26092 |
| \(60-300\) | 13809 |
| \(300-600\) | 542 |
| \(600-1800\) | 298 |
| \(1800-3600\) | 62 |
| \(3600-7200\) | 31 |
| \(\geq 7200\) | 5 |

Table 1: Frequency distribution table of the time needed to generate an unavoidable set
| # of unavoidable set cuts | # of instances optimal | # of instances at time limit |
|---:|---:|---:|
| 500 | 37 | 13 |
| 1000 | 43 | 7 |
| 3000 | 36 | 14 |
| 5000 | 29 | 21 |

Table 2: Summary of end results for the \(9\times 9\) standard bilevel model with different numbers of unavoidable set cuts
| # of unavoidable set cuts | node count | time per node [s] | total runtime [s] |
|---:|---:|---:|---:|
| 500 | 2503382 | 0.028 | 70312 |
| 1000 | 2097150 | 0.031 | 64941 |
| 3000 | 1126612 | 0.082 | 92931 |
| 5000 | 887962 | 0.139 | 123472 |

Table 3: Geometric average of node count, time per node, and runtime for the different settings
models. To compare whether it is worth generating the \(500\) extra unavoidable sets, we compute the time needed to solve the bilevel instance plus the time needed to generate the unavoidable sets and plot the performance profile for instances with \(500\) and \(1000\) inequalities. The calculation is done using the data from the unavoidable set generation experiments. The resulting plot is shown in Figure 3(b). We see that even accounting for the unavoidable set cut generation time, using \(1000\) inequalities is still superior to using \(500\) inequalities.
For our experiments, we also obtained preliminary results with the MiBS solver [30]. Even on easy instances, however, we quickly observed that MiBS required much more time than the solver from [13]. We believe that this is in large part due to the inability of MiBS to find a primal solution to our problem.
## 6 Generalization of the Model to other Fewest Clue Problems
A desire for unique solutions is not exclusive to Sudoku; other example problems include Slither Link and Cross Sum [32]. This motivates the definition of the "Fewest Clue Problem" (FCP) class in [6]. In this section, we show how our model can be adapted for FCP versions of other puzzles that admit a binary linear formulation.
We restate the definition of the Fewest Clue Problem as in [6]. Let \(A\) be a problem in NP. We denote with \(R_{A}\) the set of instance-certificate pairs, where the certificates are binary strings of length \(l\). For a given instance \(I\) of \(A\), we call a string \(c\in\{0,1,\bot\}^{l}\) a _clue_ if there exists a certificate \(c^{*}\) such that \((I,c^{*})\in R_{A}\) and \(c_{i}=c_{i}^{*}\) for all indices \(i\in[l]\) where \(c_{i}\neq\bot\). The symbol \(\bot\) can be interpreted as a missing or non-specified entry. We call \(c^{*}\) a _satisfying solution_ to clue \(c\). The _size_ of a clue is the number of non-\(\bot\) characters.
We define \(\mathrm{FCP}\)\(A\) to be the decision problem: given an instance \(I\), a certificate \(c^{*}\) and an integer \(k\), does there exist a clue \(c\) of size at most \(k\) for which the unique satisfying solution is \(c^{*}\)? We note that our definition is a slight variant of that proposed in [6].
Figure 4: Comparison Of Solving Time For 17 Clue Instances and Non 17 Clue Instances
Figure 3: Experiment Results For Solving 9 by 9 instances
We make the assumption that there exists an \(l\)-dimensional polytope \(\mathcal{Q}\) such that \(c\) is a valid certificate if and only if \(c\) is binary and \(c\in\mathcal{Q}\). The \(\mathrm{FCP}\ A\) can then be written as a bilevel optimization problem as follows:
\[\begin{aligned}
\min_{x,y,z}\;&\sum_{i=1}^{l}y_{i}\\
\text{s.t.}\;&z=1\\
&y_{i}\in\{0,1\},\quad\forall\;i\in[l]\\
&(x,z)\in S(y)
\end{aligned}\]
where \(S(y)\) is the set of optimum solutions to the \(y\)-parameterized follower problem:
\[\begin{aligned}
\min_{x,z}\;&z\\
\text{s.t.}\;&x\in\mathcal{Q}\\
&x_{i}\geq y_{i},\quad\forall\;i\in[l]\text{ with }c_{i}^{*}=1\\
&\sum_{i\in[l],\,c_{i}^{*}=1}x_{i}+\sum_{i\in[l],\,c_{i}^{*}=0}(1-x_{i})-z\leq l-1&&(NG)\\
&x_{i},z\in\{0,1\},\quad\forall\;i\in[l].
\end{aligned}\]
The leader program determines which indices are given in the clue, while the follower tries to find an alternative solution respecting the clue. Constraint (\(NG\)) is a no-good constraint prohibiting the assignment \(x=c^{*}\) if \(z=0\); it is trivially fulfilled if \(z=1\). It generalizes the corresponding constraint of the Sudoku-specific model presented in Section 3.
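For a problem whose certificate polytope \(\mathcal{Q}\) is already encoded as a binary program, the follower's uniqueness check (with \(z\) fixed to \(0\)) takes the same form as in the Sudoku case. A hedged generic sketch follows, with all naming ours:

```python
def clue_is_valid(model, x, c_star, revealed):
    """Follower feasibility check for FCP A with z = 0. `model` and the
    binary Gurobi variables x[0..l-1] encode membership in Q; `revealed`
    lists the indices i where the clue fixes c_i = c_star_i."""
    l = len(c_star)
    for i in revealed:
        model.addConstr(x[i] == c_star[i])
    # (NG) with z = 0: exclude the certificate c_star itself
    model.addConstr(gp.quicksum(x[i] if c_star[i] == 1 else 1 - x[i]
                                for i in range(l)) <= l - 1)
    model.Params.OutputFlag = 0
    model.optimize()
    return model.Status == GRB.INFEASIBLE
```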
## 7 Conclusion and Outlook
In this paper, we have shown that the Minimum Sudoku Clue problem can be formulated and solved as a binary bilevel linear programming problem. By introducing unavoidable set inequalities, we showed that the formulation can be tightened and that solver performance can be improved. Our models are able to compute a provably optimal solution to the Minimum Sudoku Clue problem in \(95\%\) of the instances. Despite these performance results, the inherent complexity of the Minimum Sudoku Clue problem and the more general Fewest Clue problem complicates scaling to larger instances. Unlike the specialized ad hoc enumeration techniques developed in the Sudoku literature [9], however, our approach naturally benefits from the continued performance improvements of mixed-integer programming solvers.
We see three main avenues of future research for the Minimum Sudoku Clue problem. First, we can use faster unavoidable set finding algorithms, such as the one proposed in [25]. Second, we can develop formulations that exploit the symmetries of Sudoku grids. Third, we can develop a branch-and-cut approach leveraging unavoidable set inequalities to separate non-feasible solutions throughout the branch-and-bound process instead of initially applying a large number of inequalities.
## Acknowledgments
The work for this article has been conducted in the Research Campus MODAL funded by the German Federal Ministry of Education and Research (BMBF) (fund numbers 05M14ZAM, 05M20ZBM). The described research activities are funded by the Federal Ministry for Economic Affairs and Energy within the project UNSEEN (ID: 03EI1004-C). We thank Markus Sinnl and coauthors for providing us a license to their bilevel solver, Fakultät II at the Technische Universität Berlin for allowing us to use their HPC facility, and Kai Hoppmann for initial advice on the integer formulation.
| 2306.08485 | Graph-Aligned Random Partition Model (GARP) | Giovanni Rebaudo, Peter Mueller | 2023-06-14T13:01:18Z | http://arxiv.org/abs/2306.08485v2 |
###### Abstract
Bayesian nonparametric mixtures and random partition models are powerful tools for probabilistic clustering. However, standard independent mixture models can be restrictive in some applications such as inference on cell lineage due to the biological relations of the clusters. The increasing availability of large genomic data requires new statistical tools to perform model-based clustering and infer the relationship between homogeneous subgroups of units. Motivated by single-cell RNA applications we develop a novel dependent mixture model to jointly perform cluster analysis and align the clusters on a graph. Our flexible graph-aligned random partition model (GARP) exploits Gibbs-type priors as building blocks, allowing us to derive analytical results on the graph-aligned random partition's probability mass function (pmf). We derive a generalization of the Chinese restaurant process from the pmf and a related efficient and neat MCMC algorithm to perform Bayesian inference. We perform posterior inference on real single-cell RNA data from mice stem cells. We further investigate the performance of our model in capturing the underlying clustering structure as well as the underlying graph by means of simulation studies.
**Graph-Aligned Random Partition Model (GARP)**
Giovanni Rebaudo\({}^{a}\) ([email protected])
Peter Muller\({}^{b}\) ([email protected])
\({}^{a}\)Collegio Carlo Alberto & Department of ESOMAS, University of Turin, IT
\({}^{b}\)Department of Statistics and Data Sciences & Department of Mathematics,
University of Texas at Austin, USA
_Keywords:_ Bayesian Nonparametrics, Random Partition Model, Gibbs-Type Prior,
Dependent Mixture Model, Exchangeability, Single-Cell RNA
## 1 Introduction
We introduce a graph-aligned random partition model with one set of clusters being identified as vertices of a graph and other clusters being interpreted as edges between those. The model construction is motivated by the increasing availability of genomic data that requires new statistical tools to perform inference and uncertainty quantification on homogeneous subgroups of units (e.g., single-cells) and hypothesized relationships between the subgroups (e.g., transitions between the subgroups). In the present article, we deal with single-cell RNA sequencing experiments (scRNA-seq) that provide an unprecedented opportunity to study cellular heterogeneity and the evolution of complex tissues. The interest is to identify the main homogeneous cell subpopulations (i.e., clusters) in terms of gene expressions and jointly infer transitions of cells between these.
Dirichlet process (DP) mixtures (Lo, 1984) are well-established Bayesian nonparametric (BNP) models to infer homogeneous subgroups of observations via probabilistic clustering. However, the law of the random partition induced by the DP, related to the so-called Chinese restaurant process (CRP), is controlled by a single parameter. This leaves DP mixture models too restrictive for many applications and several alternative models were introduced in the literature to allow more flexible clustering. This includes the symmetric finite Dirichlet prior (Green and Richardson, 2001), the Pitman-Yor process (PYP) (Pitman and Yor, 1997), the normalized inverse Gaussian (NIG) (Lijoi _et al._, 2005), the normalized generalized gamma process (NGGP) (Lijoi _et al._, 2007b), mixture of finite mixtures (MFM) (Nobile, 1994; Nobile and Fearnside, 2007; Miller and Harrison, 2018) and mixture of DP (MDP) (Antoniak, 1974). All these belong to the larger family of Gibbs-type priors (Gnedin and Pitman, 2006) that can be seen as a natural, flexible generalization of the DP (De Blasi _et al._, 2015).
However, Gibbs-type processes entail independent cluster-specific parameters not allowing us to infer the relationship between clusters as needed in our motivating example. Recently, repulsive priors that allow for dependent cluster-specific parameters were successfully introduced to favor more parsimonious and well-separated clusters (Petralia _et al._, 2012; Xu _et al._, 2016; Beraha _et al._, 2022). Repulsive mixtures introduce (negative) dependence between cluster-specific values to better separate clusters. However, these models still stop short of inferring a biological relationship between the clusters, such as aligning the clusters on a graph, as desired in our framework.
In this article, we propose a graph-aligned random partition model (GARP) that exploits the flexible, but tractable, building blocks of Gibbs-type priors to build a random partition aligned on a graph. The desired interpretation of clusters as vertices and edges in a graph naturally gives rise to dependent priors on cluster-specific parameters. In the motivating example with single-cell RNA-seq data, vertex-clusters represent homogeneous cell subpopulations and edge-clusters correspond to cells that are transitioning between those. See Figure 1 for a scatter plot of single-cell RNA data in a two-dimensional space that captures most of the recorded genetic expressions of mice stem cell data.
The remainder of the article is as follows. In Section 2 we introduce a model for graph-aligned probabilistic clustering. In Section 3 we introduce special examples. In Sections 4, 5 and 6 we study a useful approximation, implied homogeneity assumptions, and identifiability of vertices versus edges. Section 7 applies the model to single-cell RNA-seq data of mice stem cells and Section 8 concludes with final comments. Substantive additional details, including code, proofs, validations on simulated data, a characterization in terms of discrete probabilities, a discussion of the hyperparameters' choice, and details on the strategy to obtain point estimates from posterior samples are available as an online supplement.
## 2 Graph-Aligned Random Partition Model
We introduce a graph-aligned random partition model (GARP) for \(\mathbf{y}=\{\mathbf{y}_{i}:i=1,\ldots,N\}\), \(\mathbf{y}_{i}\in\mathbb{R}^{d}\). The two main features of the model are a two-level random partition structure that assigns observations into vertex-clusters and edge-clusters, and a mixture of normal sampling models with cluster-specific parameters that reflect this split into vertex and edge-clusters. That is, the mixture of normal models is set up such that observations in vertex-clusters form homogeneous subsets in the Euclidean space, and observations in edge-clusters are located between the adjacent vertices. We characterize the model in three different representations that are minor variations of representations that are traditionally used for infinitely exchangeable random partition models (Pitman, 1996), including (1) the probability mass function (pmf) of the graph-aligned random partition via the introduction of exchangeable partition probability functions (EPPF); (2) a composition of Polya urn schemes, i.e., predictive probability functions, using a generalized CRP (gCRP); and (3) the configuration of ties that is implied by sampling from a composition of discrete random probability measures, similar to the construction of species sampling processes (SSP). See Pitman (1996) and Lee _et al._ (2013a) for details on these three characterizations for infinitely exchangeable random partitions (without alignment on a graph).
### A Gaussian Mixture over Vertices and Edges
We start the model construction with a sampling model given the latent graph-aligned partition. We need some notation. Let \(V_{i}\) be an indicator for observation \(i\) being placed into a vertex-cluster and let \(Z_{i}\) denote a cluster membership indicator. We write \(\mathbf{V}=(V_{1},\ldots,V_{N})\) and \(\mathbf{Z}=(Z_{1},\ldots,Z_{N})\) (throughout \(\mathbf{x}\) denotes the collection of all previously
Figure 1: Two-dimensional representation of genetic expressions of the RNA mice single-cells data.
defined elements \(x_{a}\)). We denote with \(N_{v,N}=\sum_{i=1}^{N}V_{i}\) the number of observations in vertex-clusters, and with \(N_{e,N}=N-N_{v,N}\) the implied number in edge-clusters. For notational simplicity, we drop the subscript \({}_{N}\) when implied by the context. If \(i\) belongs to a vertex (i.e., \(V_{i}=1\)), then \(Z_{i}\in[K_{v}]\equiv\{1,\ldots,K_{v}\}\), where \(K_{v}\) is the random number of vertex-clusters. If \(i\) belongs to an edge (i.e., \(V_{i}=0\)), then \(Z_{i}=(k,k^{\prime})\), with \(k<k^{\prime}\) indicating the adjacent vertex-clusters. Let \(K_{e}\) denote the number of edge-clusters. Clearly, an edge must connect two vertices, implying \(K_{e}\leq\frac{K_{v}(K_{v}-1)}{2}\equiv M_{e}\). Finally, let \(\mathbf{Z_{v}}=(Z_{i}:V_{i}=1)\) and \(\mathbf{Z_{e}}=(Z_{i}:V_{i}=0)\) denote the set of cluster membership indicators for vertices and edges, respectively.
Given a graph-aligned random partition, we assume normal sampling
\[\mathbf{y}_{i}\mid Z_{i},\mathbf{\mu^{*}},\mathbf{\Sigma^{*}}\stackrel{{\text {ind}}}{{\sim}}\text{N}(\mathbf{y}_{i}\mid\mathbf{\mu^{*}_{Z_{i}}},\mathbf{\Sigma^{*}_{Z_{i }}}),\quad(i=1,\ldots,N), \tag{1}\]
keeping in mind that \(Z_{i}=k\) for \(V_{i}=1\) and \(Z_{i}=(k,k^{\prime})\) for \(V_{i}=0\). The cluster-specific parameters are defined as follows. For the vertex-parameters \(\mathbf{\theta^{*}_{k}}=(\mathbf{\mu^{*}_{k}},\mathbf{\Sigma^{*}_{k}})\) we assume (conditionally) conjugate normal-inverse Wishart priors
\[\mathbf{\theta^{*}_{k}}\mid K_{v}\stackrel{{\text{iid}}}{{\sim}}\text{NIW}(\mathbf{m}_{0},\lambda_{0},\nu_{0},\mathbf{\Psi}_{0}),\quad(k=1,\ldots,K_{v}), \tag{2}\]

with fixed hyperparameters \((\mathbf{m}_{0},\lambda_{0},\nu_{0},\mathbf{\Psi}_{0})\). The edge-parameters \(\mathbf{\theta^{*}_{(k,k^{\prime})}}\) are then defined through the parameters of the two adjacent vertex-clusters, so that observations in an edge-cluster are located between the corresponding vertices. Let \(n_{k}=\sum_{i=1}^{N}\mathbb{1}(V_{i}=1,Z_{i}=k)\) denote the sizes of the vertex-clusters and let \(n_{k,k^{\prime}}=\sum_{i=1}^{N}\mathbb{1}(V_{i}=0,Z_{i}=(k,k^{\prime}))\)
denote the sizes of the implied edge-clusters, with \(n_{k,k^{\prime}}=0\) indicating the lack of an edge between \(k,k^{\prime}\). We define a graph-aligned random partition model via the pmf of \(\mathbf{V},\mathbf{Z}\)
\[G^{(N)}(\mathbf{V},\mathbf{Z}) \propto p_{v}^{N_{v}}\,\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1}, \ldots,n_{K_{v}}\mid\alpha,\sigma)/K_{v}!\] \[(1-p_{v})^{N_{e}}\mathrm{DM}_{M_{e}}^{(N_{e})}((n_{k,k^{\prime}} )_{k<k^{\prime}}\mid\beta/M_{e})\,\mathbb{1}(\underbrace{\{N_{e}=0\}\cup\{M_{ e}>0\}}_{E_{N}}), \tag{4}\]
where \(\mathrm{EPPF}(\cdot\mid\alpha,\sigma)\) denotes the EPPF of a Gibbs-type prior, DM is the marginal likelihood of an \(M_{e}\)-symmetric Dirichlet-multinomial model (for categorical realizations, and defining \(\mathrm{DM}_{\cdot}^{(0)}(\cdot)=\mathrm{DM}_{0}^{(\cdot)}(\cdot)\equiv 1\)) and \(\mathbb{1}(\{N_{e}=0\}\cup\{M_{e}>0\})\) is an indicator that represents the constraint that edges can only be assigned if there are at least 2 vertices (\(M_{e}>0\), that is, \(K_{v}>1\)), or no units are assigned to edges (\(N_{e}=0\)). We will use \(E_{N}\) to refer to this truncation event. In particular, when \(K_{v}=1\) (and therefore \(M_{e}=0\)) (4) reduces to \(G^{(N)}(\mathbf{V},\mathbf{Z})\propto p_{v}^{N}\,\mathrm{EPPF}_{1}^{(N)}(N\mid\alpha,\sigma)\) with \(V_{i}=Z_{i}=1\), for all \(i\), and \(G^{(N)}(\mathbf{V},\mathbf{Z})=0\) for any other configuration \((\mathbf{V},\mathbf{Z})\), e.g., any configuration with \(N_{e}>0\) (i.e., \(E_{N}^{c}\)).
An EPPF characterizes the distribution of an exchangeable partition (Pitman, 1996), with \(\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}})\) being the probability of observing a particular (unordered) partition of \(N_{v}\) observations into \(K_{v}\) subsets of cardinalities \(\{n_{1},\ldots,n_{K_{v}}\}\). Since an EPPF refers to unordered partitions, we include the additional denominator \(K_{v}!\) for the ordered \(\mathbf{Z}\). See Section 5 for more discussion of the homogeneity assumptions implied by our model. We specify the EPPF as a Gibbs-type prior,
\[\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}}\mid\alpha,\sigma)=W_{ N_{v},K_{v}}\prod_{k=1}^{K_{v}}(1-\sigma)_{n_{k}-1}, \tag{5}\]
where \((x)_{n}=x(x+1)\ldots(x+n-1)\) represents the ascending factorial, \(\sigma<1\) is a discount parameter and the set of non-negative weights \(\{W_{n,k}:1\leq k\leq n\}\) satisfies the recursive equation \(W_{n,k}=(n-\sigma k)W_{n+1,k}+W_{n+1,k+1}\). The parameter \(\alpha\) in the conditioning set is used to define \(W_{n,k}\) for some of the upcoming examples. In a second step, the observations assigned to edges are (ordered) clustered using a DM distribution.
\[G^{(N)}((Z_{i}:\ V_{i}=0)\mid\mathbf{V},K_{v})=\mathrm{DM}_{M_{e}}^{(N_{e})}((n_{k,k^{\prime}})_{k<k^{\prime}}\mid\beta/M_{e})=\\ =\frac{\Gamma(\beta)}{\Gamma(N_{e}+\beta)}\prod_{(k,k^{\prime}):k<k^{\prime}\leq K_{v}}\frac{\Gamma(n_{k,k^{\prime}}+\beta/M_{e})}{\Gamma(\beta/M_{e})}. \tag{6}\]
Model (4) is a hierarchical constrained composition of a Gibbs-type prior and a symmetric-DM with hyperparameter \(\beta/M_{e}\). As we shall show, the model preserves most of the analytical and computational tractability of the simpler building blocks.
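For reference, the DM factor in (6) is straightforward to evaluate on the log scale. A minimal sketch in Python (our code and naming; `counts` collects the \(n_{k,k^{\prime}}\) over all \(M_{e}\) possible edges, including zeros):

```python
from scipy.special import gammaln

def log_dm_marginal(counts, beta):
    """Log of the symmetric Dirichlet-multinomial marginal (6), with
    concentration beta / M_e on each of the M_e possible edges."""
    M_e = len(counts)
    N_e = sum(counts)
    a = beta / M_e
    return (gammaln(beta) - gammaln(N_e + beta)
            + sum(gammaln(n + a) - gammaln(a) for n in counts))
```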
### Generalized Chinese Restaurant Process
In an alternative characterization of (4), the model can be defined as a truncated version of a composition of gCRP. We denote the latter, that is, the model before the truncation, as \(\widetilde{G^{(N)}}\) and refer to it as the _relaxed model_.
\[G^{(N)}(\mathbf{V},\mathbf{Z})\propto\widetilde{G^{(N)}}(\mathbf{V},\mathbf{Z})\mathbb{1}(E_{N }). \tag{7}\]
Recall that \(E_{N}=\{N_{e}=0\}\cup\{M_{e}>0\}\) is the truncation. In Section 4 we show that \(\widetilde{G^{(N)}}\) assigns high probability to \(E_{N}\), going to \(1\) as \(N\to\infty\) for most Gibbs-type priors.
The relaxed model \(\widetilde{G^{(N)}}(\mathbf{V},\mathbf{Z})\) is a hierarchical composition of tractable generalized Polya urn schemes, starting with the assignments to vertices or edges
\[V_{i}\overset{\text{iid}}{\sim}\text{Bern}(p_{v}),\quad(i=1,\dots,N). \tag{8}\]
Next, we sample cluster membership indicators \(\mathbf{Z_{v}}=(Z_{i}:\ V_{i}=1)\) for the vertex-clusters from the gCRP associated with Gibbs-type prior, i.e., \(\mathbf{Z_{v}}\mid\mathbf{V}\sim\text{gCRP}(\alpha,\sigma)\), with the gCRP implied by \(\widetilde{G^{(N)}}\) given as
\[\widetilde{G^{(N)}}\{Z_{i}=k\mid\mathbf{Z}^{-i},\mathbf{V}^{-i},V_{i}=1\}=\begin{cases}\frac{W_{N_{v},K_{v}^{-i}}}{W_{N_{v}-1,K_{v}^{-i}}}\,(n_{k}^{-i}-\sigma)&k\in[K_{v}^{-i}]\\ \frac{W_{N_{v},K_{v}^{-i}+1}}{W_{N_{v}-1,K_{v}^{-i}}}&k=K_{v}^{-i}+1.\end{cases} \tag{9}\]
Throughout \(\mathbf{x}^{-i}\) identifies a quantity after removing the element \(i\) from \(\mathbf{x}\). See Section 3 for examples of different gCRP and implied prior assumptions on the number of vertices.
Finally, the cluster membership indicators \(\mathbf{Z_{e}}\) for the observations in edges follow the Polya urn scheme induced by a DM distribution
\[\widetilde{G^{(N)}}\{Z_{i}=(k,k^{\prime})\mid V_{i}=0,\mathbf{Z}^{-i},E_{N}\} \propto n_{k,k^{\prime}}^{-i}+\beta/M_{e}, \tag{10}\]
with \(k<k^{\prime}\leq K_{v}\). Here, \(\beta/M_{e}\) favors sparsity as the dimension of the graph increases. Note that (8) might generate \(N_{e}>0\), even when (9) implies \(M_{e}=0\). For this case we define for completeness \(\widetilde{G^{(N)}}\{Z_{i}=(1,2)\mid V_{i}=0,\mathbf{Z}^{-i},E_{N}^{\,c}\}\equiv 1\) (without implications for \(G^{(N)}\), due to the truncation to \(E_{N}\) in (7)).
The aforementioned composition of urn schemes characterizes the GARP (4):
**Proposition 1**.: _The random partition structure of the GARP model (4) can be characterized as the truncated composition of gCRP defined in (7), (8), (9) and (10)._
We rely on this representation to derive an MCMC algorithm that generalizes the marginal MCMC algorithms for DP mixture models and Gibbs-type priors (Neal, 2000;
De Blasi _et al._, 2015; Miller and Harrison, 2018). Moreover, as we shall see, the probability of the truncation event \(E_{N}\) is high and rapidly goes to 1 in most cases.
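To make the generative scheme concrete, the following sketch draws a graph-aligned partition from the GARP prior with a DP (Example 3 below) on the vertex-clusters, by proposing from the relaxed composition (8)-(10) and rejecting proposals that violate \(E_{N}\). The hyperparameter values are illustrative and the naming is ours.

```python
import numpy as np

def sample_garp_partition(N, p_v=0.7, alpha=1.0, beta=1.0, seed=None):
    """Draw (V, Z_v, Z_e) from the GARP prior (4) with a DP on the
    vertex-clusters, via rejection sampling from the relaxed model (7)."""
    rng = np.random.default_rng(seed)
    while True:
        V = rng.random(N) < p_v                       # eq. (8)
        sizes, Zv = [], []                            # vertex-cluster sizes n_k
        for _ in range(int(V.sum())):                 # gCRP (9), DP case
            w = np.array(sizes + [alpha], dtype=float)
            k = rng.choice(len(w), p=w / w.sum())
            if k == len(sizes):
                sizes.append(0)                       # open a new vertex
            sizes[k] += 1
            Zv.append(k)
        K_v, N_e = len(sizes), N - int(V.sum())
        if N_e == 0:                                  # E_N holds trivially
            return V, Zv, []
        if K_v < 2:                                   # E_N violated: reject
            continue
        M_e = K_v * (K_v - 1) // 2
        edges = [(k, kk) for k in range(K_v) for kk in range(k + 1, K_v)]
        counts, Ze = np.zeros(M_e), []
        for _ in range(N_e):                          # DM urn (10)
            w = counts + beta / M_e
            e = rng.choice(M_e, p=w / w.sum())
            counts[e] += 1
            Ze.append(edges[e])
        return V, Zv, Ze
```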
Composition of Discrete Random Probabilities. Finally, in Section S.2 of the supplementary materials we derive a third characterization of the proposed GARP. We define \(\widetilde{G^{(N)}}\) as a graph-aligned random partition (with unique atoms) implied by the ties under conditionally i.i.d. sampling of \(\mathbf{\theta}_{i}\). Such a characterization will be used in a lemma to prove Theorem 3 and can be used to connect with existing BNP literature to derive a conditional Gibbs sampler.
## 3 Specific Model Choices
Conditioning on the vertex assignments \(\mathbf{V}\), under the relaxed model \(\widetilde{G^{(N)}}\) the distribution of the clustering indicators \(\mathbf{Z}_{v}\) is given by the EPPF of a Gibbs-type prior (Gnedin and Pitman, 2006; De Blasi _et al._, 2015). We introduce four specific choices, stating the EPPF\({}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}})\) for partitioning \(N_{v}\) observations into \(K_{v}\) vertices. Table 1 shows the corresponding expressions for \(\widetilde{G^{(N)}}\{Z_{i}=k\mid V_{i}=1,\mathbf{Z}_{v}^{-i},\mathbf{V}^{-i}\}\) in the gCRP of (9), and the weights and atoms for \(P_{v}=\sum_{m=1}^{M_{v}}\pi_{m}\delta_{\tilde{\mathbf{\theta}}_{m}}\) in (S.1). Throughout, the prior for cluster-specific parameters remains the NIW in (2).
| Ex. | \(k\in\mathbf{Z}_{v}^{-i}\) | \(k=K_{v}^{-i}+1\) | \(P(\pi_{1},\pi_{2},\ldots\mid M_{v})\) | \(p(M_{v}=m)\) |
|:---:|:---:|:---:|:---:|:---:|
| 1 | \(n_{k}^{-i}+\rho\) | \(\rho(M_{v}-K_{v}^{-i})\) \(^{(a)}\) | \(\mathrm{Dir}(\rho,\ldots,\rho)\) | fixed \(M_{v}\in\mathbb{N}\) |
| 2 | \((n_{k}^{-i}+1)(N_{v}^{-i}-K_{v}^{-i}+\gamma)\) | \((K_{v}^{-i})^{2}-K_{v}^{-i}\gamma\) | \(\mathrm{Dir}(1,\ldots,1)\) | \(\frac{\gamma(1-\gamma)_{m-1}}{m!}\) |
| 3 | \(n_{k}^{-i}\) | \(\alpha\) | \(\mathrm{GEM}(\alpha)\) \(^{(b)}\) | \(M_{v}=\infty\) |
| 4 | \(n_{k}^{-i}-\sigma\) | \(\alpha+K_{v}^{-i}\sigma\) | \(\mathrm{GEM}(\alpha,\sigma)\) \(^{(b)}\) | \(M_{v}=\infty\) |

Table 1: The second and third columns give \(\widetilde{G^{(N)}}\{Z_{i}=k\mid V_{i}=1,\mathbf{Z}_{v}^{-i},\mathbf{V}^{-i}\}\) up to proportionality, for an existing cluster \(k\in\mathbf{Z}_{v}^{-i}\) and a new cluster \(k=K_{v}^{-i}+1\), respectively.

\(^{(a)}\) subject to \(K_{v}^{-i}<M_{v}\).

\(^{(b)}\) GEM stands for the distribution of probability weights after Griffiths, Engen, and McCloskey (Ewens, 1990), using the 1-parameter version defined there and the related 2-parameter extension.
**Example 1** (\(M_{v}\)-dimensional symmetric Dirichlet).: _If prior information on an upper bound \(M_{v}\) on the number of vertices is available we can proceed with a finite-dimensional symmetric Dirichlet prior (Green and Richardson, 2001)._
\[\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}})=\frac{M_{v}!}{(M_{v}-K_{v })!}\frac{\Gamma(\rho\,M_{v})}{\Gamma(N_{v}+\rho\,M_{v})\Gamma(\rho)^{K_{v}}} \prod_{k=1}^{K_{v}}\Gamma(n_{k}+\rho). \tag{11}\]
Allowing for unknown \(M_{v}\) the model becomes a mixture of symmetric Dirichlet, that is, a mixture of finite mixtures (MFM). MFMs can be particularly interesting for allowing consistent estimation of any finite number of clusters (Nobile, 1994; Miller and Harrison, 2018). MFMs are a special case of Gibbs-type priors. A relevant example is the _Gnedin process_.
**Example 2** (Gnedin process, with \(\sigma=-1\)).: _Under the Gnedin prior with parameter \(\gamma\in(0,1)\) the \(\mathrm{EPPF}_{K_{v}}^{(N_{v})}\) in (4) becomes_
\[\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}})=\sum_{m=1}^{\infty} \mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}}\mid M_{v}=m)\,p(M_{v}= m),\]
_where \(\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}}\mid M_{v}=m)\) is the \(\mathrm{EPPF}\) of the \(M_{v}\)-symmetric Dirichlet prior in (11), with \(\rho=1\) and \(p(M_{v}=m)=\frac{\gamma(1-\gamma)_{m-1}}{m!}\)._
The gCRP for the Gnedin process allows tractable analytical results and efficient algorithms. Moreover, the Gnedin process entails a distribution on the number of components \(M_{v}\) that has the mode at \(1\), a heavy tail, and infinite expectation (Gnedin, 2010). Therefore, the implied MFM favors a small number of vertices, while also being robust due to the heavy tail distribution of \(M_{v}\).
Note that one can use \(M_{v}=\infty\) to let the number of vertices (i.e., \(K_{v}\)) grow to infinity with \(N_{v}\). Examples are the DP which entails a logarithmic growth of the number of vertices and the PYP which entails a polynomial growth of the number of vertices.
**Example 3** (Dp).: _Under the DP prior with parameter \(\alpha>0\) the \(\mathrm{EPPF}_{K_{v}}^{(N_{v})}\) in (4) becomes_
\[\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}})=\frac{\alpha^{K_{v}} \Gamma(\alpha)}{\Gamma(\alpha+N_{v})}\prod_{k=1}^{K_{v}}(n_{k}-1)!\]
**Example 4** (Pyr).: _Under a PYP prior with parameters \(\sigma\in[0,1)\) and \(\alpha>0\) the \(\mathrm{EPPF}_{K_{v}}^{(N_{v})}\) in (4) becomes_
\[\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}})=\frac{\Gamma(\alpha+1 )\prod_{k=1}^{K_{v}-1}(\alpha+k\sigma)}{\Gamma(\alpha+N_{v})}\prod_{k=1}^{K_{v }}(1-\sigma)_{n_{k}-1}.\]
_With \(\sigma=0\) the PYP reduces to the DP._
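For later reference, the EPPFs of Examples 3 and 4 are easy to evaluate on the log scale, using \((1-\sigma)_{n-1}=\Gamma(n-\sigma)/\Gamma(1-\sigma)\). A brief sketch (our code and naming):

```python
import numpy as np
from scipy.special import gammaln

def log_eppf_pyp(sizes, alpha, sigma=0.0):
    """Log EPPF of the PYP (Example 4); sigma = 0 recovers the DP of
    Example 3. `sizes` are the cluster cardinalities (n_1, ..., n_K)."""
    K, N = len(sizes), sum(sizes)
    out = gammaln(alpha + 1) - gammaln(alpha + N)
    out += np.sum(np.log(alpha + sigma * np.arange(1, K)))
    out += sum(gammaln(n - sigma) - gammaln(1 - sigma) for n in sizes)
    return out
```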
Other popular sub-classes of Gibbs-type priors include the NGGP (Lijoi _et al._, 2007b), the NIG (Lijoi _et al._, 2005, 2007a), and the MFM (Nobile and Fearnside, 2007; Miller and Harrison, 2018). See De Blasi _et al._ (2015) for a comprehensive review of Gibbs-type priors.
Finally, we note that here we focus on prior elicitation of the Gibbs-type random partition that controls the vertex-clusters and the number of vertices (i.e., \(K_{v}\leq\min(M_{v},N_{v})\leq\min(M_{v},N)\)). Given \(K_{v}\), the possible number of edges is finite. The only Gibbs-type prior with a finite fixed number of components \(M_{e}\) is the symmetric Dirichlet (see, e.g., De Blasi _et al._, 2015), that is, the \(\mathrm{DM}_{M_{e}}\) in (4). Although the preceding discussion focuses on the Gibbs-type partition that controls the vertex assignments, the hierarchical definition (e.g., in Section 2.3) entails similar flexibility in the joint prior elicitation of the edge assignments.
## 4 Goodness of the Approximation
We discuss the nature of the approximation of the GARP model in (4) by the relaxed model \(\widetilde{G^{(N)}}\), and why it is a good approximation of \(G^{(N)}\), justifying the prior elicitation of \(G^{(N)}\) via \(\widetilde{G^{(N)}}\). Importantly, the results allow us to effectively sample from the GARP via rejection sampling, using proposals from \(\widetilde{G^{(N)}}\).
**Proposition 2**.: _The probability of the truncation event \(E_{N}\) in (7) is_
\[\widetilde{G^{(N)}}\{E_{N}\}=p_{v}^{N}+\sum_{n_{v}=2}^{N-1}{N\choose n_{v}}p_{ v}^{n_{v}}(1-p_{v})^{(N-n_{v})}\big{[}1-(1-\sigma)_{n_{v}-1}W_{n_{v},1}\big{]}. \tag{12}\]
Here \(p_{v}^{N}=\widetilde{G^{(N)}}\{N_{v}=N\}\), and \((1-\sigma)_{n_{v}-1}W_{n_{v},1}\) in the second term arises from (5) as the probability given \(\{N_{v}=n_{v}\}\) of having a single vertex, i.e., \(\widetilde{G^{(N)}}\{K_{v}=1\mid N_{v}=n_{v}\}=\text{EPPF}_{1}^{(n_{v})}(n_{v})\). For the Gibbs-type priors in the following examples, the latter reduces to simple analytical expressions.
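For instance, for the DP of Example 3 one has \(\widetilde{G^{(N)}}\{K_{v}=1\mid N_{v}=n_{v}\}=\Gamma(\alpha+1)\Gamma(n_{v})/\Gamma(\alpha+n_{v})\), and (12) can be evaluated directly. A small numerical sketch (our code; parameter values illustrative):

```python
import numpy as np
from scipy.special import gammaln

def prob_truncation_event_dp(N, p_v=0.7, alpha=1.0):
    """Evaluate (12) for the DP (Example 3): the probability of E_N
    under the relaxed model, i.e., the acceptance probability of the
    rejection sampler."""
    total = p_v ** N
    for n_v in range(2, N):
        log_binom = (gammaln(N + 1) - gammaln(n_v + 1) - gammaln(N - n_v + 1)
                     + n_v * np.log(p_v) + (N - n_v) * np.log(1.0 - p_v))
        p_one_cluster = np.exp(gammaln(alpha + 1) + gammaln(n_v)
                               - gammaln(alpha + n_v))
        total += np.exp(log_binom) * (1.0 - p_one_cluster)
    return total

# e.g. prob_truncation_event_dp(100) is close to 1, consistent with
# Theorem 1: the relaxed model concentrates on E_N as N grows.
```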
In the upcoming discussion, we introduce several closely related distributions. To avoid confusion we provide a brief summary and list of defined distributions in Table S.1 in the supplementary materials. Let \(\widetilde{G^{(N)}_{\textsc{VZ}}}\) denote the law of \(V_{i}\), \(i=1,\ldots,N\) and \(\mathbf{Z_{v}}=(Z_{i}:i\in[N],V_{i}=1)\) under the relaxed model. More precisely, \(\widetilde{G^{(N)}_{\textsc{VZ}}}\) is the joint law of the random variables \((T_{1},\ldots,T_{N})\), where \(T_{i}=V_{i}\) if \(V_{i}=0\) and \(T_{i}=(V_{i},Z_{i})\) if \(V_{i}=1\). Let \(\widetilde{G_{\textsc{VZ}}}\) denote the law of the stochastic process with Kolmogorov-consistent finite-dimensional distributions \((\widetilde{G^{(N)}_{\textsc{VZ}}})_{N\in\mathbb{N}}\). Such a process exists due to the i.i.d. nature of the \(V_{i}\) and the exchangeable nature of the Gibbs-type prior that defines \(\mathbf{Z}_{v}\) given \(\mathbf{V}\). We therefore have by the strong law of
large numbers \(\lim_{N\to\infty}N_{v}/N=p_{v}\), \(\widetilde{G_{\textsc{VZ}}}\)-a.s. Also, note that the truncation event \(E_{N}\) is a function of \((\mathbf{V},\mathbf{Z}_{v})\) (thus \(\mathbf{T}\)) only, allowing us to evaluate \(\widetilde{G^{(N)}}\{E_{N}\}\) in (12) as a probability under \(\widetilde{G_{\textsc{VZ}}}\).
We are now ready to analyze (12). First, note that \(E_{N}^{\mathrm{c}}\) can be decomposed as \(E_{N}^{\mathrm{c}}=(\{K_{v}=1\}\cap\{N_{v}\neq N\})\cup\{N_{v}=0\}\) and therefore
\[\widetilde{G_{\textsc{VZ}}}\{E_{N}^{\mathrm{c}}\}=\widetilde{G_{\textsc{VZ}}} \{K_{v}=1\}-p_{v}^{N}\widetilde{G_{\textsc{VZ}}}\{K_{v}=1\mid N_{v}=N\}+(1-p_ {v})^{N}, \tag{13}\]
with the last term corresponding to \(\widetilde{G_{\textsc{VZ}}}\{N_{v}=0\}\) and the sum of the first two terms corresponding to \(\widetilde{G_{\textsc{VZ}}}\{\{K_{v}=1\}\cap\{N_{v}\neq N\}\}\). Note that \((\widetilde{G^{(N)}}\{K_{v}=1\})_{N\in\mathbb{N}}\) and \((\widetilde{G^{(N)}}\{K_{v}=1\mid N_{v}=n_{v}\})_{n_{v}\in\mathbb{N}}\) (well defined for any \(N=f(n)\geq n\)) are non-increasing sequences of elements in \([0,1]\). This is the case since they can be seen as probabilities under \(\widetilde{G_{\textsc{VZ}}}\) of non-increasing sequences of events. The two sequences are thus convergent.
For any \(p_{v}\in(0,1)\), \((\widetilde{G_{\textsc{VZ}}}\{E_{N}^{\mathrm{c}}\})_{N\in\mathbb{N}}\) in (13) has limit equal to \(\lim_{N\to\infty}\widetilde{G^{(N)}}\{K_{v}=1\}\) (since \(p_{v}^{N}\) and \((1-p_{v})^{N}\) go to \(0\)). Let then \(g^{\infty}=\lim_{N\to\infty}\widetilde{G^{(N)}}\{K_{v}=1\}\), and let \(g_{v}^{\infty}=\lim_{n_{v}\to\infty}\widetilde{G^{(N)}}\{K_{v}=1\mid N_{v}=n_{v}\}\). Since \(K_{v}=K_{v}(N_{v})\) depends on \(N\) only indirectly through \(N_{v}\), and \(N_{v}/N\to p_{v}\) a.s. (see the proof of Theorem 1 for more discussion), the two limits are equal, i.e., \(g^{\infty}=g_{v}^{\infty}\). We shall show that they equal \(0\) for several Gibbs-type priors, implying that the GARP converges to the relaxed model, that is, \(\widetilde{G^{(N)}}\{E_{N}\}\to 1\) as \(N\to\infty\). Table 2 summarizes the results for the four earlier examples. We use \(n_{v}\leq N\) and, for any sequences \(a_{n}\) and \(b_{n}\), we write \(a_{n}\asymp b_{n}\) if and only if \(\lim_{n}a_{n}/b_{n}=1\).
**Theorem 1**.: _Under the relaxed model \(\widetilde{G^{(N)}}\) we have \(g^{\infty}=g_{v}^{\infty}=\lim_{N\to\infty}\widetilde{G^{(N)}}\{E_{N}^{\mathrm{c}}\}\) with \(g^{\infty}=0\) under the symmetric Dirichlet, the DP, and the PYP, and \(g^{\infty}=\gamma\in(0,1)\) under the Gnedin process. The asymptotic rates of \(g_{n_{v}}=\widetilde{G^{(N)}}\{K_{v}=1\mid N_{v}=n_{v}\}\) are given in the second column of Table 2._
Theorem 1 and (7) show that performing prior elicitation and posterior simulation based on the (analytically and computationally) simpler relaxed model \(\widetilde{G^{(N)}}\) becomes practically
attractive. Table 2 also provides the rate at which \(\widetilde{G^{(N)}}(E_{N}^{\mathrm{c}})\) (where the two models differ) converges. For instance, when \(\widetilde{G^{(N)}}(E_{N}^{\mathrm{c}})\approx 0\) (as in Theorem 1), it is immediate to interpret \(p_{v}\) as the prior proportion of observations assigned to vertex-clusters under \(\widetilde{G^{(N)}}\) for any sample size \(N\). Another important consequence of Theorem 1 and (7) is that we can effectively sample from the prior GARP model with an acceptance-rejection method that proposes a realization from the simple relaxed model \(\widetilde{G^{(N)}}\), with theoretical guarantees that the acceptance probability is close to \(1\) in most cases. Also with the convergence of \(\widetilde{G^{(N)}}(E_{N}^{\mathrm{c}})\) to \(\gamma>0\) under the Gnedin process, the approximation remains attractive, as rejection sampling remains practically feasible with known acceptance probability \(\widetilde{G^{(N)}}(E_{N})\) going to \(1-\gamma\) (instead of \(1\), under the other models), where \(\gamma\) is a hyperparameter that we can control.

Finally, in most examples, the relaxed model \(\widetilde{G^{(N)}}\) approaches the GARP \(G^{(N)}\) as the sample size \(N\) increases in an even stronger way.
**Theorem 2**.: _Under \(\widetilde{G^{(N)}}\) with symmetric Dirichlet, DP or PYP (\(\sigma\geq 0\)) in (4)_
\[\widetilde{G_{\mbox{\tiny{VZ}}}}\{E_{N}\mbox{ eventually}\}=1. \tag{14}\]
_Thus, for any \(k\in\mathbb{N}\) and any possible set of points \(A_{k}=(\mathbf{v}_{1:N+k},\mathbf{z}_{1:N+k})\)_
\[\widetilde{G}_{VZ}\left\{\left\{G^{(N+k)}(A_{k}\mid\mathbf{V}_{1:N},\mathbf{Z }_{v,N})=\widetilde{G^{(N+k)}}(A_{k}\mid\mathbf{V}_{1:N},\mathbf{Z}_{v,N}) \right\}\mbox{ eventually}\right\}=1. \tag{15}\]
_Under \(\widetilde{G^{(N)}}\) with the Gnedin process we have \(\widetilde{G_{\mbox{\tiny{VZ}}}}\{E_{N}\cup\{M_{v}=1\}\mbox{ eventually}\}=1\) and \(\widetilde{G}_{VZ}\left\{\left\{G^{(N+k)}(A_{k}\mid\mathbf{V}_{1:N},\mathbf{Z }_{v,N})=\widetilde{G^{(N+k)}}(A_{k}\mid\mathbf{V}_{1:N},\mathbf{Z}_{v,N}) \right\}\cup\{M_{v}=1\}\mbox{ eventually}\right\}=1\)._
In words, almost surely either the predictive pmfs under the GARP and the relaxed model will eventually coincide or (under \(\widetilde{G^{(N)}}\) with the Gnedin process) there is only one possible vertex-cluster for any \(N\in\mathbb{N}\). The latter has positive probability \(\widetilde{G_{\textsc{VZ}}}\{M_{v}=1\}=\gamma\in(0,1)\) for the Gnedin process.
## 5 Finite Exchangeability and Projectivity
Under the GARP the distribution of the sample is (finitely) exchangeable, that is the marginal law of \((\boldsymbol{y}_{i})_{i=1}^{N}\) from (1)-(4) is invariant with respect to permutations of the labels \(1,\ldots,N\). This homogeneity assumption entails that the order in which we look at the observations does not affect the prior and the inferential results, as it should. The same homogeneity assumption is true for the graph-aligned random partition induced by \((V_{i},Z_{i})_{i=1}^{N}\). We discuss some more details of homogeneity assumptions in the model. We will write \(G^{(N)}\) for different distributions implied by the GARP model (1)-(4), with the specific distribution being clear from the argument of \(G^{(N)}(\cdot)\).
Finite EPPF. Let \(\Psi_{N}\) denote the random partition of the observations \([N]\) defined by clustering \(i\) and \(j\) together if and only if \(\mathbf{\theta}_{i}=\mathbf{\theta}_{j}\) (recall that \(\mathbf{\theta}_{i}=\mathbf{\theta}_{Z_{i}}^{*}\)). Under the GARP model \(\Psi_{N}\) is an exchangeable random partition with dependent cluster-specific parameters. We introduce the notion of a finite EPPF (fEPPF) to characterize the distribution of such random partitions: \(G^{(N)}\{\Psi_{N}=\{C_{1},\ldots,C_{K}\}\}=\text{fEPPF}_{K}^{(N)}(c_{1},\ldots,c_{K})\), where \((c_{1},\ldots,c_{K})=(|C_{1}|,\ldots,|C_{K}|)\) are the cluster sizes (in a given arbitrary order). Note that \(\{c_{1},\ldots,c_{K}\}\) is a sufficient statistic for an exchangeable random partition. Here \(K\) denotes the number of clusters, i.e., \(K=K_{v}+K_{e}\). The fEPPF is a symmetric function of a composition of \(N\) (positive integers that sum to \(N\)). The fEPPF induced by the GARP can be obtained by marginalization of the probability function (4) of the graph-aligned random partition; several terms can be aggregated by exploiting probabilistic invariance.
**Proposition 3**.: _Under the GARP_
\[\text{fEPPF}_{K}^{(N)}(|C_{1}|,\ldots,|C_{K}|)\propto\sum_{N_{v}=1}^{N}\bigg\{\binom{N}{N_{v}}p_{v}^{N_{v}}(1-p_{v})^{N-N_{v}}\\ \sum_{K_{v}=1}^{M_{v}}\bigg[\binom{M_{e}}{K-K_{v}}\sum_{(n_{1},\ldots,n_{K_{v}})}\text{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}})\,\text{DM}_{M_{e}}^{(N-N_{v})}((n_{k,k^{\prime}})_{k<k^{\prime}})\bigg]\bigg\} \tag{16}\]
_In the last sum, for given \((n_{1},\ldots,n_{K_{v}})\) the cardinalities \(n_{k,k^{\prime}}\) of edge-clusters are implied by the remaining elements of \((|C_{1}|,\ldots,|C_{K}|)\) that are not matched with the vertex-cluster cardinalities \(n_{k}\). The exact range of the sums is stated in Section S.6.5 of the supplementary materials. Essentially, \(\{n_{1},\ldots,n_{K_{v}}\}\cup\{n_{k,k^{\prime}}:\;k<k^{\prime}\}=\{c_{1}, \ldots,c_{K}\}\). Moreover, the normalization constant in (16) is \(1/\widetilde{G^{(N)}}\{E_{N}\}\), which we studied in detail before._
A common stronger assumption in the literature on random partitions is that the observed data \((\mathbf{y}_{i})_{i=1}^{N}\) are a subset of an infinite (thus unobservable) sequence of exchangeable random variables. This assumption does not apply to the GARP (see Proposition 4 below). However, if the assumption applies then the exchangeable random partition of the sample can be seen as a projection of an exchangeable random partition of the natural numbers \(\mathbb{N}\) to the set \([N]\). Formally, this is equivalent to assuming:
* (a) each random partition \(\Psi_{N}\) is exchangeable over \([N]\);
* (b) the sequence of random partitions \((\Psi_{N})_{N=1}^{\infty}\) is Kolmogorov consistent, that is, \(\Psi_{n}\) is equal in distribution to the restriction of \(\Psi_{N}\) to \([n]\) for any \(1\leq n\leq N\).
Note that, although we stated the properties for the random partition, the same definitions hold for other sequences of random variables, such as the sample \((\mathbf{y}_{i})_{i=1}^{N}\). As done in, e.g., Betancourt _et al._ (2022) we refer to (a) as _finite exchangeability_, (b) as _projectivity_, and to their combination as _infinite exchangeability_.
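As a quick illustration of (a) and (b), the following Python sketch (ours) checks by simulation that for the CRP, which satisfies both properties, the restriction of \(\Psi_{N}\) to \([n]\) is distributed like \(\Psi_{n}\) sampled directly; by Proposition 4 below, the analogous statement fails for the GARP.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

def crp(n, alpha=1.0):
    """Sequentially sample cluster labels for n units from a CRP(alpha)."""
    counts, z = [], []
    for _ in range(n):
        w = np.array(counts + [alpha], dtype=float)
        k = rng.choice(len(w), p=w / w.sum())
        if k == len(counts):
            counts.append(0)   # open a new table
        counts[k] += 1
        z.append(k)
    return z

# Compare the number of clusters among the first n units when the
# partition is sampled over [N] and restricted, versus sampled over [n].
N, n, reps = 30, 8, 5000
k_restricted = Counter(len(set(crp(N)[:n])) for _ in range(reps))
k_direct = Counter(len(set(crp(n))) for _ in range(reps))
print(sorted(k_restricted.items()))  # the two empirical distributions
print(sorted(k_direct.items()))      # agree up to Monte Carlo error
```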
**Proposition 4**.: _The graph-aligned random partition induced by \((V_{i},Z_{i})_{i=1}^{N}\), the sample \((\mathbf{y}_{i})_{i=1}^{N}\), and the random partition \(\Psi_{N}\) are finitely exchangeable, but they are not projections of infinitely exchangeable processes._
From a modeling perspective, infinite exchangeability is a natural requirement only when there is a notion of a future unbounded number of homogeneous observations. In general, it is a desirable property for mathematical convenience, easing prior elicitation (e.g., via de Finetti's representation theorem) and the study of the properties of the model across sample sizes. While the GARP is not infinitely exchangeable, as stated in the previous result, in some cases it turns out to be very close to infinite exchangeability, in the sense that the model is equivalent to an infinitely exchangeable model for large enough \(N\), as discussed next. See also Diaconis and Freedman (1980) for general results and probabilistic characterizations of finite exchangeability and approximate projectivity. The next result shows that in some cases the prior predictive distribution of the GARP model eventually (i.e., for a large enough sample size \(N\)) can be characterized as a projection of the predictive of a limiting infinitely exchangeable model, for which projectivity holds.
We also characterize the limit via the directing measure, i.e., the law of the random probability in de Finetti's representation theorem. See Table S.1 for a recap of the notation for different distributions.
**Theorem 3**.: _Under the GARP model with the \(M_{v}\)-symmetric Dirichlet (Example 1) in (4) there exist a finite random sample size \(\bar{N}\) and an infinite-dimensional law \(G^{(\infty)}\) such that, for any \(N>\bar{N}\), the prior predictive distributions under the GARP model are \(\widetilde{G_{\mbox{\tiny{VZ}}}}\)-almost surely equal to the prior predictive distributions under the (Kolmogorov consistent) marginal laws \(\big{(}G_{N}^{(\infty)}\big{)}_{N\in\mathbb{N}}\) of the infinite-dimensional law \(G^{(\infty)}\)._

_That is, for any possible sequence of sets of points \((A_{k})_{k\in\mathbb{N}}\), with \(A_{k}=(\mathbf{v}_{1:N+k},\mathbf{z}_{1:N+k})\),_
\[\widetilde{G_{\mbox{\tiny{VZ}}}}\left\{\left\{G_{N+k}^{(\infty)}(A_{k}\mid \mathbf{V}_{1:N},\mathbf{Z}_{v,N})=G^{(N+k)}(A_{k}\mid\mathbf{V}_{1:N}, \mathbf{Z}_{v,N})\;\forall\,k\right\}\mbox{ eventually}\right\}=1, \tag{17}\]
_Here \(G^{(\infty)}\) can be characterized by the following gCRP. Let \(M_{e}^{+}=M_{v}(M_{v}-1)/2\)._
\[G^{(\infty)}\{V_{i}=v,Z_{i}=z\mid\cdots,\mathbf{V}_{1:N},\mathbf{Z}_{1:N}\}\propto\begin{cases}p_{v}\,\frac{n_{k}^{-i}+\gamma}{N_{v}^{-i}+\gamma M_{v}}&\mbox{if }v=1,\quad z=k\in[M_{v}]\\ (1-p_{v})\,\frac{\beta/M_{e}^{+}+n_{k,k^{\prime}}^{-i}}{\beta+N_{e}^{-i}}&\mbox{if }v=0,\quad z=(k,k^{\prime}).\end{cases} \tag{18}\]
_The directing measure characterizing the infinitely exchangeable random parameters that imply \(G^{(\infty)}\) is defined as_
\[(\mathbf{\mu}_{i},\mathbf{\Sigma}_{i})\mid P\stackrel{{ iid}}{{\sim}}P, \qquad P=p_{v}\sum_{m=1}^{M_{v}}\pi_{m}\delta_{\mathbf{\tilde{\theta}}_{m}}+(1-p_{ v})\sum_{k<k^{\prime}\leq M_{v}}\pi_{k,k^{\prime}}\delta_{\mathbf{\tilde{\theta}}_{k,k^{ \prime}}} \tag{19}\]
_where \((\pi_{1},\ldots,\pi_{M_{v}})\sim\text{Dir}(\rho,\ldots,\rho)\), \((\pi_{k,k^{\prime}})_{k<k^{\prime}}\sim\text{Dir}(\beta/M_{e}^{+},\ldots,\beta/ M_{e}^{+})\), and \(\boldsymbol{\tilde{\theta}}_{m}\) and \(\boldsymbol{\tilde{\theta}}_{k,k^{\prime}}\) follow the same distributions as in (2) and (3)._
_Let \(G^{(\infty)}(\boldsymbol{V},\boldsymbol{Z})\) denote the pmf of \(\boldsymbol{V},\boldsymbol{Z}\) implied by (19). It can also be characterized by the projective pmfs for any \(N\in\mathbb{N}\) (we omit the sub-index \({}_{N}\) for the finite projections of \(G^{(\infty)}\) when it is clear from the context):_
\[G^{(\infty)}((V_{i},Z_{i})_{1:N})=p_{v}^{N_{v}}\,\text{EPPF}_{M_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}}\mid\alpha,\sigma)/K_{v}!\;\times\\ (1-p_{v})^{N_{e}}\,\text{DM}_{M_{e}^{+}}^{(N_{e})}((n_{k,k^{\prime}})_{k<k^{\prime}}\mid\beta/M_{e}^{+}). \tag{20}\]
**Corollary 1**.: _Conditional on a given \(M_{v}\), Theorem 3 remains true also under the GARP with a Gnedin process (Example 2), with \(G^{(\infty)}(M_{v}=m)=\widetilde{G_{\text{vz}}}(M_{v}=m)=\frac{\gamma(1- \gamma)_{m-1}}{m!}\)._
See Section S.6 of the supplementary materials for an explicit statement of Corollary 1 and for the proofs. Analogous results hold for any MFM; we state the result for the special case of the Gnedin process, which we introduced and discussed in Section 3.
Note that even though projectivity is not strictly needed to carry out inference under the GARP, approximate projectivity is still a useful property. Without any form of approximate projectivity (i.e., coherence), inference on the partition structure for \(N\) observed units would depend on whether or not an investigator plans to collect more data in the future. This would greatly complicate the understanding of the model assumptions and learning mechanisms.
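To illustrate the limiting model, the following Python sketch (ours, with placeholder hyperparameter values; we use the edge weights \(\beta/M_{e}^{+}\) from (19)) draws \(N\) labels sequentially from the urn (18).

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_limiting_urn(N, M_v=4, p_v=0.5, gam=0.5, beta=0.5):
    """Sequential draw from the limiting Polya urn (18): vertex labels in
    [M_v] via a symmetric-Dirichlet urn, edge labels over all
    M_v (M_v - 1) / 2 pairs via a Dirichlet-multinomial urn."""
    pairs = [(k, kp) for k in range(M_v) for kp in range(k + 1, M_v)]
    M_e = len(pairs)
    n_v = np.zeros(M_v)   # vertex-component counts
    n_e = np.zeros(M_e)   # edge-component counts
    labels = []
    for _ in range(N):
        w_v = p_v * (n_v + gam) / (n_v.sum() + gam * M_v)
        w_e = (1 - p_v) * (beta / M_e + n_e) / (beta + n_e.sum())
        w = np.concatenate([w_v, w_e])    # sums to one by construction
        j = rng.choice(M_v + M_e, p=w / w.sum())
        if j < M_v:
            n_v[j] += 1
            labels.append(("vertex", j))
        else:
            n_e[j - M_v] += 1
            labels.append(("edge", pairs[j - M_v]))
    return labels

print(sample_limiting_urn(10)[:5])
```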
## 6 Posterior Inference
Building on the earlier results we develop MCMC algorithms for posterior simulation under the GARP. The algorithms generalize the posterior sampling scheme for the CRP under a DP mixture (Neal, 2000) and under Gibbs-type mixtures. To derive tractable full conditional distributions that are easy to sample from, we exploit the representation of the GARP as a truncated composition of Gibbs-type priors derived in Section 2.3.
In this way, we can exploit the product partition form of the pmf under the relaxed model to simplify the expressions of the conditional probabilities in the prior predictive (i.e., the composition of gCRPs) and in the full conditional distributions; the expressions reduce to simple ratios.
In general, without projectivity and a composition of product partition EPPFs, it is not possible to generalize the a priori (and a posteriori) tractable Polya urn schemes and thus tractable marginal algorithms such as the ones in Neal (2000). Projectivity allows us to evaluate conditional probabilities (of cluster membership) as ratios of the same EPPF over different \(n\). Under the specific product form of the EPPF for Gibbs-type priors, this ratio reduces to a simple expression (De Blasi _et al._, 2015).
Specifically, the relaxed model \(\widetilde{G^{(N)}}\) is a hierarchical composition of Kolmogorov consistent EPPFs with product partition forms (Sections 2.2 and 3), which induce a tractable a priori composition of gCRPs (Section 2.3). This allows us to derive the following efficient marginal sampler. See Section S.4.1 in the supplementary materials for details.
For an explicit statement of Gibbs sampling transition probabilities, we introduce the notation \(I_{i}=\mathbbm{1}(\{n_{k}^{-i}=0\}\cap\{\sum_{k^{\prime}\neq k}\big{(}n_{k,k^{ \prime}}+n_{k^{\prime},k}\big{)}>0\})\) as an indicator for violating the support of the GARP in (4). That is, \(I_{i}=1\) if removing \(i\) from its current cluster removes the last unit in a vertex-cluster \(k\) (for some \(k\)) and it leaves an edge-cluster \((k,k^{\prime})\) (for some \(k^{\prime}\neq k\)) without adjacent vertex-cluster \(k\).
We then have the following full conditional probabilities.
**(1)**: Sample \((V_{i},Z_{i})\) from \(G^{(N)}(V_{i},Z_{i}\mid\cdots)\). If \(I_{i}=1\) we do not move. Otherwise sample from \(G^{(N)}\{V_{i}=v,Z_{i}=z\mid\cdots\}\propto\)
\[\begin{cases}p_{v}\,\frac{W_{N_{v},K_{v}^{-i}}}{W_{N_{v}-1,K_{v}^{-i}}}\,(n_{k}^{-i}-\sigma)\,\mathrm{N}(\mathbf{y}_{i}\mid\mathbf{\mu}_{k}^{*},\mathbf{\Sigma}_{k}^{*})&\text{if $v=1$,\quad$z=k\in[K_{v}^{-i}]$}\\ p_{v}\,\frac{W_{N_{v},K_{v}^{-i}+1}}{W_{N_{v}-1,K_{v}^{-i}}}\,g_{\mathrm{new}}(\mathbf{y}_{i})&\text{if $v=1$,\quad$z=K_{v}^{-i}+1$}\\ (1-p_{v})\,\frac{\beta/M_{e}+n_{k,k^{\prime}}^{-i}}{\beta+N_{e}^{-i}}\,\mathrm{N}(\mathbf{y}_{i}\mid\mathbf{\mu}_{k,k^{\prime}}^{*},\mathbf{\Sigma}_{k,k^{\prime}}^{*})&\text{if $v=0$,\quad$z=(k,k^{\prime})$,}\end{cases}\]
where
\[g_{\mathrm{new}}(\mathbf{y}_{i})=\int\mathrm{N}(\mathbf{y}_{i}\mid\mathbf{\mu},\mathbf{\Sigma})\,\mathrm{dNIW}(\mathbf{\mu},\mathbf{\Sigma}\mid\mathbf{\mu}_{0},\lambda_{0},\kappa_{0},\mathbf{\Sigma}_{0})=\mathrm{T}_{\lambda_{0}-1}\bigg(\mathbf{y}_{i}\mid\mathbf{\mu}_{0},\frac{\kappa_{0}+1}{\kappa_{0}(\lambda_{0}-1)}\,\mathbf{\Sigma}_{0}\bigg)\]
is the density of a generalized Student-T distribution of degree \(\lambda_{0}-1\).
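A minimal Python sketch of \(g_{\mathrm{new}}\) (ours, for illustration; we read the scale of the Student-T as \(\frac{\kappa_{0}+1}{\kappa_{0}(\lambda_{0}-1)}\mathbf{\Sigma}_{0}\), as in the display above):

```python
import numpy as np
from scipy.special import gammaln

def g_new(y, mu0, lam0, kap0, Sigma0):
    """Marginal likelihood of one observation under the NIW prior: a
    multivariate Student-t with lam0 - 1 degrees of freedom, location
    mu0 and scale ((kap0 + 1) / (kap0 * (lam0 - 1))) * Sigma0."""
    d = len(mu0)
    nu = lam0 - 1.0
    S = (kap0 + 1.0) / (kap0 * nu) * Sigma0
    dev = y - mu0
    quad = dev @ np.linalg.solve(S, dev)        # Mahalanobis form
    logdet = np.linalg.slogdet(S)[1]
    logp = (gammaln((nu + d) / 2) - gammaln(nu / 2)
            - 0.5 * (d * np.log(nu * np.pi) + logdet)
            - 0.5 * (nu + d) * np.log1p(quad / nu))
    return np.exp(logp)
```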
**(2)**: Sample the vertex parameters \((\mathbf{\mu}_{k}^{*},\mathbf{\Sigma}_{k}^{*})\) from

\[G^{(N)}(\mathbf{\mu}_{k}^{*},\mathbf{\Sigma}_{k}^{*}\mid\cdots)\propto\underbrace{\mathrm{NIW}\big(\mathbf{\mu}_{k}^{*},\mathbf{\Sigma}_{k}^{*}\mid\hat{\mathbf{\mu}},\hat{\nu},\hat{\kappa},\hat{\mathbf{\Sigma}}\big)}_{p^{0}(\mathbf{\theta}_{k}^{*})}\times\,\prod_{k^{\prime}\neq k}\prod_{i:Z_{i}=(k,k^{\prime})}\mathrm{N}(\mathbf{y}_{i}\mid\mathbf{\mu}_{k,k^{\prime}}^{*},\mathbf{\Sigma}_{k,k^{\prime}}^{*})\]

where in the last product for \(k^{\prime}<k\) we interpret \(\mathbf{\theta}_{k,k^{\prime}}^{*}\equiv\mathbf{\theta}_{k^{\prime},k}^{*}\), and \(\hat{\nu}=\nu_{0}+n_{k}\), \(\hat{\kappa}=\kappa_{0}+n_{k}\), \(\hat{\mathbf{\mu}}=\frac{\kappa_{0}\mathbf{\mu}_{0}+n_{k}\bar{\mathbf{y}}_{k}}{\hat{\kappa}}\) and \(\hat{\mathbf{\Sigma}}=\mathbf{\Sigma}_{0}+\mathbf{S}_{k}+\frac{\kappa_{0}n_{k}}{\hat{\kappa}}(\bar{\mathbf{y}}_{k}-\mathbf{\mu}_{0})(\bar{\mathbf{y}}_{k}-\mathbf{\mu}_{0})^{\intercal}\), with \(\bar{\mathbf{y}}_{k}=\frac{1}{n_{k}}\sum_{i:Z_{i}=k}\mathbf{y}_{i}\) and \(\mathbf{S}_{k}=\sum_{i:Z_{i}=k}(\mathbf{y}_{i}-\bar{\mathbf{y}}_{k})(\mathbf{y}_{i}-\bar{\mathbf{y}}_{k})^{\intercal}\).
If a vertex is isolated, that is, no observations are assigned to any of the possible edges associated with the vertex, then the full conditional in (2) reduces to the conjugate NIW posterior distribution \(p^{0}(\mathbf{\theta}_{k}^{*})\). In general, the density of the full conditional is proportional to \(p^{0}\) times the likelihood of the observations assigned to the corresponding edges. An effective transition probability is a Metropolis-Hastings step that uses \(p^{0}\) as a proposal.
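The conjugate NIW update used in step (2) can be coded directly from the expressions above; the following Python sketch (ours) returns the hyperparameters of \(p^{0}(\mathbf{\theta}_{k}^{*})\), which is also the natural Metropolis-Hastings proposal just mentioned.

```python
import numpy as np

def niw_posterior(Y, mu0, kap0, nu0, Sigma0):
    """Conjugate NIW update given the n x d matrix Y of observations
    currently assigned to vertex-cluster k."""
    n = Y.shape[0]
    ybar = Y.mean(axis=0)
    S_k = (Y - ybar).T @ (Y - ybar)             # scatter matrix
    kap_hat = kap0 + n
    nu_hat = nu0 + n
    mu_hat = (kap0 * mu0 + n * ybar) / kap_hat
    Sigma_hat = (Sigma0 + S_k
                 + (kap0 * n / kap_hat) * np.outer(ybar - mu0, ybar - mu0))
    return mu_hat, kap_hat, nu_hat, Sigma_hat
```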
In step (1), when we create a new vertex-cluster, i.e., if \(v=1\) and \(k=K_{v}^{-i}+1\), we follow up with a transition probability (2) for the new cluster parameters, that reduces to the conjugate NIW for \(\mathbf{\theta}_{k}^{*}\). Throughout, edge-parameters \(\mathbf{\theta}_{k,k^{\prime}}^{*}\) are always evaluated using
the currently imputed adjacent vertex parameters \(\mathbf{\theta}_{k}^{\star},\mathbf{\theta}_{k^{\prime}}^{\star}\).
Note that it is also possible to add an additional transition probability that updates \(Z_{i}\) as in (1) while leaving \(V_{i}\) unchanged. Such transition probabilities can lead to a better-mixing Markov chain and are analogous to the ones used, for example, in Teh _et al._ (2006), exploiting the Chinese restaurant franchise representation of the hierarchical DP.
In principle, all posterior inference is implemented by appropriate summaries of the posterior Monte Carlo sample. However, how to report point estimates for a random partition or graph is not trivial. There are several proposals in the recent literature, including Wade and Ghahramani (2018) and Dahl _et al._ (2022). Both are based on casting the selection of the reported summary as a decision problem. In Section S.4.2 of the supplemental materials, we discuss an implementation for the GARP.
Finally, like in any mixture model, posterior inference about specific clusters must consider label switching. See, for example, Green (2018) for a discussion. An additional challenge that arises in the proposed model is the distinction between vertex versus edge clusters. Consider, for example, a configuration (A) with 2 vertices and a connecting edge, with cluster-specific parameters (\(\mathbf{\theta}_{1}^{\star},\mathbf{\theta}_{2}^{\star},\mathbf{\theta}_{1,2}^{\star}=f( \mathbf{\theta}_{1}^{\star},\mathbf{\theta}_{2}^{\star})\)) (as in (3)), versus an alternative configuration (B) with 3 vertices and \((\mathbf{\theta}_{1}^{\star},\mathbf{\theta}_{2}^{\star},\mathbf{\theta}_{3}^{\star})\) and \(\mathbf{\theta}_{3}^{\star}=f(\mathbf{\theta}_{1}^{\star},\mathbf{\theta}_{2}^{\star})\). While the sampling model (1) remains unchanged under (A) versus (B), we argue that the prior implements a strong preference for (the more parsimonious) model (A).
For given vertex parameters \(\mathbf{\theta}_{1}^{\star}\) and \(\mathbf{\theta}_{2}^{\star}\), the edge parameter \(\mathbf{\theta}_{1,2}^{\star}\) in (A) can assume just one value, i.e., its parameter space is the single point \(f(\mathbf{\theta}_{1}^{\star},\mathbf{\theta}_{2}^{\star})\) in the parameter space of the third vertex \(\mathbf{\theta}_{3}^{\star}\) in the latter model. In other words, the joint parameter space \(\mathbf{\Theta}_{0}\) of the atoms \(\mathbf{\theta}_{1}^{\star},\mathbf{\theta}_{2}^{\star},\mathbf{\theta}_{1,2}^{\star}\) (two vertices and one edge) is a lower-dimensional sub-space of the parameter space \(\mathbf{\Theta}\) for the three vertices \(\mathbf{\theta}_{1}^{\star},\mathbf{\theta}_{2}^{\star},\mathbf{\theta}_{3}^{\star}\). The NIW prior in (2) on \(\mathbf{\theta}_{j}^{\star}\) assigns prior probability 0 to \(\mathbf{\Theta}_{0}\), and thus also zero posterior probability. The issue is similar to the identifiability problem related to the replication of terms in a standard mixture model with independent priors on cluster-specific parameters (Green, 2018).
## 7 Application to Single-Cell RNA Data
We fit the GARP model for the RNA-seq data shown in Figure 1. Single-cell RNA-seq experiments record cell-specific transcriptional profiles that allow us to infer, for example, cell differentiation or cancer progression. Inference under the GARP model for the data shown in Figure 1 reconstructs transitions of stem cells into fully differentiated cells in a scRNA-seq experiment on horizontal basal cells from the adult mouse olfactory epithelium. The original data is available on GEO in GSE95601.
The transcriptional profiles map differences in gene expressions due to the development phases of the cells. Stem cells evolve into fully differentiated cells by gradual transcriptional
changes, passing through a small number of homogeneous subpopulations of cells. The primary inferential goal is to find these homogeneous subpopulations of cells (i.e., vertex-clusters) and to understand the relationships between them by aligning such subpopulations on a biologically interpretable graph.
### ScRNAseq Data and Pre-Processing
The raw data is a count matrix with rows corresponding to cells and columns representing different genes. Most of the counts in the matrix are zeros, usually about 90% (the percentage can vary according to the scRNA-seq technology used).
The data set originally contains measurements for 28284 genes in 849 cells, with 84% zeroes. To extract a lower dimensional signal we implement pre-processing following the pipeline described in Perraudeau _et al._ (2017) and available in Bioconductor (Gentleman _et al._, 2004). We briefly describe the pipeline. We first discard around 100 low-quality cells and retain the 1000 most variable genes. Next, we normalize the data matrix and extract 50-dimensional biomarkers from the count data, accounting for zero-inflation and over-dispersion of the scRNA-seq data via "Zero-Inflated Negative Binomial Wanted Variation Extraction" (ZINB-WaVE) (Risso _et al._, 2018). Finally, we reduce the dimensionality to the 2 most relevant markers via multidimensional scaling analysis. The data matrix obtained after pre-processing is denoted by \(\mathbf{y}=(\mathbf{y}_{i,j}:i=1,\ldots,N,\ j=1,2)\), where the rows represent 747 cells and the columns record the two final biomarkers. The data is shown in Figure 1.
### Results
We implement inference under the GARP model using the Gnedin process (Example 2) to control the vertex-clustering. We choose the Gnedin process because one of the goals is inference on \(K_{v}\). The Gnedin process is a particularly attractive Gibbs-type prior for clustering, both from a Bayesian modeling perspective and for the frequentist properties of the posterior distribution, as discussed in Section 3.
The posterior estimated GARP places 466 cells into vertex-clusters (main phases) and 281 into ordered edge-clusters (transition phases). Figure 2 (left panel) summarizes inference. The heat-map in Figure 2 (right panel) shows the posterior probabilities of co-clustering of pairs of observations, suggesting low posterior uncertainty around the estimated main phases and making the point estimate under the GARP a meaningful posterior summary. The conditional uncertainty of the graph-alignment of the vertices, given the point estimates of the main phases, is low. Visual inspection of the results suggests that the model is working as expected. Once we have identified the main phases (vertex-clusters) we find the biomarkers that best characterize such clusters, i.e., the most differentially expressed genes
(DE genes). We rely on the function findMarkers of the Bioconductor package scran (Lun _et al._, 2016). More precisely, we first perform an exact binomial test to identify DE genes between pairs of groups of cells (vertex-clusters). From that, we identify the 6 most significant biomarkers for each pairwise comparison. For each gene a combined p-value is then computed using the Simes multiplicity adjustment applied to all p-values obtained from the pairwise comparisons (Simes, 1986). Note that these p-values are not directly used for ranking and are only used to find the DE genes. Finally, the p-values are consolidated across all genes using the BH method of Benjamini and Hochberg (1995) to implement multiple comparisons under a restriction on the false discovery rate (FDR) (Benjamini _et al._, 2009). The adjusted p-values are reported in Table 3. The reported FDRs are intended only as a rough measure of significance. Note that properly correcting for multiple testing is not generally possible when clusters are based on the same data that is used for the DE testing. Nonetheless, a small FDR remains desirable. Table 3 shows the average within vertex-cluster gene expressions for the selected top 6 biomarkers and the corresponding FDRs. The log mean expressions for the different biomarkers and vertices are also shown in Figure 3. On average, the main phases (vertex-clusters) have very different expressions of the selected biomarkers. Finally, we show the entire distribution of the cells in the different biomarkers and main phases in Figure 4.
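In outline, the Simes combination and the BH adjustment amount to the following Python sketch (ours; the actual analysis relies on findMarkers in scran, not on this code).

```python
import numpy as np

def simes(pvals):
    """Simes (1986) combined p-value for one gene from its
    pairwise-comparison p-values: min_i of m * p_(i) / i."""
    p = np.sort(np.asarray(pvals, dtype=float))
    m = len(p)
    return (m * p / np.arange(1, m + 1)).min()

def bh_adjust(pvals):
    """Benjamini-Hochberg (1995) step-up adjusted p-values (FDR)."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(adj, 0, 1)
    return out
```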
Figure 2: Left Panel: Scatter-plot of the scRNA data. Triangular plot symbols indicate cells assigned to vertices (\(V_{i}=1\)) while the remaining cells are assigned to edges (\(V_{i}=0\)) and are represented with a circular shape. Cells are colored according to the different phases (i.e., \(Z_{i}\)) in the point estimate. The segments denote the edges of the graph and the color is darker if the probability of assigning observations to the edge is greater. Right panel: Posterior probabilities of co-clustering of observations assigned to vertices.
### Comparison with Independent Gaussian Mixtures
For comparison, we estimate an independent Gaussian mixture model without edges and cluster alignment (implemented as the GARP model with \(p_{v}=1\)). The posterior distribution of the number of clusters (see Table 4) shows more uncertainty, since the model fails to find well-separated clusters due to the noise introduced by the cells transitioning between the main phases. In other words, including cells in transition in the clustering reduces the statistical power in detecting homogeneous subpopulations. This is illustrated in Figure 5. Recall that we are using the VI loss to summarize the posterior random partition. As a consequence of the increased uncertainty, the point estimate of the clustering of the main phases becomes sensitive to the choice of the loss function. For instance, both the point estimate and the maximum a posteriori estimate of the number of main phases are 4 under the GARP, while the former is 5 and the latter is 6 under the independent Gaussian mixture model. In the figures, we show the estimated cluster arrangement that minimizes the VI loss, for coherence in the comparison.
| DE Genes | Vertex 1 | Vertex 2 | Vertex 3 | Vertex 4 | FDR |
| --- | --- | --- | --- | --- | --- |
| Slc26a7 | 397.98 | 142.45 | 0.27 | 0.05 | 1.10e-23 |
| Pik3c2b | 19.44 | 220.70 | 106.76 | 98.38 | 3.81e-08 |
| Hes6 | 3.00 | 16.15 | 669.49 | 41.62 | 7.56e-14 |
| Stmn3 | 0.41 | 0.08 | 22.97 | 320.38 | 2.89e-21 |
| Abca13 | 12.98 | 312.22 | 5.12 | 0.54 | 1.49e-07 |
| Il33 | 2.77 | 586.62 | 8.33 | 0.85 | 2.40e-07 |

Table 3: Average within vertex-cluster gene expressions and FDRs in the selected top 6 biomarkers.
Figure 3: Heatmap of the log mean expressions in top 6 DE genes in the main phases.
Figure 4: Left panel: Heat-map of log genetic expressions of the top 6 DE genes in all cells, ordered by main phases. The cells are sorted by vertex-cluster membership and the dashed blue lines separate the cells in the different clusters. Right panel: Boxplots of genetic expressions (after \(\log(\cdot+1)\) transformation) of the top 6 DE genes in all cells in the different main phases (vertex-clusters).
Panel A: GARP

| \(k\) | 4 | 5 | 6 | 7 | 8 |
| --- | --- | --- | --- | --- | --- |
| \(\hat{P}(K_{v}=k)\) | 0.7801 | 0.1951 | 0.0240 | 0.0004 | 0.0004 |

Panel B: Independent Mixture Model

| \(k\) | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(\hat{P}(K=k)\) | 0.0692 | 0.4362 | 0.3115 | 0.114 | 0.0516 | 0.0128 | 0.002 | 0.0012 | 0.0016 |

Table 4: Panel A: Estimated posterior of the number of main phases under the GARP. Panel B: Estimated posterior of the number of main phases under the independent Gaussian mixture model.
Figure 5: Results with independent mixtures. Left Panel: Scatter-plot of the scRNA data. Cells are colored according to the different phases in the point estimate. Right panel: Co-clustering posterior probabilities.
## 8 Discussion
We proposed a graph-aligned random partition model to infer homogeneous subgroups of observations aligned on a graph, explicitly allowing for units transitioning between the clusters. The motivating applications are single-cell RNA experiments where scientists are interested in understanding fundamental biological processes such as cell differentiation and tumor evolution. Interesting future applications include inference for cell type transitions in a tumor microenvironment. Other extensions could include data integration with other modalities, such as histology data.
Methodological extensions include jointly clustering similar cells _and_ genes via separately exchangeable nested random partition models (Lee _et al._, 2013b; Lin _et al._, 2021). Another interesting extension is to bring results for partially exchangeable random partition models that arise from compositions of Gibbs-type and species sampling priors (Teh _et al._, 2006; Camerlenghi _et al._, 2019; Argiento _et al._, 2020; Bassetti _et al._, 2020; Lijoi _et al._, 2023) to the GARP model with dependent locations. In the context of the scRNA-seq experiment, this would allow inference on multiple single-cell RNA-seq data matrices. In such a way one could borrow information across different measurements while accounting for relevant heterogeneity. Finally, by including unit-specific spatial information, the model can be used for spatial clustering with transitions between the clusters.
## Acknowledgment
Most of the paper was completed while G. R. was a Postdoc at UT Austin. G. R. was partially funded by NSF/DMS 1952679 and is also affiliated with the Bocconi Institute for Data Science and Analytics (BIDSA).
## References
* Antoniak (1974) Antoniak, C. E. (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. _Ann. Stat._, **2**, 1152-1174.
* Argiento _et al._ (2020) Argiento, R., Cremaschi, A., and Vannucci, M. (2020). Hierarchical normalized completely random measures to cluster grouped data. _J. Am. Stat. Assoc._, **115**, 318-333.
* Bassetti _et al._ (2020) Bassetti, F., Casarin, R., and Rossini, L. (2020). Hierarchical species sampling models. _Bayesian Anal._, **15**, 809-838.
* Benjamini and Hochberg (1995) Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: a practical and powerful approach to multiple testing. _J. R. Stat. Soc. Series B Stat. Methodol._, **57**, 289-300.
* Benjamini _et al._ (2009) Benjamini, Y., Heller, R., and Yekutieli, D. (2009). Selective inference in complex research. _Philos. Trans. Royal Soc. A_, **367**, 4255-4271.
* Beraha et al. (2022) Beraha, M., Argiento, R., Moller, J., and Guglielmi, A. (2022). MCMC computations for Bayesian mixture models using repulsive point processes. _J. Comput. Graph. Stat._, **31**, 422-435.
* Betancourt et al. (2022) Betancourt, B., Zanella, G., and Steorts, R. C. (2022). Random partition models for microclustering tasks. _J. Am. Stat. Assoc._, **117**, 1215-1227.
* Camerlenghi et al. (2019) Camerlenghi, F., Lijoi, A., Orbanz, P., and Prunster, I. (2019). Distribution theory for hierarchical processes. _Ann. Stat._, **47**, 67-92.
* Dahl et al. (2022) Dahl, D. B., Johnson, D. J., and Muller, P. (2022). Search algorithms and loss functions for Bayesian clustering. _J. Comput. Graph. Stat._, **31**, 1189-1201.
* De Blasi et al. (2015) De Blasi, P., Favaro, S., Lijoi, A., Mena, R. H., Prunster, I., and Ruggiero, M. (2015). Are Gibbs-type priors the most natural generalization of the Dirichlet process? _IEEE Trans. Pattern Anal. Mach. Intell._, **37**, 212-229.
* Diaconis and Freedman (1980) Diaconis, P. and Freedman, D. (1980). Finite exchangeable sequences. _Ann. Probab._, **8**, 745-764.
* Ewens (1990) Ewens, W. J. (1990). Population genetics theory - the past and the future. In S. Lessard, editor, _Mathematical and Statistical Developments of Evolutionary Theory_, volume 299, pages 177-227. Springer.
* Gentleman et al. (2004) Gentleman, R. C., Carey, V. J., Bates, D. M., Bolstad, B., Dettling, M., Dudoit, S., Ellis, B., Gautier, L., Ge, Y., Gentry, J., _et al._ (2004). Bioconductor: open software development for computational biology and bioinformatics. _Genome Biol._, **5**, 1-16.
* Gnedin (2010) Gnedin, A. V. (2010). A species sampling model with finitely many types. _Electron. Commun. Probab._, **15**, 79-88.
* Gnedin and Pitman (2006) Gnedin, A. V. and Pitman, J. (2006). Exchangeable Gibbs partitions and Stirling triangles. _J. Math. Sci._, **138**, 5674-5685.
* Green (2018) Green, P. J. (2018). Introduction to finite mixtures. In S. Fruhwirth-Schnatter, G. Celeux, and C. P. Robert, editors, _Handbook of Mixture Analysis_, pages 3-20. Chapman and Hall/CRC.
* Green and Richardson (2001) Green, P. J. and Richardson, S. (2001). Modelling heterogeneity with and without the Dirichlet process. _Scand. J. Stat._, **28**, 355-375.
* Lee _et al._ (2013a) Lee, J., Quintana, F. A., Muller, P., and Trippa, L. (2013a). Defining predictive probability functions for species sampling models. _Stat. Sci._, **28**, 209-222.
* Lee et al. (2013b) Lee, J., Muller, P., Zhu, Y., and Ji, Y. (2013b). A nonparametric Bayesian model for local clustering with application to proteomics. _J. Am. Stat. Assoc._, **108**, 775-788.
* Lijoi et al. (2005) Lijoi, A., Mena, R. H., and Prunster, I. (2005). Hierarchical mixture modeling with normalized inverse-Gaussian priors. _J. Am. Stat. Assoc._, **100**, 1278-1291.
* Lijoi et al. (2007a) Lijoi, A., Mena, R. H., and Prunster, I. (2007a). Bayesian nonparametric estimation of the probability of discovering new species. _Biometrika_, **94**, 769-786.
* Lijoi et al. (2007b) Lijoi, A., Mena, R. H., and Prunster, I. (2007b). Controlling the reinforcement in Bayesian non-parametric mixture models. _J. R. Stat. Soc. Series B Stat. Methodol._, **69**, 715-740.
* Lijoi et al. (2023) Lijoi, A., Prunster, I., and Rebaudo, G. (2023). Flexible clustering via hidden hierarchical Dirichlet priors. _Scand. J. Stat._, **50**, 213-234.
* Lin et al. (2021) Lin, Q., Rebaudo, G., and Muller, P. (2021). Separate exchangeability as modeling principle in Bayesian nonparametrics. _Preprint at arXiv:2112.07755_.
* Lo (1984) Lo, A. Y. (1984). On a class of Bayesian nonparametric estimates: I. density estimates. _Ann. Stat._, **12**, 351-357.
* Lun et al. (2016) Lun, A. T., McCarthy, D. J., and Marioni, J. C. (2016). A step-by-step workflow for low-level analysis of single-cell RNA-seq data with Bioconductor. _F1000Research_, **5**, 1-64.
* Miller and Harrison (2018) Miller, J. W. and Harrison, M. T. (2018). Mixture models with a prior on the number of components. _J. Am. Stat. Assoc._, **113**, 340-356.
* Neal (2000) Neal, R. M. (2000). Markov chain sampling methods for Dirichlet process mixture models. _J. Comput. Graph. Stat._, **9**, 249-265.
* Nobile (1994) Nobile, A. (1994). _Bayesian Analysis of Finite Mixture Distributions_. Ph.D. thesis, Carnegie Mellon Univ.
* Nobile and Fearnside (2007) Nobile, A. and Fearnside, A. T. (2007). Bayesian finite mixtures with an unknown number of components: the allocation sampler. _Stat. Comput._, **17**, 147-162.
* Perraudeau et al. (2017) Perraudeau, F., Risso, D., Street, K., Purdom, E., and Dudoit, S. (2017). Bioconductor workflow for single-cell RNA sequencing: normalization, dimensionality reduction, clustering, and lineage inference. _F1000Research_, **6**, 1-28.
* Petralia _et al._ (2012) Petralia, F., Rao, V., and Dunson, D. B. (2012). Repulsive mixtures. In _Adv. Neural Inf. Process. Syst._, volume 25, pages 1889-1897.
* Pitman (1996) Pitman, J. (1996). Some developments of the Blackwell-MacQueen urn scheme. _Lect. Notes-Monogr. Series_, **30**, 245-267.
* Pitman and Yor (1997) Pitman, J. and Yor, M. (1997). The two-parameter Poisson-Dirichlet distribution derived from a stable subordinator. _Ann. Probab._, **25**, 855-900.
* Risso et al. (2018) Risso, D., Perraudeau, F., Gribkova, S., Dudoit, S., and Vert, J.-P. (2018). A general and flexible method for signal extraction from single-cell RNA-seq data. _Nat. Commun._, **9**, 1-17.
* Simes (1986) Simes, R. J. (1986). An improved Bonferroni procedure for multiple tests of significance. _Biometrika_, **73**, 751-754.
* Teh et al. (2006) Teh, Y. W., Jordan, M. I., Beal, M. J., and Blei, D. M. (2006). Hierarchical Dirichlet processes. _J. Am. Stat. Assoc._, **101**, 1566-1581.
* Wade and Ghahramani (2018) Wade, S. and Ghahramani, Z. (2018). Bayesian cluster analysis: point estimation and credible balls (with discussion). _Bayesian Anal._, **13**, 559-626.
* Xu et al. (2016) Xu, Y., Muller, P., and Telesca, D. (2016). Bayesian inference for latent biologic structure with determinantal point processes (DPP). _Biometrics_, **72**, 955-964.
Supplementary materials of
**Graph-Aligned Random Partition Model (GARP)**
Giovanni Rebaudo\({}^{a}\) ([email protected])
Peter Muller\({}^{b}\) ([email protected])
\({}^{a}\)Collegio Carlo Alberto & Department of ESOMAS, University of Turin, IT
\({}^{b}\)Department of Statistics and Data Sciences & Department of Mathematics,
University of Texas at Austin, USA
## S.1 Edge Multivariate Gaussian Mixtures
Figure S.1 shows the contour plot of an edge cluster in \(\mathbb{R}^{2}\).
Without loss of generality consider an edge connecting the two vertex-clusters \(k=1\) and \(k^{\prime}=2\), with cluster-specific parameters \(\mathbf{\mu}_{1}^{*}\) and \(\mathbf{\mu}_{2}^{*}\). The edge-cluster is centered at the half-point \(\mathbf{\mu}_{1,2}^{*}=\frac{\mathbf{\mu}_{1}^{*}+\mathbf{\mu}_{2}^{*}}{2}\). The following construction defines \(\mathbf{\Sigma}_{1,2}^{*}\) such that the edge is aligned along the connecting line \(L_{1,2}\), as described in Section 2.1 of the main manuscript. Let \(\mathbf{e}=\frac{\mathbf{\mu}_{1}^{*}-\mathbf{\mu}_{2}^{*}}{||\mathbf{\mu}_{1}^{*}-\mathbf{\mu}_{2}^{*}||}\), where \(||\mathbf{\mu}_{1}^{*}-\mathbf{\mu}_{2}^{*}||\) denotes the Euclidean distance between \(\mathbf{\mu}_{1}^{*}\) and \(\mathbf{\mu}_{2}^{*}\). Let \(\mathbf{P}=\mathbf{e}\mathbf{e}^{\intercal}\) be the perpendicular projection matrix such that for any \(\mathbf{y}_{i}\in\mathbb{R}^{p}\), \(\mathbf{y}_{i}^{(p)}=\mathbf{P}\mathbf{y}_{i}\) is the perpendicular projection of \(\mathbf{y}_{i}\) onto the connecting line between \(\mathbf{\mu}_{1}^{*}\) and \(\mathbf{\mu}_{2}^{*}\). Let \(\mathbf{P}=\mathbf{Q}\mathbf{D}\mathbf{Q}^{\intercal}\) denote a singular value decomposition (SVD) with \(\mathbf{D}=\text{diag}(1,0,\ldots,0)\). Thus \(\tilde{\mathbf{R}}=\mathbf{Q}^{\intercal}\) is the rotation matrix such that \(\tilde{\mathbf{y}}_{i}=\tilde{\mathbf{R}}\mathbf{y}_{i}\) is the rotation of \(\mathbf{y}_{i}\) into the new axes, where the first axis is the line connecting \(\mathbf{\mu}_{1}^{*}\) and \(\mathbf{\mu}_{2}^{*}\) and the others are the orthogonal directions. Now, we define \(\tilde{\mathbf{S}}=\text{diag}(||\mathbf{\mu}_{1}^{*}-\mathbf{\mu}_{2}^{*}||^{2}r_{0}^{2},r_{1}^{2},\ldots,r_{1}^{2})\) and \(\mathbf{\Sigma}_{1,2}^{*}=\tilde{\mathbf{R}}^{\intercal}\tilde{\mathbf{S}}\tilde{\mathbf{R}}\), so that the covariance in the rotated axes is \(\tilde{\mathbf{R}}\mathbf{\Sigma}_{1,2}^{*}\tilde{\mathbf{R}}^{\intercal}=\tilde{\mathbf{S}}\).
Under this construction, the term in the mixture of normals sampling model (1) corresponding to the edge \((k,k^{\prime})\) is such that the Gaussian component projected onto the connecting line \(L_{1,2}\) has standard deviation \(r_{0}\,||\mathbf{\mu}_{1}^{*}-\mathbf{\mu}_{2}^{*}||\), implying a lower likelihood for edges between distant vertex locations. The standard deviation of the independent Gaussian distributions on the projection onto \(L_{1,2}^{\perp}\) is \(r_{1}\).
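A minimal Python sketch of this construction (ours; the values of \(r_{0}\) and \(r_{1}\) below approximate the settings of Section S.3). Note that the covariance is rotated back to the data axes as \(\mathbf{Q}\tilde{\mathbf{S}}\mathbf{Q}^{\intercal}\), so that the variance along the connecting line is \(||\mathbf{\mu}_{1}^{*}-\mathbf{\mu}_{2}^{*}||^{2}r_{0}^{2}\).

```python
import numpy as np

def edge_params(mu1, mu2, r0, r1):
    """Moments (mu*_{1,2}, Sigma*_{1,2}) of the edge kernel aligned with
    the segment joining mu1 and mu2, following the Section S.1 recipe."""
    mu12 = (mu1 + mu2) / 2.0
    dist = np.linalg.norm(mu1 - mu2)
    e = (mu1 - mu2) / dist
    P = np.outer(e, e)                 # projection onto the connecting line
    Q, _, _ = np.linalg.svd(P)         # first column of Q spans the line
    d = len(mu1)
    S = np.diag([dist**2 * r0**2] + [r1**2] * (d - 1))
    Sigma12 = Q @ S @ Q.T              # rotate back to the data axes
    return mu12, Sigma12

mu12, Sigma12 = edge_params(np.array([-5.0, -4.0]), np.array([-4.0, 2.0]),
                            r0=0.66, r1=0.23)
print(np.round(Sigma12, 3))
```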
## S.2 Composition of Discrete Random Probabilities
Let \(\mathbf{\theta}_{i}=\mathbf{\theta}_{Z_{i}}^{*}\) denote the normal moments in the sampling model (1). As a third characterization of the proposed GARP, we define \(\widetilde{G^{(N)}}\) as a graph-aligned random partition (with unique atoms) implied by the ties under conditional i.i.d. sampling of \(\mathbf{\theta}_{i}\), with separate models for vertex and edge-clusters. For vertex-clusters
\[\begin{split}&\mathbf{\theta}_{i}\mid V_{i}=1,\mathbf{V},P_{v}\stackrel{{ \text{iid}}}{{\sim}}P_{v},\\ & P_{v}=\sum_{m=1}^{M_{v}}\pi_{m}\delta_{\tilde{\mathbf{\theta}}_{m}} \sim\text{Gibbs-Type Process},\end{split}\] (S.1)
where \(M_{v}\) is the number of atoms of the discrete random probability \(P_{v}\), which is a Gibbs-type process; \(M_{v}\) can be finite, as in the finite symmetric DM case, infinite, as in the DP and PYP cases, or a random variable on \(\mathbb{N}\), as in the MFM case. Thus \((\pi_{m})_{m=1}^{M_{v}}\) are the random weights (sampled independently of the atoms) from the distribution on the simplex associated with the Gibbs-type process. The unique atoms \(\mathbf{\tilde{\theta}}_{m}\) of \(P_{v}\) are i.i.d. samples from the NIW distribution in (2). Note that the unique sampled vertex parameters \(\mathbf{\theta}_{v}^{*}=\{\mathbf{\theta}_{1}^{*},\ldots,\mathbf{\theta}_{K_{v}}^{*}\}\) are a subset of \(\{\mathbf{\tilde{\theta}}_{1},\ldots,\tilde{\mathbf{\theta}}_{M_{v}}\}\).
The edge-clusters are implied by
\[\begin{split}&\mathbf{\theta}_{i}\mid V_{i}=0,\mathbf{V}^{-i},K_{v},\mathbf{ \theta}_{v}^{*}\stackrel{{\text{iid}}}{{\sim}}P_{e},\\ & P_{e}=\sum_{1\leq k<k^{\prime}\leq K_{v}}\pi_{k,k^{\prime}} \delta_{(\mathbf{\theta}_{k,k^{\prime}}^{*})}.\end{split}\] (S.2)
Recall that \(M_{e}=K_{v}(K_{v}-1)/2\). The random weights follow a symmetric \(M_{e}\)-dimensional Dirichlet with hyper-parameter \(\beta/M_{e}\),
\[(\pi_{k,k^{\prime}})_{1\leq k<k^{\prime}\leq K_{v}}\sim\text{Dir}(\beta/M_{e}, \ldots,\beta/M_{e}).\] (S.3)
From the characterizations of Gibbs-type and DM processes, it is straightforward to show that the aforementioned discrete conditional random probability models for the parameters characterize the GARP as stated in the following proposition.
**Proposition S.5**.: _The random partition structure of the GARP model (4) and the vertex
and edge-parameters distributions can be characterized as the configuration of ties implied by the truncation sampling model in (7), (8), (S.1), (S.2), and (S.3)._
## S.3 Hyperparameter Settings
In both the application and the simulation we set \(\gamma=0.5\) for the Gnedin process controlling the vertex-clusters and \(\beta=0.5\) for the symmetric DM with hyperparameter \(0.5/M_{v}\), to favor sparsity of the graph. Moreover, for the hyperparameters of the NIW we set \(\mathbf{\mu}_{0}=\bar{\mathbf{y}}\), \(\kappa_{0}=0.001\), \(\nu_{0}=100\), \(\mathbf{\Lambda}_{0}=\xi^{2}\,\mathbf{I}\), and \(\mathbf{\Sigma}_{0}=\mathbf{\Lambda}_{0}^{-1}\). For scenarios in which the clusters are well separated we recommend a large value of \(\xi^{2}\) (we set it equal to 150), while we recommend a smaller value of \(\xi^{2}\) (we set it equal to 15) if the data are not well separated in the Euclidean space. Moreover, in both the application and the simulation we set \(r_{0}^{2}=4(\chi_{2,1-\alpha}^{2})^{-1}\) and \(r_{1}^{2}=(2\chi_{2,1-\alpha}^{2})^{-1}\), where \(\chi_{2,1-\alpha}^{2}\) is the quantile of order \(1-\alpha\) (we set \(\alpha=1\%\)) of a Chi-squared distribution with 2 degrees of freedom, to obtain the desired eccentricity of the elliptical contours of the edge kernel and to keep the 99%-level contour from being too spread out. To see this, recall that the \(c\)-level contours of a multivariate Gaussian density, such as the edge Gaussian in (1), are the points \(\mathbf{y}\in\mathbb{R}^{d}\) such that \((\mathbf{y}-\mathbf{\mu}_{k,k^{\prime}})^{\intercal}\mathbf{\Sigma}_{k,k^{\prime}}^{-1}(\mathbf{y}-\mathbf{\mu}_{k,k^{\prime}})\) is constant, that is, the contour levels are ellipsoids centered at \(\mathbf{\mu}_{k,k^{\prime}}\). Finally, note that if \(\mathbf{y}\sim\mathrm{N}(\mathbf{\mu}_{k,k^{\prime}},\mathbf{\Sigma}_{k,k^{\prime}})\) then,
\[(\mathbf{y}-\mathbf{\mu}_{k,k^{\prime}})^{\intercal}\mathbf{\Sigma}_{k,k^{\prime}}^{-1}( \mathbf{y}-\mathbf{\mu}_{k,k^{\prime}})\sim\chi_{d}^{2}\,,\]
where \(\chi_{d}^{2}\) denotes the Chi-square distribution with \(d\) degrees of freedom.
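For reference, these quantities can be computed as follows (a short sketch, ours):

```python
from scipy.stats import chi2

alpha = 0.01
q = chi2.ppf(1 - alpha, df=2)   # chi-squared quantile of order 1 - alpha
r0_sq = 4.0 / q                 # r0^2 = 4 / chi2_{2, 1 - alpha}
r1_sq = 1.0 / (2.0 * q)         # r1^2 = 1 / (2 chi2_{2, 1 - alpha})
print(r0_sq, r1_sq)             # approximately 0.434 and 0.054
```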
Visually, the contour plots of such an edge density are shown in Figure S.1, and data sampled from such a configuration look like those in Figure S.2.
Figure S.2: Scatter-plot of the simulated data. The red segments represent the edges that connect the true vertices.
## S.4 Implementing Posterior Inference
### Use of the Relaxed Model \(\widetilde{G^{(N)}}\) in Posterior Simulation
We discuss in more detail the use of the projectivity property of \(\widetilde{G^{(N)}}\) to define a Polya urn scheme for a tractable marginal posterior simulation algorithm. First, recall that the relaxed model \(\widetilde{G^{(N)}}\) can be seen as a hierarchical composition of Kolmogorov consistent EPPFs with product partition forms (Sections 2.2 and 3), which implies tractable expressions for \(\widetilde{G^{(N)}}(V_{i},Z_{i}\mid\boldsymbol{V}^{-i},\boldsymbol{Z}^{-i})=\frac{\widetilde{G^{(N)}}(\boldsymbol{V},\boldsymbol{Z})}{\widetilde{G^{(N)}}(\boldsymbol{V}^{-i},\boldsymbol{Z}^{-i})}\) under the relaxed model \(\widetilde{G^{(N)}}\).
To derive the desired full conditional distributions under \(G^{(N)}\) we note, e.g., that if \(I_{i}=0\) for \((\boldsymbol{V},\boldsymbol{Z})\) (recall the definition of \(I_{i}\) in Section 6), then

\[G^{(N)}\{V_{i}=v,Z_{i}=z\mid\cdots\}\propto\frac{\widetilde{G^{(N)}}(\boldsymbol{V},\boldsymbol{Z})\prod_{j\in[N]}\mathrm{N}(\boldsymbol{y}_{j}\mid\boldsymbol{\theta}_{Z_{j}}^{\boldsymbol{*}})}{\widetilde{G^{(N)}}(\boldsymbol{V}^{-i},\boldsymbol{Z}^{-i})\prod_{j\in[N]^{-i}}\mathrm{N}(\boldsymbol{y}_{j}\mid\boldsymbol{\theta}_{Z_{j}}^{\boldsymbol{*}})},\]

for any \(v\in\{0,1\}\) and \(z\in\boldsymbol{Z}^{-i}\). Moreover, when \(I_{i}=0\) for \((\boldsymbol{V},\boldsymbol{Z})\), the marginal probability in the denominator is equal to the one under a Kolmogorov consistent model (i.e., if \(I_{i}=0\), then \(\widetilde{G^{(N)}}(\boldsymbol{V}^{-i},\boldsymbol{Z}^{-i})=\widetilde{G^{(N-1)}}(\boldsymbol{V}^{-i},\boldsymbol{Z}^{-i})\), up to a normalization constant), and this allows us to generalize tractable marginal samplers such as those in Neal (2000) or Teh _et al._ (2006), relying on the characterization of the GARP via a composition of gCRPs in Section 2.3.
### Point Estimates for the GARP Random Partition
How to choose good summaries (i.e., point estimates) for reporting posterior inference on functionals of interest is a fundamental and nontrivial question in Bayesian analysis. It is especially challenging when the object of interest is a partition or a graph. To define a posterior point estimate and perform uncertainty quantification we build on the existing literature on posterior point estimates of random partitions based on a decision-theoretic approach (Wade and Ghahramani, 2018; Dahl _et al._, 2022b), generalizing the results to the more challenging case of the GARP. We propose a point estimate for the GARP as follows.
1. Assign observations to vertices versus edges using the posterior mode, \[\hat{V}_{i}=1\text{ if }\bar{V}_{i}\equiv\sum_{t}\frac{V_{i}^{(t)}}{T}>0.5,\] where \(T\) is the Monte Carlo sample size and \(V_{i}^{(t)}\) is the imputed value in iteration \(t\) of the MCMC simulations (see the sketch after this list). The uncertainty around the point estimate is quantified by \((1-\hat{V}_{i})\bar{V}_{i}+\hat{V}_{i}(1-\bar{V}_{i})\).
2. Given \(\hat{\boldsymbol{V}}\) we find a point estimate \(\hat{\boldsymbol{Z}}_{v}\) for the partition of vertex units by minimizing the variation of information loss (Meila, 2007), as suggested by Wade and Ghahramani (2018) and implemented in the R package salso (Dahl _et al._, 2022a). Alternative loss functions can be used as needed for different applications (see, e.g., Binder, 1978). For uncertainty quantification, we report the heat-map with the posterior probabilities of co-clustering.
3. Given \(\hat{\mathbf{V}}\) and \(\hat{\mathbf{Z}}_{v}\) we find a point estimate \(\hat{\mathbf{Z}}_{e}\) and conditional uncertainty quantification for \(\mathbf{Z_{e}}\) using the posterior probability of observations being assigned to the different edges. We evaluate conditional posterior probabilities of assigning the remaining observations to the possible edges,
\[G(\mathbf{Z_{e}}\mid\cdots)\propto\prod_{k<k^{\prime}}\Gamma(n_{k,k^{\prime}}+\beta/M_{e})\prod_{i\in C_{k,k^{\prime}}}\mathrm{N}(\mathbf{y}_{i}\mid\mathbf{\mu}_{k,k^{\prime}}^{*},\mathbf{\Sigma}_{k,k^{\prime}}^{*}).\] (S.4)
Here the first product goes over all \((k,k^{\prime})\) with \(1\leq k<k^{\prime}\leq K_{v}\) and the second over the \(\mathbf{y}_{i}\) such that \(z_{i}=(k,k^{\prime})\), i.e., the set \(C_{k,k^{\prime}}\). Probabilities (S.4) are evaluated by Rao-Blackwellization (Robert and Roberts, 2021), using the full conditionals
\[G\{Z_{i}=(k,k^{\prime})\mid V_{i}=0,\cdots\}\propto(n_{k,k^{\prime}}^{-i}+\beta/M_{e})\,\mathrm{N}(\mathbf{y}_{i}\mid\mathbf{\mu}_{k,k^{\prime}}^{*},\mathbf{\Sigma}_{k,k^{\prime}}^{*}).\]
We visualize \(p(\mathbf{Z_{e}}\mid\hat{\mathbf{Z}}_{v},\,\mathbf{\mu}^{*},\,\mathbf{\Sigma}^{*})\) by adding edges between vertices, with color intensity proportional to the sum, over the observations assigned to edges (i.e., \(\hat{V}_{i}=0\)), of the probabilities that such observations are assigned to the different edges \((k,k^{\prime})\).
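A short Python sketch (ours) of step 1, computing \(\hat{V}_{i}\) and the associated uncertainty from a \(T\times N\) array of MCMC draws:

```python
import numpy as np

def vertex_point_estimate(V_draws):
    """Posterior-mode assignment of units to vertices versus edges from
    the T x N array of draws V^{(t)}, with per-unit uncertainty."""
    V_bar = V_draws.mean(axis=0)          # posterior estimate of P(V_i = 1)
    V_hat = (V_bar > 0.5).astype(int)
    # probability of the label opposite to the point estimate (step 1)
    uncertainty = (1 - V_hat) * V_bar + V_hat * (1 - V_bar)
    return V_hat, uncertainty
```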
## S.5 Simulation Studies
We carried out a simulation study under a well-specified and a misspecified data-generating truth to assess inference in finite sample size scenarios. We set up simulation truths close to the mouse data. The data are simulated from a 5-vertex mixture with \(n_{k}=200\) observations in each vertex and an additional \(n_{k,k^{\prime}}=100\) observations around each of 5 assumed edges.
### Well Specified Scenario
In the first simulation scenario, we assume a simulation truth with \(K_{v}=5\) vertex clusters with cluster-specific Gaussians with mean vectors \((-5,-4)\), \((-4,2)\), \((0,7)\), \((5,3)\) and \((6,-3)\), and a common covariance matrix \(\mathrm{diag}(0.25,0.25)\). Observations assigned to edge components are sampled from a Gaussian mixture with cluster-specific kernels as in (3). The simulated \(N=1500\) observations are shown in Figure S.3a.
Figure S.3 shows that the GARP was able to recover the simulated truth in the point
estimate. Moreover, the uncertainty around the point estimate is low.
### Misspecified Scenario
Here we consider a misspecified data-generating truth, using the same true mean vectors for five vertex clusters with cluster-specific Gaussian kernels as in the previous scenario, but inflated vertex-specific covariance matrices \(\text{diag}(0.5,0.5)\). For the edge components, we introduce two sources of misspecification. First, we do not center the edge components at the midpoint of the two adjacent vertices but introduce a bias term: the edge-specific kernels are centered at \(\frac{\mathbf{\mu}_{k}^{\star}+\mathbf{\mu}_{k^{\prime}}^{\star}}{2}\) plus a shift of \(+0.25\) in the direction of the line connecting the adjacent vertices, as well as in the perpendicular direction. Second, the observations for the edge components are generated from a uniform distribution on a rectangle centered at the described \(\mathbf{\mu}_{k,k^{\prime}}^{\star}\), with the side in the direction of the connecting line equal to half the Euclidean distance between the adjacent vertices and the other side of length 2. Under this simulation truth, the scatter plot of the simulated data still allows a meaningful definition of vertex and edge clusters, but the additional misspecification and variability relative to the well-specified scenario make inference with our model more challenging. The simulated \(N=1500\) observations are shown in Figure S.4a.
Figure S.3: Well-specified simulation scenario. Left Panel: Scatter plot of the simulated data. Observations are colored according to the estimated cluster membership. Line segments show edges of the estimated graph, with the clusters at the end of line segments being vertex clusters, and clusters along the line segments being edge clusters. The grey level of the line segments shows the estimated probability of assigning observations to the respective edge (barely varying in this case). Right panel: Posterior co-clustering probabilities for all observations assigned to vertices.
Figure S.4 shows that the GARP was able to recover the simulated truth well in the point estimate in this misspecified scenario. The uncertainty around the point estimate is low.
### Non-Connected Graph Scenario
Here we investigate how the model works in a scenario with no meaningful notion of the connected graph in the data. More precisely, we simulate from a mixture of five vertex clusters, exactly as in Section S.5.1, but without any edge components.
The simulated \(N=1000\) observations and inference under the GARP are shown in Figure S.5a.
Figure S.5 shows that the GARP was able to recover the simulated truth well in the point estimate also under this non-connected graph simulation truth.
## S.6 Proof of the Main Results
For easy reference, we provide in Table S.1 a brief statement of the various probability models used in the discussion and the results, and in the following list a brief summary of the main results. Here, Ex. 1 - 4 refer to the four examples for the EPPF from Section 3.
For notational simplicity, we also write \(\widetilde{G_{\text{VZ}}}\) for the marginal laws of the stochastic process \((T_{i})_{i\in\mathbb{N}}\), as well as for the law of \(M_{v}\) and \((\pi_{m})_{m=1}^{M_{v}}\) in (S.1), since these do not depend on the dimension \(N\). Finally, we write \(G^{(N)}\) for the probability density and mass functions of random variables under the GARP model (1)-(4).
**Propositions 1 and S.5:** Characterizations of the GARP \(G^{(N)}\) as a truncation of \(\widetilde{G^{(N)}}\), which in turn is characterized as (i) a gCRP or (ii) a composition of random discrete probability measures, respectively.
**Proposition 2**:: Analytical statement of \(\widetilde{G^{(N)}}\{E_{N}\}\) for a general Gibbs-type prior.
**Theorem 1**:: Let \(g^{\infty}=\lim_{N\to\infty}\widetilde{G^{(N)}}\{K_{v}=1\}\) and \(g^{\infty}_{v}=\lim_{n_{v}\to\infty}\widetilde{G^{(N)}}\{K_{v}=1\mid N_{v}=n_{v}\}\). Then
\[g^{\infty}=g^{\infty}_{v}=\begin{cases}0&\text{Ex 1, 3, 4}\\ \gamma\in(0,1)&\text{Ex 2}\end{cases}\]
**Theorem 2**:: \(\widetilde{G_{\text{\tiny{VZ}}}}\{E_{N}\text{ eventually}\}=\begin{cases}1&\text{Ex 1, 3, 4}\\ 1-\widetilde{G_{\text{\tiny{VZ}}}}\{M_{v}=1\}&\text{Ex 2}\end{cases}\).
**Proposition 3**:: \(\text{fEPPF}^{(N)}_{K}\) under the GARP model in (4).
**Proposition 4**:: The data \(\boldsymbol{y}\), the graph-aligned random partition induced by \((V_{i},Z_{i})\) and the random partition \(\Psi_{N}\) are finitely exchangeable, but not a projection of an infinitely exchangeable process under our proposal (1)-(4).
**Theorem 3 and Corollary 1**:: Under Ex 1, the prior predictive probabilities for \(V_{i},Z_{i}\) under the GARP are eventually equal to the same under a Kolmogorov-consistent sequence \(\left(G^{(\infty)}_{N}\right)\); statement of a Polya urn and directing measure for \(\left(G^{(\infty)}_{N}\right)\).
The same remains true for any MFM.
### Proof of Proposition 1
Proof.: We assume the GARP definition via the _relaxed model_ in (7), (8), (9), and (10) and show that it is equivalent to the definition in (4).

First, we note that in (4) the constraint \(\mathbbm{1}\left(E_{N}\right)\) can be rewritten as \(\mathbbm{1}\left(\{N_{v}=N\}\cup\{K_{v}>1\}\right)\). Note also that under \(N_{e}=0\) the second line in (4) does not arise. For notational simplicity, we naturally extend the definitions of \(K_{v}\) and \(M_{e}\) by setting \(K_{v}=M_{e}=0\) if \(N_{v}=0\) and defining \(\text{DM}_{0}^{(0)}(\cdot)=\text{DM}_{0}^{(\cdot)}(\cdot)\equiv 1\).
Note also that (9) is equivalent to sampling \(\boldsymbol{Z}_{v}=(Z_{i}:i\in[N],\ V_{i}=1)\) from
\[\widetilde{G^{(N)}}(\boldsymbol{Z_{v}}\mid\boldsymbol{V})=\text{EPPF}^{(N_{v })}_{K_{v}}(n_{1},\ldots,n_{K_{v}}\mid\alpha,\sigma)/K_{v}!\] (S.5)
The clustering indicators \(Z_{i}\) are a 1-to-1 mapping of the induced exchangeable random partition up to possible relabelings. By (conditional) exchangeability of the partition, any possible relabeling of \(\boldsymbol{Z_{v}}\) has the same probability, which is equal to the EPPF divided by the number of relabelings, i.e., \(K_{v}!\).
Similarly, sampling from (10) is equivalent to sampling \(\mathbf{Z_{e}}=(Z_{i}:i\in[N],\ V_{i}=0)\) from
\[\widetilde{G^{(N)}}(\mathbf{Z_{e}}\mid\mathbf{Z_{v}})=\mathrm{DM}^{(N_{e})}_{M_{e}}((n_{k,k^{\prime}})_{k<k^{\prime}}\mid\beta/M_{e}),\]

where \(\mathrm{DM}^{(N_{e})}_{M_{e}}((n_{k,k^{\prime}})_{k<k^{\prime}})\) denotes the marginal likelihood of the DM distribution for the categorical random variables, which is a function of the sufficient statistics \((n_{k,k^{\prime}})_{k<k^{\prime}}\), i.e., the ordered cardinalities of the different edges. In contrast to the EPPF and fEPPF, here some \(n_{k,k^{\prime}}\) can be 0, implying that there is no edge connecting the vertices \(k\) and \(k^{\prime}\).
Finally, we obtain (4) via the multiplication rule of probability, i.e.,
\[\widetilde{G^{(N)}}(\mathbf{V},\mathbf{Z})=\widetilde{G^{(N)}}(\mathbf{V})\,\widetilde{G^ {(N)}}(\mathbf{Z_{v}}\mid\mathbf{V})\cdot\widetilde{G^{(N)}}(\mathbf{Z_{e}}\mid\mathbf{V},\bm {Z_{v}})\]
where \(\widetilde{G^{(N)}}(\mathbf{V})=p_{v}^{N_{v}}(1-p_{v})^{N_{e}}\) by (8).
### Proof of Proposition 2
Proof.: First, recall \(E_{N}=\{N_{v}=N\}\cup\{K_{v}>1\}\). That is, \(E_{N}\) occurs if and only if there are at least two vertex-clusters (i.e., \(K_{v}>1\)) unless no observations are allocated to edge-clusters (i.e., \(N_{v}=N\)). Thus, by additivity of probability,
\[\widetilde{G^{(N)}}\{E_{N}\} =\widetilde{G^{(N)}}\{\{N_{v}=N\}\cup\{K_{v}>1\}\}\] \[=\widetilde{G^{(N)}}\{N_{v}=N\}+\widetilde{G^{(N)}}\{\{N_{v}\neq N \}\cap\,\{K_{v}>1\}\},\]
where \(\widetilde{G^{(N)}}\{N_{v}=N\}=p_{v}^{N}\). In words, we decompose \(E_{N}\) into the union of the (disjoint) events "all clusters are vertices" and "not all observations are in vertices and there are at least 2 vertex-clusters". The second term is further expanded by conditioning on \(N_{v}\) as:
\[\widetilde{G^{(N)}}\{\{N_{v}\neq N\}\cap\{K_{v}>1\}\} =\widetilde{G^{(N)}}\{\{N_{v}\notin\{0,1,N\}\}\cap\{K_{v}>1\}\}\\ =\sum_{n_{v}=2}^{N-1}\widetilde{G^{(N)}}\{N_{v}=n_{v}\}\,\widetilde{G^{(N)}}\{K_{v}\neq 1\mid N_{v}=n_{v}\}\\ =\sum_{n_{v}=2}^{N-1}\binom{N}{n_{v}}p_{v}^{n_{v}}(1-p_{v})^{N-n_{v}}\big[1-\mathrm{EPPF}_{1}^{(n_{v})}(n_{v})\big]\\ =\sum_{n_{v}=2}^{N-1}\binom{N}{n_{v}}p_{v}^{n_{v}}(1-p_{v})^{N-n_{v}}\big[1-(1-\sigma)_{n_{v}-1}W_{n_{v},1}\big],\]
where the last equality follows from the definition of the Gibbs-type priors.
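For the DP special case (\(\sigma=0\), where \(\mathrm{EPPF}_{1}^{(n_{v})}(n_{v})=(n_{v}-1)!\,\Gamma(\alpha+1)/\Gamma(\alpha+n_{v})\)), the expression for \(\widetilde{G^{(N)}}\{E_{N}\}\) can be evaluated directly; a Python sketch (ours, with placeholder values) follows.

```python
import numpy as np
from scipy.special import comb, gammaln

def one_cluster_prob_dp(n, alpha):
    """CRP(alpha): probability that n units form a single cluster,
    (n - 1)! * Gamma(alpha + 1) / Gamma(alpha + n)."""
    return np.exp(gammaln(n) + gammaln(1 + alpha) - gammaln(n + alpha))

def prob_E_N(N, p_v, alpha):
    """Proposition 2 specialized to the DP: tilde G^{(N)}{E_N}."""
    total = p_v**N
    for n_v in range(2, N):
        w = comb(N, n_v) * p_v**n_v * (1 - p_v)**(N - n_v)
        total += w * (1 - one_cluster_prob_dp(n_v, alpha))
    return total

print(prob_E_N(100, 0.5, 1.0))  # close to 1, in line with Theorem 1
```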
### Proof of Theorem 1
Proof.: First, note that the finite sample behavior of
\[g_{n_{v}}=\widetilde{G^{(N)}}\{K_{v,N}=1\mid N_{v,N}=n_{v}\}=\widetilde{G_{\rm{ VZ}}}\{K_{v,N}=1\mid N_{v,N}=n_{v}\}=\text{EPPF}_{1}^{(n_{v})}(n_{v})\]
is derived as a special case of the EPPF in the different examples in Section 3 of the main manuscript. From it, we can derive the large sample behavior of \(g_{n_{v}}\) and the limit \(g_{v}^{\infty}\) reported in Table 2. Let \((x)_{n}=\Gamma(x+n)/\Gamma(x)=x(x+1)\cdots(x+n-1)\). To compute the rate of \(g_{n_{v}}\) we note that by the Stirling approximation
\[\frac{(x)_{n}}{n!}=\frac{\Gamma(x+n)}{\Gamma(x)n!}\asymp\frac{n^{x-1}}{\Gamma (x)}\quad\text{as $n\to\infty$}.\]
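The gamma-function manipulation behind this display is standard; spelled out (as a reminder, not a new step of the proof):

```latex
\frac{(x)_{n}}{n!}
  = \frac{\Gamma(x+n)}{\Gamma(x)\,\Gamma(n+1)}
  \asymp \frac{n^{x-1}}{\Gamma(x)}
  \quad\text{as } n\to\infty,
\qquad\text{using}\quad
\frac{\Gamma(n+a)}{\Gamma(n+b)} \asymp n^{a-b}
\ \text{ with } a=x,\ b=1.
```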
Note also that \((N_{v,N})_{N\in\mathbb{N}}\) is a \((\widetilde{G_{\rm{VZ}}}\)-almost surely) Markovian non-decreasing sequence of random integers such that
\[\frac{N_{v,N}}{N}\to p_{v}\quad\text{as $N\to\infty$}\]
\(\widetilde{G_{\rm{VZ}}}\)-a.s. by the strong law of large numbers. Therefore, \(N_{v,N}\) diverges \(\widetilde{G_{\rm{VZ}}}\)-almost surely and \(g_{v}^{\infty}\equiv\lim_{n_{v}\to\infty}\widetilde{G_{\rm{VZ}}}\{K_{v}=1\mid N _{v,N}=n_{v}\}=\lim_{N\to\infty}\widetilde{G_{\rm{VZ}}}\{K_{v}=1\}=g^{\infty}\).
We note, as a remark, that to have \(g_{v}^{\infty}\) well defined we consider a sequence \((N=f(n_{v}))_{n_{v}\in\mathbb{N}}\) such that \(f:\mathbb{N}\to\mathbb{N}\) and \(f(n)\geq n\) for any \(n\in\mathbb{N}\). Moreover, the hierarchical definitions of \(\mathbf{V}\) and \(\mathbf{Z_{v}}\) imply that \(K_{v}=K_{v}(\mathbf{Z}_{v})\) \(\widetilde{G_{\rm{VZ}}}\)-almost surely, where \(K_{v}(\mathbf{Z}_{v})\) denotes a function of the \(N\) units \((V_{i},Z_{i})\) that depends on \(\mathbf{Z}\) only indirectly through the \(N_{v,N}\) units allocated to vertices, i.e., \(\mathbf{Z}_{v}\).
Finally, as derived in Section 4 of the main manuscript, \(g_{v}^{\infty}=g^{\infty}=\lim_{N\to\infty}\widetilde{G_{\rm{VZ}}}\{E_{N}^{c }\}\).
### Proof of Theorem 2
Recall the definition of eventually. Let \((E_{N})_{N\in\mathbb{N}}\) be a sequence of events in the measurable space \((\Omega,\mathcal{F})\),
\[\{E_{N}\text{ eventually}\}=\liminf_{N}E_{N}=\cup_{\bar{N}=0}^{\infty}\cap_{ N=\bar{N}}^{\infty}E_{N}.\]
In words, it is the set of \(\omega\in\Omega\) such that there exists an integer \(\bar{N}(\omega)\) such that for any integer \(N\geq\bar{N}(\omega)\), \(\omega\in E_{N}\).
Proof.: **Case with an \(M_{v}\)-dimensional symmetric Dirichlet (where \(M_{v}>1\)), with a DP, or with a PYP in \((4)\).**
First, since \(K_{v,N},N_{v,N}\) are functions of \(T_{1:N}\) (that is of \((\mathbf{V}_{1:N},\mathbf{Z}_{v,N})\)) only, \(\widetilde{G^{(N)}}(K_{v,N},N_{v,N})=\widetilde{G_{\rm{VZ}}}(K_{v,N},N_{v,N})\) for any \(N\in\mathbb{N}\).
Note that under \(\widetilde{G_{\text{VZ}}}\), \((K_{v,N})_{N}\) is an a.s. non-decreasing Markovian sequence of positive integers such that for any natural \(N>1\), \(\widetilde{G_{\text{VZ}}}\{K_{v,N}>1\}>0\) and it can be computed from (8)-(9).
Moreover, by Kingman's representation theorem (see Kingman, 1978 and Theorem 14.7 in Ghosal and van der Vaart, 2017) the random partition can be characterized as arising from the ties obtained by sampling from a unique discrete probability measure \(P_{v}=\sum_{m=1}^{M_{v}}\pi_{m}\delta_{\tilde{\boldsymbol{\theta}}_{m}}\) (which we know is \(M_{v}\)-symmetric Dirichlet, DP or PYP distributed), and the frequency of the \(k\)th largest partition block converges almost surely to the \(k\)th largest random weight in \((\pi_{m})_{m=1}^{M_{v}}\) for any \(k\in\{1,\ldots,M_{v}\}\). Therefore, together with the assumption \(M_{v}\geq 2\), it implies that
\[\widetilde{G_{\text{VZ}}}\{\{K_{v,N}>1\}\text{ eventually }w.r.t.\ N\}=1.\]
To conclude the proof of (14), note that
\[E_{N}=\{N_{v,N}=N\}\cup\{K_{v,N}>1\}\supset\{K_{v,N}>1\}.\]
Thus, we have shown that \(\widetilde{G_{\text{VZ}}}\{E_{N}\text{ eventually}\}=1\).
To prove (15), first recall that for any \(N\in\mathbb{N}\), \(G^{(N)}\) and \(\widetilde{G^{(N)}}\) denote the probability mass function of \((V_{i},Z_{i})_{i=1}^{N}\) under the GARP and the relaxed model, respectively. Next, for any \(N,k\in\mathbb{N}\) and any set of possible points \(A_{k}=(\mathbf{v}_{1:N+k},\mathbf{z}_{1:N+k})\), by definition of conditional probability we have
\[\widetilde{G^{(N+k)}}(A_{k}\mid\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{ v,N}=\mathbf{z}_{v,N})=\frac{\widetilde{G^{(N+k)}}\big{(}A_{k}\big{)}}{ \widetilde{G^{(N+k)}}\{\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\}},\] (S.6)
where, by additivity of probability,
\[\widetilde{G^{(N+k)}}\{\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\}=\sum_{\big{\{}(v_{i}^{\prime},z_{i}^{\prime})_{i=1}^{N+k} :\mathbf{v}_{1:N}^{\prime}=\mathbf{v}_{1:N},\,\mathbf{z}_{v,N}^{\prime}= \mathbf{z}_{v,N}\big{\}}}\widetilde{G^{(N+k)}}((v_{i}^{\prime},z_{i}^{\prime })_{i=1}^{N+k}).\]
Moreover, for any \(k,N\in\mathbb{N}\) and any possible points \(A_{k}=(\mathbf{v}_{1:N+k},\mathbf{z}_{1:N+k})\) such that \(\{\boldsymbol{V}_{1:N}=\mathbf{v}_{1:N},\boldsymbol{Z}_{1:N}=\mathbf{z}_{1:N}\}\) entails that \(\{K_{v,N}>1\}\) holds (and thus \(\mathbb{I}(E_{N})=1\)) we have
\[G^{(N+k)}\big{(}A_{k}\mid\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\big{)}=\frac{\widetilde{G^{(N+k)}}\big{(}A_{k}\big{)}}{ \widetilde{G^{(N+k)}}\{\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\}},\]
by (7) and definition of conditional probability.
To conclude the proof of (15) we note that, by (14), there exists a set \(\mathcal{T}\) of sequences \((t_{i})_{i=1}^{\infty}\) that are possible realizations of \((T_{i})_{i=1}^{\infty}\) such that \(\widetilde{G_{\text{VZ}}}\{\mathcal{T}\}=1\) and such that for any
sequence \(t=(t_{i})_{i=1}^{\infty}\in\mathcal{T}\) there exists a \(\bar{N}(t)\in\mathbb{N}\) such that \(\{K_{v,N}(t)>1\}\) holds for any \(N\geq\bar{N}(t)\). Therefore, for any \(N\geq\bar{N}(t)\)
\[G^{(N+k)}\big{(}A_{k}\mid\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\big{)}=\widetilde{G^{(N+k)}}(A_{k}\mid\mathbf{V}_{1:N}= \mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}),\]
where \(t_{i}=v_{i}\) if \(v_{i}=0\) and \(t_{i}=(v_{i},z_{i})\) if \(v_{i}=1\). Thus we have proved (15).
#### Case with a Gnedin process in (4).
Similarly to the previous case, note that under \(\widetilde{G_{\mathrm{vz}}}\), \((K_{v,N})_{N}\) is an a.s. non-decreasing Markovian sequence of positive integers. Moreover, by Kingman's representation theorem and the fact that \(\widetilde{G_{\mathrm{vz}}}\{M_{v}<\infty\}=1\) we have that
\[\widetilde{G_{\mathrm{vz}}}\{\{K_{v,N}=M_{v}\}\text{ eventually }w.r.t.\ N\}=1.\]
Indeed, the random partition can be thought of as arising from the ties obtained by sampling from a unique discrete probability measure \(P_{v}=\sum_{m=1}^{M_{v}}\pi_{m}\delta_{\tilde{\boldsymbol{\theta}}_{m}}\) (here distributed as a Gnedin process), and the frequency of the \(k\)th largest partition block converges almost surely to the \(k\)th largest random weight in \((\pi_{m})_{m=1}^{M_{v}}\) for any \(k\in\{1,\ldots,M_{v}\}\).
Note that \(\{K_{v,N}=M_{v}\}\subset\{K_{v,N}>1\}\cup\{M_{v}=1\}\subset E_{N}\cup\{M_{v}=1\}\), thus
\[\widetilde{G_{\mathrm{vz}}}\{\{K_{v,N}>1\}\cup\{M_{v}=1\}\text{ eventually }w.r.t.\ N\}=1\]
and
\[\widetilde{G_{\mathrm{vz}}}\big{\{}E_{N}\cup\{M_{v}=1\}\text{ eventually }w.r.t.\ N\big{\}}=1.\] (S.7)
To conclude the proof we need to show that, for any \(k\in\mathbb{N}\) and any possible set of points \(A_{k}=(\mathbf{v}_{1:N+k},\mathbf{z}_{1:N+k})\)
\[\widetilde{G}_{VZ}\left\{\big{\{}G^{(N+k)}(A_{k}\mid\mathbf{V}_{1:N}, \mathbf{Z}_{v,N})=\widetilde{G^{(N+k)}}(A_{k}\mid\mathbf{V}_{1:N},\mathbf{Z}_ {v,N})\big{\}}\cup\{M_{v}=1\}\text{ eventually}\right\}=1.\] (S.8)
To prove (S.8) we note that, by (S.7), there exists a set \(\mathcal{T}\) of sequences \((t_{i})_{i=1}^{\infty}\) that are possible realizations of \((T_{i})_{i=1}^{\infty}\) such that \(\widetilde{G_{\mathrm{vz}}}\{\mathcal{T}\}=1\) and such that for any sequence \(t=(t_{i})_{i=1}^{\infty}\in\mathcal{T}\) there exists a \(\bar{N}(t)\in\mathbb{N}\) such that \(\{K_{v,N}(t)>1\}\cup\{M_{v}(t)=1\}\) (and thus \(E_{N}\)) holds for any \(N\geq\bar{N}(t)\). Therefore, by (7) and definition of conditional probability, for any \(N\geq\bar{N}(t)\)
\[G^{(N+k)}\big{(}A_{k}\mid\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\big{)}=\widetilde{G^{(N+k)}}(A_{k}\mid\mathbf{V}_{1:N}= \mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}),\]
where \(t_{i}=v_{i}\) if \(v_{i}=0\) and \(t_{i}=(v_{i},z_{i})\) if \(v_{i}=1\).
### Proof of Proposition 3
The fEPPF in Proposition 3 is computed via marginalization of the pmf of the GARP in (4) over all the quantities that are compatible with the cardinalities \(\{c_{1},\ldots,c_{K_{N}}\}\) of \(\Psi_{N}\).
We state a more complete version of Proposition 3, now including a statement of the range of the three sums that appear in
\[\text{fEPPF}^{(N)}_{K_{N}}(|C_{1}|,\ldots,|C_{K_{N}}|)\propto\\ \sum_{N_{v}}\left\{\binom{N}{N_{v}}p_{v}^{N_{v}}(1-p_{v})^{N-N_{v}}\sum_{K_{v}}\left[\binom{M_{e}}{K_{N}-K_{v}}\right.\right.\\ \left.\left.\sum_{(n_{1},\ldots,n_{K_{v}})}\text{EPPF}^{(N_{v})}_{K_{v}}(n_{1},\ldots,n_{K_{v}})\,\text{DM}^{(N-N_{v})}_{M_{e}}((n_{k,k^{\prime}})_{k<k^{\prime}})\right]\right\}\]
The first sum runs over \(N_{v}\in[N]\) with the restriction that \(N_{v}=N\) if \(K_{N}\leq 2\). The second sum runs over \(K_{v}\in[K_{N}]\) with the restrictions that
1. \(K_{v}\geq 2\) if \(K_{N}\geq 2\);
2. \(K_{v}<K_{N}\) if \(N_{v}\neq N\);
3. \(K_{v}=K_{N}\) if \(N_{v}=N\);
4. \(K_{N}\leq N_{v}+\min\{M_{e},N-N_{v}\}\), keeping in mind that \(M_{e}\coloneqq\frac{K_{v}(K_{v}-1)}{2}\).
Finally, the last sum runs over \((n_{1},\ldots,n_{K_{v}})\) where \(\sum_{k=1}^{K_{v}}n_{k}=N_{v}\) and \(n_{1},\ldots,n_{K_{v}}\) are distinct elements of \(\{c_{1},\ldots,c_{K_{N}}\}\) ordered, e.g., by cardinalities. The non-zero edge-cluster sizes \(n_{k,k^{\prime}}\) are then the remaining (ordered) elements of \((c_{1},\ldots,c_{K_{N}})\) that are not matched with vertex-cluster sizes \(n_{k}\).
### Proof of Proposition 4
Proof.: **Finite exchangeability.**
First note that \(\mathbf{Z}_{v}=(Z_{v,i})_{i=1}^{N_{v}}\coloneqq(Z_{i}:i\in[N]\), \(V_{i}=1)\) identifies arbitrarily labeled vertex-clusters (e.g., in order of appearance). Hence, formally the vector \(\mathbf{Z}_{v}\) and its relabeling are regarded as distinct objects, even though they identify the same vertex-partition.
Moreover, if the edge-clusters are relabelled according to the relabeling of the vertex-clusters, this identifies the exact same graph-aligned random partition.
For instance, \((Z_{1}=1,Z_{2}=2,Z_{3}=5,Z_{4}=(1,2))\) entails the same graph-aligned partition as \((Z_{1}=3,Z_{2}=2,Z_{3}=1,Z_{4}=(2,3))\), but a different one than \((Z_{1}=3,Z_{2}=2,Z_{3}=3,Z_{4}=(1,2))\). A relabeling of \(Z_{i}\) which preserves the same graph-aligned random partition
does not modify the likelihood distribution \(G^{(N)}(\mathbf{y}\mid\mathbf{V},\mathbf{Z})\) in (1), which is invariant under such a relabeling.
By construction, the graph-aligned random partition (4) induced by \((V_{i},Z_{i})\) is exchangeable, i.e., the joint law is invariant to permutations of the labels \(i\). Note that we cannot state the same argument directly in terms of the pmf of \((V_{i},Z_{i})\), since we have an arbitrary order of \(\mathbf{Z}_{v}\), i.e., the order of arrival (irrelevant for the graph-aligned random partition), which gives probability zero to permutations of the \(i\)'s that entail a non-increasing sequence of \(\mathbf{Z}_{v}\).
Since the likelihood of the sample (1) can be defined as a function of the graph-aligned random partition, we immediately obtain the exchangeability of the sample \((\mathbf{y}_{i})_{i=1}^{N}\).
Finally, since the random partition \(\Psi_{N}\) can be seen as the marginalization of the graph-aligned random partition, we also have finite exchangeability of \(\Psi_{N}\) as also shown via the fEPPF (3).
**Lack of projectivity.**
To prove that _infinite exchangeability_ does not hold we show a simple counterexample where projectivity fails.
We first show the lack of projectivity for the graph-aligned random partition \(G^{(N)}(\mathbf{V},\mathbf{Z})\). It suffices to note that for a sample of size \(N=1\) the probability of assigning an observation to a vertex is \(1\), i.e., \(G^{(1)}\{V_{1}=1\}=1\), while it is strictly smaller than \(1\) for \(N=3\), since, by (4),
\[G^{(3)}\{V_{1}=0\}=G^{(3)}\{V_{1}=0,Z_{1}=(1,2),V_{2}=1,Z_{2}=1,V_{3}=1,Z_{3}= 2\}>0.\]
Next, we show the lack of projectivity for \(\mathbf{y}\). The last argument also implies that in a sample of size \(N=1\) the marginal density of the observations \(\mathbf{y}_{1}\) can be rewritten as
\[\int\mathrm{N}(\mathbf{y}\mid\mathbf{\mu}^{*},\mathbf{\Sigma}^{*})\mathrm{d}\mathrm{NIW}( \mathbf{\mu}^{*},\mathbf{\Sigma}^{*}\mid\mathbf{\mu}_{0},\lambda_{0},\kappa_{0},\mathbf{ \Sigma}_{0})\] (S.9)
while under \(N=3\) it is a mixture of (S.9) and an additional term corresponding to an allocation as an edge:
\[\int\mathrm{N}(\mathbf{y}\mid\mathbf{\mu}^{*}_{1,2},\mathbf{\Sigma}^{*}_{1,2})\mathrm{d}G ^{(3)}(\mathbf{\mu}^{*}_{1,2},\mathbf{\Sigma}^{*}_{1,2}),\]
with \(G^{(3)}(\mathbf{\mu}^{*}_{1,2},\mathbf{\Sigma}^{*}_{1,2})\) characterized by \(\mathbf{\mu}^{*}_{1,2}=(\mathbf{\mu}^{*}_{1}+\mathbf{\mu}^{*}_{2})/2\) and \(\mathbf{\Sigma}^{*}_{1,2}=f(\mathbf{\mu}^{*}_{1},\mathbf{\mu}^{*}_{2})\), where \(\mathbf{\mu}^{*}_{1}\) and \(\mathbf{\mu}^{*}_{2}\) are independent draws from a generalized Student-T distribution. This shows that \(\mathbf{y}_{i}\) is not infinitely exchangeable.
Finally, we consider the random partition \(\Psi_{N}\). Note that the probability of observations \(i=1,2\) being clustered together in a sample of size \(2\) (i.e., of a partition with a single
cluster), is equal to
\[G^{(2)}\{\Psi_{2}=\{1,2\}\} = \mbox{fEPPF}_{1}^{(2)}(2)=\mbox{EPPF}_{1}^{(2)}(2)>\] \[> G^{(3)}\{\Psi_{3}:Z_{1}=Z_{2}\}=\mbox{EPPF}_{1}^{(2)}(2)\,G^{(3)} \{V_{1}=V_{2}=1\}.\]
Thus, in the last expression, the first factor is the probability of having the observations with labels \(i=1,2\) in the same cluster given that they are in vertex-clusters, and the second factor is the probability of those two observations being assigned to vertex-clusters. Note that, in the case of \(N=2\), the probability of the two observations being assigned to vertex-clusters is 1.
### Proof of Theorem 3 and Corollary 1
Theorem 3 in the main manuscript shows that in some cases the prior predictive distributions of the GARP model eventually (i.e., for a large enough sample size \(N\)) can be characterized as a projection of the predictive distributions of a limiting infinitely exchangeable model, thus where projectivity holds.
Proof.: **Proof of Theorem 3 (\(M_{v}\)-dimensional symmetric Dirichlet)**
**(Case 1: \(M_{v}=1\))**
For any \(N\in\mathbb{N}\) our proposal degenerates to a single Gaussian model because \(G^{(N)}\)-a.s. all the observations are clustered together in a single vertex. In such a case it is immediate to check that we have projectivity and (18), (19) and (20) hold. However, this is clearly an uninteresting case from a modeling perspective.
**(Case 2: \(M_{v}>1\))**
First, recall that
\[\widetilde{G_{\mbox{\tiny{VZ}}}}\bigg{\{}\lim_{N\to\infty}\frac{N_{v,N}}{N}=p_{v}\bigg{\}}=1\]
by the strong law of large numbers.
Recall also that under \(\widetilde{G_{\mbox{\tiny{VZ}}}}\), \((K_{v,N})_{N}\) is an a.s. non-decreasing Markovian sequence of positive integers such that for any \(N\in\mathbb{N}\), \(K_{v,N}\leq M_{v}\) and \(\widetilde{G_{\mbox{\tiny{VZ}}}}\big{\{}K_{v,N}=\min(N,M_{v})\big{\}}>0\) and it can be computed from (8)-(9).
Moreover, by Kingman's representation theorem (see Kingman, 1978 and Theorem 14.7 in Ghosal and van der Vaart, 2017) the random partition can be thought of as arising from the ties obtained by sampling from a unique discrete probability measure \(P_{v}=\sum_{m=1}^{M_{v}}\pi_{m}\delta_{\tilde{\mathbf{\theta}}_{m}}\) (which we know is \(M_{v}\)-symmetric Dirichlet distributed), and the frequency of the \(k\)th largest partition block converges almost surely to the \(k\)th largest random weight in \((\pi_{m})_{m=1}^{M_{v}}\) for any \(k\in\{1,\ldots,M_{v}\}\). Therefore, together with the assumption that \(M_{v}\) is
finite \(\widetilde{G_{\text{\tiny VZ}}}\{\lim_{N\to\infty}K_{v,N}=M_{v}\}=1\). Thus, since \(K_{v,N}\) are random integers,
\[\widetilde{G_{\text{\tiny VZ}}}\,\{\{K_{v,N}=M_{v}\}\text{ eventually w.r.t. }N\}=1.\] (S.10)
Note also that
\[\{K_{v,N}=M_{v}\}\subset E_{N}.\]
Thus, for any \(N,k\in\mathbb{N}\) and \(A_{k}=(v_{i},z_{i})_{i=1}^{N+k}\) such that \(\{\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}\}\) entails that \(\{K_{v,N}=M_{v}\}\) holds (and so \(\mathbbm{1}(E_{N})=1\)) we have
\[G^{(N+k)}\big{(}(v_{i},z_{i})_{i=1}^{N+k}\mid\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}\big{)}=\] \[\frac{\widetilde{G^{(N+k)}}\big{(}(v_{i},z_{i})_{i=1}^{N+k}\big{)}}{\widetilde{G^{(N+k)}}\{\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}\}}=\] \[\widetilde{G^{(N+k)}}\big{(}(v_{i},z_{i})_{i=1}^{N+k}\mid\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}\big{)},\]
by definition of conditional probability and (7).
Note that, for any \(N\in\mathbb{N}\), \(K_{v,N}=M_{v}\) entails that \(K_{v,N+k}=M_{v}\) and \(M_{e,N+k}=M_{e}^{+}\coloneqq\frac{M_{v}(M_{v}-1)}{2}\) for any \(k=0,1,\ldots\). Therefore, by definition of \(\widetilde{G^{(N)}}\) and the fact that \(\{\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}\}\) entails that \(\{K_{v,N}=M_{v}\}\) and \(\{M_{e,N}=M_{e}^{+}\}\) hold, for any \(k\in\mathbb{N}\) we have
\[\widetilde{G^{(N+k)}}\big{(}(v_{i},z_{i})_{i=1}^{N+k}\mid\mathbf{ V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}\big{)}=\] \[\frac{G^{(\infty)}_{N+k}\big{(}(v_{i},z_{i})_{i=1}^{N+k}\big{)}} {G^{(\infty)}_{N+k}\{\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\}}=\] \[G^{(\infty)}_{N+k}\big{(}(v_{i},z_{i})_{i=1}^{N+k}\mid\mathbf{ V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}\big{)},\]
where \(G^{(\infty)}_{N+k}\) refers to the pmf of \(\mathbf{V}_{1:N+k},\mathbf{Z}_{1:N+k}\) defined in (20). We now explicitly call such law \(G^{(\infty)}_{N+k}\) (i.e., with the subscript) to stress the dimension and to show that \((G^{(\infty)}_{N})_{N\in\mathbb{N}}\) are indeed Kolmogorov consistent and can be seen as the projections of the law of a stochastic process \(G^{(\infty)}\).
To conclude the proof of (17) recall that by (S.10), there exists a set \(\mathcal{T}\) of sequences \((t_{i})_{i=1}^{\infty}\) that are possible realizations of \((T_{i})_{i=1}^{\infty}\) such that \(\widetilde{G_{\text{\tiny VZ}}}\{\mathcal{T}\}=1\) and such that for any sequence \(t=(t_{i})_{i=1}^{\infty}\in\mathcal{T}\) there exists a \(\bar{N}(t)\in\mathbb{N}\) such that \(\{K_{v,N}(t)=M_{v}\}\) (and thus also \(\{M_{e,N}(t)=M_{e}^{+}\}\)) holds for any \(N\geq\bar{N}(t)\). Therefore, for any \(N\geq\bar{N}(t)\)
\[G^{(N+k)}\big{(}A_{k}\mid\mathbf{V}_{1:N}=\mathbf{v}_{1:N},\mathbf{Z}_{v,N}= \mathbf{z}_{v,N}\big{)}=G^{(\infty)}_{N+k}(A_{k}\mid\mathbf{V}_{1:N}=\mathbf{v }_{1:N},\mathbf{Z}_{v,N}=\mathbf{z}_{v,N}),\]
where \(t_{i}=v_{i}\) if \(v_{i}=0\) and \(t_{i}=(v_{i},z_{i})\) if \(v_{i}=1\).
To check the projectivity of \((G^{(\infty)}_{N})_{N}\) we note that for any \(N\in\mathbb{N}\) and possible values
\((v_{i},z_{i})_{i\in[N]}\)
\[G_{N}^{(\infty)}((v_{i},z_{i})_{i\in[N]}) =p_{v}^{N_{v}}\,\mathrm{EPPF}_{K_{v}}^{(N_{v})}(n_{1},\ldots,n_{K_{v}}\mid\alpha,\sigma)/K_{v}!\] \[\quad\times(1-p_{v})^{N_{e}}\,\mathrm{DM}_{M_{v}(M_{v}-1)/2}^{(N_{e})}((n_{k,k^{\prime}})_{k<k^{\prime}}\mid\beta/M_{e})\] \[=\sum_{v_{N+1},z_{N+1}}G_{N+1}^{(\infty)}((v_{i},z_{i})_{i\in[N+1]})=G^{(\infty)}((v_{i},z_{i})_{i\in[N]}).\]
The second and third equalities hold by projectivity of the EPPF and DM (where the sum is over all possible values of \(v_{N+1},z_{N+1}\)). We denote by \(G^{(\infty)}\) the infinite-dimensional GARP defined via such Kolmogorov consistent finite-dimensional distributions.
From \(G^{(\infty)}\) (and its Kolmogorov consistent finite-dimensional) we derive the urn schemes in (18) via the definition of conditional probability. The ratio boils down to (18) thanks to the product form of the EPPF and of the DM.
Finally, note that via the characterization of the EPPF and DM in terms of discrete random probabilities (see e.g., Section S.2), the induced law on \((\boldsymbol{\theta}_{i})_{i=1}^{N}\) can thus be characterized by first sampling \(V_{i}\stackrel{{\mathrm{iid}}}{{\sim}}\mathrm{Bern}(p_{v})\) and \(\boldsymbol{\theta}_{i}\mid P_{v},V_{i}=1\stackrel{{\mathrm{ind}}}{{\sim}}P_{v}\coloneqq\sum_{m=1}^{M_{v}}\pi_{m}\delta_{\tilde{\boldsymbol{\theta}}_{m}}\) and \(\boldsymbol{\theta}_{i}\mid P_{e},V_{i}=0\stackrel{{\mathrm{ind}}}{{\sim}}P_{e}\coloneqq\sum_{k<k^{\prime}\leq M_{v}}\pi_{k,k^{\prime}}\delta_{\tilde{\boldsymbol{\theta}}_{k,k^{\prime}}}\). Thus we derive (19) by marginalizing with respect to \(\mathbf{V}\) and by the uniqueness of the directing measure.
#### Proof of Corollary 1
First, we write explicitly the statement of Corollary 1.
**Corollary S.2** (Corollary 1 of the main manuscript).: _Under the GARP with a Gnedin process (Example 2) in (4) there exists a finite random sample size \(\bar{N}\) such that for any \(N>\bar{N}\) the prior predictive distributions under the proposed GARP model given \(M_{v}\) are \(\widetilde{G_{\mbox{\tiny{VZ}}}}\)-a.s. equal to the prior predictive distributions given \(M_{v}\) under a Kolmogorov consistent \(G^{(\infty)}\), i.e., for any possible sequence of sets of points \((A_{k})_{k\in\mathbb{N}}\), with \(A_{k}=(\mathbf{v}_{1:N+k},\mathbf{z}_{1:N+k})\)_
\[\widetilde{G_{\mbox{\tiny{VZ}}}}\Big{\{}\big{\{}G_{N+k}^{(\infty)}(A_{k}\mid \mathbf{V}_{1:N},\mathbf{Z}_{1:N},M_{v})=G^{(N+k)}(A_{k}\mid\mathbf{V}_{1:N}, \mathbf{Z}_{1:N},M_{v})\;\forall\,k\big{\}}\mbox{ eventually}\Big{\}}=1.\]
_Moreover, \(G^{(\infty)}(\cdot\mid\mathbf{V}_{1:N},\mathbf{Z}_{1:N},M_{v})\) can be characterized by the urn scheme in (18) and \(G^{(\infty)}(\cdot\mid M_{v})\) by the pmf (20) and by an exchangeable sequence with directing measure being the law of \(P\mid M_{v}\) as in (19). Finally, \(G^{(\infty)}(M_{v}=m)=\widetilde{G_{\mbox{\tiny{VZ}}}}(M_{v}=m)=\frac{\gamma(1 -\gamma)_{m-1}}{m!}\)._
Note that \(\widetilde{G_{\mbox{\tiny{VZ}}}}\)-a.s. \(M_{v}\in\mathbb{N}\) and that for any realization of \(M_{v}=m\in\mathbb{N}\) we are back to the finite symmetric Dirichlet GARP and thus the result follows from Theorem 3.
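As a quick sanity check (a computation of ours, not part of the proof), the stated pmf of \(M_{v}\) indeed sums to one:

```latex
\sum_{m\ge 1}\frac{\gamma\,(1-\gamma)_{m-1}}{m!}
 = \gamma \sum_{m\ge 1}\frac{(1-\gamma)_{m-1}}{(m-1)!}\int_0^1 x^{m-1}\,\mathrm{d}x
 = \gamma \int_0^1 (1-x)^{-(1-\gamma)}\,\mathrm{d}x
 = \gamma\cdot\frac{1}{\gamma}
 = 1,
```

using the binomial series \(\sum_{j\geq 0}(1-\gamma)_{j}x^{j}/j!=(1-x)^{-(1-\gamma)}\) for \(x\in[0,1)\).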
### Software, Runtime, etc.
The results reported in this article are based on 10,000 retained MCMC iterations, obtained after discarding the initial 50,000 iterations as burn-in. The retained samples were further thinned by an interval of 2. We programmed everything in R. The analyses were performed on a Lenovo ThinkStation P330 with 16Gb RAM (Windows 10), using R version 4.2.3. The MCMC algorithm takes 29.8 minutes.
|
2308.00575 | Lee-Yang zeros at $O(3)$ and deconfined quantum critical points | Lee-Yang theory, based on the study of zeros of the partition function, is
widely regarded as a powerful and complementary approach to the study of
critical phenomena and forms a foundational part of the theory of phase
transitions. Its widespread use, however, is complicated by the fact that it
requires introducing complex-valued fields that create an obstacle for many
numerical methods, especially in the quantum case where very limited studies
exist beyond one dimension. Here we present a simple and statistically exact
method to compute partition function zeros with general complex-valued external
fields in the context of large-scale quantum Monte Carlo simulations. We
demonstrate the power of this approach by extracting critical exponents from
the leading Lee-Yang zeros of 2D quantum antiferromagnets with a complex
staggered field, focusing on the Heisenberg bilayer and square-lattice $J$-$Q$
models. The method also allows us to introduce a complex field that couples to
valence bond solid order, where we observe extended rings of zeros in the
$J$-$Q$ model with purely imaginary staggered and valence bond solid fields. | Jonathan D'Emidio | 2023-08-01T14:40:45Z | http://arxiv.org/abs/2308.00575v1 | # Lee-Yang zeros at \(O(3)\) and deconfined quantum critical points
###### Abstract
Lee-Yang theory, based on the study of zeros of the partition function, is widely regarded as a powerful and complementary approach to the study of critical phenomena and forms a foundational part of the theory of phase transitions. Its widespread use, however, is complicated by the fact that it requires introducing complex-valued fields that create an obstacle for many numerical methods, especially in the quantum case where very limited studies exist beyond one dimension. Here we present a simple and statistically exact method to compute partition function zeros with general complex-valued external fields in the context of large-scale quantum Monte Carlo simulations. We demonstrate the power of this approach by extracting critical exponents from the leading Lee-Yang zeros of 2D quantum antiferromagnets with a complex staggered field, focusing on the Heisenberg bilayer and square-lattice \(J\)-\(Q\) models. The method also allows us to introduce a complex field that couples to valence bond solid order, where we observe extended rings of zeros in the \(J\)-\(Q\) model with purely imaginary staggered and valence bond solid fields.
Introduced over a half century ago, the Lee-Yang theory of phase transitions [1, 2] provides deep insights into how non-analyticities of the free energy arise in the thermodynamic limit of finite-size systems. The theory rests on the fact that while finite systems are manifestly analytic in the domain of real-valued physical control parameters, they can display singularities--zeros of the partition function--when the control parameters are extended to the complex plane. As the thermodynamic limit is approached, these zeros can accumulate near and eventually pinch the real axis, producing a genuine phase transition. In fact, even when zeros accumulate away from the real axis they provide an interesting example of non-unitary critical points [3, 4].
While the study of Lee-Yang zeros may seem relegated to the realm of pure theory, they have in fact been demonstrated in several experimental systems [5, 6, 7] and quantum computers [8]. But perhaps the greatest impact of the theory lies in computational studies of many-body systems where the themes range from protein folding [9] and DNA zippers [10, 11] to quantum chromodynamics [12], to name but a few. See Ref. [13] for a review. However, despite the broad range of studies, there remains a glaring disparity between the strength of numerical methods in the classical versus the quantum case, an issue that we seek to address in this work.
While classical systems with complex fields can be efficiently treated with a variety of methods, from histogram-based approaches [14, 15, 16] or high-order cumulants [17, 18] to tensor network methods [19, 20, 21, 22], the quantum case is more restrictive, mainly being limited to 1D [23, 24, 25] or small 2D lattices [26, 27, 28, 29]. Here we develop a statistically exact method for extracting Lee-Yang zeros in large-scale quantum Monte Carlo (QMC) simulations of spin systems. The method builds on a simple, yet so far unused, formula for computing free energy differences in QMC, and allows for the computation of partition function ratios for a whole range of complex external fields based on a single simulation in zero field.
_Free energy differences in QMC:_ We work in the context of the stochastic series expansion (SSE) quantum Monte Carlo algorithm [30], which is a statistically exact method for sampling the partition function as a Taylor expansion: \(Z(\beta)=\mathrm{Tr}\left(e^{-\beta H}\right)=\sum_{n=0}^{\infty}\frac{\beta^{ n}}{n!}\mathrm{Tr}\left(\left(-H\right)^{n}\right).\) The trace is then further decomposed into a sum of powers of Hamiltonian matrix elements by inserting complete sets of states, giving a sum over SSE configurations [30].
In this formulation it is simple to show that partition function ratios at different inverse temperatures can be computed as follows:
\[R_{\beta}(\tilde{\beta})\equiv\frac{Z(\tilde{\beta})}{Z(\beta)}=\left\langle\left(\frac{\tilde{\beta}}{\beta}\right)^{n}\right\rangle_{\beta}, \tag{1}\]
where \(n\) is the expansion order of the SSE configuration and the average is taken in the ensemble with inverse temperature \(\beta\). To the best of our knowledge, Eqn. (1) has not yet been used, nor has it appeared in the literature.
The ratio formula can be extended to the case where the partition functions differ only in the value of a Hamiltonian coupling \(J\). In this case we simply replace \(\beta\to J\), \(\tilde{\beta}\rightarrow\tilde{J}\), and \(n\to n_{J}\) in Eq. (1) where \(n_{J}\) refers to the number of \(J\)-type matrix elements in the SSE configuration.
Note that in these computations, only the denominator partition function needs to be simulated, and the ratio with any "nearby" partition function can be computed by recording a histogram of the number of operators (\(n\)) during an SSE simulation. This formula then allows us to extend the ratio estimator to complex values of its argument, which is the focus of this work. Before demonstrating this, we first will describe how to introduce a complex-valued staggered field that couples to the Neel order parameter, which will allow for the meaningful extraction of Lee-Yang zeros in the case of quantum antiferromagnets.
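As an illustration of how Eq. (1) can be evaluated in practice, here is a minimal Python sketch (our own; the function name and data layout are illustrative, not from the original implementation). It takes a histogram of expansion orders recorded at parameter \(x\) and returns the ratio estimate at a possibly complex \(\tilde{x}\); the same function covers the coupling variant via \(\beta\to J\) and \(n\to n_{J}\).

```python
import numpy as np

def ratio(orders, counts, x, x_new):
    """Estimate Z(x_new)/Z(x) via Eq. (1) from a histogram of SSE expansion
    orders sampled at parameter x (an inverse temperature beta, or a coupling J).
    x_new may be complex."""
    n = np.asarray(orders, dtype=float)
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()                       # histogram -> probability weights
    # <(x_new/x)^n>: evaluate in log space to avoid overflow at large n
    return np.sum(p * np.exp(n * np.log(np.complex128(x_new) / x)))
```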
_Lee-Yang fields for quantum antiferromagnets:_ The classic case for Lee-Yang zeros is that of the Ising model
in an external magnetic field, which is taken to be complex. The Lee-Yang circle theorem [2] then states that all of the Ising partition function zeros with a complex magnetic field lie on the imaginary axis. In order to extend this picture to quantum antiferromagnets we introduce a field that couples to the Neel order parameter. Here, remarkably, we will show that it is possible to compute the ratio at finite complex field while simulating the partition function in zero field.
The idea is to _imagine_ embedding the \(z\)-component of the Neel field operator inside nearest-neighbor operators that are already present in the Hamiltonian \(h_{i,j}=J(\vec{S}_{i}\cdot\vec{S}_{j}-\frac{1}{4})\to J(\vec{S}_{i}\cdot\vec{S}_{j}-\frac{1}{4})+\frac{h}{N_{c}}(S_{i}^{z}-S_{j}^{z})\). Here \(i\in\) sublattice \(A\) and \(j\in\) sublattice \(B\) and \(N_{c}\) is the number of \(J\)-operators that touch each site (usually the coordination number of the lattice). The diagonal matrix elements then change as \(-\langle\uparrow_{i}\downarrow_{j}|h_{i,j}|\uparrow_{i}\downarrow_{j}\rangle=\frac{J}{2}\to\frac{J}{2}-\frac{h}{N_{c}}\) and \(-\langle\downarrow_{i}\uparrow_{j}|h_{i,j}|\downarrow_{i}\uparrow_{j}\rangle=\frac{J}{2}\to\frac{J}{2}+\frac{h}{N_{c}}\). The ratio of these matrix elements before and after the change gives \(1\mp\frac{2h}{N_{c}J}\). If we denote the number of such matrix elements in an SSE configuration by \(n_{\uparrow\downarrow}\) and \(n_{\downarrow\uparrow}\) then the ratio formula with a Neel field reads
\[R_{0}(h)=\left\langle\left(1-\frac{2h}{N_{c}J}\right)^{n_{\uparrow\downarrow}} \left(1+\frac{2h}{N_{c}J}\right)^{n_{\downarrow\uparrow}}\right\rangle_{h=0}. \tag{2}\]
We emphasize that this formula only requires a histogram of the values \((n_{\uparrow\downarrow},n_{\downarrow\uparrow})\) obtained by QMC simulation (see Fig. (1)) of typical Heisenberg models in the absence of external fields, and allows for the statistically exact extraction of partition function zeros with a complex-valued Neel field. We now demonstrate this with a concrete example, the \(O(3)\) quantum critical Heisenberg bilayer.
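To make this concrete, a minimal Python sketch of Eq. (2) follows (an illustration of ours: the toy histogram is a placeholder, and \(N_{c}=4\) is just the square-lattice default; real input would be the \((n_{\uparrow\downarrow},n_{\downarrow\uparrow})\) counts from a zero-field QMC run).

```python
import numpy as np

def ratio_neel(hist, h, J=1.0, Nc=4):
    """Evaluate Eq. (2) at a (possibly complex) Neel field h from a joint
    histogram hist[(n_ud, n_du)] -> count recorded in the h = 0 ensemble."""
    total = float(sum(hist.values()))
    r = 0.0 + 0.0j
    for (n_ud, n_du), c in hist.items():
        # product of diagonal matrix-element ratios, accumulated in log space
        log_w = (n_ud * np.log(1.0 - 2.0 * h / (Nc * J))
                 + n_du * np.log(1.0 + 2.0 * h / (Nc * J)))
        r += (c / total) * np.exp(log_w)
    return r

hist = {(10, 12): 37, (11, 11): 52, (12, 10): 41}   # toy placeholder counts
# zeros on the imaginary axis show up as sharp dips of |R_0(h)|
hs = 1j * np.linspace(1e-3, 0.5, 2000)
dips = [-np.log(abs(ratio_neel(hist, h))) for h in hs]
```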
_Heisenberg bilayer:_ As a first test case for this technique we select a well studied 2D quantum spin model for the \(O(3)\) transition in \(d=2+1\) dimensions, the Heisenberg bilayer given by the Hamiltonian
\[H_{\rm bl}=J\sum_{\langle i,j\rangle,a=1,2}(\vec{S}_{i,a}\cdot\vec{S}_{j,a}- \tfrac{1}{4})+J_{\perp}\sum_{i}(\vec{S}_{i,1}\cdot\vec{S}_{i,2}-\tfrac{1}{4}), \tag{3}\]
where \(\langle i,j\rangle\) are nearest neighbor pairs of a square lattice and \(a\) is the layer index. When \(J_{\perp}=0\) the model reduces to two independent Heisenberg models, which exhibits long-range Neel order at zero temperature, while for strong \(J_{\perp}\) singlets are formed on the interlayer bonds and Neel order is destroyed. A continuous transition in the \(O(3)\) universality class occurs at \(J_{\perp}/J=2.5220(1)\)[31].
Here we will focus on extracting the Lee-Yang zeros, zeros of Eq. (2) at the critical point. First we show that the zeros of the partition function with a complex-valued Neel field lie exactly on the imaginary axis, akin to the Lee-Yang circle theorem for the Ising model [2] and its generalization to the classical Heisenberg model [32]. This is shown in Fig. (1), where in the main panel we display the histogram of the number operators from Eq. (2) that are used to compute the partition function ratio. The data is collected for a bilayer system with side length \(L=80\) at the critical point, fixing \(J=1,J_{\perp}=2.5222\) and \(\beta=L/2\). The inset shows \(-\ln(|R_{0}(h)|)\) computed with this histogram, where the dark spikes indicate the first five zeros near \(|h|=0\), which lie directly on the imaginary axis.
Now we investigate the critical scaling of the leading zeros of the Heisenberg bilayer as we approach the thermo
Figure 1: Main panel: A histogram of the number operators from Eq. (2) for the Heisenberg bilayer with \(L=80\), \(J_{\perp}=2.5222\) and \(\beta=40\) with \(J=1\). Inset: Eq. (2) applied to the histogram with complex-valued external Néel field \(h\), where we plot \(-\ln(|R_{0}(h)|)\) and the Lee-Yang zeros appear as the dark points.
Figure 2: Main panel: The location on the imaginary axis of the first three Lee-Yang zeros of the Heisenberg bilayer in a complex Néel field at the critical point \(J_{\perp}/J=2.5222\) and \(\beta=L/2\). Panel (a): the exponent extracted from a fit of the data to the form \(h_{\rm{LY}}(L)\sim L^{-(d+2-\eta)/2}(1+bL^{-\omega})\), where the fit is performed separately for each of the first three zeros. The grey line is the best estimate of the exponent from the literature [33]. Panel (b): the raw data minus the fit.
dynamic limit, using finite-size scaling of the location of the zeros to extract critical exponents. At a critical point, Lee-Yang zeros are expected to scale as \(h_{\rm LY}(L)\sim L^{-\beta\delta/\nu}\)[34], where, through scaling relations, the exponent can be expressed as \(\beta\delta/\nu=(d+2-\eta)/2\). For the Heisenberg bilayer, we find it necessary to include corrections to scaling of the form \(h_{\rm LY}(L)\sim L^{-(d+2-\eta)/2}(1+bL^{-\omega})\), as has been previously observed with Lee-Yang zeros for the 3D classical Heisenberg model [35]. Fig. (2) shows the first three zeros extracted from bilayer systems collected at \(J=1,J_{\perp}=2.5222\) and \(\beta=L/2\). The extracted exponent is given in inset (\(a\)), which agrees perfectly with the best estimate from classical Monte Carlo simulations [33]. Interestingly, we also find the correction to scaling \(\omega\approx 1\) in agreement with the classical case [35].
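For concreteness, the corrected power-law fit could be set up along the following lines (a scipy-based sketch of ours; the system sizes and data are synthetic placeholders, not the paper's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

def h_ly(L, a, eta, b, omega, d=2):
    # h_LY(L) ~ L^{-(d+2-eta)/2} * (1 + b * L^{-omega})
    return a * L ** (-(d + 2 - eta) / 2.0) * (1.0 + b * L ** (-omega))

L = np.array([16.0, 24.0, 32.0, 48.0, 64.0, 96.0, 128.0])  # placeholder sizes
# synthetic placeholder data (NOT measured zeros), for illustration only
rng = np.random.default_rng(1)
h1 = h_ly(L, 1.0, 0.0375, 0.8, 1.0) * (1 + 2e-3 * rng.standard_normal(L.size))

popt, pcov = curve_fit(h_ly, L, h1, p0=[1.0, 0.04, 1.0, 1.0])
eta_fit, omega_fit = popt[1], popt[3]   # recovers the eta used to generate h1
```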
_Deconfined criticality:_ Now that we have demonstrated the ability to extract critical exponents from the finite-size scaling of Lee-Yang zeros for the first time at a quantum critical point, we move on to a more exotic quantum spin model that shows criticality and emergent symmetry between two seemingly unrelated order parameters.
The \(J\)-\(Q\) model [36] is given by the following Hamiltonian:
\[H_{JQ}=J\sum_{\langle i,j\rangle}(\vec{S}_{i}\cdot\vec{S}_{j}-\tfrac{1}{4})-Q \sum_{\langle i,j,k,l\rangle}(\vec{S}_{i}\cdot\vec{S}_{j}-\tfrac{1}{4})(\vec{ S}_{k}\cdot\vec{S}_{l}-\tfrac{1}{4}), \tag{4}\]
where \(J\) is the nearest-neighbor coupling on a square lattice and \(Q\) is a four-spin interaction that acts on elementary plaquettes of the square lattice. \(Q\) can be thought of as a product of two \(J\)'s and the sum over \(\langle i,j,k,l\rangle\) includes both \(\hat{x}\) and \(\hat{y}\) orientations to preserve lattice symmetries. When \(Q=0\) the \(J\)-\(Q\) model reduces to the Heisenberg antiferromagnet, which exhibits long-range Neel order at \(T=0\). However, for large \(Q\), magnetic order is destroyed by locally forming singlets that stack along columns of the square lattice, referred to as valence-bond solid (VBS) order, which breaks lattice translation symmetry. At \(J/Q\approx 0.045\) the system undergoes a seemingly continuous transition between these two distinct ordered phases, which belongs to a class of transitions known as deconfined quantum critical points (DQCP) [37; 38].
The true nature of the phase transition in the \(J\)-\(Q\) model has been the topic of continued debate. While most studies observe a direct continuous transition between Neel and VBS order [39; 40; 41; 42; 43; 44; 45; 36], other studies show evidence of a weak first-order transition [46; 47; 48; 49; 50; 51]. The \(J\)-\(Q\) model therefore provides us with an important case for the study of Lee-Yang zeros.
Having already developed the methodology to extract Lee-Yang zeros associated with fluctuations of the Neel order parameter, we would now like to develop an analogous treatment for the VBS order parameter. Here we must introduce a complex-valued field that couples to the VBS order parameter, which is a field that preserves spin rotation symmetry but breaks lattice translation symmetry. In order to do so, we make use of the \(Q\) plaquette terms that are already present in the Hamiltonian, and we introduce the field \(d\sum_{\langle i,j,k,l\rangle\in\hat{x}_{e}}P_{i,j,k,l}-d\sum_{\langle i,j,k,l\rangle\in\hat{x}_{o}}P_{i,j,k,l}\) where \(P_{i,j,k,l}\equiv(\vec{S}_{i}\cdot\vec{S}_{j}-\tfrac{1}{4})(\vec{S}_{k}\cdot\vec{S}_{l}-\tfrac{1}{4})\) and \(\hat{x}_{e}\) are all \(\hat{x}\)-oriented plaquettes on even columns of the square lattice and \(\hat{x}_{o}\) are all \(\hat{x}\)-oriented plaquettes on odd columns. \(d\) is the coupling strength of the VBS field, which we will take to be complex-valued.
Following the same prescription as the ratio formula with a complex Neel field, we have the ratio with a complex VBS field as follows:
\[R_{0}(d)=\left\langle\left(1-\frac{d}{Q}\right)^{n_{Q\hat{x}_{e}}}\left(1+\frac{d}{Q}\right)^{n_{Q\hat{x}_{o}}}\right\rangle_{d=0}. \tag{5}\]
Here \(n_{Q\hat{x}_{e}}\) is the number of \(\hat{x}\)-oriented \(Q\) matrix elements on even columns of the square lattice and \(n_{Q\hat{x}_{o}}\) is the same but for odd columns. Again, we only require a histogram of the values \((n_{Q\hat{x}_{e}},n_{Q\hat{x}_{o}})\) obtained from QMC simulation of the pure \(J\)-\(Q\) model without any fields, and Eqn. (5) allows for the statistically exact extraction of Lee-Yang zeros of the partition function with a complex VBS field. For the \(J\)-\(Q\) model we can simultaneously gather histograms for both the Neel zeros and VBS zeros in a single simulation.
In Fig. (3) we plot the location of the Lee-Yang zeros \(h_{\rm LY},d_{\rm LY}\) as a function of system size at the critical point \(J=0.04502\), \(Q=1\) with \(\beta=L/2\). Indeed, as with the Neel zeros, we find the nearest VBS zeros lie exactly on the imaginary axis: \(\mathrm{re}(h_{\rm LY})=\mathrm{re}(d_{\rm LY})=0\). For simplicity we plot only the location of the leading zeros. In
Figure 3: Main panel: the locations of the first Lee-Yang zeros in the \(J\)-\(Q\) model at the critical point in two separate cases: with a complex Néel field and with a complex VBS field. The thin black line shows the power law behavior expected at a first order transition with \(\eta=0\). Inset: The fitted exponent from a simple power law fit of the zeros to the form \(\sim L^{-(d+2-\eta)/2}\) as a function of the smallest system size used in the fit.
the \(J\)-\(Q\) model, contrary to the Heisenberg bilayer, we achieve much better fits using only a simple power law with no corrections to scaling. The extracted exponents are far below the values of the bilayer, signaling a much larger value of \(\eta\) here, with our values in the range of previous studies. We do, however, observe drifting of the critical exponents, as shown in the inset, which is also similar to what was observed in other studies [41; 52].
_Combined Neel and VBS fields:_ The transition in the \(J\)-\(Q\) model is frequently described in terms of an emergent SO(5) symmetry that arises between the Neel and VBS order parameters exactly at the critical point. From this point of view, it is interesting to ask how the zeros of the partition function are distributed in the presence of combined Neel and VBS fields. Fortunately, our formulation allows us to simply compute the ratio when both fields are nonzero, again, while the actual simulation is carried out in zero field. We can straightforwardly define the combined ratio as:
\[\begin{split} R_{0}(h,d)=\left\langle\left(1-\frac{2h}{N_{c}J} \right)^{n_{\uparrow\downarrow}}\left(1+\frac{2h}{N_{c}J}\right)^{n_{z \uparrow}}\right.\\ \left.\left(1-\frac{d}{Q}\right)^{n_{Q\pi}}\left(1+\frac{d}{Q} \right)^{n_{Q\pi}}\right\rangle_{h=0,d=0}.\end{split} \tag{6}\]
In Fig. (4) we plot \(-\ln(|R_{0}(h,d)|)\) for purely imaginary values of \(h\) and \(d\) at the critical point. Interestingly, we see that the partition function zeros form extended rings in the plane (im(\(h\)),im(\(d\))). On either side of the transition the ovals become elongated in either the vertical or horizontal direction (see supplemental materials). Our conclusion is that at the critical point these rings shrink down to the origin in the thermodynamic limit, whereas on either side of the transition they become squashed in either the horizontal or vertical direction. It is interesting to note that although the values of \(J\) and \(Q\) differ by an order of magnitude at the transition, the lines of zeros are nearly circular in terms of \(h\) and \(d\).
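A possible way to generate such a map from simulation data is sketched below (an illustration of ours; `hist4` is assumed to be a joint histogram of the four operator counts of Eq. (6), collected at \(h=d=0\)).

```python
import numpy as np

def ratio_combined(hist4, h, d, J=0.04502, Q=1.0, Nc=4):
    """Eq. (6): combined-field ratio from a joint histogram
    hist4[(n_ud, n_du, n_qe, n_qo)] -> count recorded at h = d = 0."""
    total = float(sum(hist4.values()))
    r = 0.0 + 0.0j
    for (n_ud, n_du, n_qe, n_qo), c in hist4.items():
        log_w = (n_ud * np.log(1 - 2 * h / (Nc * J))
                 + n_du * np.log(1 + 2 * h / (Nc * J))
                 + n_qe * np.log(1 - d / Q)
                 + n_qo * np.log(1 + d / Q))
        r += (c / total) * np.exp(log_w)
    return r

hist4 = {(8, 9, 30, 31): 24, (9, 8, 31, 30): 26}   # toy placeholder counts
# height map of -ln|R_0(h, d)| over purely imaginary fields; the rings of
# zeros of Fig. (4) appear as sharp ridges in the (im h, im d) plane
hs = 1j * np.linspace(-0.3, 0.3, 201)
ds = 1j * np.linspace(-0.3, 0.3, 201)
F = np.array([[-np.log(abs(ratio_combined(hist4, h, d))) for d in ds] for h in hs])
```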
_Conclusions:_ We have presented the first large-scale QMC computations of Lee-Yang zeros in quantum spin systems. Our formulation is simple to implement, in that it only requires gathering a histogram of the number of certain types of matrix elements that appear during standard QMC simulations in the absence of any external fields. Furthermore, we find that the location of Lee-Yang zeros computed in this way is extremely precise, easily allowing for the extraction of the first three zeros for the critical Heisenberg bilayer on system sizes up to \(L=128\) and excellent agreement with known critical exponents. We also have demonstrated the generality of this approach by using it to extract Lee-Yang zeros associated with critical Neel, VBS and combined Neel-VBS fluctuations in the \(J\)-\(Q\) model, the emblematic model of deconfined criticality.
Moving forward, we envision that this technique could be applied to extract properties of non-unitary critical points that occur at finite imaginary field values. Specifically, with the partition function ratio in hand, it allows one to compute expectation values in the ensemble with a complex field. This seems promising given that the ratio is computed to high precision in this framework. On a speculative note, these types of studies could aid in understanding lattice models with pseudo-critical behavior, where the true critical point lies in the complex plane but with a small imaginary component, as has been proposed to explain anomalous behavior in models of deconfined criticality [51; 52; 53; 54; 55].
_Acknowledgements:_ We thank Roman Orus for his support for this project. Computing resources were provided through the XSEDE allocation NSF DMR-130040, using the Expanse cluster at the San Diego Supercomputer Center.
|
2306.16863 | On the mean field theory of Ensemble Kalman filters for SPDEs | This paper is concerned with the mathematical analysis of continuous time
Ensemble Kalman Filters (EnKBFs) and their mean field limit in an infinite
dimensional setting. The signal is determined by a nonlinear Stochastic Partial
Differential Equation (SPDE), which is posed in the standard variational
setting. Assuming global one-sided Lipschitz conditions and finite dimensional
observations we first prove the well posedness of both the EnKBF and its
corresponding mean field version. We then investigate the quantitative
convergence of the EnKBF towards its mean field limit, recovering the rates
suggested by the law of large numbers for bounded observation functions. The
main tool hereby are exponential moment estimates for the empirical covariance
of the EnKBF, which may be of independent interest. In the appendix of the
paper we investigate the connection of the mean field EnKBF and the Stochastic
Filtering Problem. In particular we derive the Feedback Particle Filter for
infinite dimensional signals and show that the mean field EnKBF can viewed as
its constant gain approximation. | Sebastian Ertel | 2023-06-29T11:19:08Z | http://arxiv.org/abs/2306.16863v4 | # Filtering of SPDEs:
###### Abstract
This paper is concerned with the derivation and mathematical analysis of continuous time Ensemble Kalman Filters (EnKBFs) and related data assimilation methods for Stochastic Partial Differential Equations (SPDEs) with finite dimensional observations. The signal SPDE is allowed to be nonlinear and is posed in the standard abstract variational setting. Its coefficients are assumed to satisfy global one-sided Lipschitz conditions.
We first review classical filtering algorithms in this setting, namely the Kushner-Stratonovich and the Kalman-Bucy filter, proving a law of total variance.
Then we consider mean-field filtering equations, deriving both a Feedback Particle Filter and a mean-field EnKBF for nonlinear signal SPDEs.
The second part of the paper is devoted to the elementary mathematical analysis of the EnKBF in this infinite dimensional setting, showing the well posedness of both the mean-field EnKBF and its interacting particle approximation. Finally we prove the convergence of the particle approximation. Under the additional assumption that the observation function is bounded, we even recover explicit and (nearly) optimal rates.
###### Contents
* 1 Introduction
* 2 Problem setting, assumptions and notations
* 3 The Kushner-Stratonovich equation and the law of total variance
* 4 Linear and Gaussian Filtering
* 4.1 The classical Kalman-Bucy filter
* 4.2 The consistent mean field EnKBF
* 5 The Feedback Particle Filter
* 6 The mean field EnKBF for nonlinear signals
* 7 The EnKBF as an interacting particle system
* 7.1 Analysis of the particle approximations
* 7.2 Quantitative propagation of chaos
## 1 Introduction
In recent years the field of data assimilation, that is the (optimal) integration of real world data into mathematical models, has become an important tool for practitioners in various scientific fields. As it shares similar objectives to stochastic filtering, which is essentially the discipline of Bayesian estimation of dynamic processes from noisy, potentially incomplete data, many algorithms from
filtering are used for data assimilation tasks. Vice versa, algorithms for data assimilation of dynamical processes can be viewed through the lens of filtering: the mathematical model is then referred to as the signal and the available data as the observations. To combine these two objects in an optimal manner one aims to compute/approximate the posterior distribution, that is the conditional distribution of the signal given all past observations.
One such algorithm is the Ensemble Kalman Filter (EnKF), which was introduced by Geir Evensen in 1994 [22] and employs an ensemble of interacting particles to estimate the state of a dynamical system. Since its inception many different variants of the EnKF have been introduced. For an overview and historical context we refer to [13], [23] or [50]. The EnKF has by now become one of the most widely used techniques for data assimilation in high dimensional settings, particularly popular amongst practitioners in the geosciences and numerical weather forecasting. Besides its usage for state estimation in dynamical systems, the EnKF and related algorithms have also been applied to parameter estimation in inverse problems [36],[46].
While the original EnKF is a discrete time recursion, continuous time counterparts, referred to as Ensemble Kalman-Bucy Filters (EnKBFs) have by now been firmly established in the literature. In many cases they can also be shown to be the limit of their discrete time counterparts for vanishing step size, see e.g. [31] and the references found therein. In this paper we will only consider the basic continuous time framework. In this case the signal \(u\) is given by a stochastic (partial) differential equation
\[\mathrm{d}u_{t}=\mathcal{A}(u_{t})\mathrm{d}t+\mathcal{B}(u_{t})\mathrm{d}W_ {t}\] (S)
for a given initial datum \(u_{0}\). We allow for \(u\) to be an element of a possibly infinite dimensional Hilbert space, \(\mathcal{A}\) to be a general coercive operator and the signal noise \(W\) to be a Wiener process with a covariance of finite trace. The observations \(Y\) shall be given by the stochastic differential
\[\mathrm{d}Y_{t}=H(u_{t})\mathrm{d}t+\Gamma_{t}\mathrm{d}V_{t},\ Y_{0}=0.\] (O)
We only consider finite dimensional observations taking values in \(\mathbb{R}^{d_{y}}\) for some \(d_{y}\in\mathbb{N}\). This is the more practically relevant case and also avoids discussions of the regularity/degeneracy of the observation noise \(V\), which in this work is assumed to be white, i.e. some finite dimensional standard Brownian motion. A more thorough discussion of the setting we consider, and the assumptions we have to make, is found in section 2.
In our setting, the EnKBF considered in this paper takes the form
\[\begin{split}\mathrm{d}u_{t}^{i}&=\mathcal{A}(u_{t }^{i})\mathrm{d}t+\mathcal{B}(u_{t}^{i})\mathrm{d}\bar{W}_{t}^{i}\\ &\quad+\frac{1}{N}\sum_{j=1}^{N}u_{t}^{j}\left(H(u_{t}^{j})-\frac {1}{N}\sum_{k=1}^{N}H(u_{t}^{k})\right)^{\prime}R_{t}^{-1}\left(\mathrm{d}Y_{ t}-\frac{H(u_{t}^{i})+\frac{1}{N}\sum_{k=1}^{N}H(u_{t}^{k})}{2}\mathrm{d}t \right)\end{split} \tag{1}\]
for \(i=1,\cdots,N\), where \((\bar{W}^{i})_{i=1,\cdots,N}\) are independent copies of the Wiener process \(\bar{W}\). The system (1) is often referred to as the deterministic EnKBF [6], which is the continuous time counterpart of the Ensemble Square Root Filter [32]. Our main results can be generalized to other types of EnKBFs, in particular the more classical version which involves randomness in the innovation term.
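To illustrate the structure of (1), the following Python sketch performs one Euler-Maruyama step of the deterministic EnKBF in a finite-dimensional toy setting (a simplification of ours: additive noise of strength \(\sigma\) replaces \(\mathcal{B}(u_{t}^{i})\mathrm{d}\bar{W}_{t}^{i}\), and all names are illustrative rather than taken from this paper).

```python
import numpy as np

def enkbf_step(U, dY, dt, drift, sigma, H, R_inv, rng):
    """One Euler step of the deterministic EnKBF (1) for an ensemble U of
    shape (N, d), observation increment dY of shape (d_y,), H returning a
    length-d_y array, and R_inv the inverse observation covariance."""
    N, d = U.shape
    HU = np.array([H(u) for u in U])              # (N, d_y)
    Hbar = HU.mean(axis=0)
    m = U.mean(axis=0)
    # empirical cross-covariance; equals (1/N) sum_j u^j (H(u^j) - Hbar)'
    C = (U - m).T @ (HU - Hbar) / N               # (d, d_y)
    K = C @ R_inv                                 # Kalman-type gain
    innov = dY[None, :] - 0.5 * (HU + Hbar[None, :]) * dt
    F = np.array([drift(u) for u in U])           # (N, d)
    dW = np.sqrt(dt) * rng.standard_normal((N, d))
    return U + F * dt + sigma * dW + innov @ K.T
```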
As already mentioned, the task of filtering is to compute the posterior distribution \((\eta_{t})_{t\geq 0}\), which in our setting is given by
\[\eta_{t}:=\mathbb{P}\left(\ u_{t}\in\cdot\ |\ Y_{s},\ s\leq t\ \right),\ \text{for}\ t\geq 0.\] (P)
Even though the EnK(B)F has seen wide success in applications, its mathematical foundations and connections to the filtering problem only started to emerge in the last decade and still contain
many important gaps. The algorithm is so far best understood in the case of linear signal and observation dynamics under Gaussian noise perturbations and Gaussian initial conditions, henceforth called the linear Gaussian setting (see Assumption 4 in section 4). In this particular case the mean-field limit1 of (1) is a McKean-Vlasov process with the property, that its mean and covariance evolve according to the Kalman-Bucy equations. Thus the time marginals of this mean-field limit coincide with the posterior for all times, which was first shown in [26] and [35]. Therefore quantitative propagation of chaos results are important in the Gaussian setting, as they provide error bounds for the EnK(B)F as an approximation of the true posterior (P). In the continuous time setting a quantitative convergence result that is even uniform in time was first obtained by [20]. By now there exists an extensive body of work on the mean-field theory of EnKBFs in the linear Gaussian setting, including very strong convergence and stability results. For more details we refer the reader to the survey paper [6] (see also Remark 43). Interestingly infinite dimensional signals do not seem to have been considered in this continuous time, linear Gaussian setting so far.
Footnote 1: That is the limiting process one obtains by taking the ensemble size \(N\rightarrow+\infty\) in (1).
For nonlinear dynamics or non-Gaussian initial conditions the true posterior can not be expected to be given by the mean-field limit of the EnK(B)F. Therefore the mathematical foundations of the EnK(B)F and rigorous justifications for its usage in nonlinear regimes are still sparse and an ongoing research topic. Numerous works so far have focussed on investigating stability and accuracy. In particular we mention here the seminal paper [29], which investigated well posedness and accuracy of the EnK(B)F for finite ensemble size. The signals considered were allowed to be infinite dimensional (but deterministic) and included the (2D) Navier-Stokes equations. The perspective that was put forward in this paper was to view the EnK(B)F as a state estimator, rather than as an approximation of the true posterior.
Nevertheless the mean-field theory for EnK(B)Fs in finite dimensional settings has lately received increasing interest [13] as it allows one to connect the EnK(B)F to the true posterior (P). Indeed one can obtain the mean-field limit of the EnK(B)F from a McKean-Vlasov representation of the posterior, called the Feedback Particle Filter [51][52] (also [49] for a survey), either by Gaussian approximation or by what is usually referred to as the constant gain approximation [48]. Even though these connections are well known in the literature, only recently [14] were able to derive error bounds for the inconsistency of the mean-field EnKF (as an approximation to the true posterior) under restrictive conditions that ensure a near Gaussian setting.
This indicates the relevance of the mean-field theory to providing firm mathematical foundations for EnKBFs and their relation to the optimal filter. So far the mean-field theory of the (continuous time) EnKBF seems to only have been studied in a finite dimensional setting. [32] proved propagation of chaos with implicit rates for the EnK(B)F in discrete and continuous time for linear observations requiring the assumption that the mean-field limit actually exists. [15] investigated the EnKBF from a rough paths perspective. Treating the observation data as a continuous rough path they proved well posedness of the mean-field equation under the assumption of bounded observation functions. Under the additional assumption that the observation data is of bounded variation they were also able to derive propagation of chaos with logarithmic rates. [21] showed well posedness of the mean-field equation and propagation of chaos with implicit rates in a correlated noise setting. A good summary of the mean-field picture of EnK(B)Fs can be found in the recent survey paper [13].
In this paper we derive and analyse the EnKBF and its mean-field limit for signals that are given by SPDEs under finite dimensional observations. For the signal SPDEs we consider a variational setting (see [43]), which is in particular interesting for practical applications due to its relation to popular numerical approximation methods like Finite Elements or other Galerkin schemes. Nevertheless, to the best knowledge of the author, continuous time Ensemble Kalman methods for SPDEs do not seem covered by the existing literature. The problem setting and our basic assumptions are discussed in section 2. As we analyse the EnKBF from a Bayesian point of view, we first discuss the Kushner-Stratonovich equation describing the posterior distribution in section
3. In particular we prove a priori bounds for the variance of the Kushner-Stratonovich equation that in the Bayesian literature are often referred to as the law of total variance. Similar bounds also hold for the EnKBF and are the main tool for its analysis. In section 4 we restrict our investigation to the linear Gaussian setting. After first reviewing a classical result by Bensoussan [5] on the Kalman-Bucy filter in infinite dimensions, we then introduce the mean-field EnKBF as a McKean-Vlasov representation of the Kalman-Bucy filter and discuss its well posedness. In section 5 we then generalize this principle and derive the Feedback Particle Filter (FPF) as a McKean-Vlasov representation of the posterior in nonlinear filtering problems. We also show that the mean-field EnKBF for nonlinear signals is the constant gain approximation of the FPF. Section 6 is devoted to proving the well posedness of the mean-field EnKBF. Finally in section 7 we investigate the EnKBF as a particle approximation of the mean-field version, proving well posedness and a propagation of chaos result with implicit rates. Under the additional assumption of a bounded observation function we are able to improve on these convergence results, deriving (almost) optimal rates.
## 2 Problem setting, assumptions and notations
For the signal we consider SPDEs in a variational setting as they are found in e.g. [39],[43]. To fix notation and for the convenience of the reader, we repeat some key concepts and results of this field in this section; for a more detailed introduction to this topic we refer the reader to [43].
Let \(\mathscr{H}\) be a Hilbert space and \(\mathscr{V}\) be a Banach space that form a Gelfand triple [43, Section 4.1] \(\mathscr{V}\hookrightarrow\mathscr{H}\hookrightarrow\mathscr{V}^{\prime}\). We assume that there exists an orthonormal basis of \(\mathscr{H}\), denoted by \(\left(\nu_{k}\right)_{k\in\mathbb{N}}\subset\mathscr{H}\), such that \(\nu_{k}\in\mathscr{V}\) for all \(k\in\mathbb{N}\).
For a given separable real Hilbert space \(\mathscr{U}\), we consider the \(\mathscr{U}\)-valued \(\mathcal{Q}\)-Wiener process \((W_{t})_{t\geq 0}\) with finite trace. To this end we assume that \(\mathcal{Q}\) is a symmetric, positive semidefinite linear operator on \(\mathscr{U}\) with finite trace \(\operatorname{tr}_{\mathscr{U}}\mathcal{Q}<+\infty\) and eigenvalues \((q_{k})_{k\in\mathbb{N}}\) with corresponding orthonormal eigenbasis \((e_{k})_{k\in\mathbb{N}}\). Then there exist independent Brownian motions \((w^{k})_{k\in\mathbb{N}}\) such that
\[W_{t}=\sum_{k\in\mathbb{N}}\sqrt{q_{k}}e_{k}w_{t}^{k}\text{ for all times }t\geq 0.\]
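To make this construction concrete, the following Python sketch samples a truncated version of such a \(\mathcal{Q}\)-Wiener process via the series expansion above. The sine eigenbasis on \([0,1]\) and the eigenvalue decay \(q_{k}=k^{-2}\) are illustrative assumptions, chosen only so that the trace is finite; they are not prescribed by the setting.

```python
import numpy as np

def sample_q_wiener(T=1.0, n_steps=1000, n_modes=50, n_grid=128, seed=0):
    """Sample a truncated Q-Wiener path W_t = sum_k sqrt(q_k) e_k w^k_t.

    Illustrative assumptions: e_k(x) = sqrt(2) sin(k pi x) on [0, 1] and
    eigenvalues q_k = k^{-2}, so that tr Q = sum_k q_k is finite.
    Returns an array of shape (n_steps + 1, n_grid)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.linspace(0.0, 1.0, n_grid)
    k = np.arange(1, n_modes + 1)
    q = k.astype(float) ** (-2.0)                        # finite trace
    e = np.sqrt(2.0) * np.sin(np.outer(k, np.pi * x))    # basis, (n_modes, n_grid)
    dw = rng.normal(0.0, np.sqrt(dt), size=(n_steps, n_modes))
    w = np.vstack([np.zeros(n_modes), np.cumsum(dw, axis=0)])  # scalar BMs w^k_t
    return (w * np.sqrt(q)) @ e

W = sample_q_wiener()
print(W.shape)  # (1001, 128)
```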
**Definition 1**.: _We will always identify the Hilbert spaces \(\mathscr{H},\mathscr{U}\) with their duals \(\mathscr{H}^{\prime},\mathscr{U}^{\prime}\). For any operator \(B\) acting on Hilbert spaces we denote its adjoint by \(B^{\prime}\). The adjoint of an element \(u\in\mathscr{H}\) is just its image under the Riesz embedding, i.e. \(u^{\prime}:=\left\langle u,\cdot\right\rangle_{\mathscr{H}}\). This notation is consistent with finite dimensional settings. We note that therefore \(uu^{\prime}\in\operatorname{L}\left(\mathscr{H};\mathscr{H}\right)\) defines a bounded linear operator on \(\mathscr{H}\)._
To rigorously formulate the signal (S) as a variational SPDE on the Gelfand triple \((\mathscr{V},\mathscr{H},\mathscr{V}^{\prime})\), we make the following standard assumptions [43, page 56] that shall hold throughout this paper.
**Assumption 2** (Signal assumptions).: _Denote by \(\operatorname{L}_{2}\left(\mathscr{U};\mathscr{H}\right)\) the space of Hilbert-Schmidt operators, that is the space of all linear operators \(B:\mathscr{U}\rightarrow\mathscr{H}\) such that their Hilbert-Schmidt norm \(\left\|B\right\|_{\operatorname{L}_{2}\left(\mathscr{U};\mathscr{H}\right)}^{2 }:=\sum_{k\in\mathbb{N}}\left\|Be_{k}\right\|_{\mathscr{H}}^{2}\) is finite._
_We assume that \(\mathcal{A}:\mathscr{V}\rightarrow\mathscr{V}^{\prime}\) and \(\mathcal{B}:\mathscr{V}\rightarrow\operatorname{L}_{2}\left(\mathscr{U}; \mathscr{H}\right)\) satisfy the following conditions_
1. _Hemicontinuity:_ _For all_ \(u,v,w\in\mathscr{V}\) _the mapping_ \[\mathbb{R}\ni r\mapsto{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(v+ru),w\rangle_{\mathscr{V}}\text{ is continuous.}\]
2. _Weak monotonicity/one-sided Lipschitz:_ _There exists_ \(\lambda>0\) _such that for all_ \(u,v\in\mathscr{V}\)__ \[2\left.{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(u)-\mathcal{A}(v),u-v \rangle_{\mathscr{V}}+\left\|(\mathcal{B}(u)-\mathcal{B}(v))\circ\sqrt{ \mathcal{Q}}\right\|_{\operatorname{L}_{2}\left(\mathscr{U};\mathscr{H} \right)}^{2}\leq\lambda\left\|u-v\right\|_{\mathscr{H}}^{2}.\] (2)
3. _Coercivity:_ _There exist constants_ \(\alpha_{V}>0\)_,_ \(\alpha_{H},\alpha_{0}\in\mathbb{R}\) _and_ \(\alpha_{\mathfrak{p}}\in(1,+\infty)\) _such that for all_ \(u\in\mathscr{V}\)__ \[2\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(u),u\rangle_{\mathscr{V}}+\left\|\mathcal{B}(u)\circ\sqrt{\mathcal{Q}}\right\|_{\mathrm{L}_{2}(\mathscr{U};\mathscr{H})}^{2}\leq-\alpha_{V}\left\|u\right\|_{\mathscr{V}}^{\alpha_{\mathfrak{p}}}+\alpha_{H}\left\|u\right\|_{\mathscr{H}}^{2}+\alpha_{0}.\] (3)
4. _Boundedness:_ _There exists a constant_ \(c_{\mathcal{A}}>0\) _such that_ \[\left\|\mathcal{A}(u)\right\|_{\mathscr{V}^{\prime}}\leq c_{\mathcal{A}}\left( 1+\left\|u\right\|_{\mathscr{V}}\right)\ \forall u\in\mathscr{V}.\]
Next let us briefly discuss which (S)PDEs can be treated in this variational framework.
**Remark 3** (Possible Signals).: _A classical example of a differential operator that satisfies Assumption 2 is the \(p\)-Laplacian_
\[\mathcal{A}(v):=\operatorname{div}\left(|\nabla v|^{p-2}\nabla v\right) \tag{4}\]
_for any \(p\in[2,+\infty)\) on a suitable domain \(\Lambda\) with Dirichlet boundary conditions. In this case \(\mathscr{V}:=\mathcal{W}_{0}^{1,p}(\Lambda)\) is the classical \(p\)-integrable, first order Sobolev space of functions that vanish on the boundary. The Hilbert space is then set to \(\mathscr{H}:=L^{2}(\Lambda)\). Neumann or mixed boundary conditions can be treated as well. Thus we can allow for signals that are given by a (noisy) heat or porous media equation. Another differential operator satisfying our assumptions is given by_
\[\mathcal{A}(v):=\Delta v-\mathfrak{a}v^{3}+\mathfrak{b}v+\mathfrak{c},\]
_where \(\mathfrak{a}\geq 0,\ \mathfrak{b},\mathfrak{c}\in\mathbb{R}\), for Dirichlet, Neumann or mixed boundary conditions on suitable domains. Therefore we can treat signals evolving by a stochastic reaction diffusion equation with a double well potential. In particular Allen-Cahn and FitzHugh-Nagumo equations can be treated. For a more detailed discussion we refer to [43, Section 4.1]._
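To illustrate what such a signal looks like numerically, here is a minimal Python sketch that integrates a stochastic Allen-Cahn type equation \(\mathrm{d}u=(\Delta u+\mathfrak{b}u-\mathfrak{a}u^{3})\mathrm{d}t+\text{noise}\) with Dirichlet boundary conditions by an explicit Euler-Maruyama finite-difference scheme. The grid, the coefficients and the (crudely regularized) noise are illustrative choices, not part of the framework above.

```python
import numpy as np

def simulate_allen_cahn(a=1.0, b=1.0, T=0.5, n_grid=64, n_steps=5000,
                        noise=0.1, seed=1):
    """Explicit Euler-Maruyama for du = (u_xx + b u - a u^3) dt + noise dW
    on (0, 1) with Dirichlet boundary conditions u(0) = u(1) = 0.

    The noise term is a crude stand-in for B dW: i.i.d. Gaussians per grid
    point, not a carefully discretized Q-Wiener process."""
    rng = np.random.default_rng(seed)
    dt, dx = T / n_steps, 1.0 / (n_grid + 1)
    x = np.linspace(dx, 1.0 - dx, n_grid)       # interior grid points
    u = np.sin(np.pi * x)                        # smooth initial condition
    for _ in range(n_steps):
        lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
        lap[0] = (u[1] - 2.0 * u[0]) / dx**2     # ghost value u(0) = 0
        lap[-1] = (u[-2] - 2.0 * u[-1]) / dx**2  # ghost value u(1) = 0
        u = u + dt * (lap + b * u - a * u**3) \
              + noise * np.sqrt(dt) * rng.normal(size=n_grid)
    return x, u

x, u = simulate_allen_cahn()
print(u.min(), u.max())
```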
Under Assumption 2 it can be shown [43] that for a given initial condition \(u_{0}\in\mathscr{H}\) the signal SPDE (S) has a unique solution, which will henceforth be referred to as the signal.
An important tool in our analysis is Ito's Lemma for variational SPDEs [39, page 136], first derived in [40]. For later reference, let us specify in the following definition for which functions one can use Ito's Lemma.
**Definition 4**.: _Any function \(\phi:\mathscr{H}\to\mathbb{R}\) is said to be an Ito function, if_
1. \(\phi\) _is twice Frechet differentiable, with the first two derivatives denoted by_ \(\mathrm{D}^{1}_{\mathscr{H}}\phi\) _and_ \(\mathrm{D}^{2}_{\mathscr{H}}\phi\)_._
2. _All of_ \(\phi\)_,_ \(\mathrm{D}^{1}_{\mathscr{H}}\phi\) _and_ \(\mathrm{D}^{2}_{\mathscr{H}}\phi\) _are locally bounded._
3. _For any operator_ \(\mathcal{Q}:\mathscr{H}\to\mathscr{H}\) _that is of trace class, the functional_ \(v\mapsto\mathrm{tr}_{\mathscr{H}}\left[\mathcal{Q}\mathrm{D}^{2}_{\mathscr{H}} \phi(v)\right]\) _is continuous on_ \(\mathscr{H}\)_._
4. _For any_ \(v\in\mathscr{V}\) _it holds that_ \(\mathrm{D}^{1}_{\mathscr{H}}\phi(v)\in\mathscr{V}\) _and the map_ \(\left.\mathrm{D}^{1}_{\mathscr{H}}\phi\right|_{\mathscr{V}}:\mathscr{V}\to \mathscr{V}\) _is continuous when the domain is equipped with the strong and the image is equipped with the weak topology._
5. _There exists a constant_ \(C>0\) _such that_ \(\left\|\mathrm{D}^{1}_{\mathscr{H}}\phi(v)\right\|_{\mathscr{V}}\leq C\left(1+\left\|v\right\|_{\mathscr{V}}\right)\) _for all_ \(v\in\mathscr{V}\)_._
_If an Ito function \(\phi\) is twice continuously Frechet differentiable with compact support, we refer to it as an Ito testfunction._
One important example of an Ito function is the squared norm \(\left\|.\right\|_{\mathscr{H}}^{2}\). With the Ito Lemma it is easy to show the following basic identities for the signal distribution.
**Lemma 5**.: _The signal mean \(m_{t}:=\mathbb{E}\left[u_{t}\right]\) satisfies_
\[\partial_{t}m_{t}=\mathbb{E}\left[\mathcal{A}(u_{t})\right], \tag{5}\]
_and the covariance \(P_{t}:=\mathbb{C}\mathtt{ov}\left[u_{t}\right]:=\mathbb{E}\left[\left(u_{t}-m_{ t}\right)\left(u_{t}-m_{t}\right)^{\prime}\right]\) satisfies_
\[\partial_{t}\left\langle v,\mathbb{C}\mathtt{ov}\left[u_{t} \right]w\right\rangle_{\mathscr{H}} =\mathbb{E}\left[\left\langle v,u_{t}-m_{t}\right\rangle_{ \mathscr{H}\ \mathscr{V}^{\prime}}\langle\mathcal{A}(u_{t})-\mathcal{A}(m_{t}),w \rangle_{\mathscr{V}}\right] \tag{6}\] \[\quad+\mathbb{E}\left[\left\langle w,u_{t}-m_{t}\right\rangle_{ \mathscr{H}\ \mathscr{V}^{\prime}}\langle\mathcal{A}(u_{t})-\mathcal{A}(m_{t}),v \rangle_{\mathscr{V}}\right]\] \[\quad+\left\langle v,\mathbb{E}\left[\mathcal{B}(u_{t})\sqrt{ \mathcal{Q}}\left(\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\right)^{\prime}\right] w\right\rangle_{\mathscr{H}}\]
_for all \(v,w\in\mathscr{V}\). The generator of the signal, denoted by \(\mathcal{L}\), is given by_
\[\mathcal{L}\phi=\left.{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot), \mathrm{D}_{\mathscr{H}}^{1}\phi\rangle_{\mathscr{V}}+\frac{1}{2}\mathrm{tr} _{\mathscr{H}}\left[\left(\mathrm{D}_{\mathscr{H}}^{2}\phi\right)\ \mathcal{B}(\cdot)\sqrt{\mathcal{Q}}\left(\mathcal{B}(\cdot)\sqrt{\mathcal{Q}} \right)^{\prime}\right] \tag{7}\]
_for every Ito testfunction \(\phi\) as per Definition 4._
Proof.: The equation of the mean (5) is a simple consequence of the centeredness of the Wiener process \(W\).
Using the fact that for any \(v\in\mathscr{H}\) we have
\[\left\langle v,\mathcal{B}(u_{t})\mathrm{d}W_{t}\right\rangle_{\mathscr{H}}=\sum_{k=1}^{+\infty}\sqrt{q_{k}}\left\langle v,\mathcal{B}(u_{t})e_{k}\right\rangle_{\mathscr{H}}\mathrm{d}w_{t}^{k},\]
we derive by Ito's formula on Hilbert spaces [39, page 136] the identity
\[\begin{split}&\mathrm{d}\left(\left\langle v,u_{t}-m_{t}\right\rangle_{\mathscr{H}}\left\langle w,u_{t}-m_{t}\right\rangle_{\mathscr{H}}\right)\\ &=\left\langle v,u_{t}-m_{t}\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(u_{t})-\partial_{t}m_{t},w\rangle_{\mathscr{V}}\,\mathrm{d}t+\left\langle w,u_{t}-m_{t}\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(u_{t})-\partial_{t}m_{t},v\rangle_{\mathscr{V}}\,\mathrm{d}t\\ &\quad+\sum_{k=1}^{+\infty}q_{k}\left\langle v,\mathcal{B}(u_{t})e_{k}\right\rangle_{\mathscr{H}}\left\langle w,\mathcal{B}(u_{t})e_{k}\right\rangle_{\mathscr{H}}\,\mathrm{d}t\\ &\quad+\left\langle v,u_{t}-m_{t}\right\rangle_{\mathscr{H}}\left\langle w,\mathcal{B}(u_{t})\mathrm{d}W_{t}\right\rangle_{\mathscr{H}}+\left\langle w,u_{t}-m_{t}\right\rangle_{\mathscr{H}}\left\langle v,\mathcal{B}(u_{t})\mathrm{d}W_{t}\right\rangle_{\mathscr{H}}\end{split}\]
for all \(v,w\in\mathscr{V}\). Now we note that we can also write
\[\sum_{k=1}^{+\infty}q_{k}\left\langle v,\mathcal{B}(u_{t})e_{k} \right\rangle_{\mathscr{H}}\left\langle w,\mathcal{B}(u_{t})e_{k}\right\rangle _{\mathscr{H}} =\left\langle v,\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\sum_{k=1}^{+ \infty}\left\langle w,\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}e_{k}\right\rangle _{\mathscr{H}}e_{k}\right\rangle_{\mathscr{H}}\] \[=\left\langle v,\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\sum_{k=1}^{+ \infty}\left\langle\left(\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\right)^{\prime }w,e_{k}\right\rangle_{\mathscr{H}}e_{k}\right\rangle_{\mathscr{H}}.\]
Due to Parseval, this gives us the identity
\[\sum_{k=1}^{+\infty}q_{k}\left\langle v,\mathcal{B}(u_{t})e_{k}\right\rangle_{ \mathscr{H}}\left\langle w,\mathcal{B}(u_{t})e_{k}\right\rangle_{\mathscr{H}} =\left\langle v,\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\left(\mathcal{B}(u_{t}) \sqrt{\mathcal{Q}}\right)^{\prime}w\right\rangle_{\mathscr{H}}. \tag{8}\]
By taking the expectation in the previous evolution equation of the product and using that \(\mathbb{E}\left[\left\langle v,u_{t}-m_{t}\right\rangle_{\mathscr{H}}\right]=0\) (which allows us to replace \(\partial_{t}m_{t}\) by \(\mathcal{A}(m_{t})\) inside the pairing), we then derive (6).
Finally we address the generator. Using Ito's formula [39, page 136] and taking the expectation, we see immediately that
\[\partial_{t}\mathbb{E}\left[\phi(u_{t})\right]=\mathbb{E}\left[{}_{\mathscr{V}^{\prime}}\big\langle\mathcal{A}(u_{t}),\left(\mathrm{D}_{\mathscr{H}}^{1}\phi\right)(u_{t})\big\rangle_{\mathscr{V}}\right]+\frac{1}{2}\mathrm{tr}_{\mathscr{H}}\left[\mathbb{E}\left[\left(\mathrm{D}_{\mathscr{H}}^{2}\phi\right)(u_{t})\sum_{k\in\mathbb{N}}q_{k}\left((\mathcal{B}(u_{t})e_{k})(\mathcal{B}(u_{t})e_{k})^{\prime}\right)\right]\right].\]
By using (8) again, we thus derive (7).
Let us address the observations next. As stated in the introduction we consider continuous, \(d_{y}\)-dimensional observations given by the differential equation (O). We make the following standard assumptions for the coefficients \(H\) and \(\Gamma\).
**Assumption 6** (Observation assumptions).: _The observation function \(H:\mathscr{H}\to\mathbb{R}^{d_{y}}\) is assumed to be globally Lipschitz continuous, \(\Gamma\in C^{0}\left([0,+\infty),\mathbb{R}^{d_{y}\times d_{v}}\right)\) and \(V\) is a \(d_{v}\)-dimensional standard Brownian motion, independent of the signal \(u\) and its driving noise \(W\). As usual we set \(R_{t}:=\Gamma_{t}\Gamma_{t}^{\mathrm{T}}\) and assume that \(R_{t}\) is invertible for all times \(t\geq 0\)._
In the following variance bounds for both the posterior (P) and the law of the EnKBF will play a crucial role in our analysis. That is why, besides the standard assumptions for the signal 2 and the observations 6, we make the following additional assumption, which will give us a priori bounds for the signal variance.
**Assumption 7** (Bounded signal diffusion).: _There exists a constant \(\beta>0\) such that_
\[\sup_{v\in\mathscr{V}}\mathrm{tr}_{\mathscr{H}}\left[\mathcal{B}(v)\sqrt{ \mathcal{Q}}\left(\mathcal{B}(v)\sqrt{\mathcal{Q}}\right)^{\prime}\right]\leq\beta.\]
## 3 The Kushner-Stratonovich equation and the law of total variance
In this section we investigate the Kushner-Stratonovich equation (KSE), which describes the evolution of the posterior distribution (P). We focus on proving bounds for the posterior covariance. Similar bounds will later be key to the investigation of the EnKBF and, remarkably, seem to be one of the few points of consistency for general signals. By consistency we mean a property that is shared by both the EnKBF and the optimal filter, regardless of whether the filter is (close to) a Gaussian. However, first we note that in our setting with finite dimensional observations, the KSE can be derived just as in finite dimensional settings using an innovation process approach [4]. To this end we define the innovation process \(\hat{I}\) by
\[\mathrm{d}\hat{I}_{t}:=R_{t}^{-1/2}\mathrm{d}Y_{t}-R_{t}^{-1/2}\eta_{t}(H) \mathrm{d}t.\]
By design \(\hat{I}\) is an \(\mathbb{R}^{d_{y}}\)-valued diffusion process with continuous sample paths. Its quadratic variation is given by
\[\left[\hat{I}\right]_{t} =\left[\int_{0}^{\cdot}R_{s}^{-1/2}\mathrm{d}Y_{s}-\int_{0}^{ \cdot}R_{s}^{-1/2}\eta_{s}(H)\mathrm{d}s\right]_{t}=\left[\int_{0}^{\cdot}R_{ s}^{-1/2}\mathrm{d}Y_{s}\right]_{t}\] \[=\int_{0}^{t}R_{s}^{-1/2}\mathrm{d}\left[Y\right]_{s}R_{s}^{-1/2} =t\ \mathrm{id}_{\mathbb{R}^{d_{y}}}.\]
Next we show that \(\hat{I}\) is a \((Y_{0:t})_{t\geq 0}\) martingale. Let therefore \(s<t\) and \(\phi_{s}\) be a bounded and \(Y_{0:s}\)-measurable function, then by the martingale property of \(V\) and the projection property of the posterior \(\eta\), we have
\[\mathbb{E}\left[\left(\hat{I}_{t}-\hat{I}_{s}\right)\phi_{s}\right]=\mathbb{E} \left[\int_{s}^{t}R_{r}^{-1/2}\Gamma_{r}\mathrm{d}V_{r}\ \phi_{s}\right]+\mathbb{E}\left[\int_{s}^{t}R_{r}^{-1/2}\left(H(u_{r})-\eta_{r} (H)\right)\mathrm{d}r\ \phi_{s}\right]=0.\]
Thus by Levy's characterization of Brownian motion, we know that \(\hat{I}\) is indeed a standard \(\mathbb{R}^{d_{y}}\)-valued Brownian motion.
Just as in the finite dimensional case [4], one can thus verify that the posterior (P) satisfies the Kushner-Stratonovich equation (KSE) in its weak form
\[\mathrm{d}\eta_{t}(\phi)=\eta_{t}(\mathcal{L}\phi)\mathrm{d}t+\left(\eta_{t}( \phi H)-\eta_{t}(\phi)\eta_{t}(H)\right)R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_ {t}(H)\mathrm{d}t\right), \tag{9}\]
where \(\mathcal{L}\) is the generator of \(u\) defined in (7) and \(\phi\) is an arbitrary Ito testfunction (see Definition 4).
**Remark 8**.: _As mentioned, the path we took to derive the KSE is rather standard in finite dimensional settings [4]. There are various works establishing the nonlinear filtering equations in infinite dimensional settings, even in the more difficult case of correlated noise [2],[10]. In [53] an extension of the KSE to infinite dimensional filtering problems with infinite dimensional observations was derived._
A key tool in the analysis of the EnKBF is an inequality that bounds its conditional variance by the (same upper bounds as the) variance of the signal, irrespective of the observations. This is a feature that is actually shared by the posterior distribution (P), as the projection properties of the conditional expectation imply
\[\mathbb{E}\left[\ \eta_{t}(\left\|\cdot\right\|_{\mathscr{H}}^{2})-\left\|\eta_{t }(\mathrm{id}_{\mathscr{H}})\right\|_{\mathscr{H}}^{2}\ \right]=\mathbb{E}\left[\ \mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}_{\eta_{t}}\left[ \mathrm{id}_{\mathscr{H}}\right]\ \right]\leq\mathrm{tr}_{\mathscr{H}}\ \mathbb{C} \mathtt{ov}\left[u_{t}\right].\]
Now we note that by the covariance dynamics (6) and Parseval2 we obtain
Footnote 2: One could have also directly looked at the dynamics of \(\mathbb{E}\left[\left\|u_{t}-m_{t}\right\|_{\mathscr{H}}^{2}\right]\) and proved this via the well known Ito formula for the norm.
\[\begin{split}\partial_{t}\mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}\left[u_{t}\right]&=2\sum_{k\in\mathbb{N}}\mathbb{E}\left[\left\langle\nu_{k},u_{t}-m_{t}\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(u_{t})-\mathcal{A}(m_{t}),\nu_{k}\rangle_{\mathscr{V}}\right]\\ &\quad+\sum_{k\in\mathbb{N}}\left\langle\nu_{k},\mathbb{E}\left[\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\left(\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\right)^{\prime}\right]\nu_{k}\right\rangle_{\mathscr{H}}\\ &=\mathbb{E}\left[2\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(u_{t})-\mathcal{A}(m_{t}),u_{t}-m_{t}\rangle_{\mathscr{V}}\right]+\mathbb{E}\left[\mathrm{tr}_{\mathscr{H}}\left[\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\left(\mathcal{B}(u_{t})\sqrt{\mathcal{Q}}\right)^{\prime}\right]\right].\end{split}\]
To estimate the first term we use the one-sided Lipschitz condition (2) and for the second term we use Assumption 7. This gives us
\[\partial_{t}\mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}\left[u_{t}\right]\leq \lambda\ \mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}\left[u_{t}\right]+\beta.\]
By Gronwall's lemma this implies \(\mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}\left[u_{t}\right]\leq e^{\lambda t}\,\mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}\left[u_{0}\right]+\frac{\beta}{\lambda}\left(e^{\lambda t}-1\right)\). Together with the variance bound for the posterior this gives us

\[\mathbb{E}\left[\ \eta_{t}(\left\|\cdot\right\|_{\mathscr{H}}^{2})-\left\|\eta_{t}(\mathrm{id}_{\mathscr{H}})\right\|_{\mathscr{H}}^{2}\ \right]=\mathbb{E}\left[\ \mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\mathrm{id}_{\mathscr{H}}\right]\ \right]\leq e^{\lambda t}\,\mathrm{tr}_{\mathscr{H}}\ \mathbb{C}\mathtt{ov}\left[u_{0}\right]+\frac{\beta}{\lambda}\left(e^{\lambda t}-1\right). \tag{10}\]
**Remark 9** (Law of total variance).: _In probability theory the identity_
\[\mathbb{E}\left[\mathbb{V}\mathtt{ar}\left[X|Y\right]\right]+\mathbb{V}\mathtt{ar}\left[\mathbb{E}\left[X|Y\right]\right]=\mathbb{V}\mathtt{ar}\left[X\right]\]
for any random variables \(X,Y\) is often referred to as the law of total variance. As we have seen, inequality (10) is a direct consequence of this identity. Thus in the following we also refer to (10) as the law of total variance. We will later see that it also holds for the variance of the EnKBF and is key to its analysis._
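A quick Monte Carlo sanity check of this identity in Python, for an arbitrarily chosen jointly Gaussian pair \((X,Y)\) (the toy model \(X\,|\,Y\sim\mathcal{N}(2Y,1)\) is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
y = rng.normal(size=n)
x = 2.0 * y + rng.normal(size=n)       # X | Y ~ N(2Y, 1), so Var[X] = 4 + 1

e_cond_var = 1.0                       # E[Var[X|Y]] = 1 for this toy model
var_cond_mean = np.var(2.0 * y)        # Var[E[X|Y]] = Var[2Y] ~ 4
print(e_cond_var + var_cond_mean, np.var(x))   # both approximately 5
```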
As one would expect, one can also show the law of total variance (10) for any sufficiently regular solution of the KSE without invoking its connection to a Bayesian estimation problem. While this may seem trivial at first glance, it is in fact a highly non trivial task to connect a given solution of the KSE or Zakai equation back to the posterior distribution (P) without using uniqueness arguments for these equations. We therefore formulate this fact in a separate Lemma. The proof only uses the dynamics of the KSE and does not require the connection to the conditional distribution, which makes the law of total variance (10) an attractive a priori estimate for the analysis of the KSE or related equations.
**Lemma 10**.: _Let \(\eta\) be a solution to (9) and assume that at all times its support lies in \(\mathscr{V}\). Assume furthermore that \(\mathrm{id}_{\mathscr{H}}\) and \(\left\|.\right\|_{\mathscr{H}}^{2}\) are always integrable with respect to \(\eta_{t},\ t\geq 0\). Then the law of total variance (10) holds._
To prove this Lemma one would be tempted to use \(\mathrm{id}_{\mathscr{H}}\) and \(\left\|.\right\|_{\mathscr{H}}^{2}\) as testfunctions in (9). However this is not allowed by the conditions of Ito's formula. Instead we make a Fourier argument, which rests on the following Lemma. This Lemma will also become useful in the next section.
**Lemma 11**.: _For every \(i,j\in\mathbb{N}\) denote the \(i\)-th Fourier coefficient function by \(\phi_{i}(v):=\left\langle v,\nu_{i}\right\rangle_{\mathscr{H}}\) and define the quadratic function \(\chi_{ij}(v):=\left\langle v,\nu_{i}\right\rangle_{\mathscr{H}}\left\langle v,\nu_{j}\right\rangle_{\mathscr{H}}\). Then it holds that_
\[\mathrm{d}\eta_{t}(\phi_{i})=\eta_{t}\left({}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{i}\rangle_{\mathscr{V}}\right)\mathrm{d}t+\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{i},H\right]R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\right) \tag{11}\]
_and_
\[\begin{split}&\mathrm{d}\left(\eta_{t}(\chi_{ij})-\eta_{t}(\phi_{i})\eta_{t}(\phi_{j})\right)\\ &=\left(\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{i},\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{j}\rangle_{\mathscr{V}}\right]+\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{j},\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{i}\rangle_{\mathscr{V}}\right]\right)\mathrm{d}t\\ &\quad+\eta_{t}\left(\left\langle\nu_{i},\mathcal{B}(\cdot)\sqrt{\mathcal{Q}}\left(\mathcal{B}(\cdot)\sqrt{\mathcal{Q}}\right)^{\prime}\nu_{j}\right\rangle_{\mathscr{H}}\right)\mathrm{d}t\\ &\quad-\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{i},H\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[H,\phi_{j}\right]\mathrm{d}t+\mathfrak{X}_{ij}R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\right),\end{split} \tag{12}\]
_with_
\[\mathfrak{X}_{ij}:=\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\chi_{ij},H\right]-\eta_{t}(\phi_{j})\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{i},H\right]-\eta_{t}(\phi_{i})\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{j},H\right]. \tag{13}\]
Proof.: We note that \(\left(\mathrm{D}^{1}_{\mathscr{H}}\phi_{i}\right)(v)=\nu_{i}\) and \(\left(\mathrm{D}^{2}_{\mathscr{H}}\phi_{i}\right)(v)=0\), which implies \(\left(\mathcal{L}\phi_{i}\right)(v)={}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(v),\nu_{i}\rangle_{\mathscr{V}}\). This gives us (11).
Now we use (11) and Ito's product rule to derive
\[\begin{split}\mathrm{d}\left(\eta_{t}(\phi_{i})\eta_{t}(\phi_{j})\right)&=\left(\eta_{t}(\phi_{j})\eta_{t}\left({}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{i}\rangle_{\mathscr{V}}\right)+\eta_{t}(\phi_{i})\eta_{t}\left({}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{j}\rangle_{\mathscr{V}}\right)\right)\mathrm{d}t\\ &\quad+\left(\eta_{t}(\phi_{j})\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{i},H\right]+\eta_{t}(\phi_{i})\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{j},H\right]\right)R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\right)\\ &\quad+\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{i},H\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[H,\phi_{j}\right]\mathrm{d}t.\end{split}\]
Next we note that \(\left(\mathrm{D}^{1}_{\mathscr{H}}\chi_{ij}\right)(v)=\left\langle v,\nu_{j} \right\rangle_{\mathscr{H}}\nu_{i}+\left\langle v,\nu_{i}\right\rangle_{ \mathscr{H}}\nu_{j}\) and \(\left(\mathrm{D}^{2}_{\mathscr{H}}\chi_{ij}\right)(v)=\nu_{i}\left(\nu_{j} \right)^{\prime}+\nu_{j}\left(\nu_{i}\right)^{\prime}\). Just as in (7), this implies
\[\begin{split}\mathcal{L}\chi_{ij}(v)&=\left\langle\nu_{j},v\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(v),\nu_{i}\rangle_{\mathscr{V}}+\left\langle\nu_{i},v\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(v),\nu_{j}\rangle_{\mathscr{V}}\\ &\quad+\left\langle\nu_{i},\sum_{k\in\mathbb{N}}q_{k}\left(\left(\mathcal{B}(v)e_{k}\right)(\mathcal{B}(v)e_{k})^{\prime}\right)\nu_{j}\right\rangle_{\mathscr{H}}.\end{split}\]
By (8), we thus derive
\[\begin{split}\mathrm{d}\eta_{t}(\chi_{ij})&=\left(\eta_{t}\left(\left\langle\nu_{j},\cdot\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{i}\rangle_{\mathscr{V}}\right)+\eta_{t}\left(\left\langle\nu_{i},\cdot\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{j}\rangle_{\mathscr{V}}\right)\right)\mathrm{d}t\\ &\quad+\eta_{t}\left(\left\langle\nu_{i},\mathcal{B}(\cdot)\sqrt{\mathcal{Q}}\left(\mathcal{B}(\cdot)\sqrt{\mathcal{Q}}\right)^{\prime}\nu_{j}\right\rangle_{\mathscr{H}}\right)\mathrm{d}t\\ &\quad+\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\chi_{ij},H\right]R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\right).\end{split}\]
Therefore by subtracting the equation for \(\eta_{t}(\phi_{i})\eta_{t}(\phi_{j})\) from the evolution equation of \(\eta_{t}(\chi_{ij})\), we derive (12).
Now we are able to prove the law of total variance (10) for the KSE.
Proof of Lemma 10.: By Parseval we have
\[\left(\eta_{t}(\left\|\cdot\right\|_{\mathscr{H}}^{2})-\left\|\eta_{t}(\mathrm{id}_{\mathscr{H}})\right\|_{\mathscr{H}}^{2}\right)=\sum_{k\in\mathbb{N}}\left(\eta_{t}(\chi_{kk})-\eta_{t}(\phi_{k})\eta_{t}(\phi_{k})\right).\]
Thus we can use (12). Now we note that in equation (12) the innovation term \(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\) vanishes under the expectation and the contribution of \(-\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{k},H\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[H,\phi_{k}\right]\) is nonpositive. Thus we have
\[\begin{split}&\partial_{t}\mathbb{E}\left[\eta_{t}(\left\|\cdot\right\|_{\mathscr{H}}^{2})-\left\|\eta_{t}(\mathrm{id}_{\mathscr{H}})\right\|_{\mathscr{H}}^{2}\right]\\ &\leq 2\sum_{k\in\mathbb{N}}\mathbb{E}\left[\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\left\langle\nu_{k},\cdot\right\rangle_{\mathscr{H}},\,{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{k}\rangle_{\mathscr{V}}\right]\right]\\ &\quad+\sum_{k\in\mathbb{N}}\mathbb{E}\left[\eta_{t}\left(\left\langle\nu_{k},\mathcal{B}(\cdot)\sqrt{\mathcal{Q}}\left(\mathcal{B}(\cdot)\sqrt{\mathcal{Q}}\right)^{\prime}\nu_{k}\right\rangle_{\mathscr{H}}\right)\right].\end{split}\]
Then using Parseval in combination with the same bounds as before gives the desired inequality (10).
## 4 Linear and Gaussian Filtering
Only in this section we make the following assumption, which we refer to as the linear Gaussian setting.
**Assumption 12** (Linear Gaussian Setting).: _We assume that_
* \(\mathcal{A}:\mathscr{V}\to\mathscr{V}^{\prime}\) _is linear_
* \(\mathcal{B}\in\mathrm{L}\left(\mathscr{U};\mathscr{H}\right)\) _is a constant linear operator, i.e. it is independent of the state_ \(u\)_._
* \(H\in\mathrm{L}\left(\mathscr{H};\mathbb{R}^{d_{y}}\right)\) _is a linear operator_
* _the initial condition (of the signal and posterior)_ \(u_{0}\) _is Gaussian in_ \(\mathscr{H}\)_, i.e._ \[u_{0}\sim\mathcal{N}\left(m_{0},P_{0}\right)\text{ with }m_{0}\in\mathscr{H}\text{ and }P_{0}\in\mathrm{L}\left(\mathscr{H};\mathscr{H}\right)\text{ is symmetric positive semidefinite}.\]
In this setting \(u\) and \(Y\) are jointly Gaussian and thus the posterior distribution is also Gaussian \(\eta_{t}=\mathcal{N}\left(m_{t},P_{t}\right),\ t\geq 0\), meaning that it is completely described by its (conditional) mean \(m\) and covariance \(P\). We refer to [38] for conditional Gaussian distributions on Hilbert spaces.
### The classical Kalman-Bucy filter
It is well known that in finite dimensions \(m\) and \(P\) satisfy the Kalman-Bucy equations [4]
\[\mathrm{d}m_{t}=\mathcal{A}m_{t}\mathrm{d}t+P_{t}H^{\prime}R_{t}^{-1}\left( \mathrm{d}Y_{t}-Hm_{t}\mathrm{d}t\right) \tag{14a}\] \[\frac{\mathrm{d}P_{t}}{\mathrm{d}t}=\mathcal{A}P_{t}+P_{t}\mathcal{A}^{\prime}- P_{t}H^{\prime}R_{t}^{-1}HP_{t}+\left(\mathcal{B}\sqrt{\mathcal{Q}}\right) \left(\mathcal{B}\sqrt{\mathcal{Q}}\right)^{\prime}. \tag{14b}\]
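In a finite dimensional (e.g. Galerkin-truncated) setting the Kalman-Bucy equations (14) can be integrated directly. The following Python sketch does so with an explicit Euler scheme on synthetic data; all matrices are illustrative stand-ins for discretizations of \(\mathcal{A}\), \(\mathcal{B}\sqrt{\mathcal{Q}}\) and \(H\), not objects defined in this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, dy, n, dt = 4, 2, 2000, 5e-4
A = -np.eye(d) + 0.3 * rng.normal(size=(d, d))   # stand-in for discretized A
BQ = 0.5 * np.eye(d)                             # stand-in for B sqrt(Q)
H = rng.normal(size=(dy, d))
Rinv = np.eye(dy)                                # R = Gamma Gamma^T = I

u = rng.normal(size=d)                 # true (synthetic) signal
m, P = np.zeros(d), np.eye(d)          # filter mean and covariance
for _ in range(n):
    # signal step and observation increment dY = H u dt + dV
    u = u + dt * A @ u + BQ @ rng.normal(0.0, np.sqrt(dt), d)
    dY = H @ u * dt + np.sqrt(dt) * rng.normal(size=dy)
    # Kalman-Bucy equations (14a), (14b), explicit Euler
    gain = P @ H.T @ Rinv
    m = m + dt * A @ m + gain @ (dY - H @ m * dt)
    P = P + dt * (A @ P + P @ A.T - gain @ H @ P + BQ @ BQ.T)
print(np.linalg.norm(u - m))           # filter error at final time
```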
This is also true in infinite dimensional settings, which is a direct consequence of the Kushner-Stratonovich equation, see [53, Section 3.3.1] (as well as [4] for the finite dimensional case). For the sake of completeness and since, for finite dimensional observations, it is a trivial consequence of Lemma 11, we also show the derivation here.
**Lemma 13**.: _Assume that \(\eta\) is a Gaussian solution to the KSE, i.e. \(\eta_{t}\) is a Gaussian on \(\mathscr{V}\) with mean \(m_{t}\) and covariance \(P_{t}\) satisfying (9) for all times \(t\geq 0\). Then \(m\) and \(P\) satisfy the Kalman-Bucy equations (14)._
Proof.: The Kalman-Bucy filter is a direct consequence of Lemma 11. To show this we first note that for arbitrary \(i\in\mathbb{N}\), the linearity of \(\mathcal{A}\) and \(H\) turns (11) into
\[\begin{split}\mathrm{d}\left\langle\nu_{i},m_{t}\right\rangle_{\mathscr{H}}&=\mathrm{d}\eta_{t}(\phi_{i})\\ &=\eta_{t}\left({}_{\mathscr{V}^{\prime}}\langle\mathcal{A}(\cdot),\nu_{i}\rangle_{\mathscr{V}}\right)\mathrm{d}t+\mathbb{C}\mathtt{ov}_{\eta_{t}}\left[\phi_{i},H\right]R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\right)\\ &={}_{\mathscr{V}^{\prime}}\langle\mathcal{A}m_{t},\nu_{i}\rangle_{\mathscr{V}}\mathrm{d}t+\left\langle\nu_{i},P_{t}H^{\prime}R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\right)\right\rangle_{\mathscr{H}},\end{split}\]
which is the equation of the \(i\)-th Fourier coefficient of (14a). Thus \(m\) indeed satisfies (14a).
Similarly one verifies immediately that the linearity of \(\mathcal{A}\) and \(H\) turns (12) into
\[\begin{split}\mathrm{d}\left\langle\nu_{i},P_{t}\nu_{j}\right\rangle_{\mathscr{H}}&=\left({}_{\mathscr{V}^{\prime}}\langle\mathcal{A}P_{t}\nu_{j},\nu_{i}\rangle_{\mathscr{V}}+{}_{\mathscr{V}^{\prime}}\langle\mathcal{A}P_{t}\nu_{i},\nu_{j}\rangle_{\mathscr{V}}\right)\mathrm{d}t\\ &\quad+\left\langle\nu_{i},\mathcal{B}\sqrt{\mathcal{Q}}\left(\mathcal{B}\sqrt{\mathcal{Q}}\right)^{\prime}\nu_{j}\right\rangle_{\mathscr{H}}\mathrm{d}t-\left\langle\nu_{i},P_{t}H^{\prime}R_{t}^{-1}HP_{t}\nu_{j}\right\rangle_{\mathscr{H}}\mathrm{d}t\\ &\quad+\mathfrak{X}_{ij}R_{t}^{-1}\left(\mathrm{d}Y_{t}-\eta_{t}(H)\mathrm{d}t\right),\end{split}\]
for every \(i,j\in\mathbb{N}\). Thus in order to prove (14b), it remains to show that for \(\mathfrak{X}\), defined in (13), it holds that \(\mathfrak{X}_{ij}=0\) for all \(i,j\in\mathbb{N}\).
The main idea is that this question reduces to identities for finite dimensional Gaussians. Let \(i,j\in\mathbb{N}\) be arbitrary, but fixed. Define the linear map \(\Psi:\mathscr{H}\rightarrow\mathbb{R}^{d_{y}+2}\) by
\[\Psi(v):=\left(\begin{array}{c}\left\langle v,\nu_{i}\right\rangle_{\mathscr{H}}\\ \left\langle v,\nu_{j}\right\rangle_{\mathscr{H}}\\ Hv\end{array}\right).\]
Now we take an arbitrary, \(\eta_{t}\)-distributed random variable \(\bar{u}_{t}\sim\eta_{t}\), then
\[Z:=\Psi\left(\bar{u}_{t}\right)\]
is an \(\mathbb{R}^{d_{y}+2}\) Gaussian vector, with
\[\mu:=\mathbb{E}_{Y}\left[\Psi\left(\bar{u}_{t}\right)\right]=\left(\begin{array}{c}\left\langle m_{t},\nu_{i}\right\rangle_{\mathscr{H}}\\ \left\langle m_{t},\nu_{j}\right\rangle_{\mathscr{H}}\\ Hm_{t}\end{array}\right)\]
and
\[\Sigma:=\mathbb{C}\mathtt{ov}_{Y}\left[\Psi(\bar{u}_{t})\right]=\left(\begin{array}{ccc}\left\langle\nu_{i},P_{t}\nu_{i}\right\rangle_{\mathscr{H}}&\left\langle\nu_{i},P_{t}\nu_{j}\right\rangle_{\mathscr{H}}&\left(HP_{t}\nu_{i}\right)^{\prime}\\ \left\langle\nu_{j},P_{t}\nu_{i}\right\rangle_{\mathscr{H}}&\left\langle\nu_{j},P_{t}\nu_{j}\right\rangle_{\mathscr{H}}&\left(HP_{t}\nu_{j}\right)^{\prime}\\ HP_{t}\nu_{i}&HP_{t}\nu_{j}&HP_{t}H^{\prime}\end{array}\right).\]
With these definitions we see that
\[\mathfrak{X}_{ij}=\mathbb{C}\mathtt{ov}_{Y}\left[Z_{1}Z_{2},Z_{3}\right]-\mathbb{E }_{Y}\left[Z_{2}\right]\mathbb{C}\mathtt{ov}_{Y}\left[Z_{1},Z_{3}\right]- \mathbb{E}_{Y}\left[Z_{1}\right]\mathbb{C}\mathtt{ov}_{Y}\left[Z_{2},Z_{3} \right], \tag{15}\]
and since
\[\mathbb{C}\mathtt{ov}_{Y}\left[Z_{1}Z_{2},Z_{3}\right] =\mathbb{E}_{Y}\left[\left(Z_{1}Z_{2}-\mu_{1}\mu_{2}\right)\left(Z _{3}-\mu_{3}\right)^{\prime}\right]\] \[=\mathbb{E}_{Y}\left[Z_{1}Z_{2}Z_{3}^{\prime}\right]-\mathbb{C} \mathtt{ov}_{Y}\left[Z_{1},Z_{2}\right]\mathbb{E}_{Y}\left[Z_{3}^{\prime} \right]-\mathbb{E}_{Y}\left[Z_{1}\right]\mathbb{E}_{Y}\left[Z_{2}\right] \mathbb{E}_{Y}\left[Z_{3}^{\prime}\right]\]
we can write (15) as
\[\mathfrak{X}_{ij}=\mathbb{E}_{Y}\left[Z_{1}Z_{2}Z_{3}^{\prime}\right]-\Sigma_ {12}\mu_{3}^{\prime}-\mu_{2}\Sigma_{13}-\mu_{1}\Sigma_{23}-\mu_{1}\mu_{2}\mu_ {3}^{\prime}. \tag{16}\]
Showing that \(\mathfrak{X}_{ij}=0\) is equivalent to a simple identity for third moments of finite dimensional Gaussians, which we prove here for the sake of completeness and the convenience of the reader. We use the moment generating function
\[\mathcal{M}(r):=\mathbb{E}_{Y}\left[\exp\left(r_{1}Z_{1}+r_{2}Z_{2}+r_{3} \cdot Z_{3}\right)\right]=\exp\left(r\cdot\mu+\frac{r^{\prime}\Sigma r}{2} \right),\]
where \(r=(r_{1},r_{2},r_{3})^{\mathrm{T}}\), \(r_{1},r_{2}\in\mathbb{R},\ r_{3}\in\mathbb{R}^{d_{y}}\). Then clearly we have
\[\mathbb{E}_{Y}\left[Z_{1}Z_{2}Z_{3}\right]=\partial_{r_{1}}\partial_{r_{2}} \nabla_{r_{3}}\mathcal{M}(0)=\mu_{1}\Sigma_{32}+\mu_{2}\Sigma_{31}+\Sigma_{12 }\mu_{3}+\mu_{1}\mu_{2}\mu_{3},\]
which in turn implies \(\mathfrak{X}_{ij}=0\) for all \(i,j\in\mathbb{N}\).
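The vanishing of \(\mathfrak{X}\) can also be checked by simulation. A minimal Python sketch for an arbitrary correlated three dimensional Gaussian (i.e. \(d_{y}=1\); the mean vector and the mixing matrix are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10**6
L = rng.normal(size=(3, 3))                      # arbitrary mixing matrix
Z = rng.normal(size=(n, 3)) @ L.T + np.array([1.0, -2.0, 0.5])
mu = Z.mean(axis=0)
Zc = Z - mu

# X = Cov[Z1 Z2, Z3] - mu_2 Cov[Z1, Z3] - mu_1 Cov[Z2, Z3], cf. (15)
z12 = Z[:, 0] * Z[:, 1]
cov_z1z2_z3 = np.mean((z12 - z12.mean()) * Zc[:, 2])
cov13 = np.mean(Zc[:, 0] * Zc[:, 2])
cov23 = np.mean(Zc[:, 1] * Zc[:, 2])
print(cov_z1z2_z3 - mu[1] * cov13 - mu[0] * cov23)   # ~ 0 up to MC error
```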
**Remark 14** (Literature).: _The first (mathematically rigorous) work treating the Kalman-Bucy filter (14) for infinite dimensional SDEs seems to be by Falk [24] in 1967, which required \(\mathcal{A}\) to be a bounded operator on a Hilbert space, thus not allowing for differential operators. In the 70s several works, which also allowed \(\mathcal{A}\) to be a differential operator, expanded on this. For a good overview of the progress made in this time period we refer the interested reader to an excellent survey paper by Curtain [17], who, in various papers, treated semigroup approaches to (14). Very recent results for far more general infinite dimensional signals using a semigroup approach can be found in [30]. A variational approach, which is the one we use, was already developed in the 70s by Bensoussan [5]3. In this book Bensoussan studied two approaches to Kalman-Bucy filters: one introduced so called random linear functionals to mathematically describe model errors, the other uses SPDEs. Many of the works cited here, including the book by Bensoussan, actually allow for more general observations taking values in Hilbert spaces._
Footnote 3: The book is written in French, an English text that also recites some of its results is [37].
Note that in the infinite dimensional setting the Riccati equation (14b) is operator valued. Well posedness of such equations is a classical subject of infinite dimensional control theory. Given the well posedness of the Riccati equation (14b), the well posedness of the equation for the mean (14a) can be deduced from corresponding results for linear SPDEs. Indeed the existence and uniqueness of solutions to (14b) was shown by Bensoussan [5, Theoreme 3.1],[37, Theorem 7.3.2]. For the convenience of the reader we recall this result in the following.
**Theorem 15** (Bensoussan 1971,[5]).: _Let \(T\) be an arbitrary timeframe, we denote the \(\mathscr{V}\)-valued Sobolev space on \([0,T]\) by_
\[\mathcal{W}([0,T]):=\left\{\ y\in L^{2}\left([0,T];\mathscr{V}\right)\ :\ \partial_{t}y\in L^{2}\left([0,T];\mathscr{V}^{\prime}\right)\ \right\}\subseteq C^{0}\left([0,T];\mathscr{H}\right),\]
_where the last inclusion is a consequence of Sobolev embedding. We assume that \(\mathcal{A}\) is coercive, i.e. that (3) holds with \(\alpha_{0}=0\), and that both \(P_{0}\) and \(\mathcal{Q}\) are invertible._
_Then there exists a unique family of operators \((P_{t})_{t\in[0,T]}\) such that for all testfunctions \(\phi\in\mathcal{W}([0,T])\) that satisfy_
\[\partial_{t}\phi+\mathcal{A}^{\prime}\phi\in L^{2}([0,T];\mathscr{H}),\]
_it holds that_
\[P\phi\in\mathcal{W}([0,T]) \tag{17}\]
_and that_
\[\partial_{t}(P\phi)=P\left(\partial_{t}\phi+\mathcal{A}^{\prime} \phi\right)+\mathcal{A}P\phi+(\mathcal{B}\mathcal{Q}\mathcal{B}^{\prime})\phi -(PH^{\prime}R^{-1}HP)\phi. \tag{18}\]
**Remark 16**.: _We remark that besides an appropriate well posedness of the Riccati equation (14b), [5, Theoreme 3.1] also derives the optimal filter in a random linear functional framework. We did not restate this in Theorem 15 and refer the interested reader to [5, Theoreme 3.1] or [37, Theorem 7.3.2]. We also remark that the invertibility conditions in Theorem 15 can be relaxed [17]._
**Remark 17**.: _Condition (17) is a relaxed regularity condition, as for constant testfunctions \(\phi\in\mathscr{V}\) the image of the adjoint \(\mathcal{A}^{\prime}\phi\) is not necessarily contained in \(\mathscr{H}\) and thus in (17) not all elements of \(\mathscr{V}\) are assumed to be mapped into the Sobolev space._
**Remark 18** (Extended Kalman-Bucy).: _While the Kalman-Bucy filter provides a consistent representation of the posterior in the linear Gaussian setting, adaptations for nonlinear signals called Extended Kalman-Bucy filters (EKF) are often used in practice, in particular in engineering. Hereby the signal is (a priori) linearized by a Taylor expansion and the standard Kalman-Bucy algorithm (14) is then applied to these altered signal dynamics. There is a recent paper [1] that discusses the derivation of an EKF for semilinear SPDEs and the elementary mathematical analysis via a mild-solutions/semigroup approach._
### The consistent mean field EnKBF
In the linear Gaussian setting, a representation of the posterior \(\eta\) by a stochastic process \(\bar{u}\) is given by the mean-field EnKBF. In the finite dimensional setting numerical approximations based on this equation have proven very successful in many practical applications, in particular for high dimensional signals. One can easily generalize this representation to our SPDE setting, resulting in the McKean-Vlasov SPDE
\[\mathrm{d}\bar{u}_{t}=\mathcal{A}\bar{u}_{t}\mathrm{d}t+\mathcal{ B}\mathrm{d}\bar{W}_{t}+\bar{P}_{t}H^{\prime}R_{t}^{-1}\left(\mathrm{d}Y_{t}-H \frac{\bar{u}_{t}+\bar{m}_{t}}{2}\mathrm{d}t\right) \tag{19}\]
with
\[\bar{m}_{t} :=\mathbb{E}\left[\bar{u}_{t}\right]\in\mathscr{V}\] \[\bar{P}_{t} :=\mathbb{C}\mathtt{ov}\left[\bar{u}_{t}\right]=\mathbb{E}\left[ \left(\bar{u}_{t}-\bar{m}_{t}\right)\left(\bar{u}_{t}-\bar{m}_{t}\right)^{ \prime}\right]\in\mathrm{L}\left(\mathscr{V}^{\prime};\mathscr{V}\right).\]
Before we investigate the well posedness of (19) we want to verify that it defines a consistent mean-field representation of the posterior \(\eta\). As a first step towards this goal we verify that (19) reproduces the correct posterior mean and covariance that are given by the Kalman-Bucy filter (14).
**Lemma 19** (moment consistency).: _Let \(\bar{m}\) and \(\bar{P}\) denote the mean and covariance of \(\bar{u}\), the solution to (19). Then \(\bar{m}\) and \(\bar{P}\) satisfy the Kalman-Bucy equations (14) and are thus identical to the mean and covariance of the posterior._
Proof.: Taking the expectation in (19) gives us the Kalman equation (14a). We thus only have to verify that the covariance satisfies the Riccati equation (14b). Just as in the proof of Lemma 5, applying Ito's rule to the product \(\left\langle v,\bar{u}_{t}-\bar{m}_{t}\right\rangle_{\mathscr{H}}\left\langle w,\bar{u}_{t}-\bar{m}_{t}\right\rangle_{\mathscr{H}}\) and taking the (conditional) expectation gives us
\[\begin{split}\partial_{t}\left\langle v,\bar{P}_{t}w\right\rangle_{\mathscr{H}}&={}_{\mathscr{V}^{\prime}}\big\langle\mathcal{A}\bar{P}_{t}w,v\big\rangle_{\mathscr{V}}+{}_{\mathscr{V}^{\prime}}\big\langle\mathcal{A}\bar{P}_{t}v,w\big\rangle_{\mathscr{V}}-\left\langle v,\bar{P}_{t}H^{\prime}R_{t}^{-1}H\bar{P}_{t}w\right\rangle_{\mathscr{H}}\\ &\quad+\left\langle v,\mathcal{B}\sqrt{\mathcal{Q}}\left(\mathcal{B}\sqrt{\mathcal{Q}}\right)^{\prime}w\right\rangle_{\mathscr{H}}.\end{split}\]
Thus \(\bar{P}\) indeed satisfies the Riccati equation (14b).
We can use the moment consistency to prove the well posedness of (19) as the following Lemma shows. For the finite dimensional setting this can also be found in [20].
**Lemma 20**.: _Under the assumptions of Theorem 15 there exists a unique solution of (19)._
Proof.: The standard method for proving well posedness of McKean-Vlasov equations uses a fixed point argument with respect to the law. For (19) this argument simplifies substantially as we can use (14) to guess the right fixed point. To this end let \(P\) be the unique solution to (14b) and define \(\tilde{u}\) to be the unique solution to the linear equation
\[\mathrm{d}\tilde{u}_{t}=\mathcal{A}\tilde{u}_{t}\mathrm{d}t+\mathcal{B} \mathrm{d}\bar{W}_{t}+P_{t}H^{\prime}R_{t}^{-1}\left(\mathrm{d}Y_{t}-H\frac{ \tilde{u}_{t}+\mathbb{E}_{Y}\left[\tilde{u}_{t}\right]}{2}\mathrm{d}t\right). \tag{20}\]
Then the covariance \(\tilde{P}\) of \(\tilde{u}\) satisfies the linear equation
\[\frac{\mathrm{d}\tilde{P}_{t}}{\mathrm{d}t}=\mathcal{A}\tilde{P}_{t}+\tilde{P}_{t}\mathcal{A}^{\prime}-\frac{1}{2}P_{t}H^{\prime}R_{t}^{-1}H\tilde{P}_{t}-\frac{1}{2}\tilde{P}_{t}H^{\prime}R_{t}^{-1}HP_{t}+\left(\mathcal{B}\sqrt{\mathcal{Q}}\right)\left(\mathcal{B}\sqrt{\mathcal{Q}}\right)^{\prime}.\]
However, by definition, this equation is also satisfied by the solution \(P\) to the Riccati equation (14b). Thus by uniqueness of solutions to the linear problem above, we conclude that \(P=\tilde{P}\), and \(\tilde{u}\) is thus a solution to the EnKBF.
Uniqueness of solutions follows from the uniqueness of the Riccati equation. Given two solutions \(\bar{u}^{i},i=1,2\), their covariances \(\bar{P}^{i},\ i=1,2\), must both satisfy the Riccati equation and thus by uniqueness coincide: \(\bar{P}:=\bar{P}^{1}=\bar{P}^{2}\). Therefore \(\bar{u}^{1}\) and \(\bar{u}^{2}\) both satisfy the same linear equation (where \(\bar{P}\) is taken as a covariable/coefficient of the equation) and by uniqueness they too must coincide.
The fixed point argument in the proof of Lemma 20 can also be used to verify that the EnKBF is not just consistent with respect to the first two moments, but also with respect to its law. To this end we note that for a given operator valued process \(P\), equation (20) defines an Ornstein-Uhlenbeck process, and thus for a Gaussian initial condition its solution \(\tilde{u}\) has Gaussian (time-)marginals. Therefore, since \(\bar{u}\), the solution of (19), has consistent mean and covariance processes, its marginals are also consistent. We state this fact in the following Corollary.
**Corollary 21** (consistency of the law).: _Denote by \(\bar{\eta}\) the time-marginals of the law of \(\bar{u}\), i.e. \(\bar{u}_{t}\sim\bar{\eta}_{t}\) for all times \(t\geq 0\). Then, in the linear Gaussian setting, \(\bar{\eta}_{t}=\eta_{t}\) for all times \(t\geq 0\)._
With the fixed point equation (20) one can also derive an explicit formula for the solution to (19). As, for given \(P\), equation (20) is a linear McKean-Vlasov equation, we can use a Duhamel/variation of constants formula to derive that (for the fixed point \(P=\bar{P}\)) it holds that
\[\begin{split}\bar{u}(t)&=\exp\left(\mathcal{A}_{P1}(0,t)\right)\left(\bar{u}_{0}-\mathbb{E}_{Y}\left[\bar{u}_{0}\right]\right)+\exp\left(\mathcal{A}_{P2}(0,t)\right)\mathbb{E}_{Y}\left[\bar{u}_{0}\right]\\ &\quad+\int_{0}^{t}\exp\left(\mathcal{A}_{P1}(s,t)\right)\mathcal{B}\mathrm{d}W_{s}+\int_{0}^{t}\exp\left(\mathcal{A}_{P2}(s,t)\right)\bar{P}_{s}H^{\prime}R_{s}^{-1}\mathrm{d}Y_{s},\end{split} \tag{21}\]
where
\[\begin{split}\mathcal{A}_{P1}(s,t)&:=\int_{s}^{t}\mathcal{A}-\frac{\bar{P}_{r}H^{\prime}R_{r}^{-1}H}{2}\,\mathrm{d}r\\ \mathcal{A}_{P2}(s,t)&:=\int_{s}^{t}\mathcal{A}-\bar{P}_{r}H^{\prime}R_{r}^{-1}H\,\mathrm{d}r,\end{split}\]
and \(\exp\left(\mathcal{A}_{Pi}(s,t)\right),\ i=1,2\) are the corresponding solution semigroups.
**Remark 22**.: _One could also use (21) to formulate a mild solution theory for the EnKBF. This is out of the scope of this paper and we focus instead on a variational theory._
## 5 The Feedback Particle Filter
In the linear Gaussian setting the mean field EnKBF (19) describes a diffusion process with the remarkable property that its (time) marginal laws are given by the desired posterior distribution. In the general setting this attribute is achieved by the Feedback Particle Filter (FPF), which is given by
\[\mathrm{d}\hat{u}_{t} =\mathcal{A}(\hat{u}_{t})\mathrm{d}t+\mathcal{B}(\hat{u}_{t}) \mathrm{d}\bar{W}_{t} \tag{22}\] \[\quad+K(\hat{u}_{t},\hat{\eta}_{t})\left(\mathrm{d}Y_{t}-\frac{H (\hat{u}_{t})+\mathbb{E}_{Y}\left[H(\hat{u}_{t})\right]}{2}\mathrm{d}t\right) +\frac{1}{2}\xi(\hat{u}_{t},\hat{\eta}_{t})\mathrm{d}t,\]
where \(\hat{\eta}_{t}\) denotes the conditional distribution of \(\hat{u}_{t}\), the so called gain term \(K(\cdot,\hat{\eta}_{t}):\mathscr{H}\to\mathscr{H}^{d_{y}}\) is (not uniquely) determined by the weak differential equation
\[\hat{\eta}_{t}\left(\left\langle\mathrm{D}_{\mathscr{H}}^{1}\phi\,\ K(\cdot,\hat{\eta}_{t}) \right\rangle_{\mathscr{H}}\right)=\hat{\eta}_{t}\left(\phi\left(H-\hat{\eta} _{t}(H)\right)^{\prime}\right)R_{t}^{-1}\ \text{for all Ito testfunctions }\phi \tag{23}\]
and the correctional drift term \(\xi(\cdot,\hat{\eta}_{t}):\mathscr{H}\to\mathscr{H}\) is given by
\[\xi(\hat{u}_{t},\hat{\eta}_{t}):=\left(\left\langle K\left(\hat{u}_{t},\hat{ \eta}_{t}\right),\nabla\right\rangle_{\mathscr{H}}R_{t}K\left(\hat{u}_{t},\hat {\eta}_{t}\right)^{{}^{\prime}}\right)^{{}^{\prime}}:=\sum_{j\in\mathbb{N}} \sum_{k\in\mathbb{N}}\left\langle\nu_{j},\partial_{\nu_{k}}K(\hat{u}_{t},\hat{ \eta}_{t})\right\rangle_{\mathscr{H}}R_{t}\left\langle\nu_{k},K(\hat{u}_{t}, \hat{\eta}_{t})\right\rangle_{\mathscr{H}}^{\mathrm{T}}\nu_{j}.\]
**Remark 23**.: _In the one dimensional setting, i.e. \(\mathscr{V}=\mathscr{H}=\mathbb{R},d_{y}=1\) the Kalman gain \(K\) can be determined explicitly. Let us not distinguish between \(\hat{\eta}\) and its density function (assuming it exists) and denote its cumulative distribution function (cdf) by \(\hat{\Xi}_{t}(x):=\int_{-\infty}^{x}\hat{\eta}_{t}(y)\mathrm{d}y\), then the gain is given by_
\[\begin{split}K(x,\hat{\eta}_{t})&=\frac{-\int_{-\infty}^{x}\left(H(y)-\hat{\eta}_{t}(H)\right)\hat{\eta}_{t}(y)\mathrm{d}y}{\hat{\eta}_{t}(x)}R_{t}^{-1}\\ &=\left(\mathbb{E}_{Y}\left[H(\hat{u}_{t})\right]-\mathbb{E}_{Y}\left[H(\hat{u}_{t})\,\middle|\,\hat{u}_{t}\leq x\right]\right)\left(\partial_{x}\log\hat{\Xi}_{t}(x)\right)^{-1}R_{t}^{-1}.\end{split} \tag{24}\]
_In this setting the Ito correction \(\xi\) also simplifies drastically, as_
\[\partial_{x}K(x,\hat{\eta}_{t})=-R_{t}^{-1}\left(H(x)-\hat{\eta}(H)\right)-K(x,\hat{\eta}_{t})\partial_{x}\log\hat{\eta}_{t}(x),\]
_and therefore_
\[K(\hat{u}_{t},\hat{\eta}_{t})R_{t}\partial_{x}K(\hat{u}_{t},\hat{\eta}_{t})=-K (\hat{u}_{t},\hat{\eta}_{t})\left(H(\hat{u}_{t})-\hat{\eta}(H)\right)-K(\hat{u} _{t},\hat{\eta}_{t})^{2}R_{t}\partial_{x}\log\hat{\eta}_{t}(\hat{u}_{t}),\]
_which in turn gives us the 1D FPF_
\[\mathrm{d}\hat{u}_{t}=\mathcal{A}(\hat{u}_{t})\mathrm{d}t+\mathcal{B}(\hat{u}_ {t})\mathrm{d}\bar{W}_{t}+K(\hat{u}_{t},\hat{\eta}_{t})\left(\mathrm{d}Y_{t}- \left(H\left(\hat{u}_{t}\right)+\frac{R_{t}K(\hat{u}_{t},\hat{\eta}_{t}) \partial_{x}\log\hat{\eta}_{t}\left(\hat{u}_{t}\right)}{2}\right)\mathrm{d}t \right). \tag{25}\]
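The one dimensional gain (24) can be approximated from an ensemble without ever forming \(\hat{\eta}_{t}\) explicitly: the numerator is an integral against \(\hat{\eta}_{t}\) and can be replaced by an empirical average over sorted samples, while the density in the denominator can be estimated, e.g., by a kernel density estimator. The following Python sketch does exactly this; the Gaussian test ensemble, \(H(x)=x\), \(R_{t}=1\) and the bandwidth are illustrative choices (for a Gaussian \(\hat{\eta}_{t}\) and linear \(H\) the exact gain is the constant \(\mathbb{V}\mathtt{ar}[\hat{u}_{t}]\), which the estimate should reproduce in the bulk).

```python
import numpy as np

def fpf_gain_1d(samples, H, bandwidth=0.3):
    """Approximate the 1D FPF gain (24) at the (sorted) sample locations,
    assuming R_t = 1. Numerator: empirical version of
    -int_{-inf}^{x} (H - mean(H)) d(eta); denominator: Gaussian KDE of eta."""
    x = np.sort(samples)
    h = H(x)
    num = -np.cumsum(h - h.mean()) / len(x)      # empirical integral
    diffs = (x[:, None] - x[None, :]) / bandwidth
    dens = np.exp(-0.5 * diffs**2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))
    return x, num / dens

# Toy check: Gaussian ensemble, H(x) = x, so the exact gain is Var = 4.
rng = np.random.default_rng(4)
xs, K = fpf_gain_1d(rng.normal(0.0, 2.0, 2000), H=lambda x: x)
print(np.median(K))                              # roughly 4 in the bulk
```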
In the finite dimensional setting the FPF was first derived in [51] via an optimal control approach; independently of and prior to this work, a similar mean-field optimal filter for smoothed noise and finite dimensional signals had been found in [16]. In [41] various finite dimensional consistent mean field filters, among them the original FPF, were derived by matching the strong Fokker-Planck equation of a diffusion process to the strong form of the KSE (9). Building on this work we now extend the FPF to infinite dimensions by showing that it describes the optimal filter, in the sense that the (conditional) law of (22) propagates in time exactly according to the KSE (9). However, since we are working in the infinite dimensional setting, we do so by matching the weak Fokker-Planck equation to the weak KSE (9).
**Remark 24**.: _The well posedness of the FPF (22) is an open problem, even in the much simpler finite dimensional case, and is thus just assumed in the following._
**Lemma 25**.: _Denote by \(\left(\hat{\eta}_{t}\right)_{t\geq 0}\) the law of the FPF \(\left(\hat{u}_{t}\right)_{t\geq 0}\), given by (22). We assume that for all times \(t\geq 0\) both \(K(\cdot,\hat{\eta}_{t})\) and \(\xi(\cdot,\hat{\eta}_{t})\) are well defined functions that are integrable with respect to \(\hat{\eta}_{t}\)._
_Then \(\hat{\eta}\) satisfies the weak form of the KSE (9) for all Ito testfunctions (according to Definition 4) \(\phi:\mathscr{H}\to\mathbb{R}\) satisfying the following properties_
* \(\phi\) _is integrable with respect to_ \(\hat{\eta}_{t}\) _for all_ \(t\geq 0\)_._
* _For all_ \(v\in H\) _the Hessian_ \(\mathrm{D}^{2}_{\mathscr{H}}\phi(v)\) _is a self adjoint operator on_ \(\mathscr{H}\)_._
* _The map_ \(\hat{\Phi}:\mathscr{H}\to\mathbb{R}^{d_{y}}\)_, defined by_ \(\hat{\Phi}(v):=\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi(v),K(v,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}R_{t}\)_, is an Ito function (componentwise) that is also integrable with respect to_ \(\hat{\eta}_{t}\) _for all_ \(t\geq 0\)_._
Proof.: Let \(\phi:\mathscr{H}\to\mathbb{R}\) be arbitrary, satisfying the properties specified above. We note by Ito's formula, that the Kolmogorov forward equation describing the evolution of \(\hat{\eta}\) is given by
\[\begin{split}\mathrm{d}\hat{\eta}_{t}(\phi)&=\hat{\eta}_{t}(\mathcal{L}\phi)\mathrm{d}t+\hat{\eta}_{t}\left(\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi,K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}\right)(\mathrm{d}Y_{t}-\hat{\eta}_{t}(H)\mathrm{d}t)\\ &\quad+\frac{\hat{\eta}_{t}\left(\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi,\xi(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}\right)+\hat{\eta}_{t}\left(\mathrm{tr}_{\mathscr{H}}\left[\left(\mathrm{D}^{2}_{\mathscr{H}}\phi\right)K(\cdot,\hat{\eta}_{t})RK(\cdot,\hat{\eta}_{t})^{\prime}\right]\right)}{2}\mathrm{d}t\\ &\quad-\frac{\hat{\eta}_{t}\left(\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi,K(\cdot,\hat{\eta}_{t})(H-\hat{\eta}_{t}(H))\right\rangle_{\mathscr{H}}\right)}{2}\mathrm{d}t\end{split} \tag{26}\]
Due to (23), the first line of (26) is exactly the KSE and thus, to show consistency, we only have to prove that the remaining terms in the second and third lines cancel. To this end we note that by Parseval we have
\[\begin{split}&\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi,\xi( \cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}+\mathrm{tr}_{\mathscr{H}} \left[\left(\mathrm{D}^{2}_{\mathscr{H}}\phi\right)K(\cdot,\hat{\eta}_{t})RK( \cdot,\hat{\eta}_{t})^{\prime}\right]\\ &=\sum_{j\in\mathbb{N}}\left\langle\nu_{j},\mathrm{D}^{1}_{ \mathscr{H}}\phi\right\rangle_{\mathscr{H}}\left\langle\nu_{j},\xi(\cdot,\hat {\eta}_{t})\right\rangle_{\mathscr{H}}+\sum_{k\in\mathbb{N}}\left\langle\nu_{k}, \left(\mathrm{D}^{2}_{\mathscr{H}}\phi\right)K(\cdot,\hat{\eta}_{t})RK(\cdot, \hat{\eta}_{t})^{\prime}\nu_{k}\right\rangle_{\mathscr{H}}\\ &=\sum_{j\in\mathbb{N}}\sum_{k\in\mathbb{N}}\left\langle\nu_{j}, \mathrm{D}^{1}_{\mathscr{H}}\phi\right\rangle_{\mathscr{H}}\left\langle\nu_{j}, \partial_{\nu_{k}}K(\hat{u}_{t},\hat{\eta}_{t})\right\rangle_{\mathscr{H}}R_{ t}\left\langle\nu_{k},K(\hat{u}_{t},\hat{\eta}_{t})\right\rangle_{ \mathscr{H}}^{\mathrm{T}}\\ &\quad+\sum_{k\in\mathbb{N}}\left\langle\nu_{k},\left(\mathrm{D}^{ 2}_{\mathscr{H}}\phi\right)K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}R_{ t}\left\langle\nu_{k},K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}^{ \mathrm{T}}\end{split}\]
Next we note that, since \(\mathrm{D}^{2}_{\mathscr{H}}\phi\) is self adjoint, we have by Parseval
\[\begin{split}\left\langle\nu_{k},\mathrm{D}^{2}_{\mathscr{H}}\phi K (\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}&=\left\langle \mathrm{D}^{2}_{\mathscr{H}}\phi\nu_{k},K(\cdot,\hat{\eta}_{t})\right\rangle_{ \mathscr{H}}=\left\langle\partial_{\nu_{k}}\mathrm{D}^{1}_{\mathscr{H}}\phi,K( \cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}\\ &=\sum_{j\in\mathbb{N}}\left\langle\nu_{j},\partial_{\nu_{k}} \mathrm{D}^{1}_{\mathscr{H}}\phi\right\rangle_{\mathscr{H}}\left\langle\nu_{j},K( \cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}.\end{split}\]
This gives us by again using Parseval
\[\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi,\xi(\cdot,\hat{\eta}_{t} )\right\rangle_{\mathscr{H}}+\mathrm{tr}_{\mathscr{H}}\left[\left(\mathrm{D}^{2}_ {\mathscr{H}}\phi\right)K(\cdot,\hat{\eta}_{t})RK(\cdot,\hat{\eta}_{t})^{ \prime}\right]\] \[=\sum_{k\in\mathbb{N}}\sum_{j\in\mathbb{N}}\left\langle\nu_{j}, \mathrm{D}^{1}_{\mathscr{H}}\phi\right\rangle_{\mathscr{H}}\left\langle\nu_{j}, \partial_{\nu_{k}}K(\hat{u}_{t},\hat{\eta}_{t})\right\rangle_{\mathscr{H}}R_{t }\left\langle\nu_{k},K(\hat{u}_{t},\hat{\eta}_{t})\right\rangle^{\mathrm{T}}_{ \mathscr{H}}\] \[\quad+\sum_{k\in\mathbb{N}}\sum_{j\in\mathbb{N}}\left\langle\nu_{ j},\partial_{\nu_{k}}\mathrm{D}^{1}_{\mathscr{H}}\phi\right\rangle_{ \mathscr{H}}\left\langle\nu_{j},K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{ H}}R_{t}\left\langle\nu_{k},K(\cdot,\hat{\eta}_{t})\right\rangle^{\mathrm{T}}_{ \mathscr{H}}\] \[=\sum_{k\in\mathbb{N}}\left(\left\langle\mathrm{D}^{1}_{\mathscr{H }}\phi,\partial_{\nu_{k}}K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}+ \left\langle\partial_{\nu_{k}}\mathrm{D}^{1}_{\mathscr{H}}\phi,K(\cdot,\hat{ \eta}_{t})\right\rangle_{\mathscr{H}}\right)R_{t}\left\langle\nu_{k},K(\cdot, \hat{\eta}_{t})\right\rangle^{\mathrm{T}}_{\mathscr{H}}\]
Using the product formula in Hilbert spaces4, we thus derive
Footnote 4: More precisely the formula for the directional derivative of the scalar product of two differentiable functions.
\[\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi,\xi(\cdot,\hat{\eta }_{t})\right\rangle_{\mathscr{H}}+\mathrm{tr}_{\mathscr{H}}\left[\left(\mathrm{ D}^{2}_{\mathscr{H}}\phi\right)K(\cdot,\hat{\eta}_{t})RK(\cdot,\hat{\eta}_{t})^{ \prime}\right]\] \[=\sum_{k\in\mathbb{N}}\partial_{\nu_{k}}\left\langle\mathrm{D}^{1 }_{\mathscr{H}}\phi,K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}R_{t} \left\langle\nu_{k},K(\cdot,\hat{\eta}_{t})\right\rangle^{\mathrm{T}}_{ \mathscr{H}}.\]
The map \(\hat{\Phi}\), defined in the statement of the Lemma, then allows us to again use Parseval to derive
\[\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi,\xi(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}+\mathrm{tr}_{\mathscr{H}}\left[\left(\mathrm{D}^{2}_{\mathscr{H}}\phi\right)K(\cdot,\hat{\eta}_{t})RK(\cdot,\hat{\eta}_{t})^{\prime}\right]\] \[=\sum_{k\in\mathbb{N}}\left(\partial_{\nu_{k}}\hat{\Phi}\right)\left\langle\nu_{k},K(\cdot,\hat{\eta}_{t})\right\rangle^{\mathrm{T}}_{\mathscr{H}}=\sum_{k\in\mathbb{N}}\left\langle\nu_{k},\mathrm{D}^{1}_{\mathscr{H}}\hat{\Phi}\right\rangle_{\mathscr{H}}\langle\nu_{k},K(\cdot,\hat{\eta}_{t})\rangle^{\mathrm{T}}_{\mathscr{H}}\] \[=\sum_{k\in\mathbb{N}}\sum_{i=1}^{d_{y}}\left\langle\nu_{k},\mathrm{D}^{1}_{\mathscr{H}}\hat{\Phi}\right\rangle_{\mathscr{H}}\delta_{i}\delta_{i}^{\mathrm{T}}\left\langle\nu_{k},K(\cdot,\hat{\eta}_{t})\right\rangle^{\mathrm{T}}_{\mathscr{H}}=\sum_{i=1}^{d_{y}}\left\langle\mathrm{D}^{1}_{\mathscr{H}}\hat{\Phi}\delta_{i},K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}\delta_{i},\]
where \(\delta_{i},\ i=1,\cdots,d_{y}\) denotes the canonical basis of \(\mathbb{R}^{d_{y}}\). Thus, by the assumed regularity of \(\hat{\Phi}\), we derive by using (23)
\[\hat{\eta}_{t}\left(\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi, \xi(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}\right)+\hat{\eta}_{t} \left(\mathrm{tr}_{\mathscr{H}}\left[\left(\mathrm{D}^{2}_{\mathscr{H}}\phi \right)K(\cdot,\hat{\eta}_{t})RK(\cdot,\hat{\eta}_{t})^{\prime}\right]\right)= \sum_{i=1}^{d_{y}}\hat{\eta}_{t}\left(\left\langle\mathrm{D}^{1}_{\mathscr{H}} \hat{\Phi}\delta_{i},K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}\right) \delta_{i}\] \[=\sum_{i=1}^{d_{y}}\hat{\eta}_{t}\left(\left\langle\mathrm{D}^{1}_{ \mathscr{H}}\phi(v),K(v,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}R_{t}\delta_ {i}\delta_{i}^{\mathrm{T}}R_{t}^{-1}\left(H-\hat{\eta}(H)\right)^{\prime} \right)=\hat{\eta}_{t}\left(\left\langle\mathrm{D}^{1}_{\mathscr{H}}\phi(v),K(v, \hat{\eta}_{t})\left(H-\hat{\eta}(H)\right)\right\rangle_{\mathscr{H}}\right),\]
which in turn lets us conclude that (26) coincides with the KSE (9) and thus the FPF is indeed consistent.
The FPF is a true generalization of the EnKBF to general filtering problems, and it provides a connection between the EnKBF and the true posterior even in inconsistent settings, as the following Lemma shows.
**Lemma 26**.: _Let again \(\left(\hat{\eta}\right)_{t\geq 0}\) be the (conditional) marginal laws of the FPF (22). Assuming integrability of \(K(\cdot,\hat{\eta}_{t})\), it holds that_
\[\mathbb{E}_{Y}\left[K(\hat{u}_{t},\hat{\eta}_{t})\right]=\hat{\eta}_{t}\left(K( \cdot,\hat{\eta}_{t})\right)=\mathbb{C}\mathtt{ov}_{\hat{\eta}_{t}}\left[ \mathrm{id}_{\mathscr{H}},H\right]R_{t}^{-1}=\mathbb{C}\mathtt{ov}_{Y}\left[ \hat{u}_{t},H(\hat{u}_{t})\right]R_{t}^{-1}. \tag{27}\]
_If \(H\) is linear and \(\hat{\eta}_{t}\) is Gaussian, one can even choose the gain term \(K\) such that \(K(\cdot,\hat{\eta}_{t})=\mathbb{C}\mathtt{ov}_{\hat{\eta}_{t}}\left[\mathrm{id}_{ \mathscr{H}},H\right]R_{t}^{-1}\). In the linear Gaussian setting the EnKBF is thus just a special case of the FPF._
Proof.: For any \(i\in\mathbb{N}\) we set \(\phi_{i}(v):=\left\langle\nu_{i},v\right\rangle_{\mathscr{H}}\) as a test function in the gain equation (23); then we have
\[\begin{split}\left\langle\nu_{i},\hat{\eta}_{t}\left(K(\cdot,\hat{\eta}_{t})\right)\right\rangle_{\mathscr{H}}&=\hat{\eta}_{t}\left(\left\langle\mathrm{D}_{\mathscr{H}}^{1}\phi_{i},K(\cdot,\hat{\eta}_{t})\right\rangle_{\mathscr{H}}\right)=\hat{\eta}_{t}\left(\phi_{i}\left(H-\hat{\eta}(H)\right)^{\prime}\right)R_{t}^{-1}\\ &=\left\langle\nu_{i},\mathbb{C}\mathtt{ov}_{\hat{\eta}_{t}}\left[\mathrm{id}_{\mathscr{H}},H\right]R_{t}^{-1}\right\rangle_{\mathscr{H}}.\end{split}\]
Since this holds for any \(i\in\mathbb{N}\), this indeed shows the validity of (27). The second claim follows from Gaussian integration by parts as in [11].
Identity (27) is the reason why the extension of the mean field EnKBF to nonlinear signals is sometimes referred to in the literature as the constant gain approximation (to the FPF) [48]. The next section is concerned with the basic properties of this McKean-Vlasov equation, showing existence and uniqueness of solutions.
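Since identity (27) expresses the gain purely through first and second moments, the constant gain approximation is straightforward to realize from samples. The following is a minimal finite-dimensional numpy sketch of this Monte Carlo gain; the function name, the array layout and the finite-dimensional setting are our own illustrative choices and are not part of the cited references.

```python
import numpy as np

def constant_gain(samples, H, R):
    """Monte Carlo version of the constant gain K = Cov[u, H(u)] R^{-1}.

    samples : array of shape (N, d_x), draws approximating the filtering law
    H       : callable mapping a state in R^{d_x} to an observation in R^{d_y}
    R       : observation noise covariance, shape (d_y, d_y)
    """
    H_samples = np.apply_along_axis(H, 1, samples)   # shape (N, d_y)
    u_c = samples - samples.mean(axis=0)             # centred states
    H_c = H_samples - H_samples.mean(axis=0)         # centred observations
    cov_uH = u_c.T @ H_c / samples.shape[0]          # Cov[u, H(u)], shape (d_x, d_y)
    return cov_uH @ np.linalg.inv(R)                 # gain, shape (d_x, d_y)
```

For a linear observation function and Gaussian samples this recovers the usual Kalman gain as \(N\to\infty\), in line with the second claim of Lemma 26.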
## 6 The mean field EnKBF for nonlinear signals
We consider the extension of the EnKBF to nonlinear signals
\[\mathrm{d}\bar{u}_{t}=\mathcal{A}(\bar{u}_{t})\mathrm{d}t+\mathcal{B}(\bar{u}_{t})\mathrm{d}\bar{W}_{t}+\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t},H(\bar{u}_{t})\right]R_{t}^{-1}\left(\mathrm{d}Y_{t}-\frac{H(\bar{u}_{t})+\mathbb{E}_{Y}\left[H(\bar{u}_{t})\right]}{2}\mathrm{d}t\right), \tag{28}\]
with not necessarily Gaussian initial condition.
Note that for nonlinear signals, the mean field EnKBF (28) does not allow for a separate description of its covariance matrix via a Riccati equation. Thus we cannot simply apply the same argument as in Lemma 20 for well posedness. Indeed, due to missing growth conditions, as well as only local Lipschitz properties of the coefficients involved, the question of well posedness of (28) is nontrivial.
**Remark 27** (Literature).: _In a finite dimensional setting, well posedness of the nonlinear EnKBF (28) was shown in [15] for possibly correlated observation noises and bounded observation functions. For linear observation functions, [21] showed well posedness of finite dimensional mean-field EnKBFs that may also include singular correction terms in the presence of correlated noise. This was done by a combination of a fixed point and a stopping argument with respect to the covariance \(\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t},H(\bar{u}_{t})\right]\). The main tool was a law of total variance (30) that was made robust with respect to the fixed point argument via stopping times. In the infinite dimensional setting this argument does not work due to the missing equivalence of norms. While using a Galerkin argument would thus seem tempting, it would also not imply the desired uniqueness of solutions, which is a property that is difficult to show, and sometimes does not even hold for McKean-Vlasov equations under local Lipschitz conditions [44]. So instead we use an adapted fixed point argument that makes use of the same variance bounds as in section 3, which also hold for (28) and seem to be the only known consistency of its distribution with respect to the actual posterior._
First we investigate the covariance structure in (28). We do this, however, in a more general form. Let \((h_{t})_{t\geq 0}\) be a given \(\mathbb{R}^{d_{y}}\)-valued, adapted stochastic process and assume that \(\xi^{Y}\) is an \(\mathbb{R}^{d_{y}}\)-valued semimartingale that is adapted to the natural filtration generated by \(Y\). Then for a \(\tilde{u}\) satisfying
\[\mathrm{d}\tilde{u}_{t}=\mathcal{A}(\tilde{u}_{t})\mathrm{d}t+\mathcal{B}( \tilde{u}_{t})\mathrm{d}\bar{W}_{t}+\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{ t},h_{t}\right]R_{t}^{-1}\left(\mathrm{d}\xi_{t}^{Y}-\frac{h_{t}+\mathbb{E}_{Y} \left[h_{t}\right]}{2}\mathrm{d}t\right) \tag{29}\]
with mean \(\tilde{m}_{t}:=\mathbb{E}_{Y}\left[\tilde{u}_{t}\right]\), it holds that
\[\begin{split}&\partial_{t}\left\langle v,\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{t}\right]w\right\rangle_{\mathscr{H}}\\ &=\mathbb{E}_{Y}\left[\left\langle v,\tilde{u}_{t}-\tilde{m}_{t}\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle w,\mathcal{A}(\tilde{u}_{t})-\mathcal{A}(\tilde{m}_{t})\rangle_{\mathscr{V}}+\left\langle w,\tilde{u}_{t}-\tilde{m}_{t}\right\rangle_{\mathscr{H}}\,{}_{\mathscr{V}^{\prime}}\langle v,\mathcal{A}(\tilde{u}_{t})-\mathcal{A}(\tilde{m}_{t})\rangle_{\mathscr{V}}\right]\\ &\quad+\mathbb{E}_{Y}\left[\left\langle v,\mathcal{B}(\tilde{u}_{t})\sqrt{\mathcal{Q}}\left(\mathcal{B}(\tilde{u}_{t})\sqrt{\mathcal{Q}}\right)^{\prime}w\right\rangle_{\mathscr{H}}\right]-\left\langle v,\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{t},h_{t}\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{Y}\left[h_{t},\tilde{u}_{t}\right]w\right\rangle_{\mathscr{H}}\end{split}\]
for every \(v,w\in\mathscr{V}\). Thus by the positivity of \(\left\langle v,\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{t},h_{t}\right]R_{t}^ {-1}\mathbb{C}\mathtt{ov}_{Y}\left[h_{t},\tilde{u}_{t}\right]v\right\rangle_{ \mathscr{H}}\) for every \(v\in\mathscr{V}\) we immediately derive
\[\partial_{t}\mathbb{E}_{Y}\left[\left\|\tilde{u}_{t}-\tilde{m}_{t} \right\|_{\mathscr{H}}^{2}\right]\leq\lambda\ \mathbb{E}_{Y}\left[\left\|\tilde{u}_{t}-\tilde{m}_{t} \right\|_{\mathscr{H}}^{2}\right]+\beta, \tag{30}\]
and therefore
\[\operatorname{tr}_{\mathscr{H}}\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{t}\right]=\mathbb{E}_{Y}\left[\left\|\tilde{u}_{t}-\tilde{m}_{t}\right\|_{\mathscr{H}}^{2}\right]\leq\beta e^{\lambda t}. \tag{31}\]
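For completeness, the step from (30) to (31) is a plain Gronwall integration: (30) is equivalent to

\[\partial_{t}\left(e^{-\lambda t}\,\mathbb{E}_{Y}\left[\left\|\tilde{u}_{t}-\tilde{m}_{t}\right\|_{\mathscr{H}}^{2}\right]\right)\leq\beta e^{-\lambda t},\quad\text{hence}\quad\mathbb{E}_{Y}\left[\left\|\tilde{u}_{t}-\tilde{m}_{t}\right\|_{\mathscr{H}}^{2}\right]\leq e^{\lambda t}\,\mathbb{E}_{Y}\left[\left\|\tilde{u}_{0}-\tilde{m}_{0}\right\|_{\mathscr{H}}^{2}\right]+\frac{\beta}{\lambda}\left(e^{\lambda t}-1\right),\]

which yields (31) under the convention, used here and in the following, that the initial variance and the factor \(1/\lambda\) are absorbed into the constant \(\beta\); this absorption is our reading of the constant conventions and not an additional estimate.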
Thus the EnKBF satisfies the variance bound that is implied by the Bayesian filtering problem; it even does so in a stronger sense, as taking the expectation is not required. As implied by the law of total variance for the optimal filter, this bound is robust with respect to perturbations of both the modelled observation function \(H\) and the actual observation data \(Y\).
Next we show that the robust variance bound (31) can be used to establish well posedness of the EnKBF via a Picard argument.
**Theorem 28**.: _If the conditions in Assumption (2) and (7) are satisfied, there exists a unique (strong) solution to the nonlinear mean-field EnKBF (28)._
Proof.: For proving well posedness it is enough to restrict ourselves to a small time frame \([0,T]\) with \(T\) chosen later on. The extension to arbitrary time frames can then easily be achieved by standard gluing arguments.
The proof is separated into two steps. First we introduce partially stopped dynamics and show their well posedness via a fixed point argument. Next we show that these stopped dynamics must always coincide with solutions to the EnKBF on events that cover the whole probability space almost surely.
In the following we will make use of the semimartingale decomposition of the observation process \(Y\). To highlight that the true signal process \(u\) plays the role of a parameter to the EnKBF, and to easily distinguish it from other processes encountered in the proof, we will denote it by \(u^{\text{ref}}\). The observation process \(Y\) is thus given by
\[\mathrm{d}Y_{t}=H(u_{t}^{\text{ref}})\mathrm{d}t+\Gamma_{t}\mathrm{d}V_{t}. \tag{32}\]
Step 1: _Well posedness for partially stopped dynamics._
For any \(k\in\mathbb{N}\) we denote by \(\tilde{\mathbbm{1}}_{k}\) a smoothed version of the indicator function \(\mathbbm{1}_{[0,k]}\), such that \(\mathbbm{1}_{[0,k]}\leq\tilde{\mathbbm{1}}_{k}\leq\mathbbm{1}_{[0,k+1]}\).
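Any Lipschitz interpolation between the two indicators will do. As one concrete, and entirely optional, choice, a cubic smoothstep on the transition interval \([k,k+1]\) can be used; the following numpy sketch is our own illustration and not prescribed by the construction above.

```python
import numpy as np

def smoothed_indicator(x, k):
    """A smoothed indicator 1~_k: equal to 1 on [0, k], equal to 0 on
    [k+1, infinity), with a cubic 'smoothstep' interpolation in between,
    so that 1_[0,k] <= 1~_k <= 1_[0,k+1] and 1~_k is Lipschitz."""
    t = np.clip(np.asarray(x, dtype=float) - k, 0.0, 1.0)  # transition variable
    return 1.0 - t * t * (3.0 - 2.0 * t)
```

For this particular choice \(\mathrm{Lip}(\tilde{\mathbbm{1}}_{k})=3/2\), which is the constant that later enters bounds such as (38).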
Define for \(k,l\in\mathbb{N}\) and any \(\mathscr{H}\)-valued random variable \(v\) the stopped observation function \(H^{k}\) by
\[H^{k}(v):=\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[H(v)\right] \right|\right)H(v)\]
as well as the stopped observation process \(Y^{l}\) by
\[\mathrm{d}Y^{l}_{t}:=\tilde{\mathbbm{1}}_{l}\left(\mathbb{E}_{Y}\left[\left|H(u_ {t}^{\text{ref}})\right|^{2}\right]\right)\mathrm{d}Y_{t}.\]
In this step we show that there exists a unique solution \(\bar{u}^{k}\) of
\[\begin{split}\mathrm{d}\bar{u}^{k}_{t}&=\mathcal{A}(\bar{u}^{k}_{t})\mathrm{d}t+\mathcal{B}(\bar{u}^{k}_{t})\mathrm{d}\bar{W}_{t}\\ &\quad+\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}^{k}_{t},H^{k}\left(\bar{u}^{k}_{t}\right)\right]R^{-1}_{t}\left(\mathrm{d}Y^{l}_{t}-\frac{H^{k}\left(\bar{u}^{k}_{t}\right)+\mathbb{E}_{Y}\left[H^{k}\left(\bar{u}^{k}_{t}\right)\right]}{2}\mathrm{d}t\right),\end{split} \tag{33}\]
via a fixed point argument with respect to the stopped modelled observations \(\left(H(\bar{u}^{k}_{t})\right)_{t\in[0,T]}\).
To this end we consider for a given process \(h\) the unique solution \(\tilde{u}\) of
\[\begin{split}\mathrm{d}\tilde{u}_{t}&=\mathcal{A}(\tilde{u}_{t})\mathrm{d}t+\mathcal{B}(\tilde{u}_{t})\mathrm{d}\bar{W}_{t}\\ &\quad+\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{t}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{t},h_{t}\right]R^{-1}_{t}\left(\mathrm{d}Y^{l}_{t}-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{t}\right]\right|\right)\frac{h_{t}+\mathbb{E}_{Y}\left[h_{t}\right]}{2}\mathrm{d}t\right).\end{split} \tag{34}\]
Well posedness of (34) is assured by the standard (global) Lipschitz conditions. We define the map \(\Xi\) by
\[\Xi(h):=H(\tilde{u}).\]
The existence and uniqueness of solutions to (33) corresponds to the existence and uniqueness of fixed points of \(\Xi\). We prove this via a Banach fixed point argument and thus have to show the contractivity of \(\Xi\). Due to the Lipschitz continuity of \(H\), this further reduces to the problem of showing that the solution map \(h\mapsto\tilde{u}\) defined by equation (34) is Lipschitz, with constant strictly smaller than \(1/\mathrm{Lip}(H)\).
Since (34) is of the form (29), the process \(\tilde{u}\) must also satisfy the uniform variance bound (31) corresponding to the law of total variance. Therefore, by the Lipschitz continuity of \(H\), we can assume that any potential fixed point \(h\) satisfies
\[\mathbb{V}\mathtt{ar}_{Y}\left[h_{t}\right]=\mathrm{tr}_{\mathbb{R}^{d_{y}}}\mathbb{C}\mathtt{ov}_{Y}\left[h_{t}\right]\leq\mathrm{Lip}(H)\beta e^{\lambda t}. \tag{35}\]
To show the contractivity of \(\Xi\), let \(h^{i},\ i=1,2\) be two given processes and denote by \(\tilde{u}^{i},\ i=1,2\) the corresponding solutions to (34). Using the uniform variance bound (35) as well as the Lipschitz continuity of \(\tilde{\mathbbm{1}}_{k}\) and its boundedness \(0\leq\tilde{\mathbbm{1}}_{k}\leq 1\), we first derive the following bound for the gain difference
\[\begin{split}&\left\|\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right\|_{L(\mathbb{R}^{d_{y}},\mathscr{H})}^{2}\\ &\leq\left(\left(\mathrm{Lip}(\tilde{\mathbbm{1}}_{k})+1\right)\sqrt{\mathrm{Lip}(H)}+1\right)^{2}\beta^{2}e^{2\lambda s}\left(\mathbb{E}_{Y}\left[\left\|\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]+\mathbb{E}_{Y}\left[\left|h_{s}^{1}-h_{s}^{2}\right|_{\mathbb{R}^{d_{y}}}^{2}\right]\right). \tag{36}\]
Using Ito's formula for the squared norm, we derive
\[\begin{split}\left\|\tilde{u}_{t}^{1}-\tilde{u}_{t}^{2}\right\|_{\mathscr{H}}^{2}&=2\int_{0}^{t}{}_{\mathscr{V}^{\prime}}\big\langle\mathcal{A}(\tilde{u}_{s}^{1})-\mathcal{A}(\tilde{u}_{s}^{2}),\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2}\big\rangle_{\mathscr{V}}\mathrm{d}s\\ &\quad+2\int_{0}^{t}\left\langle\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2},\left(\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right)R_{s}^{-1}\mathrm{d}Y_{s}^{l}\right\rangle_{\mathscr{H}}\\ &\quad-2\int_{0}^{t}\left\langle\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2},\left(\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right)R_{s}^{-1}\,\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\frac{h_{s}^{1}+\mathbb{E}_{Y}\left[h_{s}^{1}\right]}{2}\right\rangle_{\mathscr{H}}\mathrm{d}s\\ &\quad-2\int_{0}^{t}\left\langle\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2},\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]R_{s}^{-1}\,\frac{\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\left(h_{s}^{1}+\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right)-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\left(h_{s}^{2}+\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right)}{2}\right\rangle_{\mathscr{H}}\mathrm{d}s\\ &\quad+2\int_{0}^{t}\left\langle\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2},\left(\mathcal{B}(\tilde{u}_{s}^{1})-\mathcal{B}(\tilde{u}_{s}^{2})\right)\mathrm{d}\bar{W}_{s}\right\rangle_{\mathscr{H}}\\ &\quad+\sum_{k\in\mathbb{N}}\int_{0}^{t}\left\langle\nu_{k},\left(\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right)\right\rangle_{\mathscr{H}}R_{s}^{-1}\left\langle\nu_{k},\left(\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right)\right\rangle_{\mathscr{H}}^{\mathrm{T}}\mathrm{d}s\\ &\quad+\sum_{k\in\mathbb{N}}\sum_{n\in\mathbb{N}}\int_{0}^{t}q_{n}\left\langle\nu_{k},(\mathcal{B}(\tilde{u}_{s}^{1})-\mathcal{B}(\tilde{u}_{s}^{2}))e_{n}\right\rangle_{\mathscr{H}}^{2}\mathrm{d}s.\end{split}\]
Now we note that by Parseval and the one-sided Lipschitz condition (2) we have
\[\sum_{k\in\mathbb{N}}\sum_{n\in\mathbb{N}}q_{n}\left\langle\nu_{k},(\mathcal{B}(\tilde{u}_{s}^{1})-\mathcal{B}(\tilde{u}_{s}^{2}))e_{n}\right\rangle_{\mathscr{H}}^{2}=\left\|(\mathcal{B}(\tilde{u}_{s}^{1})-\mathcal{B}(\tilde{u}_{s}^{2}))\circ\sqrt{\mathcal{Q}}\right\|_{\mathrm{L}_{2}(\mathscr{H};\mathscr{H})}^{2}\leq\lambda\ \left\|\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2}\right\|_{\mathscr{H}}^{2}, \tag{37}\]
as well as
\[\begin{split}&\sum_{k\in\mathbb{N}}\left\langle\nu_{k},\left(\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right)\right\rangle_{\mathscr{H}}R_{s}^{-1}\left\langle\nu_{k},\left(\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right)\right\rangle_{\mathscr{H}}^{\mathrm{T}}\\ &=\left\|\left(\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{1},h_{s}^{1}\right]-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right)R_{s}^{-1/2}\right\|_{L(\mathbb{R}^{d_{y}},\mathscr{H})}^{2}\\ &\leq\left(\left(\mathrm{Lip}(\tilde{\mathbbm{1}}_{k})+1\right)\sqrt{\mathrm{Lip}(H)}+1\right)^{2}\beta^{2}e^{2\lambda t}\left|R_{s}^{-1/2}\right|^{2}\left(\mathbb{E}_{Y}\left[\left\|\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]+\mathbb{E}_{Y}\left[\left|h_{s}^{1}-h_{s}^{2}\right|_{\mathbb{R}^{d_{y}}}^{2}\right]\right). \tag{38}\end{split}\]
Furthermore we note that
\[\begin{split}&\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{E}_{Y}\left[\left|\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)h_{s}^{1}-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)h_{s}^{2}\right|^{2}\right]\\ &\leq 2\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{E}_{Y}\left[\left|h_{s}^{1}-h_{s}^{2}\right|^{2}\right]+2\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\mathbb{E}_{Y}\left[\left|h_{s}^{2}\right|^{2}\right]\left|\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)-\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\right|^{2}\\ &\leq 2\mathbb{E}_{Y}\left[\left|h_{s}^{1}-h_{s}^{2}\right|^{2}\right]+2\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|\right)\left(\left|\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|^{2}+\mathbb{E}_{Y}\left[\left|h_{s}^{2}-\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|^{2}\right]\right)\mathrm{Lip}(\tilde{\mathbbm{1}}_{k})^{2}\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]-\mathbb{E}_{Y}\left[h_{s}^{2}\right]\right|^{2}\\ &\leq 2\left(1+\mathrm{Lip}(\tilde{\mathbbm{1}}_{k})^{2}\left((k+1)^{2}+\mathrm{Lip}(H)\beta e^{\lambda s}\right)\right)\mathbb{E}_{Y}\left[\left|h_{s}^{1}-h_{s}^{2}\right|^{2}\right],\end{split}\]
where we used that \(\tilde{\mathbb{1}}_{k}\leq\mathbb{1}_{[0,k+1]}\) and the variance bound (35) to derive the last inequality.
The variance bounds (31) and (35) also imply that
\[\left\|\mathbb{C}\mathtt{ov}_{Y}\left[\tilde{u}_{s}^{2},h_{s}^{2}\right]\right\|_{L(\mathbb{R}^{d_{y}},\mathscr{H})}\leq\sqrt{\mathrm{Lip}(H)}\beta e^{\lambda t}. \tag{39}\]
If we now take the supremum over the time interval \([0,T]\) and the conditional expectation \(\mathbb{E}_{Y}\), then standard Cauchy-Schwarz inequalities, together with (37), (38) and the one-sided Lipschitz condition (2), yield that there exists a constant \(\kappa_{1}(T)\), only depending on the timeframe \(T\), such that
\[\begin{split}&\mathbb{E}_{Y}\left[\sup_{t\leq T}\left\|\tilde{u}_{t}^{1}-\tilde{u}_{t}^{2}\right\|_{\mathscr{H}}^{2}\right]\\ &\leq\kappa_{1}(T)\int_{0}^{T}\mathbb{E}_{Y}\left[\left\|\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]+\mathbb{E}_{Y}\left[\left\|h_{s}^{1}-h_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]\mathrm{d}s\\ &\quad+\kappa_{1}(T)\int_{0}^{T}\left(\mathbb{E}_{Y}\left[\left\|\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]+\mathbb{E}_{Y}\left[\left\|h_{s}^{1}-h_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]\right)\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{E}_{Y}\left[\left\|h_{s}^{1}+\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right\|_{\mathscr{H}}^{2}\right]\mathrm{d}s\\ &\quad+2\mathbb{E}_{Y}\left[\sup_{t\leq T}\left|\int_{0}^{t}\left\langle\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2},\left(\mathcal{B}(\tilde{u}_{s}^{1})-\mathcal{B}(\tilde{u}_{s}^{2})\right)\mathrm{d}\bar{W}_{s}\right\rangle_{\mathscr{H}}\right|\right]. \tag{40}\end{split}\]
Note that due to (35) we get
\[\begin{split}\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\mathbb{E}_{Y}\left[\left\|h_{s}^{1}+\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right\|_{\mathscr{H}}^{2}\right]&\leq 2\mathbb{E}_{Y}\left[\left\|h_{s}^{1}-\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right\|_{\mathscr{H}}^{2}\right]+8\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right|\right)\left\|\mathbb{E}_{Y}\left[h_{s}^{1}\right]\right\|_{\mathscr{H}}^{2}\\ &\leq 2\mathrm{Lip}(H)\beta e^{\lambda t}+8(k+1)^{2}.\end{split}\]
Thus if we now use the specific form of the observations (32) and take the full expectation in (40) we derive
\[\mathbb{E}\left[\sup_{t\leq T}\left\|\tilde{u}_{t}^{1}-\tilde{u}_ {t}^{2}\right\|_{\mathscr{H}}^{2}\right]\] \[\leq\kappa_{2}(T,k)\int_{0}^{T}\mathbb{E}\left[\left\|\tilde{u}_{ s}^{1}-\tilde{u}_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]+\mathbb{E}\left[\left\|h_ {s}^{1}-h_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]\mathrm{d}s\] \[+2\mathbb{E}\left[\sup_{t\leq T}\int_{0}^{t}\left<\tilde{u}_{s}^{ 1}-\tilde{u}_{s}^{2},\left(\mathcal{B}(\tilde{u}_{s}^{1})-\mathcal{B}(\tilde{u} _{s}^{2})\right)\mathrm{d}W_{s}\right>_{\mathscr{H}}\right],\]
for some constant \(\kappa_{2}(T,k)\), where we of course used that \(\mathbb{E}\left[\ \mathbb{E}_{Y}\left[\cdot\right]\right]=\mathbb{E}\left[\cdot\right]\).
To dominate the second term on the right hand side of the inequality we use (36) together with the fact that \(\tilde{\mathbbm{1}}_{l}\left(\mathbb{E}_{Y}\left[\left|H(u_{s}^{\text{ref}})\right|^{2}\right]\right)\mathbb{E}_{Y}\left[\left|H(u_{s}^{\text{ref}})\right|^{2}\right]\leq(l+1)\). For the other two terms we use the Burkholder-Davis-Gundy inequality together with (37) and (38) to derive that there exists a constant \(\kappa_{3}\left(T,k,l\right)>0\) such that
\[\mathbb{E}\left[\sup_{t\leq T}\left\|\tilde{u}_{t}^{1}-\tilde{u}_{t}^{2} \right\|_{\mathscr{H}}^{2}\right]\leq\kappa_{3}\left(T,k,l\right)\int_{0}^{T} \mathbb{E}\left[\left\|\tilde{u}_{s}^{1}-\tilde{u}_{s}^{2}\right\|_{\mathscr{H }}^{2}\right]+\mathbb{E}\left[\left\|h_{s}^{1}-h_{s}^{2}\right\|_{\mathscr{H}} ^{2}\right]\mathrm{d}s,\]
which by the (deterministic) Gronwall Lemma implies
\[\mathbb{E}\left[\sup_{t\leq T}\left\|\tilde{u}_{t}^{1}-\tilde{u}_{t}^{2}\right\|_{\mathscr{H}}^{2}\right]\leq\kappa_{3}\left(T,k,l\right)\exp\left(T\;\kappa_{3}\left(T,k,l\right)\right)\int_{0}^{T}\mathbb{E}\left[\left\|h_{s}^{1}-h_{s}^{2}\right\|_{\mathscr{H}}^{2}\right]\mathrm{d}s,\]
and thus for \(T\) small enough we indeed have the desired contraction property.
Step 2: _The stopping argument._
First we define the stopping times which we use for our argument by
\[\begin{split}\tau^{k}&:=\inf\Big\{\ t\geq 0\ :\ \left|\mathbb{E}_{Y}\left[H(\bar{u}_{t}^{k})\right]\right|^{2}>k\ \Big\}\\ \tau_{\text{ref}}^{l}&:=\inf\Big\{\ t\geq 0\ :\ \mathbb{E}_{Y}\left[\left|H(u_{t}^{\text{ref}})\right|^{2}\right]>l\ \Big\}\,,\end{split} \tag{41}\]
and note that both are stopping times with respect to the filtration generated by \(Y\), implying that for any stochastic process \((z_{t})_{t\geq 0}\) and any (suitably integrable) functions \(f,g\), the identities
\[\begin{split} g\left(\mathbb{E}_{Y}\left[f(z_{\min\{\tau^{k},t \}})\right]\right)&=\left.g\left(\mathbb{E}_{Y}\left[f(z_{s}) \right]\right)\right|_{s=\min\{\tau^{k},t\}}\\ g\left(\mathbb{E}_{Y}\left[f(z_{\min\{\tau_{\text{ref}}^{l},t\}}) \right]\right)&=\left.g\left(\mathbb{E}_{Y}\left[f(z_{s})\right] \right)\right|_{s=\min\{\tau_{\text{ref}}^{l},t\}}\end{split}\]
hold and therefore \(\bar{u}^{k}\) is a solution to the EnKBF (28) on the random time interval \([0,\min\{\tau^{k},\tau_{\text{ref}}^{l}\}]\). By the uniqueness of solutions to (33), \(\bar{u}^{k}\) and \(\bar{u}^{k+1}\) must even coincide on \([0,\min\{\tau^{k},\tau_{\text{ref}}^{l}\}]\). Thus we can construct a solution to (28) using the solutions to (33), and in order to conclude existence and uniqueness of the EnKBF, we just have to show that
\[\bigcup_{k,l\in\mathbb{N}}\left\{\tau^{k}>T\right\}\cap\left\{\tau_{\text{ref }}^{l}>T\right\}\]
defines a covering of the sample space almost surely.
To this end we first note that
\[\begin{split}\mathrm{d}\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}&=2\;{}_{\mathscr{V}^{\prime}}\big\langle\mathcal{A}(\bar{u}_{t}^{k}),\bar{u}_{t}^{k}\big\rangle_{\mathscr{V}}\mathrm{d}t+2\big\langle\bar{u}_{t}^{k},\mathcal{B}(\bar{u}_{t}^{k})\mathrm{d}\bar{W}_{t}\big\rangle_{\mathscr{H}}+\mathrm{tr}_{\mathscr{H}}\left[\mathcal{B}(\bar{u}_{t}^{k})\sqrt{\mathcal{Q}}\left(\mathcal{B}(\bar{u}_{t}^{k})\sqrt{\mathcal{Q}}\right)^{\prime}\right]\mathrm{d}t\\ &\quad+2\left\langle\bar{u}_{t}^{k},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\left(\mathrm{d}Y_{t}^{l}-\frac{H^{k}\left(\bar{u}_{t}^{k}\right)+\mathbb{E}_{Y}\left[H^{k}\left(\bar{u}_{t}^{k}\right)\right]}{2}\mathrm{d}t\right)\right\rangle_{\mathscr{H}}\\ &\quad+\mathrm{tr}_{\mathscr{H}}\left[\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{Y}\left[H^{k}\left(\bar{u}_{t}^{k}\right),\bar{u}_{t}^{k}\right]\right].\end{split}\]
Taking the conditional expectation thus gives us
\[\begin{split}&\mathrm{d}\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]\\ &=2\mathbb{E}_{Y}\left[{}_{\mathscr{V}^{\prime}}\left\langle\mathcal{A}(\bar{u}_{t}^{k}),\bar{u}_{t}^{k}\right\rangle_{\mathscr{V}}\right]\mathrm{d}t+\mathbb{E}_{Y}\left[\mathrm{tr}_{\mathscr{H}}\left[\mathcal{B}(\bar{u}_{t}^{k})\sqrt{\mathcal{Q}}\left(\mathcal{B}(\bar{u}_{t}^{k})\sqrt{\mathcal{Q}}\right)^{\prime}\right]\right]\mathrm{d}t\\ &\quad+2\mathbb{E}_{Y}\left[\left\langle\bar{u}_{t}^{k},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\left(\mathrm{d}Y_{t}^{l}-\frac{H^{k}\left(\bar{u}_{t}^{k}\right)+\mathbb{E}_{Y}\left[H^{k}\left(\bar{u}_{t}^{k}\right)\right]}{2}\mathrm{d}t\right)\right\rangle_{\mathscr{H}}\right]\\ &\quad+\tilde{\mathbbm{1}}_{l}\left(\mathbb{E}_{Y}\left[\left|H(u_{t}^{\text{ref}})\right|^{2}\right]\right)^{2}\mathrm{tr}_{\mathscr{H}}\left[\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{Y}\left[H^{k}\left(\bar{u}_{t}^{k}\right),\bar{u}_{t}^{k}\right]\right]\mathrm{d}t.\end{split} \tag{42}\]
The first two terms on the right hand side can be bounded using the growth condition (3) and the diffusivity bound (7). The last term is just the squared Schatten 2-norm (i.e. Hilbert-Schmidt norm) of the operator
\(\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1/2}\), which we can estimate using Parseval and the robust variance bound (31) as
\[\begin{split}&\mathrm{tr}_{\mathscr{H}}\left[\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{Y}\left[H^{k}\left(\bar{u}_{t}^{k}\right),\bar{u}_{t}^{k}\right]\right]\leq\mathrm{tr}_{\mathscr{H}}\left[\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{Y}\left[H\left(\bar{u}_{t}^{k}\right),\bar{u}_{t}^{k}\right]\right]\\ &\leq\sum_{j\in\mathbb{N}}\mathbb{C}\mathtt{ov}_{Y}\left[\left\langle\nu_{j},\bar{u}_{t}^{k}\right\rangle_{\mathscr{H}},H\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\mathbb{C}\mathtt{ov}_{Y}\left[H\left(\bar{u}_{t}^{k}\right),\left\langle\nu_{j},\bar{u}_{t}^{k}\right\rangle_{\mathscr{H}}\right]\\ &\leq\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}-\mathbb{E}_{Y}\left[\bar{u}_{t}^{k}\right]\right\|_{\mathscr{H}}^{2}\right]\left|R_{t}^{-1}\right|\mathbb{E}_{Y}\left[\left|H(\bar{u}_{t}^{k})-\mathbb{E}_{Y}\left[H(\bar{u}_{t}^{k})\right]\right|^{2}\right]\leq\mathrm{Lip}(H)\left|R_{t}^{-1}\right|\beta^{2}e^{2\lambda t}.\end{split}\]
Thus we can bound (42) by
\[\begin{split}\mathrm{d}\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]\leq&\left(2\alpha_{H}\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]+2\alpha_{0}+\beta+\mathrm{Lip}(H)\left|R_{t}^{-1}\right|\beta^{2}e^{2\lambda t}\right)\mathrm{d}t\\ &+2\mathbb{E}_{Y}\left[\left\langle\bar{u}_{t}^{k},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\mathrm{d}Y_{t}^{l}\right\rangle_{\mathscr{H}}\right]\\ &-2\mathbb{E}_{Y}\left[\left\langle\bar{u}_{t}^{k},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\frac{H^{k}(\bar{u}_{t}^{k})+\mathbb{E}_{Y}\left[H^{k}(\bar{u}_{t}^{k})\right]}{2}\right\rangle_{\mathscr{H}}\right]\mathrm{d}t. \tag{43}\end{split}\]
We use (39) and \(\tilde{\mathbb{1}}_{k}\leq 1\) to derive that
\[\begin{split}&\left|\mathbb{E}_{Y}\left[\left\langle\bar{u}_{t}^{k},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\frac{H^{k}(\bar{u}_{t}^{k})+\mathbb{E}_{Y}\left[H^{k}(\bar{u}_{t}^{k})\right]}{2}\right\rangle_{\mathscr{H}}\right]\right|\\ &\leq 2\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]+\tilde{\mathbbm{1}}_{k}\left(\left|\mathbb{E}_{Y}\left[H(\bar{u}_{t}^{k})\right]\right|\right)^{2}\left\|\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]\right\|_{L(\mathbb{R}^{d_{y}},\mathscr{H})}^{2}\left|R_{t}^{-1}\right|\mathbb{E}_{Y}\left[|H(\bar{u}_{t}^{k})|^{2}\right]\\ &\leq\left(2+\mathrm{Lip}(H)^{3}\beta^{2}e^{2\lambda t}\left|R_{t}^{-1}\right|\right)\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]+\mathrm{Lip}(H)\beta^{2}e^{2\lambda t}\left|R_{t}^{-1}\right|\left|H(0)\right|^{2}.\end{split}\]
Thus, we note that there exist constants \(\kappa_{4}(T)\) and \(\kappa_{5}(T)\), only depending on the timeframe \(T\), such that
\[\begin{split}\mathrm{d}\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]\leq&\left(\kappa_{4}(T)\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]+\kappa_{5}(T)\right)\mathrm{d}t\\ &+2\mathbb{E}_{Y}\left[\left\langle\bar{u}_{t}^{k},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\tilde{\mathbbm{1}}_{l}\left(\mathbb{E}_{Y}\left[\left|H(u_{t}^{\mathrm{ref}})\right|^{2}\right]\right)H(u_{t}^{\mathrm{ref}})\right\rangle_{\mathscr{H}}\right]\mathrm{d}t\\ &+2\mathbb{E}_{Y}\left[\left\langle\bar{u}_{t}^{k},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{k},H^{k}\left(\bar{u}_{t}^{k}\right)\right]R_{t}^{-1}\tilde{\mathbbm{1}}_{l}\left(\mathbb{E}_{Y}\left[\left|H(u_{t}^{\mathrm{ref}})\right|^{2}\right]\right)\Gamma_{t}\mathrm{d}V_{t}\right\rangle_{\mathscr{H}}\right]. \tag{44}\end{split}\]
Again using (39) and the Burkholder-Davis-Gundy inequality we derive that there exist constants \(\kappa_{6}(T)\), depending solely on \(T\), and \(\kappa_{7}(T,l)\), depending on \(T\) and \(l\), such that
\[\mathbb{E}\left[\sup_{t\leq T}\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]\right]\leq\kappa_{6}(T)\int_{0}^{T}\mathbb{E}\left[\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]\right]\mathrm{d}t+\mathbb{E}\left[\left\|u_{0}\right\|_{\mathscr{H}}^{2}\right]+T\kappa_{7}(T,l). \tag{45}\]
Thus, by the Gronwall Lemma, we derive that for fixed \(l\) and \(T\)
\[\mathbb{E}\left[\sup_{t\leq T}\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]\right]\leq\exp\left(T\kappa_{6}(T)\right)\left(\mathbb{E}\left[\left\|u_{0}\right\|_{\mathscr{H}}^{2}\right]+T\kappa_{7}(T,l)\right), \tag{46}\]
which implies that almost surely there exists a \(k\) such that \(\sup_{t\leq T}\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]\leq k\). By the Lipschitz continuity of \(H\) and the inequality \(\left|\mathbb{E}_{Y}\left[H(\bar{u}_{t}^{k})\right]\right|\leq|H(0)|+\mathrm{Lip}(H)\sqrt{\mathbb{E}_{Y}\left[\left\|\bar{u}_{t}^{k}\right\|_{\mathscr{H}}^{2}\right]}\) this in turn implies that
\[\bigcup_{k\in\mathbb{N}}\left\{\tau^{k}>T\right\}\cap\left\{\tau_{\mathrm{ref} }^{l}>T\right\}=\left\{\tau_{\mathrm{ref}}^{l}>T\right\}\text{ almost surely.}\]
Since \(\sup_{t\leq T}\mathbb{E}_{Y}\left[\left|H(u_{t}^{\mathrm{ref}})\right|^{2}\right]\) is finite almost surely, we can thus indeed conclude that there exists a solution to the EnKBF (28), defined on every event \(\left\{\tau^{k}>T\right\}\cap\left\{\tau_{\mathrm{ref}}^{l}>T\right\}\) by the sequence of \(\bar{u}^{k}\).
To conclude uniqueness of solutions to (28) one can simply employ stopping times (41) and the same Gronwall/contraction argument as in Step 1.
**Remark 29**.: _Note that even though in the proof above we used the specific form of the observations \(Y\), it actually does not matter that the true observation function and the modelled observation function coincide, i.e. if \(\mathrm{d}Y_{t}=\mathfrak{C}(u_{t}^{\text{ref}})\mathrm{d}t+\Gamma_{t}\mathrm{d}V_{t}\) with \(\mathfrak{C}\neq H\), then the proof would still hold, as long as \(\mathfrak{C}\) is assumed to be Lipschitz. Therefore, as an immediate corollary of our chosen fixed point argument, one derives the continuity of the EnKBF with respect to perturbations of the modelled observations \(H\). The continuous dependence on the signal parameters \(\mathcal{A}\), \(\mathcal{B}\) and the initial condition \(u_{0}\) can easily be shown as well. Only the robustness with respect to the observation stream \(Y\) is a delicate matter, due to the discontinuity of the Ito-Lyons map._
## 7 The EnKBF as an interacting particle system
For the sake of brevity in formulas we make the following definition.
**Definition 30**.: _For any \(\mathfrak{v}=(v_{1},\cdots,v_{N})\in\mathscr{H}^{N},\ N\in\mathbb{N}\) we set_
\[\begin{split}\mathbb{E}^{N}\left[\mathfrak{v}\right]& :=\frac{1}{N}\sum_{i=1}^{N}v^{i},\ \mathbb{E}_{H}^{N}\left[\mathfrak{v}\right]:=\frac{1}{N}\sum_{i=1}^{N}H(v^{i} )\\ \mathbb{C}_{H}^{N}\left[\mathfrak{v}\right]&:=\frac{ 1}{N}\sum_{i=1}^{N}\left(v^{i}-\mathbb{E}^{N}\left[\mathfrak{v}\right] \right)\ \left(H(v^{i})-\mathbb{E}_{H}^{N}\left[\mathfrak{v}\right]\right)^{ \prime}.\end{split} \tag{47}\]
_We use the normalization by \(1/N\) for the empirical covariance instead of the usual unbiased normalization by \(1/(N-1)\) just for notational convenience in the calculations that are to follow._
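For concreteness, the statistics of Definition 30 take the following form for a finite-dimensional ensemble stored as an \(N\times d_{x}\) array; this numpy sketch, with our own naming, is only meant to fix the conventions and the \(1/N\) normalization.

```python
import numpy as np

def empirical_stats(ens, H):
    """Empirical mean E^N, observed mean E^N_H and cross-covariance C^N_H
    of Definition 30, with the 1/N normalization used there.

    ens : array of shape (N, d_x), the ensemble (v^1, ..., v^N)
    H   : callable R^{d_x} -> R^{d_y}
    """
    N = ens.shape[0]
    H_ens = np.apply_along_axis(H, 1, ens)            # shape (N, d_y)
    mean = ens.mean(axis=0)                           # E^N[v]
    mean_H = H_ens.mean(axis=0)                       # E^N_H[v]
    cov_H = (ens - mean).T @ (H_ens - mean_H) / N     # C^N_H[v], shape (d_x, d_y)
    return mean, mean_H, cov_H
```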
The mean field EnKBF (28) is naturally approximated by a system of interacting S(P)DEs
\[\begin{split}\mathrm{d}u_{t}^{i}&=\mathcal{A}(u_{t }^{i})\mathrm{d}t+\mathcal{B}(u_{t}^{i})\mathrm{d}\bar{W}_{t}^{i}\\ &+\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]R_{t}^{-1} \left(\mathrm{d}Y_{t}-\frac{H(u_{t}^{i})+\mathbb{E}_{H}^{N}\left[\mathfrak{u}_ {t}^{N}\right]}{2}\mathrm{d}t\right),\text{ for }i=1,\cdots,N\end{split} \tag{48}\]
where \((\bar{W}^{i})_{i=1,\cdots,N}\) are independent copies of the Wiener process \(\bar{W}\), \(u_{0}^{i}\) are independent copies of \(u_{0}\) and \(\mathfrak{u}_{t}^{N}:=(u_{t}^{i})_{i=1,\cdots,N}\).
The interacting system of S(P)DEs (48) is often referred to as the deterministic EnKBF, which is the continuous time counterpart to the filter derived by Sakov and Oke [42].
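To make the structure of (48) concrete, here is a hedged Euler-Maruyama sketch of a single time step for a finite-dimensional toy signal; the explicit time discretization, the simplification \(\mathcal{Q}=I\) and all names are our own additions, and an SPDE setting would replace the state space by, e.g., a Galerkin truncation.

```python
import numpy as np

def enkbf_step(ens, dY, dt, A, B, H, R_inv, rng):
    """One Euler-Maruyama step of the deterministic particle EnKBF (48)
    for a finite-dimensional toy signal (illustrative sketch, Q = I).

    ens   : (N, d_x) current ensemble
    dY    : (d_y,) observation increment over the step
    A, B  : drift (vector-valued) and diffusion (matrix-valued) coefficients
    H     : observation function R^{d_x} -> R^{d_y}
    R_inv : (d_y, d_y) inverse observation noise covariance
    """
    N, d_x = ens.shape
    H_ens = np.apply_along_axis(H, 1, ens)                        # (N, d_y)
    mean_H = H_ens.mean(axis=0)                                   # E^N_H
    cov_H = (ens - ens.mean(axis=0)).T @ (H_ens - mean_H) / N     # C^N_H
    gain = cov_H @ R_inv                                          # shared gain
    new = np.empty_like(ens)
    for i in range(N):
        dW = rng.standard_normal(d_x) * np.sqrt(dt)               # own Wiener increment
        innov = dY - 0.5 * (H_ens[i] + mean_H) * dt               # dY - (H(u^i)+E^N_H)/2 dt
        new[i] = ens[i] + A(ens[i]) * dt + B(ens[i]) @ dW + gain @ innov
    return new
```

Note that each particle is driven by its own Wiener increment, matching the independent copies \(\bar{W}^{i}\) in (48), while the gain \(\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]R_{t}^{-1}\) is shared across the whole ensemble.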
### Analysis of the particle approximations
While (48) is just a system of interacting ordinary SPDEs for which local one-sided Lipschitz conditions are enough to derive well posedness [34], it does not seem to satisfy the usual growth conditions for unbounded observation functions \(H\). In [32] the well posedness was proven for the finite dimensional setting by showing that blow ups do not occur in finite times. This is of course not sufficient to conclude well posedness in infinite dimensions. Instead we employ a partial stopping argument, similar to the one employed in the proof of Theorem 28. The law of total variance will again play a key role, and so, to keep formulas simple we make the following definition.
**Definition 31**.: _For any \(N\in\mathbb{N}\) and any ensemble \(\mathfrak{v}=(v^{1},\cdots,v^{N})\in\mathscr{H}^{N}\) of \(N\) elements of \(\mathscr{H}\), we define the empirical variance \(\sigma^{N}[\mathfrak{v}]\) by_
\[\sigma^{N}[\mathfrak{v}]:=\frac{1}{N}\sum_{i=1}^{N}\left\|v^{i}-\mathbb{E}^{N}[\mathfrak{v}]\right\|_{\mathscr{H}}^{2}.\]
_And similarly we define the empirical observed variance by_
\[\sigma^{N,H}[\mathfrak{v}]:=\frac{1}{N}\sum_{i=1}^{N}\left|H(v^{i})-\mathbb{E}^{N}_{H}[\mathfrak{v}]\right|^{2}.\]
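In the array conventions of the sketches above, these two quantities read as follows; again, this is only an illustrative finite-dimensional sketch.

```python
import numpy as np

def empirical_variances(ens, H):
    """sigma^N and sigma^{N,H} of Definition 31 for an (N, d_x) ensemble."""
    H_ens = np.apply_along_axis(H, 1, ens)
    var_state = np.mean(np.sum((ens - ens.mean(axis=0)) ** 2, axis=1))
    var_obs = np.mean(np.sum((H_ens - H_ens.mean(axis=0)) ** 2, axis=1))
    return var_state, var_obs
```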
We are now in a position to formulate and prove the following Lemma.
**Lemma 32**.: _If the conditions in Assumption (2) and (7) are satisfied, there exists a unique solution to the nonlinear EnKBF (48)._
Proof.: First we note that for any fixed \(k\in\mathbb{N}\), the system
\[\begin{split}\mathrm{d}u^{i}_{t}&=\mathcal{A}(u^{i}_{t})\mathrm{d}t+\mathcal{B}(u^{i}_{t})\mathrm{d}\bar{W}^{i}_{t}\\ &\quad+\tilde{\mathbbm{1}}_{k}\left(\left\|\mathbb{C}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]\right\|_{L(\mathbb{R}^{d_{y}},\mathscr{H})}^{2}\right)\mathbb{C}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]R^{-1}_{t}\left(\mathrm{d}Y_{t}-\frac{H(u^{i}_{t})+\mathbb{E}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]}{2}\mathrm{d}t\right),\end{split} \tag{49}\]
for \(i=1,\cdots,N\), satisfies standard one-sided Lipschitz and growth conditions and thus has a unique solution. To keep formulas simple we will omit the argument of \(\tilde{\mathbbm{1}}_{k}\left(\left\|\mathbb{C}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\right)\) and instead simply write \(\tilde{\mathbbm{1}}_{k}\) in the rest of this proof.
We note that the ensemble mean \(\mathbb{E}^{N}\left[\mathfrak{u}^{N}_{t}\right]\) satisfies
\[\mathrm{d}\mathbb{E}^{N}\left[\mathfrak{u}^{N}_{t}\right]=\frac{1}{N}\sum_{j=1}^{N}\mathcal{A}(u^{j}_{t})\mathrm{d}t+\frac{1}{N}\sum_{j=1}^{N}\mathcal{B}(u^{j}_{t})\mathrm{d}\bar{W}^{j}_{t}+\tilde{\mathbbm{1}}_{k}\ \mathbb{C}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]R^{-1}_{t}\left(\mathrm{d}Y_{t}-\mathbb{E}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]\mathrm{d}t\right). \tag{50}\]
This gives us the evolution equation for the centered particles
\[\begin{split}\mathrm{d}\left(u^{i}_{t}-\mathbb{E}^{N}\left[\mathfrak{u}^{N}_{t}\right]\right)&=\left(\mathcal{A}(u^{i}_{t})-\frac{1}{N}\sum_{j=1}^{N}\mathcal{A}(u^{j}_{t})\right)\mathrm{d}t+\left(\mathcal{B}(u^{i}_{t})\mathrm{d}\bar{W}^{i}_{t}-\frac{1}{N}\sum_{j=1}^{N}\mathcal{B}(u^{j}_{t})\mathrm{d}\bar{W}^{j}_{t}\right)\\ &\quad-\tilde{\mathbbm{1}}_{k}\ \mathbb{C}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]R^{-1}_{t}\frac{H(u^{i}_{t})-\mathbb{E}^{N}_{H}\left[\mathfrak{u}^{N}_{t}\right]}{2}\mathrm{d}t.\end{split}\]
Note that by Parseval one easily verifies that
\[\begin{split}&\frac{1}{N}\sum_{i=1}^{N}\left\langle u_{t}^{i}-\mathbb{E}^{N}\left[\mathfrak{u}_{t}^{N}\right],\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]R_{t}^{-1}\left(H(u_{t}^{i})-\mathbb{E}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right)\right\rangle_{\mathscr{H}}\\ &=\frac{1}{N}\sum_{i=1}^{N}\sum_{k\in\mathbb{N}}\left\langle\nu_{k},u_{t}^{i}-\mathbb{E}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right\rangle_{\mathscr{H}}\left\langle\nu_{k},\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]R_{t}^{-1}\left(H(u_{t}^{i})-\mathbb{E}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right)\right\rangle_{\mathscr{H}}\\ &=\operatorname{tr}_{\mathscr{H}}\left[\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]R_{t}^{-1}\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]^{\prime}\right].\end{split}\]
Therefore, by Ito's formula we derive the following equation for the average deviation from the ensemble mean
\[\begin{split}\mathrm{d}\sigma^{N}[\mathfrak{u}_{t}^{N}]&=\frac{2}{N}\sum_{i=1}^{N}{}_{\mathscr{V}^{\prime}}\left\langle\mathcal{A}(u_{t}^{i})-\frac{1}{N}\sum_{j=1}^{N}\mathcal{A}(u_{t}^{j}),u_{t}^{i}-\mathbb{E}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right\rangle_{\mathscr{V}}\mathrm{d}t+\mathrm{d}\mathfrak{m}_{t}^{N}\\ &\quad-\tilde{\mathbbm{1}}_{k}\,\operatorname{tr}_{\mathscr{H}}\left[\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]R_{t}^{-1}\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]^{\prime}\right]\mathrm{d}t\\ &\quad+\frac{1}{N}\sum_{i=1}^{N}\operatorname{tr}_{\mathscr{H}}\left[\mathcal{B}(u_{t}^{i})\sqrt{\mathcal{Q}}\left(\mathcal{B}(u_{t}^{i})\sqrt{\mathcal{Q}}\right)^{\prime}\right]\mathrm{d}t\\ &\quad+\frac{1}{N^{3}}\sum_{i=1}^{N}\sum_{j\neq i}\operatorname{tr}_{\mathscr{H}}\left[\mathcal{B}(u_{t}^{j})\sqrt{\mathcal{Q}}\left(\mathcal{B}(u_{t}^{j})\sqrt{\mathcal{Q}}\right)^{\prime}\right]\mathrm{d}t,\end{split} \tag{51}\]
where \(\mathfrak{m}^{N}\) denotes the local martingale given by
\[\begin{split}\mathrm{d}\mathfrak{m}_{t}^{N}&:=\frac {2}{N}\sum_{i=1}^{N}\left\langle u_{t}^{i}-\mathbb{E}^{N}\left[\mathfrak{u}_{t }^{N}\right],\left(\mathcal{B}(u_{t}^{i})\mathrm{d}\bar{W}_{t}^{i}-\frac{1}{N} \sum_{j=1}^{N}\mathcal{B}(u_{t}^{j})\mathrm{d}\bar{W}_{t}^{j}\right)\right\rangle _{\mathscr{H}}\\ &=\frac{2}{N}\sum_{i=1}^{N}\left\langle u_{t}^{i}-\mathbb{E}^{N} \left[\mathfrak{u}_{t}^{N}\right],\mathcal{B}(u_{t}^{i})\mathrm{d}\bar{W}_{t} ^{i}\right\rangle_{\mathscr{H}},\text{ with }\mathfrak{m}_{0}^{N}=0.\end{split} \tag{52}\]
First we note that in (51) we can replace \(\frac{1}{N}\sum_{j=1}^{N}\mathcal{A}(u_{t}^{j})\) by \(\mathcal{A}\left(\mathbb{E}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right)\). Thus by using the one-sided Lipschitz condition (2) and Assumption (7) as well as the positivity of the trace
\(\operatorname{tr}_{\mathscr{H}}\left[\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^ {N}\right]R_{t}^{-1}\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]^{ \prime}\right]\) we derive the inequality
\[\mathrm{d}\sigma^{N}[\mathfrak{u}_{t}^{N}]\leq\left(2\lambda\ \sigma^{N}[ \mathfrak{u}_{t}^{N}]+\beta\right)\mathrm{d}t+\mathrm{d}\mathfrak{m}_{t}^{N}. \tag{53}\]
Since \(\mathfrak{m}^{N}\) is a real valued local martingale we can deduce by the stochastic Gronwall Lemma [44, Theorem 4] that
\[\mathbb{E}\left[\sup_{t\leq T}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\|u_{t}^{i}-\mathbb{E}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right\|_{\mathscr{H}}^{2}}\right]=\mathbb{E}\left[\sup_{t\leq T}\sqrt{\sigma^{N}[\mathfrak{u}_{t}^{N}]}\right]\leq(\pi+1)\sqrt{\beta}e^{\lambda T}. \tag{54}\]
Due to the Lipschitz continuity of \(H\), this also gives a uniform bound for \(\left\|\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\). Since (49) coincides with (48) on \(\left\{\left\|\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\leq k\right\}\), we have thus derived the well posedness of (48).
**Remark 33** (Literature).: _As already mentioned well posedness of the particle system (48) was proven in [32]. An extension of this proof to the correlated noise framework, which requires the control of singular terms can be found in [21]._
_In the thesis [28] it was shown that the discrete EnKF is well defined in an infinite dimensional separable Hilbert space setting. Finally we mention that the seminal paper [29] considered the well posedness and accuracy of both discrete and continuous time EnKFs for a class of signals that included the 2D Navier-Stokes equation. Existence of strong solutions to the continuous time EnKF (48) with complete observations (\(H=\mathrm{id}_{\mathscr{H}}\)) was assumed and it was shown that solutions do not blow up._
The key identity in the proof of Lemma 32 was (51), which is a stochastic version of the law of total variance (30), i.e. the particle approximation (48) also satisfies (an empirical version of) the law of total variance up to martingale fluctuations. If (48) is to be a good approximation of (28), one would expect these fluctuations to become small as the number of particles is increased sufficiently. Indeed, this is the case, as due to Assumption 7, we derive
\[\begin{split}\mathrm{d}\left[\mathfrak{m}^{N}\right]_{t}& =\frac{2}{N^{2}}\sum_{i=1}^{N}\left\langle u_{t}^{i}-\mathbb{E}^{ N}\left[\mathfrak{u}_{t}^{N}\right],\mathcal{B}(u_{t}^{i})\mathcal{Q}\mathcal{B}(u_ {t}^{i})^{\prime}(u_{t}^{i}-\mathbb{E}^{N}\left[\mathfrak{u}_{t}^{N}\right] )\right\rangle_{\mathscr{H}}\mathrm{d}t\\ &\leq\frac{2\beta}{N}\frac{1}{N}\sum_{i=1}^{N}\left\|u_{t}^{i}- \mathbb{E}^{N}\left[\mathfrak{u}_{t}^{N}\right]\right\|_{\mathscr{H}}^{2} \mathrm{d}t=\frac{2\beta}{N}\sigma^{N}[\mathfrak{u}_{t}^{N}]\mathrm{d}t.\end{split} \tag{55}\]
The last inequality is a consequence of the fact that the trace is invariant under a change of orthonormal basis and that for every nonzero vector one can find an orthonormal basis containing this vector.
Since we were able to bound \(\sigma^{N}[\mathfrak{u}_{t}^{N}]\) uniformly in time, the quadratic variation of \(\mathfrak{m}^{N}\) decreases to zero as \(N\to\infty\). Thus, an empirical version of the law of total variance is almost satisfied for large ensemble sizes \(N\). For our convergence proof we will need a more rigorous quantification of this fact in the form of exponential moment bounds of the empirical variance. Such bounds are delicate, as the ensemble \(\mathfrak{u}^{N}\) will likely exhibit Gaussian tail behaviour and thus \(\mathbb{E}\left[\sup_{t\leq T}\exp\left(r\sigma^{N}[\mathfrak{u}_{t}^{N}]\right)\right]\) might not be finite for all values of \(r\geq 0\). However, as \(N\to\infty\) one would expect \(\sigma^{N}\) to become deterministic, and as such any exponential moment should exist for \(N\) sufficiently large. We prove this fact in the following Lemma by employing a Gronwall argument.
**Lemma 34**.: _Let \(q\geq 0\) be arbitrary. Then for any \(N\in\mathbb{N}\) such that \(N>2\beta q\ e^{(2\lambda+1)T}\) we have_
\[\mathbb{E}\left[\sup_{t\leq T}\exp\left(q\ \sigma^{N}\left[\mathfrak{u}_{t}^{N} \right]\right)\right]\leq(\pi+1)\exp\left(\frac{q\left(e^{(2\lambda+1)T}-1 \right)}{2(2\lambda+1)}\right)\mathbb{E}\left[\exp\left(2qe^{(2\lambda+1)T} \sigma^{N}\left[\mathfrak{u}_{0}^{N}\right]\right)\right].\]
_In particular, the \(q\)-th exponential moment of the path of \(\sigma^{N}[\mathfrak{u}^{N}]\) exists up to time \(T\), if the \(\left(2qe^{(2\lambda+1)T}\right)\)-th exponential moment of the initial empirical variance \(\sigma^{N}[\mathfrak{u}_{0}^{N}]\) exists._
Proof.: Let \(\mathfrak{a}:=2\lambda+1\) (see Assumption 2) and \(\mathfrak{b}:=qe^{\mathfrak{a}T}\). We define the process \(\mathfrak{s}_{t}:=2\mathfrak{b}\,e^{-\mathfrak{a}t}\sigma^{N}[\mathfrak{u}_{t}^{N}]\). Then, using inequality (53), we derive the inequality
\[\begin{split}\mathrm{d}\mathfrak{s}_{t}=2\mathfrak{b}e^{-\mathfrak{a}t}\mathrm{d}\sigma^{N}[\mathfrak{u}_{t}^{N}]-2\mathfrak{a}\mathfrak{b}e^{-\mathfrak{a}t}\sigma^{N}[\mathfrak{u}_{t}^{N}]\mathrm{d}t&\leq(2\lambda-\mathfrak{a})2\mathfrak{b}e^{-\mathfrak{a}t}\sigma^{N}\left[\mathfrak{u}_{t}^{N}\right]\mathrm{d}t+2\mathfrak{b}\beta e^{-\mathfrak{a}t}\mathrm{d}t+2\mathfrak{b}e^{-\mathfrak{a}t}\mathrm{d}\mathfrak{m}_{t}^{N}\\ &=(2\lambda-\mathfrak{a})\mathfrak{s}_{t}\mathrm{d}t+2\mathfrak{b}\beta e^{-\mathfrak{a}t}\mathrm{d}t+2\mathfrak{b}e^{-\mathfrak{a}t}\mathrm{d}\mathfrak{m}_{t}^{N}.\end{split}\]
Furthermore we derive from (51) the form of the quadratic variation of \(\mathfrak{s}\) and from (55) the estimate
\[\mathrm{d}\left[\mathfrak{s}\right]_{t}=(2\mathfrak{b})^{2}e^{-2at}\mathrm{d} \left[\mathfrak{m}^{N}\right]_{t}\leq(2\mathfrak{b})^{2}e^{-2at}\frac{2\beta}{N }\sigma^{N}[\mathfrak{u}_{t}^{N}]\mathrm{d}t=2\frac{2\beta\mathfrak{b}e^{- \mathfrak{a}t}}{N}\mathfrak{s}_{t}\mathrm{d}t.\]
These inequalities together with Ito's formula give us the following inequality
\[\begin{split}\mathrm{d}\exp(\mathfrak{s}_{t})&=\exp(\mathfrak{s}_{t})\mathrm{d}\mathfrak{s}_{t}+\frac{1}{2}\exp(\mathfrak{s}_{t})\mathrm{d}\left[\mathfrak{s}\right]_{t}\\ &\leq(2\lambda-\mathfrak{a})\mathfrak{s}_{t}\exp(\mathfrak{s}_{t})\mathrm{d}t+2\mathfrak{b}\beta e^{-\mathfrak{a}t}\exp(\mathfrak{s}_{t})\mathrm{d}t+2\mathfrak{b}e^{-\mathfrak{a}t}\exp(\mathfrak{s}_{t})\mathrm{d}\mathfrak{m}_{t}^{N}+\frac{2\beta\mathfrak{b}e^{-\mathfrak{a}t}}{N}\mathfrak{s}_{t}\exp(\mathfrak{s}_{t})\mathrm{d}t\\ &=\left(2\lambda-\mathfrak{a}+\frac{2\beta\mathfrak{b}e^{-\mathfrak{a}t}}{N}\right)\mathfrak{s}_{t}\exp(\mathfrak{s}_{t})\mathrm{d}t+2\mathfrak{b}\beta e^{-\mathfrak{a}t}\exp(\mathfrak{s}_{t})\mathrm{d}t+2\mathfrak{b}e^{-\mathfrak{a}t}\exp(\mathfrak{s}_{t})\mathrm{d}\mathfrak{m}_{t}^{N}.\end{split}\]
Due to our assumptions we have \(\mathfrak{a}>2\lambda+\frac{2\beta\mathfrak{b}e^{-\mathfrak{a}t}}{N}\) and thus derive the stochastic inequality
\[\mathrm{d}\exp(\mathfrak{s}_{t})\leq 2\mathfrak{b}\beta e^{-\mathfrak{a}t}\exp(\mathfrak{s}_{t})\mathrm{d}t+2\mathfrak{b}e^{-\mathfrak{a}t}\exp(\mathfrak{s}_{t})\mathrm{d}\mathfrak{m}_{t}^{N}.\]
Since \(2\mathfrak{b}e^{-\mathfrak{a}t}\exp(\mathfrak{s}_{t})\mathrm{d}\mathfrak{m}_{t}^{N}\) defines a local martingale, the stochastic Gronwall inequality [44] gives us
\[\begin{split}\mathbb{E}\left[\sup_{t\leq T}\exp\left(q\ \sigma^{N}\left[\mathfrak{u}_{t}^{N}\right]\right)\right]&\leq\mathbb{E}\left[\sup_{t\leq T}\sqrt{\exp\left(\mathfrak{s}_{t}\right)}\right]\leq(\pi+1)\exp\left(\frac{q}{2}e^{\mathfrak{a}T}\int_{0}^{T}e^{-\mathfrak{a}s}\mathrm{d}s\right)\mathbb{E}\left[\exp\left(\mathfrak{s}_{0}/2\right)\right]\\ &\leq(\pi+1)\exp\left(\frac{q\left(e^{(2\lambda+1)T}-1\right)}{2(2\lambda+1)}\right)\mathbb{E}\left[\exp\left(2qe^{(2\lambda+1)T}\sigma^{N}\left[\mathfrak{u}_{0}^{N}\right]\right)\right],\end{split}\]
which concludes the proof.
**Remark 35**.: _In the proof of Lemma 34 we used a standard test function for our Gronwall argument. Since we have good control of the quadratic variation of \(\mathfrak{m}^{N}\), we could also have used the standard Burkholder-Davis-Gundy inequality in combination with a deterministic Gronwall Lemma. The usage of the stochastic Gronwall inequality is not necessary in our setting; however, in [27] a similar test function and a novel stochastic Gronwall-Lyapunov inequality were used to derive uniform exponential moment bounds for SDEs satisfying an appropriate Lyapunov condition._
### Quantitative propagation of chaos
Next we show propagation of chaos, i.e. that the system of interacting SPDEs (48) indeed converges (in an appropriate sense) to the McKean-Vlasov SPDE (28). For this we use a standard synchronous coupling approach, i.e. we compare (48) to a tensorized version of (28) defined on the same probability space. To this end we define conditionally (i.e. conditioned on \(Y\)) independent copies \(\bar{u}^{i},\ i\in\mathbb{N}\) of the mean field process (28) to be the solutions of
\[\mathrm{d}\bar{u}_{t}^{i} =\mathcal{A}(\bar{u}_{t}^{i})\mathrm{d}t+\mathcal{B}(\bar{u}_{t}^ {i})\mathrm{d}\bar{W}_{t}^{i}\] \[+\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{t}^{i},H(\bar{u}_{t}^{i} )\right]R_{t}^{-1}\left(\mathrm{d}Y_{t}-\frac{H(\bar{u}_{t}^{i})+\mathbb{E}_{Y }\left[H(\bar{u}_{t}^{i})\right]}{2}\mathrm{d}t\right),\ i=1,\cdots,N\]
where \(\bar{W}^{i},\ i\in\mathbb{N}\) are the same Wiener processes that also drive the particle system (48). Furthermore we set \(\bar{\mathfrak{u}}^{N}:=\left(\bar{u}^{1},\cdots,\bar{u}^{N}\right)\in\mathscr{H}^{N}\) and make the following definition.
**Definition 36**.: _We define the empirical observed accuracy_
\[\mathcal{R}_{H}^{N}(\mathfrak{u}_{s}^{N}):=\frac{1}{N}\sum_{i=1}^{N}\left\|H(u_{s}^{\mathrm{ref}})-\frac{H(u_{s}^{i})+\mathbb{E}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]}{2}\right\|_{\mathscr{H}}^{2}.\]
_We also define the corresponding hitting times for any \(k\in\mathbb{N}\)_
\[\begin{split}\tau_{\sigma}^{k}&:=\inf\left\{\ t\geq 0\ :\ \sigma^{N}[\mathfrak{u}_{t}^{N}]>k\ \right\},\ \tau_{\bar{\sigma}}^{k}:=\inf\left\{\ t\geq 0\ :\ \sigma^{N,H}[\bar{\mathfrak{u}}_{t}^{N}]>k\ \right\},\\ \tau_{\mathcal{R}}^{k}&:=\inf\left\{\ t\geq 0\ :\ \mathcal{R}_{H}^{N}(\mathfrak{u}_{t}^{N})>k\ \right\}.\end{split}\]
**Definition 37**.: _Furthermore we define the error of the law of large numbers by_
\[\operatorname{LLN}_{H}^{N}(T):=\int_{0}^{T}\left\|\mathbb{C}_{H}^{N}\left[\bar{ \mathfrak{u}}_{s}^{N}\right]-\mathbb{C}\texttt{ov}_{Y}\left[\bar{u}_{s},H(\bar{ u}_{s})\right]\right\|_{\mathscr{H}^{dy}}^{2}+\left|\mathbb{E}_{H}^{N}\left[\bar{ \mathfrak{u}}_{s}^{N}\right]-\mathbb{E}_{Y}\left[H(\bar{u}_{s})\right]\right| ^{2}\mathrm{d}s.\]
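As a quick illustration of the rate hidden in \(\operatorname{LLN}_{H}^{N}\), the following self-contained Monte Carlo sketch estimates the two summands at a fixed time, using i.i.d. standard Gaussian draws and a linear observation map as stand-ins for the conditionally i.i.d. mean field copies; both simplifications are our own assumptions. The averaged error decays like \(1/N\), which is the law-of-large-numbers rate referred to below.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_y = 4, 2
C = rng.standard_normal((d_y, d_x))      # linear observation H(u) = C u
cov_true = C.T                           # Cov[u, Cu] = C^T for u ~ N(0, I)

for N in (10**2, 10**3, 10**4):
    err = 0.0
    for _ in range(200):                 # average over 200 replications
        ens = rng.standard_normal((N, d_x))
        H_ens = ens @ C.T                # observed ensemble
        mean_H = H_ens.mean(axis=0)      # E^N_H, true value is 0
        cov_H = (ens - ens.mean(axis=0)).T @ (H_ens - mean_H) / N
        err += np.sum((cov_H - cov_true) ** 2) + np.sum(mean_H ** 2)
    print(N, err / 200)                  # decays like 1/N
```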
Now we are able to prove convergence of the particle system with implicit rates.
**Theorem 38**.: _Let \(\tau^{k}:=\min\left\{\tau_{\sigma}^{k},\tau_{\bar{\sigma}}^{k},\tau_{\mathcal{R}}^{k}\right\}\), then for any \(p\in(0,1)\) there exists a constant \(\kappa(T,k,p)\), such that_
\[\mathbb{E}\left[\sup_{t\leq\min\{T,\tau^{k}\}}\left(\frac{1}{N}\sum_{i=1}^{N} \left\|r_{\min\{t,\tau^{k}\}}^{i}\right\|_{\mathscr{H}}^{2}\right)^{p}\right] \leq\kappa(T,k,p)\ \mathbb{E}\left[\left(\operatorname{LLN}_{H}^{N}(\min\{T,\tau^{k}\}) \right)^{p}\right]\]
Proof.: We note that since \(u^{i}\) and \(\bar{u}^{i}\) share the same initial conditions we have for any \(t\geq 0,\ i=1,\cdots,N\) that
\[\begin{split}r_{t}^{i}&=u_{t}^{i}-\bar{u}_{t}^{i}=\int_{0}^{t}\mathrm{d}\left(u_{s}^{i}-\bar{u}_{s}^{i}\right)\\ &=\int_{0}^{t}\mathcal{A}(u_{s}^{i})-\mathcal{A}(\bar{u}_{s}^{i})\ \mathrm{d}s+\int_{0}^{t}\mathcal{B}(u_{s}^{i})-\mathcal{B}(\bar{u}_{s}^{i})\ \mathrm{d}\bar{W}_{s}^{i}\\ &\quad+\int_{0}^{t}\left(\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right)R_{s}^{-1}\left(\mathrm{d}Y_{s}-\frac{H(u_{s}^{i})+\mathbb{E}_{H}^{N}[\mathfrak{u}_{s}^{N}]}{2}\mathrm{d}s\right)\\ &\quad-\frac{1}{2}\int_{0}^{t}\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]R_{s}^{-1}\left(H(u_{s}^{i})-H(\bar{u}_{s}^{i})+\mathbb{E}_{H}^{N}[\mathfrak{u}_{s}^{N}]-\mathbb{E}_{Y}\left[H(\bar{u}_{s}^{i})\right]\right)\mathrm{d}s.\end{split}\]
Therefore by using the concrete form of the observation process \(\mathrm{d}Y_{t}=H(u_{t}^{\mathrm{ref}})\mathrm{d}t+\Gamma_{t}\mathrm{d}V_{t}\) we derive from Ito's Lemma
\[\begin{split}\left\|r_{t}^{i}\right\|_{\mathscr{H}}^{2}&=2\int_{0}^{t}{}_{\mathscr{V}^{\prime}}\big\langle\mathcal{A}(u_{s}^{i})-\mathcal{A}(\bar{u}_{s}^{i}),u_{s}^{i}-\bar{u}_{s}^{i}\big\rangle_{\mathscr{V}}\mathrm{d}s+2\int_{0}^{t}\left\langle u_{s}^{i}-\bar{u}_{s}^{i},\left(\mathcal{B}(u_{s}^{i})-\mathcal{B}(\bar{u}_{s}^{i})\right)\ \mathrm{d}\bar{W}_{s}^{i}\right\rangle_{\mathscr{H}}\\ &\quad+\int_{0}^{t}\left\|(\mathcal{B}(u_{s}^{i})-\mathcal{B}(\bar{u}_{s}^{i}))\circ\sqrt{\mathcal{Q}}\right\|_{\mathrm{L}_{2}(\mathscr{U};\mathscr{H})}^{2}\mathrm{d}s\\ &\quad+2\int_{0}^{t}\left\langle u_{s}^{i}-\bar{u}_{s}^{i},\left(\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right)R_{s}^{-1}\left(H(u_{s}^{\mathrm{ref}})-\frac{H(u_{s}^{i})+\mathbb{E}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]}{2}\right)\right\rangle_{\mathscr{H}}\mathrm{d}s\\ &\quad+2\int_{0}^{t}\left\langle u_{s}^{i}-\bar{u}_{s}^{i},\left(\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right)R_{s}^{-1}\Gamma_{s}\mathrm{d}V_{s}\right\rangle_{\mathscr{H}}\\ &\quad+\int_{0}^{t}\left\|\left(\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right)R_{s}^{-1/2}\right\|_{\mathrm{L}_{2}(\mathbb{R}^{d_{y}};\mathscr{H})}^{2}\mathrm{d}s\\ &\quad-\int_{0}^{t}\left\langle u_{s}^{i}-\bar{u}_{s}^{i},\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]R_{s}^{-1}\left(H(u_{s}^{i})-H(\bar{u}_{s}^{i})+\mathbb{E}_{H}^{N}[\mathfrak{u}_{s}^{N}]-\mathbb{E}_{Y}\left[H(\bar{u}_{s}^{i})\right]\right)\right\rangle_{\mathscr{H}}\mathrm{d}s.\end{split}\]
Thus by forming the average and using the Lipschitz assumptions (2), as well as elementary Cauchy-Schwarz inequalities we derive that there exists a constant \(\kappa_{1}(T)>0\), only depending on
time, such that
\[\frac{1}{N} \sum_{i=1}^{N}\left\|r_{t}^{i}\right\|_{\mathscr{H}}^{2}\leq\kappa_ {1}(T)\int_{0}^{t}\frac{1}{N}\sum_{i=1}^{N}\left\|r_{s}^{i}\right\|_{\mathscr{H }}^{2}\ \mathrm{d}s+\mathsf{Im}_{t}\] \[+\int_{0}^{t}\frac{1}{N}\sum_{i=1}^{N}\left\|\mathbb{C}_{H}^{N} \left[\mathfrak{u}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{ i},H(\bar{u}_{s}^{i})\right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\ \left(\left\|H(u_{s}^{\mathrm{ref}})-\frac{H(u_{s}^{i})+\mathbb{E}_{H}^{N} \left[\mathfrak{u}_{s}^{N}\right]}{2}\right\|_{\mathscr{H}}^{2}+2|R_{s}^{-1}| \right)\mathrm{d}s\] \[+2\int_{0}^{t}\frac{1}{N}\sum_{i=1}^{N}\left\|\mathbb{C} \mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right\|_{ \mathscr{H}^{d_{y}}}^{2}\left(\left|H(u_{s}^{i})-H(\bar{u}_{s}^{i})\right|^{2 }+\left|\mathbb{E}_{H}^{N}[\mathfrak{u}_{s}^{N}]-\mathbb{E}_{Y}\left[H(\bar{u} _{s}^{i})\right]\right|^{2}\right)\mathrm{d}s, \tag{56}\]
where
\[\mathrm{d}\mathsf{Im}_{t} :=\frac{2}{N}\sum_{i=1}^{N}\left\langle u_{s}^{i}-\bar{u}_{s}^{i},\left(\mathcal{B}(u_{s}^{i})-\mathcal{B}(\bar{u}_{s}^{i})\right)\ \mathrm{d}\bar{W}_{s}^{i}\right\rangle_{\mathscr{H}}\]
is a local martingale. Its concrete form will not matter to our further calculations as we intend to use the stochastic Gronwall lemma [44].
First we note that the (conditional) covariance operator \(\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]=\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s},H(\bar{u}_{s})\right]\) is independent of \(i=1,\cdots,N\) and can be uniformly bounded on any finite time interval \([0,T]\) due to the variance bound (31). The stochastic Gronwall lemma helps us ignore the quadratic covariation of \(\mathsf{Im}\) and is thus especially suited for the one-sided Lipschitz conditions we encounter.
Next we note that
\[\left|\mathbb{E}_{H}^{N}[\mathfrak{u}_{s}^{N}]-\mathbb{E}_{Y}\left[H(\bar{u}_{s}^{i})\right]\right|^{2}\leq 2\mathrm{Lip}(H)^{2}\frac{1}{N}\sum_{i=1}^{N}\left\|r_{s}^{i}\right\|_{\mathscr{H}}^{2}+2\left|\mathbb{E}_{H}^{N}\left[\bar{\mathfrak{u}}_{s}^{N}\right]-\mathbb{E}_{Y}\left[H\left(\bar{u}_{s}\right)\right]\right|^{2}.\]
Finally we note that
\[\begin{split}&\left\|\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right\|_{\mathscr{H}^{d_{y}}}\\ &\leq\left\|\mathbb{C}_{H}^{N}\left[\mathfrak{u}_{s}^{N}\right]-\mathbb{C}_{H}^{N}\left[\bar{\mathfrak{u}}_{s}^{N}\right]\right\|_{\mathscr{H}^{d_{y}}}+\left\|\mathbb{C}_{H}^{N}\left[\bar{\mathfrak{u}}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right\|_{\mathscr{H}^{d_{y}}}\\ &\leq\left\|\frac{1}{N}\sum_{i=1}^{N}(u_{s}^{i}-\mathbb{E}^{N}[\mathfrak{u}_{s}^{N}])\left(H(u_{s}^{i})-H(\bar{u}_{s}^{i})\right)^{\prime}\right\|_{\mathscr{H}^{d_{y}}}+\left\|\frac{1}{N}\sum_{i=1}^{N}(u_{s}^{i}-\bar{u}_{s}^{i})\left(H(\bar{u}_{s}^{i})-\mathbb{E}_{H}^{N}[\bar{\mathfrak{u}}_{s}^{N}]\right)^{\prime}\right\|_{\mathscr{H}^{d_{y}}}\\ &\qquad+\left\|\mathbb{C}_{H}^{N}\left[\bar{\mathfrak{u}}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right\|_{\mathscr{H}^{d_{y}}}\\ &\leq\left(\sqrt{\frac{\mathrm{Lip}(H)}{N}\sum_{i=1}^{N}\left\|u_{s}^{i}-\mathbb{E}^{N}[\mathfrak{u}_{s}^{N}]\right\|_{\mathscr{H}}^{2}}+\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left|H(\bar{u}_{s}^{i})-\mathbb{E}_{H}^{N}[\bar{\mathfrak{u}}_{s}^{N}]\right|^{2}}\right)\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\|r_{s}^{i}\right\|_{\mathscr{H}}^{2}}\\ &\qquad+\left\|\mathbb{C}_{H}^{N}\left[\bar{\mathfrak{u}}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s}^{i},H(\bar{u}_{s}^{i})\right]\right\|_{\mathscr{H}^{d_{y}}}.\end{split}\]
Using the notation of Definition 36, this allows us to further estimate inequality (56). Thus there exists a constant \(\kappa_{2}(T)>0\) such that
\[\frac{1}{N}\sum_{i=1}^{N}\left\|r_{t}^{i}\right\|_{\mathscr{H}}^{2} \leq\kappa_{2}(T)\int_{0}^{t}\left(1+\left(\sigma^{N}[\mathfrak{u} _{s}^{N}]+\sigma^{N,H}[\bar{\mathfrak{u}}_{s}^{N}]\right)\mathcal{R}_{H}^{N}( \mathfrak{u}_{s}^{N})\right)\frac{1}{N}\sum_{i=1}^{N}\left\|r_{s}^{i}\right\|_ {\mathscr{H}}^{2}\,\mathrm{d}s \tag{57}\] \[\quad+\kappa_{2}(T)\int_{0}^{t}\left|\mathbb{E}_{H}^{N}\left[ \bar{\mathfrak{u}}_{s}^{N}\right]-\mathbb{E}_{Y}\left[H(\bar{u}_{s})\right] \right|^{2}\mathrm{d}s+\mathsf{Im}_{t}.\]
Using the stopping time \(\tau^{k}\) we thus derive that there exists a constant \(\kappa_{3}(T,k)\), depending only on the time horizon \(T\) and the stopping level \(k\), such that
\[\frac{1}{N}\sum_{i=1}^{N}\left\|r_{\min\{t,\tau^{k}\}}^{i}\right\| _{\mathscr{H}}^{2} \leq\kappa_{3}(T,k)\int_{0}^{\min\{t,\tau^{k}\}}\frac{1}{N}\sum_ {i=1}^{N}\left\|r_{s}^{i}\right\|_{\mathscr{H}}^{2}\mathrm{d}s\] \[\quad+\kappa_{3}(T,k)\,\operatorname{LLN}_{H}^{N}(\min\{t,\tau^{ k}\})+\mathsf{Im}_{\min\{t,\tau^{k}\}}.\]
Then by the stochastic Gronwall lemma [44, Theorem 4, equation (4)], the claim of the theorem follows immediately.
From this theorem one can also deduce the convergence in probability [32].
We say the convergence rates derived in Theorem 38 are implicit, as they require the processes to be stopped and the stopping times depend on the converging particle system itself. However on the stopped time intervals the rates are optimal in the sense that they correspond to the rates of convergence given by the law of large numbers. This is certainly far from the convergence result one would ultimately desire, but, for general signals and observation functions, nevertheless seems to be the current state of the art, even in the finite dimensional setting, where the same coupling method was used by [32] to obtain similar results for Lipschitz signals and linear observation functions. For bounded observation functions and observation data \(Y\) that is given by a Lipschitz continuous (w.r.t. time) rough path6, [15] were able to prove explicit convergence rates. They used a similar stopping argument as above together with tail bounds for higher order empirical moments of the interacting ensemble. With this they were able to derive a logarithmic decay \(\mathcal{O}\left(\log(N)^{-1}\right)\) of the error w.r.t. ensemble size \(N\) without stopping, which is still far from the desired convergence rate of the law of large numbers.
Footnote 6: Thus excluding Brownian observation noise that we treat here.
Also assuming the boundedness of the observation function \(H\), we are able to prove an asymptotically (almost) optimal convergence rate based on our exponential moment bounds in Lemma 34 and the following additional assumption on the initial distribution.
**Assumption 39**.: _We assume that for any \(q>0\) there exists an \(N_{0}(q)\in\mathbb{N}\) such that_
\[\sup_{N\geq N_{0}(q)}\mathbb{E}\left[\exp\left(q\ \sigma^{N}[\mathfrak{u}_{0}^{N}] \right)\right]<+\infty.\]
**Remark 40**.: _This assumption is always satisfied for deterministic initial conditions, as for \(u_{0}\sim\delta_{v_{0}}\) for some \(v_{0}\in\mathscr{H}\) one has \(\sigma^{N}[\mathfrak{u}_{0}^{N}]=0\) for all \(N\in\mathbb{N}\). For Gaussian initial conditions this relates to the domain of the moment generating function of \(\chi^{2}\)-distributions and for general random variables to large deviations of the empirical covariance matrix._
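To make the Gaussian case concrete (a sketch under the simplifying assumption of i.i.d. scalar standard Gaussian initial conditions, reading \(\sigma^{N}[\mathfrak{u}_{0}^{N}]\) as the empirical variance): then \(N\,\sigma^{N}[\mathfrak{u}_{0}^{N}]\sim\chi_{N-1}^{2}\) and

\[\mathbb{E}\left[\exp\left(q\,\sigma^{N}[\mathfrak{u}_{0}^{N}]\right)\right]=\left(1-\frac{2q}{N}\right)^{-(N-1)/2}<+\infty\quad\text{for }N>2q,\]

which converges to \(e^{q}\) as \(N\to\infty\), so any \(N_{0}(q)>2q\) renders the supremum in Assumption 39 finite.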
We will not investigate further when this assumption is satisfied and just assume it holds. Then we are able to prove the following theorem.
**Theorem 41**.: _For any \(T<+\infty\), \(p\in(0,1)\) and any \(\nu\in(1,1/p)\) there exists an \(\mathcal{N}_{0}\left(T,p,\nu\right)\in\mathbb{N}\) and a \(\kappa\left(T,p,\nu\right)<+\infty\) such that for all \(N\geq\mathcal{N}_{0}\left(T,p,\nu\right)\) we have_
\[\mathbb{E}\left[\sup_{t\leq T}\left(\frac{1}{N}\sum_{i=1}^{N}\left\|r_{t}^{i} \right\|_{\mathscr{H}}^{2}\right)^{p}\right]\leq\kappa\left(T,p,\nu\right) \mathbb{E}\left[\left(\mathrm{LLN}_{H}^{N}(T)\right)^{p\nu}\right]^{1/\nu}. \tag{58}\]
_and as a consequence we have for \(\kappa(T,p):=\inf_{\nu\in(1,1/p)}\kappa(T,p,\nu)\)_
\[\mathbb{E}\left[\sup_{t\leq T}\left(\frac{1}{N}\sum_{i=1}^{N}\left\|r_{t}^{i} \right\|_{\mathscr{H}}^{2}\right)^{p}\right]\leq\kappa\left(T,p\right) \mathbb{E}\left[\mathrm{LLN}_{H}^{N}(T)\right]^{p}, \tag{59}\]
_which in turn implies that for some constant \(C\left(T,p,\left\|H\right\|_{\infty}\right)>0\)_
\[\mathbb{E}\left[\sup_{t\leq T}\left(\frac{1}{N}\sum_{i=1}^{N}\left\|r_{t}^{i} \right\|_{\mathscr{H}}^{2}\right)^{p}\right]\leq C\left(T,p,\left\|H\right\|_ {\infty}\right)N^{-p}. \tag{60}\]
Proof.: First we note that inequality (57), which was derived in the proof of Theorem 38, can be further simplified when the observation function \(H\) is assumed to be bounded, as then both \(\sigma^{N,H}[\bar{\mathfrak{u}}_{s}^{N}]\) and \(\mathcal{R}_{H}^{N}(\mathfrak{u}_{s}^{N})\) are uniformly bounded and thus there exists a constant \(\kappa_{4}(T)\), such that
\[\frac{1}{N}\sum_{i=1}^{N}\left\|r_{t}^{i}\right\|_{\mathscr{H}}^{2}\leq\kappa_ {4}(T)\int_{0}^{t}\left(1+\sigma^{N}[\mathfrak{u}_{s}^{N}]\right)\frac{1}{N} \sum_{i=1}^{N}\left\|r_{s}^{i}\right\|_{\mathscr{H}}^{2}\ \mathrm{d}s+\kappa_{4}(T)\ \mathrm{ LLN}_{H}^{N}(t)+\mathsf{Im}_{t}.\]
The stochastic Gronwall inequality [44] thus tells us that for any \(p\in(0,1)\) and \(\mu,\nu>1\) with \(\frac{1}{\mu}+\frac{1}{\nu}=1\) and such that \(p\nu<1\) we have
\[\begin{split}&\mathbb{E}\left[\sup_{t\leq T}\left(\frac{1}{N} \sum_{i=1}^{N}\left\|r_{t}^{i}\right\|_{\mathscr{H}}^{2}\right)^{p}\right]\\ &\leq\left(c_{p\nu}+1\right)^{1/\nu}\mathbb{E}\left[\exp\left(p \mu\ \kappa_{4}(T)\int_{0}^{T}\left(1+\sigma^{N}[\mathfrak{u}_{s}^{N}]\right) \mathrm{d}s\right)\right]\ \kappa_{4}(T)\ \mathbb{E}\left[\left(\mathrm{LLN}_{H}^{N}(t)\right)^{p\nu}\right]^{1/\nu}, \end{split} \tag{61}\]
where
\[c_{p\nu}:=\min\left\{4,1/(p\nu)\right\}\ \frac{\pi p\nu}{\sin(\pi p\nu)}\ \xrightarrow{\nu \to 1/p}+\infty.\]
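For concreteness (a worked instance of this constant, not taken from the source): for \(p=1/2\) and \(\nu=3/2\), so \(p\nu=3/4\),

\[c_{3/4}=\min\left\{4,\tfrac{4}{3}\right\}\frac{(3/4)\pi}{\sin(3\pi/4)}=\frac{4}{3}\cdot\frac{3\pi/4}{\sqrt{2}/2}=\pi\sqrt{2}\approx 4.44,\]

while the factor \(\pi p\nu/\sin(\pi p\nu)\) diverges as \(p\nu\to 1\), which is exactly the blow-up referenced below.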
Now we first note that \(\mu=\frac{\nu}{\nu-1}\). Due to the exponential moment bounds in Lemma 34 we have that for \(q:=\frac{p\nu}{\nu-1}\kappa_{4}(T)T\)
\[\mathbb{E}\left[\exp\left(p\mu\kappa_{4}(T)\int_{0}^{T}\sigma^{N}[\mathfrak{u}_{t}^{N}]\,\mathrm{d}t\right)\right]\leq(\pi+1)\exp\left(\frac{q \left(e^{(2\lambda+1)T}-1\right)}{2(2\lambda+1)}\right)\mathbb{E}\left[\exp \left(2qe^{(2\lambda+1)T}\sigma^{N}[\mathfrak{u}_{0}^{N}]\right)\right]\]
and by Assumption 39 there exists an
\[\mathcal{N}_{0}(T,p,\nu):=N_{0}(q)=N_{0}\left(2qe^{(2\lambda+1)T}\right)=N_{0} \left(2\frac{p\nu}{\nu-1}\kappa_{4}(T)Te^{(2\lambda+1)T}\right)\]
such that
\[\kappa_{5}(T,p\mu):=\sup_{N\geq\mathcal{N}_{0}(T,p,\nu)}\mathbb{E}\left[\exp \left(2\frac{p\nu}{\nu-1}\kappa_{4}(T)Te^{(2\lambda+1)T}\sigma^{N}[\mathfrak{u} _{0}^{N}]\right)\right]<+\infty\]
and therefore for any \(N\geq\mathcal{N}_{0}\left(T,p,\nu\right)\) we have
\[\mathbb{E}\left[\sup_{t\leq T}\left(\frac{1}{N}\sum_{i=1}^{N}\left\|r_{t}^{i} \right\|_{\mathscr{H}}^{2}\right)^{p}\right]\leq\underbrace{\left(c_{p\nu}+1 \right)^{1/\nu}e^{\left(\frac{p\nu}{\nu-1}\kappa_{4}\left(T\right)T\right)} \kappa_{5}\left(T,\frac{p\nu}{\nu-1}\right)\kappa_{4}\left(T\right)}_{:= \kappa\left(T,p,\nu\right)}\mathbb{E}\left[\left(\text{LLN}_{H}^{N}(T) \right)^{p\nu}\right]^{1/\nu},\]
which proves our first claim. From this one can directly deduce (59) via the Hölder inequality and using the fact that for \(\nu\to 1\) both \(\mathcal{N}_{0}(T,p,\nu)\) and \(\kappa(T,p,\nu)\) blow up, i.e.
\[\mathcal{N}(T,p,\nu),\kappa(T,p,\nu)\xrightarrow{\nu\to 1}+\infty,\]
as well as the blow-up \(\kappa(T,p,\nu)\xrightarrow{\nu\to 1/p}+\infty\), which in turn implies that the minimizer of \(\kappa(T,p,\nu)\) for every fixed \(T\) and \(p\) must lie inside the interval \(\left(1,1/p\right)\), and therefore (59) is proven. Finally we are left to show (60). By Definition 37 the term \(\text{LLN}_{H}^{N}(T)\) consists of an error of the empirical mean and an error of the empirical covariance. First we estimate the error of the empirical mean. Using the conditional independence of \(\left(\bar{u}^{i}\right)_{i=1,\cdots,N}\) and the law of total variance we derive
\[\begin{split}&\int_{0}^{T}\mathbb{E}\left[\left|\mathbb{E}_{H}^{N} \left[\bar{u}_{s}^{N}\right]-\mathbb{E}_{Y}\left[H(\bar{u}_{s})\right]\right|^ {2}\right]\mathrm{d}s=\mathbb{E}\left[\int_{0}^{T}\mathbb{E}_{Y}\left[\left| \frac{1}{N}\sum_{i=1}^{N}H\left(\bar{u}_{s}^{i}\right)-\mathbb{E}_{Y}\left[H( \bar{u}_{s})\right]\right|^{2}\right]\mathrm{d}s\right]\\ &=\mathbb{E}\left[\int_{0}^{T}\frac{\mathtt{Var}_{Y}\left[H\left( \bar{u}_{s}\right)\right]}{N}\mathrm{d}s\right]\leq\int_{0}^{T}\frac{\mathrm{ Lip}\left(H\right)^{2}\beta e^{\lambda s}}{N}\mathrm{d}s=\frac{\mathrm{Lip}\left(H \right)^{2}\beta\left(e^{\lambda T}-1\right)}{\lambda\ N}.\end{split} \tag{62}\]
Next we aim to dominate the error of the covariance. To this end we first note that
\[\begin{split}&\mathbb{E}_{Y}\left[\left\|\mathbb{C}_{H}^{N} \left[\bar{\mathfrak{u}}_{s}^{N}\right]-\mathbb{C}\mathtt{ov}_{Y}\left[\bar{u}_{s},H(\bar{u}_{s}) \right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\right]\\ &\leq 2\mathbb{E}_{Y}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{u}_{s}^ {i}H\left(\bar{u}_{s}^{i}\right)^{\prime}-\mathbb{E}_{Y}\left[\bar{u}_{s}H \left(\bar{u}_{s}\right)^{\prime}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\right]\\ &\quad+2\mathbb{E}_{Y}\left[\left\|\mathbb{E}^{N}\left[\bar{\mathfrak{u}}_{ s}^{N}\right]\mathbb{E}_{H}^{N}\left[\bar{\mathfrak{u}}_{s}^{N}\right]^{\prime}-\mathbb{E}_{Y}\left[ \bar{u}_{s}\right]\mathbb{E}_{Y}\left[H\left(\bar{u}_{s}\right)\right]^{ \prime}\right\|_{\mathscr{H}^{d_{y}}}^{2}\right].\end{split}\]
Again we use the conditional independence of \(\left(\bar{u}^{i}\right)_{i=1,\cdots,N}\) to deduce
\[\begin{split}&\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N} \bar{u}_{s}^{i}H\left(\bar{u}_{s}^{i}\right)^{\prime}-\mathbb{E}_{Y}\left[ \bar{u}_{s}H\left(\bar{u}_{s}\right)^{\prime}\right]\right\|_{\mathscr{H}^{d_ {y}}}^{2}\right]=\frac{\mathbb{E}\left[\mathbb{E}_{Y}\left[\left\|\bar{u}_{s}H \left(\bar{u}_{s}\right)^{\prime}-\mathbb{E}_{Y}\left[\bar{u}_{s}H\left(\bar{u }_{s}\right)^{\prime}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\right]\right]}{N} \\ &\leq\frac{\mathbb{E}\left[\mathbb{E}_{Y}\left[\left\|\bar{u}_{s} H\left(\bar{u}_{s}\right)^{\prime}\right\|_{\mathscr{H}^{d_{y}}}^{2}\right] \right]}{N}\leq\frac{\left\|H\right\|_{\infty}^{2}\,\mathbb{E}\left[\left\|\bar{u}_{s }\right\|_{\mathscr{H}}^{2}\right]}{N}.\end{split}\]
By using the bound (46) with \(k,l\geq\left\|H\right\|_{\infty}\) that was derived in the proof of Theorem 28 for the second absolute moments of \(\bar{u}\), we can show that there exists a constant \(C_{1}\left(T,\left\|H\right\|_{\infty}\right)>0\) such that
\[\int_{0}^{T}\mathbb{E}\left[\left\|\frac{1}{N}\sum_{i=1}^{N}\bar{u}_{s}^{i}H \left(\bar{u}_{s}^{i}\right)^{\prime}-\mathbb{E}_{Y}\left[\bar{u}_{s}H\left( \bar{u}_{s}\right)^{\prime}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2}\right] \mathrm{d}s\leq\frac{C_{1}\left(T,\left\|H\right\|_{\infty}\right)}{N} \tag{63}\]
Finally we note that similar to (62) one can easily deduce
\[\begin{split}&\mathbb{E}_{Y}\left[\left\|\mathbb{E}^{N}\left[\bar{u }_{s}^{N}\right]\mathbb{E}_{H}^{N}\left[\bar{u}_{s}^{N}\right]-\mathbb{E}_{Y} \left[\bar{u}_{s}\right]\mathbb{E}_{Y}\left[H\left(\bar{u}_{s}\right)\right]^{ \prime}\right\|_{\mathscr{H}^{d_{y}}}^{2}\right]\\ &\leq 2\mathbb{E}_{Y}\left[\left\|\mathbb{E}^{N}\left[\bar{u}_{s}^{N} \right]-\mathbb{E}_{Y}\left[\bar{u}_{s}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2} \left|\mathbb{E}_{H}^{N}\left[\bar{u}_{s}^{N}\right]\right|^{2}\right]+2 \left\|\mathbb{E}_{Y}\left[\bar{u}_{s}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2} \mathbb{E}_{Y}\left[\left|\mathbb{E}_{H}^{N}\left[\bar{u}_{s}^{N}\right]- \mathbb{E}_{Y}\left[H\left(\bar{u}_{s}\right)\right]\right|^{2}\right]\\ &\leq\frac{\left\|H\right\|_{\infty}^{2}\beta e^{\lambda s}}{N}+ \left\|\mathbb{E}_{Y}\left[\bar{u}_{s}\right]\right\|_{\mathscr{H}^{d_{y}}}^{2} \frac{\mathrm{Lip}\left(H\right)^{2}\beta e^{\lambda s}}{N}.\end{split}\]
Finally we can thus derive by the boundedness (46) of the second absolute moment of \(\bar{u}\), that there exists a constant \(C_{2}\left(T,\left\|H\right\|_{\infty}\right)>0\) such that
\[\int_{0}^{T}\mathbb{E}\left[\left\|\mathbb{E}^{N}\left[\bar{\mathfrak{u}}_{s}^{N }\right]\mathbb{E}_{H}^{N}\left[\bar{\mathfrak{u}}_{s}^{N}\right]-\mathbb{E}_{ Y}\left[\bar{u}_{s}\right]\mathbb{E}_{Y}\left[H\left(\bar{u}_{s}\right)\right]^{ \prime}\right\|_{\mathscr{H}^{d_{y}}}^{2}\right]\mathrm{d}s\leq\frac{C_{2} \left(T,\left\|H\right\|_{\infty}\right)}{N} \tag{64}\]
By the definition of \(\mathrm{LLN}_{H}^{N}\) (in Definition 37), combining the inequalities (62),(63),(64) with (59) concludes our proof.
**Remark 42**.: _Since the constant \(\kappa(T,p,\nu)\) blows up for \(\nu\to 1\) or \(\nu\to 1/p\), we cannot simply take the limit \(p\to 1\) in (59). As \(p<1\), the \(p\)-th power is concave, and thus we cannot deduce that (59) would still hold if the \(p\)-th power on the left-hand side were taken outside the expectation. We thus say that (59) gives almost optimal rates in the sense that the inequality holds for all \(p<1\), but not for the optimal value \(p=1\), where the expectations on both sides are indeed of the same nature. In a similar sense the stronger inequality (58) shows almost optimal rates, in the sense that it holds for all \(\nu>1\) but not for \(\nu=1\), the case where the moments on both sides are of the same order._
Besides blowing up when \(p\to 1\), the constant \(\kappa(T,p)\) also grows hyperexponentially in \(T\)! While Theorem 41 thus provides us with (almost) optimal convergence rates, the constants involved are far too large to give useful a priori error estimates even on moderate time intervals.
**Remark 43** (Literature).: _In the finite dimensional and linear Gaussian setting this problem has been tackled by a large number of papers. In this setting a first propagation of chaos result was achieved by [20], even showing uniform in time convergence for stable signals. This result has by now been followed up by several works [7],[8],[9] treating unstable signals by making use of the Riccati equation that appears in this setting. An alternative extension of the EnKBF to nonlinear signals using a Taylor-inspired linearization around the mean, similar to the extended Kalman-Bucy filter (Remark 18), was considered in [19]. The linearization there also allowed for the use of a decoupled Riccati equation. While an extension of these uniform in time results to our nonlinear setting is certainly highly desirable, we do not investigate it in this paper._
|
2302.04990 | Non-resonant cavity for intensity buildup of multiple lasers | A non-resonant cavity to build up laser intensity is modeled, developed and
tested. It can be used for overlapping multiple lasers of different
wavelengths, increasing their intensities by over an order of magnitude while
maintaining good uniformity. It is simple to set up, has flexible optical
characteristics, and is robust against perturbations. The intensity buildup
requires no resonances, and the wavelength dependence of the performance is
limited only by the mirror coatings. The cavity can be used in applications
requiring a spatially-constrained intensity buildup, for example in atomic and
molecular traps. | Yi Zeng, Nicholas R. Hutzler | 2023-02-10T00:27:54Z | http://arxiv.org/abs/2302.04990v1 | # Non-resonant cavity for intensity buildup of multiple lasers
###### Abstract
A non-resonant cavity to build up laser intensity is modeled, developed and tested. It can be used for overlapping multiple lasers of different wavelengths, increasing their intensities by over an order of magnitude while maintaining good uniformity. It is simple to set up, has flexible optical characteristics, and is robust against perturbations. The intensity buildup requires no resonances, and the wavelength dependence of the performance is limited only by the mirror coatings. The cavity can be used in applications requiring a spatially-constrained intensity buildup, for example in atomic and molecular traps.
## I Introduction
In recent years, cold and controlled molecules have emerged as promising platforms for precision measurements[1; 2; 3; 4], quantum simulations[5; 6; 7; 8; 9], and fundamental chemistry [8; 10; 11]. Laser cooling and trapping, which has been a critical tool driving quantum science in atoms for decades [12], has now been expanded to include a range [13] of diatomic and even polyatomic [14; 15; 16] molecules. Laser cooling of molecules remains a technical challenge, however, due in large part to the difficult laser requirements; one typically needs around ten or more CW, high-power (\(0.1-5\) W), narrow linewidth (\(\lesssim\)MHz) lasers, whose frequency, sidebands, power, and polarization must be precisely controlled and varied in time, in order to laser cool and trap a molecule [13]. These difficulties arise due to the need to address multiple transitions in molecules, many of whose frequency separations are too large to bridge with frequency modulation or shifting. As quantum control extends to even larger and more complicated molecules [17; 4; 18], these requirements will become even more challenging.
One method to ease these difficulties is to use low power lasers and build up intensity with a power build-up cavity [19; 20; 21], where resonance is used to increase the laser power circulating in the cavity. While this technique has excellent performance, it usually requires
active stabilization and can only work with a small number of lasers due to the required resonant condition. Implementing this approach with \(\sim 10\) lasers would be very challenging, especially since laser cooling experiments typically require multiple wavelengths, sidebands, frequency changes, multiple polarizations, etc.
We are therefore interested in non-resonant methods of building up intensity. Typically, one can use a multi-pass setup bouncing the laser beams between two or more mirrors. Such a method is generally useful if the goal is to amplify power in an extended interaction region, for example with a molecule beam [14; 22]. However, the performance is limited if high intensity and uniformity are needed in a confined region, for example the few-mm cross-sectional area of a magneto-optical trap (MOT) [13]. In practice it is generally difficult to have intensity increases of more than a factor of a few in such a small region.
In this manuscript, we present the design and prototyping of a multipass, non-resonant and intensity-building cavity modified from the Herriott cell [23]. Using mostly off-the-shelf parts, our test setup can achieve over an order of magnitude amplification in intensity while maintaining a uniformity comparable to that of a Gaussian beam. It is also easy to set up and tune, flexible in the size of the illumination region, and robust against perturbations.
## II The Herriott cell and its modification
A Herriott cell is commonly used for multipass absorption spectroscopy [24; 25], where the cell increases the interaction path length by factors of few tens or even hundreds, as shown in fig. 1(a). The only components required are two concave mirrors, with one of them having an entry hole drilled through the face, usually near the edge. For multipass absorption, the Herriott cells are usually in a configuration where the cavity length \(d\) is just slightly longer than two times the focal length \(f=R/2\) of the mirrors with radius of curvature \(R\), that is,
Figure 1: (a) Typical Herriott cell setup in a near-confocal configuration used for multi-pass absorption spectroscopy. \(d\) is the spacing between the mirrors, and \(f\) is their focal length. Figure generated using LightTools. (b) Herriott cell in a near-concentric configuration.
nearly confocal, and the laser beam will bounce back and forth tracing out a circle of dots on the mirrors before exiting through the entrance hole. In such a configuration the laser beams form a near-cylindrical shape with little overlap, which is ideal for extending optical path lengths but not for building up intensity, which we aim to achieve by modifying the design.
Figure 3: Cross sections of laser beams in a \(d=3.96f\) Herriott cell, where the spot sizes are roughly uniform, hence the “collimated” configuration. Here the circles indicate the size and position of the reflecting beam. The pattern is generated using a simple model based on ray transfer matrix analysis. (a) Intensity distribution on the near mirror. The entry spot 0, and first three reflecting spots are labeled. (b) Intensity distribution at the middle of the cavity; note that the size scale is 10 times smaller. The first six passes are labeled.
Figure 2: (a) Pattern of spots traced out by the reflecting laser beam on the mirrors. The solid dots are spots on the near mirror (NM), which is the one with the entry hole, and the circles are spots on the far mirror (FM). Figure adapted from ref. [24]. (b) Pattern when the cavity is at a near-concentric configuration, and the angle \(\theta\) between consecutive spots is close to \(180^{\circ}\).
An easy first step would be to narrow the waist down by increasing the cavity length, as shown in fig. 1. The angle between two consecutive reflecting points on the mirrors dictates the waist size, and its relationship to the cavity length is given by [23]
\[\cos(\theta)=1-\frac{d}{2f}, \tag{1}\]
where \(d\) is the cavity length, \(f\) is the focal length of the two mirrors, and \(\theta\) is the angle between two spots on the mirrors, projected on the same plane, as shown in fig. 2. As the cavity approaches the concentric configuration, \(d/f\) approaches 4 and the angle approaches 180\({}^{\circ}\) meaning that consecutive reflection points are on opposite sides so that the laser beams are always near the center at the middle of the cavity, as shown in fig. 1(b) and fig. 2(b). The intensity distribution will look like fig. 3, which shows calculated cross sections of laser beam sizes and positions, on the near mirror and at the middle of the cavity. The input laser beam has to be slightly focused to achieve such a "collimated" configuration, where the laser beam sizes on a same cross section are similar. Note that the laser beam diameter at the cavity center is about 10 times smaller than that at the cavity mirrors.
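As a quick consistency check (numbers computed from Eq. (1), not quoted from the source): for the \(d=3.96f\) configuration of fig. 3, \(\cos\theta=1-3.96/2=-0.98\), so \(\theta=\arccos(-0.98)\approx 168.5^{\circ}\). A closed orbit of 32 passes corresponds to \(\theta=168.75^{\circ}\) (since \(32\times 168.75^{\circ}=15\times 360^{\circ}\)), i.e. \(d/f=2(1-\cos 168.75^{\circ})\approx 3.96\).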
This near-concentric configuration does not give the desired intensity buildup with uniform distribution, but we can make two further modifications to realize this goal. First, we will move the position of the entry hole closer to the mirror center compared to the typical Herriott cell; second, we will change the divergence of the input beam. As discussed in the following sections, these changes realize the goal of a fairly uniform intensity buildup.
## Modeling and prototyping
In order to achieve a more uniform intensity distribution, we must change the input beam divergence to increase the spot sizes. However, the stock Herriott cell mirrors used in spectroscopy usually have the entry hole very close to the mirror's edge, and we find that making the input beam more diverging results in the reflected beams leaking off the mirror edges, leading to power loss. Thus, we would like to push the holes further in. However, we also do not want the entry hole to be too close to the center because then it will limit the number of passes the cavity can accommodate. In order to determine where the entry hole should be positioned on the near mirror and in general to better understand how the laser beam behaves when bouncing between the two concave mirrors, we implemented a simple
model based on ray transfer matrix analysis [26].
From Eq. (1), we know where the laser beam lands on both the near and far mirrors for each pass, so the beam positions between the two mirrors can be calculated using simple geometry. Ray transfer matrix analysis is used for tracking the beam diameter of the Gaussian laser beam, which changes due to both free-space propagation and reflection from the curved mirror surfaces. The input laser beam diameter is represented by the input ray position, and the focusing of the input laser is represented by the input ray angle.
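As an illustration of this kind of model, a minimal ray-tracing sketch is shown below (our own illustrative code, not the authors' model; the launch position and slope are hypothetical values chosen to mimic the prototype geometry):

```c
/* Minimal sketch of the ray transfer (ABCD) model described above.
   Free space of length d maps (x, x') -> (x + d*x', x'); a mirror of
   focal length f maps (x, x') -> (x, x' - x/f). Tracking (x, x') and
   (y, y') separately reproduces the circular spot pattern. */
#include <stdio.h>

int main(void) {
    const double f = 200.0;      /* mirror focal length in mm */
    const double d = 3.96 * f;   /* near-concentric cavity length */
    /* hypothetical launch through the entry hole, 5 mm off-center */
    double x = 5.0, xp = -0.0125, y = 0.0, yp = 0.0125;
    for (int n = 1; n <= 16; ++n) {
        x += d * xp; y += d * yp;   /* propagate to far mirror  */
        xp -= x / f; yp -= y / f;   /* reflect off far mirror   */
        x += d * xp; y += d * yp;   /* propagate to near mirror */
        xp -= x / f; yp -= y / f;   /* reflect off near mirror  */
        printf("near-mirror spot %2d: (%7.3f, %7.3f) mm\n", n, x, y);
    }
    return 0;
}
```

Extending the same \(2\times 2\) maps to the complex beam parameter gives the Gaussian spot sizes discussed in the text.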
Combining both the location and size information, we now have a full understanding of the laser beam inside the cavity. We can make figures like fig. 3(a) for any cross section along the length of the cavity, and combining with a Gaussian power distribution, we can make contour plots for intensity like fig. 5(c). We were able to generate similar plots, such as fig. 5(d), from ray tracing simulations performed with LightTools 9.0 by Synopsys. Discrepancies between the calculated model and the simulation come from the fact that the simulation uses an input light beam of uniform intensity instead of a Gaussian beam, and that it simulates light rays until they exit through either the entry hole or the mirror's edge, while the calculation only uses the first orbit, 32 passes, and does not consider any leakage.
Based on the information learned from the model, we built a prototype for testing using mostly off-the-shelf 2-inch optics. A focal length of 200 mm was chosen so that the cavity length of 792 mm, slightly less than \(4f=800\) mm, would fit outside of a vacuum chamber containing atomic and molecular beams for testing. The near mirror is a custom part with a 4 mm diameter hole drilled 5 mm away from the center of the mirror1. We have the two cavity mirrors, and the two launching mirrors on kinematic mounts while the laser light comes out of a fiber launcher with adjustable focus close to the entry hole, as shown in
Figure 4: Test setup for measuring the performance of the prototype, not to scale.
fig. 4. We also insert an anti-reflection(AR)-coated window into the cavity to image the beams while minimally perturbing the paths and intensities.
The entry hole choice was not based on rigorous optimization, but it strikes a good compromise between performance, flexibility, and ease of setting up and aligning. The position and size of the entry hole limit how close the first spot back on the near mirror can be to the entry spot. For us, the largest \(\theta\) possible is about \(169^{\circ}\), which allows for about 32 passes per orbit, so even assuming the entire beam exits through the entry hole after an orbit, there is a 32-fold increase of total power. Of course, our goal is not only higher power, but also higher intensity with relatively uniform distribution. In that case, a configuration like the one shown in fig. 5 will be used. The input beam is diverged slightly, such that the first spot back on the near mirror is much larger than the entry hole. While some of the light will exit through the hole, the loss is small enough to be justified by the gain in uniformity, as shown in later sections.
From the prototyping experience, we learned that the procedure for setting up this Herriott cell multi-pass is straightforward. Put the two mirrors on kinematic mounts spaced one cavity-length apart, which is typically just short of four times the focal length, about 792 mm for our prototype. Send in the collimated laser beam, sized just under the entry hole, and land it roughly 169 degrees away from the entry hole projection at the far mirror. Adjust the far mirror orientation so that the first bounce-back to the near mirror almost touches the entry hole, like in fig. 3(a). Finally, adjust the near mirror so that the circular patterns of dots appear. In the process, some adjustment of the focusing of the laser beam might be needed to achieve the "collimated" configuration, just so that we have a clear pattern of dots as indicators. Also, if the cavity length is longer than four times the focal length, the final step of forming the circular pattern will fail, since the spots will always move towards the mirror's edge and never curve back. The solution is simply to push the mirrors slightly closer, so that they can curve back to form a circle. The process is fairly robust, without needing any precise placement.
After achieving the "collimated" configuration, diverge the input beam slightly to get a configuration like that in fig. 5. The fact that the cross section at the middle of the cavity has the same shape as that on the mirrors is a very beneficial trait for setting up and optimizing because we can reliably infer the spot characteristics inside the cavity easily just by looking at the mirrors. For optimization, the main concerns are the total power and its
distribution. By using a camera or simply by looking at the mirrors, we can optimize by tuning parameters like the cavity length, entry beam focus, or mirror orientations. The total spot size at the illumination region can be changed by changing the longitudinal distance of that region from the middle of the cavity, as discussed further in the section on flexibility below.
Figure 5: Example of a diverging configuration. Calculated cross section patterns of laser beam sizes and positions, contour plot of intensity distribution, and photos of the same configuration in a prototype setup. (a) Pattern on the near mirror. The entry spot 0, and first three spots are labeled. (b) Pattern at the center of the cavity (size scale is 10 times smaller). The first two passes are labeled. (c) Calculated contour plot at cavity center, intensity normalized against input Gaussian beam. (d) Simulated contour plot generated from LightTools, normalized against uniform input beam. (e) Photo of the near mirror, where the bright circle on the right side is the entry hole. (f) Photo of scattered light on an AR coated window placed at cavity center, with intensity normalized against single pass.
## Performance Discussion
To quantify the performance of the setup, we inserted an AR coated window in the middle of the cavity, and measured the light scatter using a camera, as shown in fig. 4. We normalized the signal by comparing it to the scattered light from a single pass of the laser beam expanded to a similar size. The results are in good agreement with our simple model and ray tracing simulations, both for the spot shape and the total power. The results are shown in fig. 5(e,f) and fig. 6. The shape of the spot patterns matches the prediction for all the configurations we tested. While the intensity results are slightly lower than simulations, they can be improved using higher-grade optics and coatings. Despite the losses, we still see that on average the intensity is amplified by a factor of about 30, with a fairly uniform
Figure 6: Measurement of intensity amplification and distribution. Photos of the scattered light on the AR coated window at the center of the cavity were taken for different configurations using a CMOS camera. (a) is from a single pass of the laser beam and (d) is the contour plot of the same photo. (b) and (e) are photos for the “collimated” configuration. (c) and (f) are for the uniform configuration as shown in fig. 5. All intensities are normalized against (a).
distribution.
Another test was to set up the cavity at our cryogenic atomic and molecular beam source [27], and measure the fluorescence of sodium beams using a laser on the D1 transition. We compared the results from three configurations: 1. a single pass, as shown in fig. 7(a); 2. the Herriott cell multi-pass in the "collimated" configuration, as shown in fig. 7(b); 3. a diverging configuration optimized for even intensity distribution, as shown in fig. 7(c). Using the diverging Herriott cell as a power buildup cavity results in an increase in atomic fluorescence by a factor of around 25, as shown in fig. 8. This is less than the factor of 30 that might be expected from the intensity increase, mostly due to saturation of the atomic transition, the changed geometry of the fluorescing atom cloud, and power loss on the vacuum chamber windows. Nonetheless, it shows that we can easily get more than one order of magnitude improvement of the fluorescence signal using the prototype cavity.
## VI Flexibility and Robustness
One of our main goals with the cavity is the ability to build up intensity for multiple lasers of different wavelengths at the same place. Fortunately, since reflective elements naturally have no chromatic aberration, we do not have to worry about the cavity itself. Inevitably, however, the full setup will require some chromatic element in the beam path. To test if chromatic aberration or other wavelength dependent phenomena are of concern, we tried different lasers launched using the same fiber and aspheric lens. As shown in fig. 9, the
Figure 7: (a) Photo of single-pass light scattered off vacuum chamber window. (b) Photo for the “collimated” Herriott cell setup. (c) Photo for a Herriott cell setup optimized for even intensity distribution.
intensity distributions are very similar between scattered light from a 650 nm laser and a 577 nm laser. This is expected, since the setup is very robust against small perturbations as we will discuss, especially if the perturbation is on the input laser, and not on the cavity itself.
Another very important aspect of our goal is to have an illumination region of adjustable size. The simplest way to change the size of the illuminated spot at the interaction region is by moving the entire cavity lengthwise. As mentioned in the previous section, the cross-sectional shape at the center of the cavity is the same as the pattern on the mirrors, just rotated and of different sizes. It is the case for everywhere in between as well, with the size
Figure 8: Integrated fluorescence of a sodium beam probed on the D1 transition, comparing results from three different configurations: single pass, “collimated” Herriott cell, and diverging configuration like fig. 5.
Figure 9: (a) Photos of the AR coated window placed at the center of the cavity. Camera is shooting at an angle of about \(45^{\circ}\), such that the scattered light from two sides of the window are sufficiently separated. (b) Same as (a) but the laser used is changed from 650 nm to 577 nm.
scaled with the position. We can estimate the size of the illuminated region by noticing that the shape of the ray density roughly looks like a cone (see fig. 1(b)) and should therefore be linear in displacement along the cavity:
\[D(z)\approx 4x_{0}\left(a+2z\frac{1-a}{d}\right), \tag{2}\]
where \(z\) is the longitudinal distance of the illumination region from the middle of the cavity, \(x_{0}\) is the offset of the entry hole from the center of the mirror, and \(a=\pi-\cos^{-1}(1-d/2f)\) is the supplement of the angle \(\theta\) between two consecutive laser spots from Eq. (1). The estimated diameter \(D\) of the covered region is twice the diameter of the circle traced out by the beam locations, which is the minimum for diverging configurations. Hence, the estimation is a conservative one. Further increasing the divergence of the input beams will increase the spot size and make the intensity distribution more uniform, though depending on the cavity geometry it might lead to more power loss over the edge of the mirrors.
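Plugging in the prototype values as an example (\(x_{0}=5\) mm, \(f=200\) mm, \(d=792\) mm; a worked estimate, not a measurement): \(a=\pi-\cos^{-1}(1-792/400)=\pi-\cos^{-1}(-0.98)\approx 0.200\), so \(D(0)\approx 4\times 5\ \text{mm}\times 0.200\approx 4\ \text{mm}\) at the cavity center, growing by roughly \(0.04\) mm per mm of longitudinal displacement \(z\).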
To test the formula, we measured the size of the illuminated region at different locations along the cavity length. The same measurement setup was used, except that the camera images the window at an angle of about \(25^{\circ}\), while moving along with the window such that the distance and angle to the window are fixed. Fig. 10 shows good agreement between the measured sizes and the ones calculated from Eq. (2).
Finally, we measured the robustness of the cavity against misalignment. In general, the cavity is very robust against small misalignment that might be caused by
Figure 10: Plot showing how the illumination region size changes with the longitudinal distance from the middle of the cavity. The measured size is characterized by the diameter of the cross section where the laser light intensity is higher than the single-pass intensity. The calculated diameter is from Eq. (2).
vibrations, or even accidental bumps. For the configuration with more uniform intensity distribution, it is even less prone to misalignment, because in such a configuration the laser beams are expanded to be significantly larger than the entry hole when they arrive at the mirrors, as shown in fig. 5, such that when they leak out of the mirror due to misalignment, there is little loss of power.
To quantify the sensitivity to misalignment, we measured how the scattered light on the AR coated window changes depending on misalignment angles of the mirrors. As expected, turning the cavity mirrors has a much more significant effect than the launching mirror as it changes the cavity condition and deviations accumulate as the laser beam bounces between them. Fig. 11 shows how the total power changes with the misalignment angles.
The launching mirror misalignment angles reflect how the input beam is misaligned. When it is misaligned by more than \(\sim 0.24^{\circ}\) the total power within the cavity starts to decrease. The far mirror misalignment has the same effect as the near mirror, both misaligning the cavity. Beyond a limit of \(\sim 0.05^{\circ}\), the power loss starts to become significant. Clearly the cavity itself is more susceptible to misalignment than the input laser. Still, both misalignment limits are larger than the typical drifts commonly encountered in the lab; for example, the \(\sim 0.24^{\circ}\) misalignment required a half turn of the steering knob on the launching mirror.
Figure 11: Plot showing how misalignment in the launching mirror and far mirror affect the total power inside the illumination region. Significant power loss starts to occur when the launching mirror is misaligned by \(0.24^{\circ}\), and when the far mirror is misaligned by \(0.05^{\circ}\). Both are larger than typical drifts seen in lab for common optics elements.
## Conclusion
We designed and tested a non-resonant cavity for building up laser intensity by over an order of magnitude in a confined region. It is capable of accommodating multiple lasers of different wavelengths and polarizations at the same time and at the same location, and is flexible in the size and shape of the illumination region. Furthermore, it is easy to set up and tune, and is robust against perturbations without active stabilization. These properties make it a very useful and versatile tool for laser intensity buildup, or even an alternative to common multi-pass setups, especially when multiple laser wavelengths are required.
## Funding
This work was supported by the Heising-Simons Foundation (2022-3361), the Gordon and Betty Moore Foundation (GBMF7947), and the Alfred P. Sloan Foundation (G-2019-12502).
## Acknowledgements
We thank Tim Steimle, our spectroscopy collaborator, for suggesting that we look into the Herriott cell. We also thank our lab members Arian Jadbabaie and Phelan Yu for fruitful discussions.
## Disclosures
A draft of this manuscript was used in a provisional patent application.
|
2308.16317 | High Performance GPU Accelerated MuST Software | The MuST package is a computational software designed for ab initio
electronic structure calculations for solids. The Locally Self-consistent
Multiple Scattering (LSMS) method implemented in MuST allows to perform the
electronic structure calculation for systems with a large number of atoms per
unit cell. For the LSMS method with muffin-tin potential approximation, the
major computational challenge is the matrix inverse for the scattering matrix
calculation, which could take more than 90\% of the computing time. However,
the matrix inverse can be significantly accelerated by modern
graphical-processing-units (GPUs). In this paper, we discuss our approach to
the code acceleration by offloading the matrix inverse tasks to the GPUs
through a Fortran-C interface from the Fortran code to the CUDA code. We report
our performance results showing significant speedup ratio achieved to the
calculations of NiAu alloy, a candidate for thermoelectric materials. | Xiao Liang, Edward Hanna, Derek Simmel, Hang Liu, Yang Wang | 2023-08-30T20:51:07Z | http://arxiv.org/abs/2308.16317v1 | # High Performance GPU Accelerated MuST Software
###### Abstract.
The MuST package is a computational software designed for ab initio electronic structure calculations for solids. The Locally Self-consistent Multiple Scattering (LSMS) method implemented in MuST allows to perform the electronic structure calculation for systems with a large number of atoms per unit cell. For the LSMS method with muffin-tin potential approximation, the major computational challenge is the matrix inverse for the scattering matrix calculation, which could take more than 90% of the computing time. However, the matrix inverse can be significantly accelerated by modern graphical-processing-units (GPUs). In this paper, we discuss our approach to the code acceleration by offloading the matrix inverse tasks to the GPUs through a Fortran-C interface from the Fortran code to the CUDA code. We report our performance results showing significant speedup ratio achieved to the calculations of NiAu alloy, a candidate for thermoelectric materials.
Key words and phrases: density-functional theory, GPU acceleration, Korringa-Kohn-Rostoker method, LSMS, High-entropy random alloy
### KKR Method
For a Schrodinger equation, the Green's function connecting \(\mathbf{r}\) and \(\mathbf{r}^{\prime}\) is defined as:
\[G(\mathbf{r},\mathbf{r}^{\prime};\epsilon)=\lim_{\eta\to 0}\sum_{k}\frac{\Psi_{k}^{* }(\mathbf{r})\Psi_{k}(\mathbf{r}^{\prime})}{\epsilon-\epsilon_{k}+i\eta} \tag{1}\]
where \(\epsilon_{k}\) and \(\Psi_{k}\) are the \(k\)-th eigen-energy and eigen-function, respectively. Based on the definition, the charge density is:
\[n(\mathbf{r})=\frac{1}{\pi}\mathrm{Im}\int_{-\infty}^{\epsilon_{F}}G(\mathbf{ r},\mathbf{r};\epsilon)d\epsilon \tag{2}\]
In the framework of the KKR method, the Green's function for the \(n\)-th site at position \(\mathbf{r}_{n}\) is:
\[G(\mathbf{r}_{n},\mathbf{r}_{n};\epsilon)=\sum_{L,L^{\prime}}Z_{L}^{n}( \mathbf{r}_{n};\epsilon)\tau_{LL^{\prime}}^{nn}(\epsilon)Z_{L^{\prime}}^{n*} (\mathbf{r}_{n};\epsilon)-\sum_{L}Z_{L}^{n}(\mathbf{r}_{n};\epsilon)J_{L}^{n* }(\mathbf{r}_{n};\epsilon) \tag{3}\]
where \(\underline{\tau}^{nn}\) is the \(n\)-th block of the multiple-scattering matrix \(\underline{\tau}\), the index \(L\) is a combination of the angular momentum and magnetic quantum numbers \(l,m\), \(Z_{L}^{n}(\mathbf{r}_{n};\epsilon)\) is the regular solution, and \(J_{L}^{n}(\mathbf{r}_{n};\epsilon)\) is the irregular solution of the Schrodinger equation for the local effective potential at the \(n\)-th site. The multiple-scattering matrix \(\underline{\tau}\) is obtained by the Dyson equation:
\[\underline{\tau}(\epsilon)=[\underline{t}^{-1}(\epsilon)-\underline{g_{0}}( \epsilon)]^{-1} \tag{4}\]
where \(\underline{M}(\epsilon)=\underline{t}^{-1}(\epsilon)-\underline{g_{0}}(\epsilon)\) is the square KKR matrix to be inverted, \(\underline{t}(\epsilon)\) is the single site scattering matrix, and \(\underline{g_{0}}(\epsilon)\) is the free particle propagator between two different atom sites. When considering \(N\) atoms in the unit cell and the angular momentum cutoff \(l_{max}\), the rank of \(\underline{M}\) is \(N(l_{max}+1)^{2}\) without spin-canting and \(2N(l_{max}+1)^{2}\) with spin-canting.
### LSMS Method
The LSMS method (Liang and Hanna, 2013) requires the calculation of the multiple-scattering matrix \(\underline{\tau}\) for each atom in the unit cell, with the atom at the center of a cluster, called the local interaction zone (LIZ). For each atom, the Green's function is obtained in the same way as the original KKR method mentioned in Eq.(3) and Eq.(4), except that the KKR matrix in the LSMS method is built by considering the atoms in the LIZ, rather than in the entire space. As a result, in the LSMS method, the rank of the KKR matrix for each atom is \(N_{\text{LIZ}}(l_{max}+1)^{2}\) without spin-canting or \(2N_{\text{LIZ}}(l_{max}+1)^{2}\) with spin-canting, where \(N_{\text{LIZ}}\) is the atom number in the LIZ.
Based on the Dyson equation, the \(\tau\)-matrix for each atom is obtained by a matrix inverse. In previous implementations, since only the first block of the inverted matrix is needed, the matrix inverse is achieved by the block LU algorithm, whose complexity scales as \(N_{\text{LIZ}}^{2}\)(Becker et al., 2013; Liang and Hanna, 2013). The total complexity of LSMS thus scales as \(NN_{\text{LIZ}}^{2}\), which is linear with respect to the total atom number \(N\) in the unit cell.
## 3. Implementation and Optimization
In the LSMS method, the matrix inverse takes a large proportion of the total computation time; for example, it takes 92% of the total wall-clock time when the rank of the KKR matrix is 4350. However, the matrix inverse can be significantly accelerated by GPUs.
### System Architecture
Our GPU benchmarks were performed on four kinds of computing systems: 1) the V100-16G GPU nodes on Bridges-2 at PSC(Birdges et al., 2016); 2) the V100-32G GPU nodes on Bridges-2 at PSC; 3) the A100-80G GPU node on the Brain Image Library cluster at PSC[2]; 4) the A100-40G GPU nodes on Lonestar6 at TACC[1].
Bridges-2 at PSC hosts several models of GPU nodes, including 24 GPU nodes each having 2x Intel Xeon Gold 6248 20-Core Processors ("Cascade Lake"), with 40 cores on 2 sockets and 16 x 32GB totalling 512GB DDR4-2933 RAM. There are 8 NVIDIA V100 GPUs configured on each node. Each V100 GPU has 32GB HBM2 memory.
Bridges-2 at PSC also hosts 9 GPU nodes, each having 2x Intel Xeon Gold 6148 20-Core Processors with 40 cores on 2 sockets and 12 x 16GB totalling 192GB DDR4-2666 RAM. There are 8 NVIDIA V100 GPUs configured on each node. Each V100 GPU has 16GB HBM2 memory.
The Brain Image Library cluster at PSC has an 8-GPU A100 SXM4 node with 2x AMD EPYC 7543 32-Core Processors ("Milan"), with 64 cores on 2 sockets and 32 x 64GB totalling 2TB DDR4 RAM. There are 8 NVIDIA A100 GPUs on the node. Each A100 GPU has 80GB HBM2 memory.
Lonestar6 at TACC hosts 32 GPU nodes. Each node has 2x AMD EPYC 7763 64-Core Processors ("Milan"), with 128 cores on two sockets, totalling 256GB DDR4-3200 RAM. There are 3 NVIDIA A100 GPUs configured on each node, GPU0 on socket 0 and GPU1,2 on socket 1. Each A100 GPU has 40GB HBM2 memory.
### Fortran-C Interface
The main part of the MuST software is written in the Fortran programming language, and the GPU calculation is written in the CUDA programming language. To offload the matrix inverse to GPUs, we built a C interface to the CUDA code that allows data exchange with the main Fortran code.
Fig.(1) illustrates the procedure of calling functions for the matrix inverse in the MuST program. The KKR matrix \(M\) is built on the CPU. After \(M\) is built, the compile flag ACCEL determines where the matrix inverse is computed: when ACCEL is undefined or set to FALSE, the matrix inverse is computed on the CPU using the block LU algorithm, while when ACCEL is defined to be CUDA, a Fortran subroutine initializes the variables and then transfers them to the CUDA routine through the Fortran-C interface.
In general, there are four steps in the GPU program: 1) Initialize memory space on the GPU; 2) Copy data from the host memory to the GPU memory; 3) Compute the matrix inverse; and 4) Copy the results from the GPU memory back to the host memory. The inverse of the matrix \(\underline{X}\) is obtained by solving the linear system \(\underline{X}\,\underline{X}^{\prime}=\underline{I}\) for the unknown \(\underline{X}^{\prime}\), where \(\underline{I}\) is the identity matrix. When each element of \(\underline{X}\) is a complex number with double precision real and imaginary parts, the matrix inverse can be achieved through two cuSOLVER functions: cusolverDnZgetrf and cusolverDnZgetrs, where cusolverDnZgetrf performs the LU factorization of the matrix and cusolverDnZgetrs solves the linear system.
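A minimal sketch of the compute step is given below (our own illustrative host code, not the MuST source; the function name and the assumption that the caller supplies a device-resident identity matrix are hypothetical, and error handling is elided):

```c
/* Invert the n-by-n complex KKR matrix on the GPU with cuSOLVER:
   LU factorization (cusolverDnZgetrf) followed by solving A X = I
   (cusolverDnZgetrs). dA holds A column-major on the device; dI holds
   the identity on entry and the inverse on exit. A routine like this
   can be reached from Fortran through a bind(C) interface. */
#include <cuda_runtime.h>
#include <cusolverDn.h>

void invert_kkr_matrix(cusolverDnHandle_t handle,
                       cuDoubleComplex *dA, cuDoubleComplex *dI, int n)
{
    int lwork = 0;
    int *dIpiv, *dInfo;
    cuDoubleComplex *dWork;
    cudaMalloc((void **)&dIpiv, n * sizeof(int));
    cudaMalloc((void **)&dInfo, sizeof(int));
    /* query workspace size for the LU factorization */
    cusolverDnZgetrf_bufferSize(handle, n, n, dA, n, &lwork);
    cudaMalloc((void **)&dWork, lwork * sizeof(cuDoubleComplex));
    /* in-place LU factorization with partial pivoting */
    cusolverDnZgetrf(handle, n, n, dA, n, dWork, dIpiv, dInfo);
    /* solve A X = I; dI is overwritten with the inverse of A */
    cusolverDnZgetrs(handle, CUBLAS_OP_N, n, n, dA, n, dIpiv, dI, n, dInfo);
    cudaFree(dWork);
    cudaFree(dIpiv);
    cudaFree(dInfo);
}
```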
Figure 1. Illustration the procedure of calling functions for matrix inverse in the MuST program. Blue rectangles depict the Fortran subroutines. The green rectangle depicts the C program which calls the CUDA and cuSOLVER functions.
Performance Evaluations
We evaluate the performance of the LSMS method implemented in MuST by comparing the wall-clock time with and without GPU acceleration.
### Test Cases
There are three test cases in total in our benchmarks: the CoCrFeMnNi high entropy alloy with spin-canting, and the NiAu binary random alloy with and without spin-canting. The concentration of each element in CoCrFeMnNi is the same, 20%, and the concentrations of Ni and Au in NiAu are 30% and 70%, respectively. The total number of atoms is 56 for CoCrFeMnNi and 64 for NiAu.
In our test cases \(l_{max}=4\), \(N_{\text{LIZ}}=135\) for CoCrFeMnNi and \(N_{\text{LIZ}}=249\) for NiAu. The rank of the KKR matrix for CoCrFeMnNi alloy is 6750. The rank of the KKR matrix for NiAu alloy without and with spin-canting is 6225 and 12450, respectively. Each element in the KKR matrix is a complex number with double precision real part and imaginary part.
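These ranks are consistent with the size formulas quoted earlier (a quick arithmetic check): with \(l_{max}=4\), \((l_{max}+1)^{2}=25\), so \(2N_{\text{LIZ}}(l_{max}+1)^{2}=2\times 135\times 25=6750\) for CoCrFeMnNi with spin-canting, while \(N_{\text{LIZ}}(l_{max}+1)^{2}=249\times 25=6225\) and \(2\times 249\times 25=12450\) for NiAu without and with spin-canting, respectively.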
### Performance Comparisons
Here we present the performance results on various computing systems with different configurations of GPUs and CPUs. We set the OpenMP thread number to 1 throughout our benchmarks. The detailed benchmark data are given in Table (1). The number in brackets denotes the ratio of the wall-clock time for the GPU data transfer divided by that for the GPU compute.
Fig.(2) depicts the comparison of wall-clock times of the MuST program, excluding the I/O operations. The reference CPU benchmarks were performed on the Regular Memory CPU nodes of Bridges-2 at PSC, whose CPU type is the AMD EPYC 7742. The MuST program was compiled with the AMD optimized math library AOCL for the reference CPU benchmarks.
In the figure, the baseline is the performance of 32 CPU cores of the CPU node. Apparently, GPUs are able to accelerate the calculation by a significant speedup ratio. For the case of using either 4 or 8 GPUs, the job running on 16 CPU cores is roughly 1.5 times faster than running on 8 CPU cores. Running on 32 CPU cores is roughly 1.2 times and 1.3 times faster than running on 16 CPU cores for the cases of using 4 and 8 GPUs, respectively. For the case of running on either 8 or 16 CPU cores, using 8 GPUs is roughly 1.2 times faster than using 4 GPUs. However, when running on 32 CPU cores, using 8 GPUs is roughly 1.4 times faster than using 4 GPUs.
## 5. Discussion and Conclusion
We demonstrated that the Green's function based electronic structure software MuST can achieve a high acceleration ratio on acceleration cards like NVIDIA GPUs. Our work opens the door to
Figure 2. The performance comparisons for various configurations of CPUs and GPUs. The CPU number is the total number of the MPI ranks. The MPI ranks are evenly distributed on multiple GPUs.
simulating large disordered alloy systems in a moderate time. We would like to point out that the routine targeted for GPU offloading is also a good acceleration target on multi-core CPUs. Our test results using threaded BLAS libraries from Intel MKL and NVHPC show sublinear speedup with respect to the number of threads on Intel and AMD multi-core CPUs. This helps to achieve the same scaling performance with a much smaller number of MPI tasks, and enables more efficient parallel I/O in the future.
In this work the GPU acceleration is only demonstrated on the LSMS method, which takes a cluster approximation to achieve linear scaling with respect to the number of atoms in the unit cell. In fact, it is quite straightforward to offload the KKR matrix inverse onto GPUs to accelerate the original KKR calculation. Further code acceleration is expected if the KKR matrix is constructed on GPUs, instead of on CPUs, in which case the time for data transfer from CPU to GPU will be significantly reduced. Furthermore, the matrix inverse acceleration is not limited to NVIDIA GPUs; acceleration cards from other vendors are promising candidates to deliver competitive acceleration results. Investigating the performance on other acceleration cards is one of our future goals.
###### Acknowledgements.
X.L. thanks Markus Eisenbach, Vishnu Raghuraman and Michael Widom for useful discussions. E.H. thanks Tod Pike at PSC for setting up access to the Brain Image Library A100 GPU node. H.L. thanks Dr. Junjie Li for providing his tool to profile BLAS calls in the code and tips on MPI task to GPU binding. This work was supported by the National Science Foundation through the OAC-2139536 Characteristic Science Applications for the Leadership Class Computing Facility award. The MuST package is the product of an open source project supported in part by the NSF Office of Advanced Cyberinfrastructure and the Division of Materials Research within the NSF Directorate of Mathematical and Physical Sciences under award numbers 1931367, 1931445, and 193152.
|